id: 196244771 | source: pes2o/s2orc | version: v3-fos-license
Características de formação e trabalho de professores de nível médio em enfermagem Training and work characteristics of mid-level nursing teachers
This study aimed to identify the teacher training and professional performance of teachers in mid-level nursing training schools. A quantitative study was performed in 2012 through questionnaires with 41 teachers from eleven mid-level nursing training schools in the states of Rio Grande do Norte and Santa Catarina. General characteristics of the teachers, their initial and continuing education and their work characteristics are presented. It is concluded that investment in pedagogical training for mid-level nursing teachers is necessary, and that this training should also be valued in the schools where they work.
Introduction
According to Law No. 7498 of 1986, mid-level nursing corresponds to the auxiliary and technical categories (1), and since Decree No. 2208 of 1997 only the technical qualification has been recommended. These mid-level personnel correspond to 45.50% of the Brazilian nursing workforce and have the prerogative of participating in the nursing care program, carrying out nursing care actions (except for specialized nursing functions), participating in the guidance and supervision of lower-level nursing functions and participating in the health team (1)(2).
Trained by nurses, mid-level professionals follow a curricular organization guided by the National Education Council/Basic Education Chamber advisory document N. 16 of 1999, in transition with Resolution N. 06 of 2012, which provides the curriculum guidelines for vocational education; their training also takes into consideration the National Curriculum Guidelines for Health Technical Education. These guidelines consider instruction for the development of functions and sub-functions designed for: diagnosis support, health orientation, protection and prevention, recovery/rehabilitation and health management (3).
Still regarding the curriculum proposal for mid-level nursing professional training, it is important to take into account the context of change promoted by Law No. 8080 of 1990, which established the Unified Public Health System and directs changes in health management, care and training, as well as the recent expansion of professional nursing education within the private Technical Schools Network, associated with the market expansion driven by the increase in jobs and in the demand for mid-level nursing professionals, especially in the Family Healthcare Program.
Based on the assumption that this scenario has expanded professional performance opportunities for nurses, a large portion of these professionals may work as teachers at the beginning of their careers. Given the magnitude and importance of this workforce, this research is relevant and asks: what is the profile of mid-level nursing teachers with respect to teacher training and professional performance?
In this scenario of changes and challenges, where teaching appears as a parallel challenge demanding expertise that transcends disciplinary and technical-scientific knowledge, the objective of this study is to identify the teacher training and professional performance of mid-level nursing teachers in the states of Santa Catarina and Rio Grande do Norte, Brazil.
Methods
A quantitative study with an exploratory and descriptive approach, conducted with teachers from eleven mid-level nursing training schools located in large cities in the states of Rio Grande do Norte and Santa Catarina, from May to December 2012.
These states and cities were chosen because of the collaboration between two Graduate Programs in Nursing in Brazil, the Graduate Nursing Program at the Federal University of Santa Catarina and the School of Nursing of the Federal University of Rio Grande do Norte, to carry out an inter-institutional doctorate, as well as because of the schools' interface with the work of master's and doctoral students in the research line on training and teacher development in health and nursing.
Data were collected from nursing school records provided by the sections of the Brazilian Association of Nursing of the respective states. Fifteen registered schools were identified, but only eleven (n = 11) responded to the study, constituting a convenience sample (4) composed of six public and five private schools. The e-mail addresses of teachers who taught in mid-level nursing courses were obtained from the schools.
Data collection was conducted by e-mail through a questionnaire with open and closed questions built in Google Drive. The questionnaire had two parts; the first addressed personal characteristics and academic training for nursing practice and teaching.
The second part investigated professional practice as a nurse and/or as a teacher. E-mails were sent to all teachers to publicize the study, raise awareness and invite them to participate in the research. Information on the deadlines and guidelines for returning the questionnaire, the term of consent and commitment, and the access link to the questionnaire were sent at the same time. A total of 129 professionals were eligible to participate in the research, and 41 teachers responded to the survey, constituting the universe analyzed.
For the analysis, the study variables were divided into four groups: discrete quantitative variables, dichotomous variables, ordered qualitative variables and nominal qualitative variables. Absolute and relative frequencies were calculated for each group of variables, and basic descriptive statistics were computed in Excel®. For the first three groups of variables, frequency calculations were performed before organizing the material. The data of the nominal qualitative variables, which constituted the open questions of the study, were organized and related answers were grouped before the frequency calculation.
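As a rough illustration of the frequency calculation described above (a sketch only; the variable names and responses are hypothetical, and the original analysis was performed in Excel®):

```python
from collections import Counter

# Hypothetical grouped answers for one nominal variable
# (the study grouped related open answers before counting).
motivations = ["vocation", "vocation", "financial need", "professional achievement",
               "vocation", "course experience", "financial need"]

counts = Counter(motivations)                      # absolute frequency
total = sum(counts.values())
for answer, n in counts.most_common():
    print(f"{answer}: n={n}, {100 * n / total:.1f}%")  # relative frequency
```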
The results of the frequency calculations were then submitted to the steps of pre-analysis, material exploration and interpretation proposed as the analytical process (5). This process led to the emergence of three categories: characteristics of mid-level nursing teachers; initial training and continuing education of mid-level nursing teachers for the teaching profession; and characteristics of the work as a teacher in mid-level nursing. Due to the volume of data and variables, and to enhance visualization and understanding, the data in each category are presented as tables.
Ethical guidelines were followed. When asked about their motivation to become teachers, most participants mentioned that they had always had the desire and thought they had a vocation for this kind of work (26.8%), while some also pointed to professional achievement in the activity (17.1%). The motivation also came from experiences in their course work (14.6%) and from financial need (14.6%).
With a subtle percentage difference, participants have usually worked longer in mid-level teaching, where they likely began their teaching activity. However, the more experienced teachers, with over 15 years of experience, are a minority, which may indicate that teaching at this level has an initial and transient character. Another interesting finding is that most teachers have other employment, whether in schools, other educational institutions or health institutions, as most pointed out for their second employment relationship.
In Table 2, Initial and continuing education of mid-level nursing teachers for the teaching profession, we can see that the majority of teachers are qualified professionals who graduated in the 2000s, in public institutions, and had some kind of pedagogical preparation before entering the field. For most, this preparation was their teaching degree. It is relevant to highlight that the majority have an interest in continuing their training and education, especially through stricto sensu post-graduate programs.
Discussion
The first point to note is that, because more than half of the schools in this study are public and the research subjects have stable employment relationships, the socioeconomic status, working conditions and continuing education that predominate in the data are distinctive and do not reflect the training and work characteristics of professional nursing education teachers reported for other schools, where teachers are predominantly paid by the hour (6), have less previous pedagogical training (7) and receive little institutional incentive for continuing education (8). It is therefore interesting to explore the contrasts between these data and the scientific literature on the professional performance and pedagogical training of mid-level teachers in public and private nursing institutions.
With the expansion of the Federal Technological Education Network, many teachers were incorporated into a university teaching career, in which the base salary for exclusive dedication with a teaching degree (9) was equivalent to 5.77 times the minimum wage at the time of the research. Taking the average salary of Brazilians as a parameter (10), the income of most participants is above average, which shows the value placed on teachers' work in the federal system and can be an important motivating factor for entering teaching. However, this is not an institutional policy of the Technical Schools Network of the National Health System or of private education, and it does not reflect the overall reality of professional nursing education teaching.
Given that pedagogical practice, regardless of the level of education, is a complex activity involving basic knowledge from academic education, literature in the field, educational materials and the institutional and practical experience context (11), the amount of time consumed not only in teaching activities but also in preparation and interaction with students is worrying. This is especially so when a significant portion of teachers have more than one job and teach two to four different (theoretical and practical) subjects, and considering that continuity in a discipline contributes to better understanding and mastery of the content and to the development of educational skills and expertise. This situation may be due to low remuneration and unstable employment, pointing to a precarious scenario in vocational education (6)(7). What is the impact of these characteristics of the teaching job on the training of nursing staff, since teachers and students (12) face high workloads and extracurricular activities and have only a few precious moments in the classroom or in the practice field for interaction?
The fact that most teachers graduated from public institutions is possibly related to their having a teaching degree before entering the teaching profession, given the significant mention of such degrees, since most undergraduate nursing courses are offered in public schools. Public institutions may also be related to the desire to continue training through entry into master's and doctoral programs. Since most of the country's post-graduate courses are concentrated in these institutions (13), they may be better able to introduce a perspective of continuity of training to these professionals from the beginning of the course.
Another reason for the desire to continue training is probably the fact that, at the beginning of the teaching career, the teacher feels the need for more specific training for teaching (14). It is necessary to emphasize that master's and doctoral courses are currently understood as privileged settings for teacher training, and it is possible that they do not meet these teachers' initial expectations, since such courses have turned to the promotion of research competencies and skills rather than education (15).
While valuing stricto sensu training, but understanding that it needs to be revisited, and alongside the moments of reflection, collective discussion and informal exchange that occur by chance in schools, whether in the corridors or in specific and educational evaluation at teacher meetings and councils, it would be interesting for schools to explore their training and development activities further and to develop structured programs and training (16).
In this way it would be possible to achieve greater formative success, satisfaction and participation if teacher training were held more regularly and accompanied by other initiatives. One example is the mentoring of a beginning teacher by a more experienced one, constituting a space of support and institutional recognition (17). Another possibility is to stimulate the creation of spaces for socialization and sharing among teachers beyond the formal spaces of meetings.
It is relevant that most teachers report valuing and participating in the teacher training activities conducted by the schools where they work. This commitment and appreciation possibly lie in the understanding that in these spaces teachers can share and learn from the needs observed in everyday practice. Considering this, developing training activities at work is still challenging for schools (16); conducting training activities only infrequently means losing the potential to foster the development of practice and teacher training.
Conclusion
The intent of this study was to identify the teacher training and professional performance of teachers in mid-level nursing training schools. Even with the limitations of an exploratory study, and given the scarce literature on mid-level nursing education, some aspects are noteworthy.
Regarding professional practice, there is a perceived salary gap between public and private institutions, and significant double employment in both. It can also be seen that it is common for teachers to teach multiple different disciplines. It would be important for the State to regulate the opening of courses and to monitor and evaluate teachers' working conditions so that they are appropriate, reflecting policies that raise professional education to a relevant level, corresponding to the number of care professionals graduating and to what is required for work in health services.
Regarding training, we observe teachers with considerable educational preparation for mid-level teaching and a disposition for lifelong learning, with a focus on teacher training and socialization with their peers, but the incentives for such training are not always found in schools. It is noteworthy that, in addition to the legitimate pursuit of graduate programs, an innovative and potentially successful experience for teacher training would be the development of training programs within the nursing training schools themselves.
Given the above, it is recommended that Brazilian nursing be willing to investigate mid-level training. With regard to this study, it is recommended that studies of teaching practices be conducted in order to show how these teachers act and reason pedagogically. There are still vast themes and subjects to explore, such as the curriculum of the courses, the profile of students and educational technologies, among others. The field is vast and there are few studies on the subject that address both performance in the field and social responsibility.
Collaborations
Backes VMS, Menegaz JC and Francisco BS participated in the study design, data collection and analysis, and writing of the article. Reibnitz KS and Costa LM participated in the study design and data analysis.
Table 1, corresponding to characteristics of mid-level nursing teachers, presents the results for the variables: age, work experience as a teacher in technical education, training, the presence or absence of other employment in addition to the work as a mid-level teacher, and remuneration. The study was approved by the Ethics Committee of the Federal University of Santa Catarina under Opinion No. 28715/2012.
Table 1 - Characteristics of mid-level nursing teachers
Table 2 - Initial and continuing education for mid-level teachers
Table 3, characteristics of the work as a teacher in mid-level nursing schools, highlights the school of work, the type of employment contract, and the availability of school initiatives related to in-service teacher training. Most teachers work under a statutory regime and teach two to four theoretical and practical subjects.
Table 3 - Characteristics of the work as a teacher in mid-level nursing schools
added: 2019-01-02T15:08:14.840Z | created: 2014-12-21T00:00:00.000 | metadata:
{
"year": 2014,
"sha1": "f82a0684ed39a70f16230c45d815f41f60987ffd",
"oa_license": "CCBY",
"oa_url": "http://periodicos.ufc.br/rene/article/download/3290/2529/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f82a0684ed39a70f16230c45d815f41f60987ffd",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
}

id: 256990471 | source: pes2o/s2orc | version: v3-fos-license
Functional annotation of putative QTL associated with black tea quality and drought tolerance traits
The understanding of black tea quality and percent relative water content (%RWC) traits in tea (Camellia sinensis) through a quantitative trait loci (QTL) approach can be useful in the elucidation and identification of the candidate genes underlying the QTL, which has remained difficult. The objective of the study was to identify putative QTL controlling black tea quality and percent relative water content traits in two tea populations and their F1 progeny. From a total of 1,421 DArTseq markers derived from the linkage map, 53 DArTseq markers were identified as linked to black tea quality and %RWC. All 53 DArTseq markers with unique best hits were identified in the tea genome. A total of 5,592 unigenes were assigned gene ontology (GO) terms, of which biological processes comprised 56%, cellular components 29% and molecular functions 15%. A total of 84 unigenes in 15 LGs were assigned to 25 different Kyoto Encyclopedia of Genes and Genomes (KEGG) database pathways based on categories of secondary metabolite biosynthesis. The three major enzyme classes identified were transferases (38.9%), hydrolases (29%) and oxidoreductases (18.3%). The putative candidate proteins identified were involved in flavonoid biosynthesis and alkaloid biosynthesis, and included ATPase family proteins related to abiotic/biotic stress response. The functional annotation of the putative QTL identified in this study will shed more light on the proteins associated with caffeine and catechin biosynthesis and %RWC. This study may help breeders in the selection of parents with desirable DArTseq markers for the development of new tea cultivars with desirable traits.
Functional annotation and pathway assignment. Functional annotation of all 1,421 DArTseq markers was performed by BLASTX search against the non-redundant GenBank protein sequence database with a threshold E-value of 10^-6 [25,26]. Functional annotation and mapping of gene ontology (GO) terms were done with the Blast2GO program (Blast2GO v3.2) [27]. Each BLAST hit associated with GO terms was retrieved and annotated using the following annotation score parameters: E-Value Hit Filter (default = 1.0E-6), GO Weight (default = 5), Hsp-Hit Coverage Cut-Off (default = 0), Annotation Cut-Off (90). The contig sequences were also queried for conserved domains/motifs using InterProScan, an on-line sequence search plug-in within the Blast2GO program. Kyoto Encyclopedia of Genes and Genomes (KEGG) mapping was used to determine metabolic pathways. The sequences with corresponding enzyme commission (EC) numbers obtained from Blast2GO were mapped to the KEGG metabolic pathway database.
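As an illustration of the E-value filtering step described above, the following sketch parses a tabular BLASTX report and keeps the best-scoring hit per marker below the 10^-6 threshold; the file name and the column layout (standard -outfmt 6) are assumptions, not taken from the study:

```python
import csv

E_VALUE_CUTOFF = 1e-6
best_hits = {}

# "dart_markers_vs_nr.tsv" is a hypothetical BLASTX tabular report (-outfmt 6).
with open("dart_markers_vs_nr.tsv") as handle:
    for row in csv.reader(handle, delimiter="\t"):
        query, subject = row[0], row[1]
        identity, evalue, bitscore = float(row[2]), float(row[10]), float(row[11])
        if evalue > E_VALUE_CUTOFF:
            continue  # discard weak hits, as in the annotation pipeline
        # Keep the hit with the highest bit score for each marker sequence.
        if query not in best_hits or bitscore > best_hits[query][2]:
            best_hits[query] = (subject, identity, bitscore)

print(f"{len(best_hits)} markers with at least one hit below E = {E_VALUE_CUTOFF}")
```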
Results
Functional annotation and pathway assignment. The identification of candidate genes for tea was conducted in all the 15 LGs derived from the DArTseq maps. Sequences of each marker on all the LGs were subjected to a BLAST search against the tea genome BLAST database in an effort to compare homology and identify the location of the gene of interest within the tea genome. The results of BLAST searches for markers derived from the DArTseq map are presented in Table 1. Of the 1,421 DArTseq markers derived from the DArTseq map, 53 markers were identified as linked to black tea quality and %RWC. A total of seven markers were dominant in the TRFK 303/577 parent while nine markers were dominant in the GW Ejulu parent, respectively. All the 53 markers with unique best hits were identified in the tea genome. The 23 markers showed locations across ten chromosomes in tea, which were LG1 (1), LG3 (1), LG4 (8), LG6 (1), LG9 (1), LG10 (2), LG12 (2), LG13 (4), LG14 (2) and LG15 (1). A total of 14 markers were dominant, eight were dominant in GW Ejulu and six were dominant in TRFK 303/577, respectively. The dominant markers in GW Ejulu were for caffeine, catechin, ECG, EGC and EGCG. On the other hand, the dominant markers in TRFK 303/577 were for caffeine, ECG, EGC, TF1 and %RWC. All the 13 dominant markers were identified in the tea genome for the two parents used in this study as reciprocal crosses in the two tea populations (Table 1).

Annotation and mapping of gene ontology. All unigenes in this study were assigned GO terms based on BLASTX searches against the non-redundant (NR) databases (Figs. 1 and 2). Our results showed an 86-100% similarity distribution of the top hits in the non-redundant (NR) database (Fig. 1). The E-value distribution of the mapped sequences with high homologies (smaller than 1e-6) was high compared to homologous sequences that were greater than 1e-5 (Fig. 2). The E-value distribution of the best hits in the NR database showed that 26% of the annotated sequences had high identity with their best hits (between 1e-28 and 1e-26), whereas 15% ranged from 1e-21 to 1e-25 and another 18% ranged from 1e-15 to 1e-20 (Fig. 1). The similarity distribution revealed that 43% of the query sequences had greater than 91-95% similarity, whereas 21% of the query sequences varied in similarity from 86-90%, and another 3% varied from 81-85% (Fig. 2). A total of 5,592 unigenes were assigned GO terms; of these, biological processes (56%) comprised the largest category, followed by cellular components (29%) and molecular functions (15%), respectively (Fig. 3). In the biological process (BP) category, the majority of unigenes were involved in regulation of biological process (35%), cellular process (16%), metabolic process (12%), response to stimulus (11%), developmental process (9%) and localization and signaling (6%), respectively (Fig. 3). The cellular and metabolic processes were the most highly represented groups in the biological process category, which suggests that extensive metabolic activities take place in the fresh tea flush. The best-represented groups of molecular function (MF) were binding activity (48%), catalytic activity (27%), signal transducer activity (10%), molecular transducer activity (8%) and transporter activity (7%), respectively (Fig. 3). Within the molecular function category, binding and catalytic activities were the most abundant groups.
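A minimal sketch of how the GO root-category percentages reported above could be tallied from a Blast2GO export; the input records here are hypothetical:

```python
from collections import Counter

# Hypothetical (unigene, GO namespace) pairs exported from Blast2GO; the study
# reports the shares of biological_process, cellular_component and
# molecular_function terms among the 5,592 annotated unigenes.
annotations = [
    ("unigene_0001", "biological_process"),
    ("unigene_0001", "molecular_function"),
    ("unigene_0002", "cellular_component"),
    ("unigene_0003", "biological_process"),
]

namespace_counts = Counter(ns for _, ns in annotations)
total = sum(namespace_counts.values())
for namespace, n in namespace_counts.most_common():
    print(f"{namespace}: {100 * n / total:.0f}% of assigned GO terms")
```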
The cellular component (CC) category included the cell (21%), cell part (21%), organelle (16%), membrane (11%), organelle part (7%), membrane part (7%), and macromolecular complex (6%), respectively (Fig. 3). The transcripts that corresponded to cells, cell parts, and cell organelles were the most abundant groups within the cellular component category. Among the top 20 functional sub-groups in biological processes, multicellular organism development, response to stress and response to endogenous stimulus represented the largest group (Supplementary Fig. 1). Among the top 20 functional sub-groups for molecular functions, protein binding and receptor activity were among the most represented. A total of 84 unigenes were assigned to 25 different KEGG pathways (Table 2, Fig. 4). The pathways with the most representation were metabolic pathways and biosynthesis of secondary metabolites, which was consistent with the GO categories of the biological process (Fig. 3 and Supplementary Fig. 1). Of these, 52 unigenes were categorized into the metabolism groups, with most of them involved in purine metabolism (11%), cysteine and methionine metabolism (9.6%), pyruvate metabolism (9.6%), alanine, aspartate and glutamate metabolism (7.6%), arginine and proline metabolism (7.6%), fructose and mannose metabolism (7.6%) and other sub-categories, respectively. The most abundant unigenes were for carbohydrate and amino acid biosynthesis, which are mostly involved in plant hormone signal transduction pathways in response to abiotic stress and in the biosynthesis of other secondary metabolites such as phenylpropanoids, flavonoids and alkaloids. Unigenes associated with flavone and flavonol biosynthesis, a key regulatory pathway in the metabolism of the main chemical components of C. sinensis leaves, were present (Fig. 4). Unigenes associated with purine metabolism, also a key regulatory pathway, which is involved in the metabolism of adenine and guanosine nucleotides, were present. In addition, the major enzymes involved in the different metabolic pathways were further grouped into six categories: oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases. The distribution of the three most abundant enzymes was also consistent with the GO categories of molecular function (Fig. 3 and Supplementary Fig. 2). The three major enzyme classes were transferases (38.9%), a class of enzymes that catalyze the transfer of specific functional groups (a methyl, glycosyl or galloyl group) from donor to acceptor molecules; hydrolases (29%), which catalyze the hydrolysis of compounds; and oxidoreductases (18.3%), which catalyze oxidation-reduction reactions (Fig. 5 and Supplementary Fig. 3). The three less abundant enzyme classes were lyases (8.4%), which cleave carbon-carbon, carbon-oxygen, phosphorus-oxygen and carbon-nitrogen bonds without hydrolysis or oxidation; isomerases (3.1%), which convert a molecule from one isomer to another; and ligases (2.3%), which catalyze the formation of a new chemical bond by joining two large molecules (Fig. 5).
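The six enzyme categories above correspond to the first digit of the EC number; a small sketch of that grouping (the EC numbers listed are hypothetical examples):

```python
from collections import Counter

# First EC digit identifies the enzyme class; this is the grouping behind the
# six categories reported above.
EC_CLASSES = {
    "1": "oxidoreductases", "2": "transferases", "3": "hydrolases",
    "4": "lyases", "5": "isomerases", "6": "ligases",
}
ec_numbers = ["2.3.1.12", "3.2.1.4", "1.14.11.9", "2.4.1.17", "4.2.1.1"]

class_counts = Counter(EC_CLASSES[ec.split(".")[0]] for ec in ec_numbers)
total = sum(class_counts.values())
for enzyme_class, n in class_counts.most_common():
    print(f"{enzyme_class}: {100 * n / total:.1f}%")
```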
Functional annotation of putative candidate genes in the 15 linkage groups of C. sinensis. A total of 28% of the mapped sequences (405 out of 1,471) were annotated. Functional annotation of the detected QTL led to the identification of the functions of 37 putative candidate genes in the parental clones that could be involved in the expression of the targeted traits (Table 1). The putative candidate genes involved in secondary metabolism (phenylpropanoid biosynthesis, flavonoid biosynthesis, alkaloid biosynthesis and volatile compound biosynthesis) were located in linkage groups LG04, LG09, LG10 and LG13. The putative candidate genes identified as involved in the phenylpropanoid and flavonoid biosynthesis pathways included a group of enzymes, namely the 2-oxoglutarate/Fe(II)-dependent dioxygenase superfamily (2-ODDs), aminotransferase class I and II, acyltransferase, and glycosyl hydrolase family 9. This agreed with the KEGG pathway mapping, in which the mapped enzymes clustered into six major groups according to the metabolic pathways in which they are involved, with transferases, hydrolases and oxidoreductases as the three most abundant. The multidrug and toxic compound extrusion (MATE) transporter was also putatively identified in LG09. A BT1 family protein located in LG04 was putatively annotated for qCAFF. The putative QTL and their corresponding putative candidate genes associated with phenylpropanoid and flavonoid biosynthesis were identified in both parental clones. Proteins related to oxidative stress and ATPase family proteins, or proteins related to abiotic stress, were found in the confidence intervals of QTL for drought tolerance traits. Among the 53 examined putative QTL in the parental clones (Table 1), 18 putative QTL were related to abiotic stress (ABA signaling, amino acid/peptide degradation) and were mostly found on LG04 and LG13. However, putative QTL related to abiotic stress, including percent RWC, were also found on LG01, LG06, LG10, LG12, LG14 and LG09, respectively. The putative candidate genes identified included pectinesterase, 14-3-3 protein, 2OG-Fe(II) oxygenase superfamily, protein kinase domain, transmembrane amino acid transporter protein, peptidase family M3, histone acetyltransferase subunit NuA4, DnaJ domain and MatE. ATPase family proteins associated with various cellular activities (AAA) and an Armadillo/beta-catenin-like repeat protein were found to be associated with the heat stress response. The putative QTL in LG13 that was associated with disease stress response was an NB-ARC domain protein. Putative candidate genes related to the ABA signaling pathway were found on QTL for qCAT, qEC, qEGCG and qRWC (Table 1), respectively. The putative candidate gene related to the degradation of peptides, by targeting the amino acid residues that are formed during abiotic stress, was found on the QTL for qEC. The parental clone TRFK 303/577, which is known to be a high-yielding and drought-tolerant cultivar, exhibited most of the putative candidate genes related to drought stress response compared to the parental clone GW-Ejulu.
Discussion
Flavonoid biosynthesis in tea begins with the amino acid phenylalanine, which undergoes deamination to form trans-cinnamic acid [28,29]. Oxidation of trans-cinnamic acid yields p-coumaric acid, which is transformed into p-coumaroyl-CoA. The committed step in flavonoid biosynthesis is the subsequent condensation of p-coumaroyl-CoA and malonyl-CoA, catalyzed by chalcone synthase, to form chalcone. The subsequent isomerization of chalcone forms (2S)-flavanones. The identification of the putative QTL qCAT for a 2-ODD superfamily protein in this study corroborates previous reports on the role of 2-ODDs in flavonoid biosynthesis. The 2-ODDs are a large superfamily of non-heme proteins that facilitate numerous oxidative reactions, including hydroxylation, epimerization, C-C bond cleavage, ring formation and fragmentation. This superfamily catalyzes key oxidative reactions that facilitate the formation of different flavonoid subclasses, for example the oxidation of (2S)-flavanones. The (2S)-flavanones are converted to dihydroflavonols through a hydroxylation process catalyzed by another 2-ODD, flavanone 3β-hydroxylase (F3H). The dihydroflavonols formed are substrates for flavonol synthase (FNS I), which competes with dihydroflavonol 4-reductase (DFR) to form flavonols, anthocyanidins and procyanidins.
Glycosylation is an important process for the diverse functions of polyphenolic compounds in plants and is known to increase the solubility and stability of hydrophobic flavonoids [30]. The glycosylation process is regulated by glycoside hydrolases, which are involved in hydrolysis and/or rearrangement of glycosidic bonds. We hypothesize that the putative QTL qBRT, linked to glycosyl hydrolase family 9 (a family of glycoside hydrolases) in the current study, plays a role in the synthesis and sequestration of some phenylpropanoids [31]. In tea, the galloylated catechins formed through the glucosylation activity of UDP-glucosyltransferase have been found to confer astringent and bitter tastes, while flavonol 3-O-glycosides have been found to induce velvety, mouth-drying, and mouth-coating sensations [6]. In the legume bur clover (Medicago truncatula), epicatechin 3′-O-glucoside has also been shown to be involved in the biosynthesis of proanthocyanidin in the seed coat [32,33]. In addition, glycosylated polyphenols also function as acyl donors in biochemical reactions; for example, β-glucogallin, a glucose ester of gallic acid, functions as an acyl transfer donor in the biosynthesis of both galloylated tannins [34] and galloylated catechins [35]. An acyltransferase, which participates in many secondary metabolism pathways, was putatively identified in this study (qECG) in LG10. Acyltransferases are involved in the biosynthesis of anthocyanidin and other flavonoid groups by transferring acyl groups to the sugar moiety of anthocyanins using acyl-CoA as the donor [36]. The modification of anthocyanins and other flavonoid groups with acyl groups is catalyzed by acyltransferases and is important in maintaining their stability. In LG04, qCAFF was putatively annotated as a BT1 family protein, which has been identified as being involved in the export of adenine nucleotides, which are exclusively synthesized in plant plastids and are precursors for caffeine biosynthesis [37]. Furthermore, the nucleotides adenine and guanosine are precursors in caffeine biosynthesis in tea plants [38]. The 14-3-3 proteins, which bind to other proteins to induce target-site-specific alteration and conformation of target proteins, are important in plant signal transduction pathways. Previous reports implicate plant 14-3-3s in key physiological processes, in particular abiotic and biotic stress responses, metabolism (especially primary carbon and nitrogen metabolism), as well as various aspects of plant growth and development. In the current study, the putative QTL qCAT in LG13 was putatively annotated as a 14-3-3 protein, an abiotic or biotic stress protein in tea. This therefore agrees with a previous study [18] in which individual catechins were demonstrated to be potential indicators for predicting drought tolerance in the tea plant. Meanwhile, a number of recent studies in Arabidopsis have linked 14-3-3s to ABA signaling. Some of the main functions ascribed to ABA include the response to abiotic/biotic stress.
Drought stress, which usually affects the plant photosynthetic process, also interferes with plant nutrient availability, leading to ion intoxication. Reversible protein phosphorylation by protein kinases at the early and later stages of signaling pathways is one of the mechanisms that plants use in response to abiotic stress. In addition, to maintain osmotic homeostasis and tolerance under drought stress, plants also regulate the expression of specific protein kinase genes. The presence of a putative QTL, qCAT, annotated for a protein kinase domain in this study supports the argument of previous studies on the role of tea catechins as potential indicators of drought tolerance in tea plants. A previous study [39] indicated that catechins were higher in drought-tolerant cultivars, which corroborates the presence of qCAT, annotated as a protein kinase domain, in the drought-tolerant parental clone TRFK 303/577.
Transmembrane amino acid transporter proteins function as gateways permitting the transport of amino acids across biological membranes. Amino acids are critical not only for the synthesis of proteins but also of many other essential compounds, and they act as signal molecules. For example, in Arabidopsis, water and salt stress induce strong expression of proline transporter 1 (ProT1) and proline transporter 2, while the expression of amino acid permease 4 (AAP4) and amino acid permease 6 (AAP6) is repressed [40,41]. In tea, proline has been shown to act as an osmoprotectant during drought stress [23,24], and the presence of a putative candidate gene (transmembrane amino acid transporter protein) in this study suggests a role for this transporter in proline transport during drought stress. Furthermore, the presence of transmembrane amino acid transporter proteins could reflect their involvement in the transport of the aromatic amino acids phenylalanine, tyrosine and tryptophan, which serve as precursors for a wide range of secondary metabolites (polyphenols) in tea. This is because the traits underlying the two identified putative QTL, qCAT and qEGC in LG13 in both parental clones (TRFK 303/577 and GW Ejulu), are products of the shikimate and phenylalanine pathways.
The MATE proteins are a family of secondary active transporters that utilize the electrochemical gradient across the membrane, maintained by ATPases, for their transport activity. MATE transporters are involved in the transport of secondary metabolites into vacuoles, which are the major storage site of most conjugated flavonoids, comprising flavonols, anthocyanins and flavone glycosides [42]. In the current study, the putative QTL qRWC in LG09 was annotated for MATE proteins, which are involved in the sequestration of flavonoids in vacuoles in response to drought stress [43]. Secondary metabolites in plants, especially flavonoids, play an important role in the response to abiotic stress such as drought. A member of the MATE gene family (TT12) has also been shown to sequester proanthocyanidins in seed vacuoles, which leads to pigmentation of the seed coat [44]. In yeast, it was revealed that TT12 proteins can specifically transport glycosidic forms such as epicatechin 3′-O-glucoside and cyanidin 3-O-glucoside [45]. The MATE proteins from the legume bur clover (Medicago truncatula) and grapevines (Vitis) have been characterized for the transport of flavonoids through the sequestration of epicatechin 3-O-glucoside and cyanidin 3-O-glucoside (proanthocyanidins) into the vacuole for storage [33]. In another study, apart from their role in the vacuolar sequestration of flavonoids, MATE proteins were recognized as transporting alkaloids into vacuoles in tobacco. Furthermore, MATE proteins have been shown to facilitate and modulate abscisic acid efflux and sensitivity in drought-tolerant Arabidopsis [46]. Since a putative QTL for %RWC was annotated as MATE in this study, this corroborates previous reports [47] that relied on %RWC to classify tea cultivars as either drought tolerant or drought susceptible based on tea leaf metabolites.
In this study, the putative QTL qECG in LG06 was putatively annotated as a DnaJ protein. DnaJ proteins are essential components that contribute to cellular protein homeostasis and the stabilization of protein complexes (protein folding, degradation and refolding) under stress conditions in plants [48]. They function as molecular chaperones either alone or in association with heat-shock protein 70 (Hsp70). A previous transcriptomic analysis of the effect of drought-induced stress on tea leaf quality revealed that the levels of ECG and EGCG increased during drought stress [49]. This was also reported by [39] for tea biomolecules in relation to tea quality. Furthermore, in transgenic tobacco, the overexpression of a tomato chloroplast-targeted DnaJ enhanced drought tolerance [50] and maintained photosystem II under chilling stress [51]. Our findings therefore suggest that ECG can be used as a potential marker of drought tolerance or cold stress in tea cultivars. The putative QTL qEGCG in LG04 was annotated as a diacylglycerol kinase catalytic domain protein, which we hypothesize to be a marker for drought and cold tolerance in tea cultivars. Abiotic stress, including salinity, drought, and osmotic adjustment, triggers the production of phosphatidic acid [52]. Phosphatidic acid is formed by the phosphorylation of diacylglycerol by diacylglycerol kinase. Thus, phosphatidic acid rather than diacylglycerol has typically been implicated as a major plant secondary messenger. However, the presence of diacylglycerol is necessary for certain developmental processes and the response to particular environmental stimuli. For example, salinity stress in A. thaliana increased the activity of non-specific phospholipase C, which promoted the production of diacylglycerol. In addition, cold stress has recently been found to stimulate the formation of phosphatidic acid in suspension-cultured Arabidopsis cells [53].
Histone acetyltransferases play a critical role in the histone acetylation of plant chromatin, which is important in the epigenetic control of gene expression. The transfer of acetyl groups to core histone tails by histone acetyltransferases promotes the transcription of target genes involved in drought response and abscisic acid signaling in several plants, including Arabidopsis [54], rice [55], and maize [56]. Furthermore, previous studies reported up-regulation of drought-stress-responsive genes and found it to be correlated with changes in histone modification [57,58]. The putative QTL qCAT, qEC and qEGCG in this study were annotated as histone acetyltransferase subunit NuA4 protein in the drought-tolerant parental clone TRFK 303/577. Therefore, catechins could act as potential indicators or markers of drought tolerance in tea, which corroborates previous reports [18,39] that investigated the biochemical changes in tea constituents as influenced by varying levels of water stress.
Aminotransferases, also known as transaminases, catalyze amino group transfers from amino donor to amino acceptor compounds. Aminotransferases play a major role in a variety of metabolic pathways, including amino acid biosynthesis, secondary metabolite biosynthesis and photorespiration. Prephenate aminotransferase, a class of aminotransferase, catalyzes a key step in the synthesis of phenylalanine, which is the central product of the shikimic acid pathway and serves as a precursor for the synthesis of flavonoids [59]. The putative QTL qEC identified in the current study and annotated as aminotransferase class I and II is hypothesized to be a marker linked to flavonoid biosynthesis in tea.
Conclusion
This study is the first attempt to obtain more information on the genomic and functional annotation of proteins/enzymes involved in black tea quality and drought tolerance traits in the tea plant based on DArTseq markers. The DArTseq markers used and the putative QTL identified in this study will shed more light on the proteins/enzymes associated with the biosynthesis of caffeine and of the individual catechins in green tea, which are converted to individual theaflavins during black tea processing. In addition, the DArTseq markers will provide useful information, in particular for studies of the genetic determinants of black tea quality traits and drought tolerance in tea. The DArTseq markers used in the current study can also be useful for the construction of linkage maps or for discovering new functions of genes. We successfully constructed a genetic linkage map using SNPs, but it was impossible to anchor it to previously constructed linkage maps of tea since no anchoring markers were available. Furthermore, it was also difficult to deduce the relative genetic positions of the SNPs on the previous linkage maps. Indeed, previous studies used RAPD, AFLP and SSR markers to construct tea genetic linkage maps, while in our study we used DArTseq markers. However, the information obtained through the annotation of putative gene functions and the alignment of DArTseq sequences relative to the recently published reference tea genome has set a milestone for further research on marker-assisted selection and breeding of tea plants. In addition, this work may help breeders in the selection of parents with desirable DArTseq markers for the development of new tea cultivars with desirable traits.
added: 2023-02-19T14:14:25.107Z | created: 2019-02-06T00:00:00.000 | metadata:
{
"year": 2019,
"sha1": "d5f6e628135f283a00018057d06761a577a598ab",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-37688-z.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "d5f6e628135f283a00018057d06761a577a598ab",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
}

id: 269037279 | source: pes2o/s2orc | version: v3-fos-license
Localization of scalar matter with nonminimal derivative coupling on braneworlds
The localization of scalar matter with nonminimal derivative coupling (NMDC) in braneworld models is studied. Two types of brane model are considered: a thin brane (the Modified Randall-Sundrum) and a thick brane generated by a bulk scalar field. The 5-dimensional theory of a scalar field with NMDC is reduced to an effective 4-dimensional theory by imposing the fulfillment of localization conditions. In this article, the localization properties of the field with NMDC are examined in both models. We find that both the massless and the massive scalar field with NMDC can be localized on the MRS thin brane. On the thick brane, the massless scalar field is localized.
Introduction
A braneworld is a concept that emerges from string theory. It offers a solution to the hierarchy problem encountered in the realm of high-energy physics. It proposes the existence of additional spatial dimension(s) beyond the familiar 3+1 dimensions. Its connection to our familiar 1+3-dimensional universe is governed by an exponential factor named the warp factor. Within this framework, any weak-scale fields or matter, characterized at the TeV scale, are assumed to be confined to a hypersurface known as a brane. This brane is located within a higher-dimensional spacetime referred to as the bulk. Various brane models have been proposed, with some of the most well-known ones being the Randall-Sundrum (RS) model [1,2] and its alternative, the Modified Randall-Sundrum (MRS) model [3]. Apart from the RS and RS-like models, which represent brane scenarios without thickness, there have been developments in brane models featuring thickness. The thick brane model extends the application of the theory in general relativity, offering a more versatile framework. Unlike thin branes, the thick brane concept does not require junction conditions to achieve continuous Einstein field equations. The solution of a thick brane is smooth, distinguishing it from thin branes. A review paper on thick branes is available in [4].
By the definition of the braneworld theory, weak-scale matter fields are ideally localized on a brane. For this reason, the localization of weak-energy fields is one of the important issues that needs to be examined in brane models. In the MRS model, it has been shown that the extra-coordinate transformation $dz = e^{-A}dy$ gives a significant result in the field localization test. The MRS model has better localization properties for scalar, vector, and spinor fields than the RS model [3]. Several cases of localization of interacting field systems have also been discussed in the MRS model [5,6,7,8].
The modified theory of gravity is currently being widely applied to several prominent topics, including inflation, the expansion of the late-time universe, gravitational waves, and astrophysical objects. One well-known modified theory of gravity is the nonminimal derivative coupling (NMDC) model of a scalar field interacting with gravity through a curvature tensor, as introduced by Amendola [9]. This model is interesting to study as a candidate for various contexts, such as inflation [10,11], expansion of the universe [12,13,14,15], gravitational waves [16], black holes [17,18], wormholes [19,20], and neutron stars [21]. In the 5-dimensional braneworld theory, NMDC has been discussed in terms of cosmology with the universal extra dimension model [22] and scalar field localization with the MRS thin brane model [23].
In this article, we discuss the localization of a scalar field with an NMDC term of the form $\xi G^{MN}\partial_M\Phi\,\partial_N\Phi$, where $G^{MN}$ is the 5-dimensional Einstein tensor. The formulation of the 5-dimensional matter field equation is reduced to the standard 4-dimensional Klein-Gordon equation by providing the localization conditions that must be fulfilled. Here, we review two braneworld models. As the better brane model regarding field localization in thin brane scenarios, we examine the MRS brane; this also serves as a comment on Reference [23]. For a comparative analysis in the context of thick branes, considering a relatively simple model that can represent thick branes, we also examine the scalar thick brane model [24,25]. In Section 2, we discuss the reduction of the 5-dimensional field equation to the 4-dimensional one by obtaining the localization conditions for the scalar field with NMDC. We then discuss scalar field localization on the MRS brane model in Section 3, and on the thick brane model in Section 4. The conclusion is given in Section 5.
Field equation of 5D scalar field with nonminimal derivative coupling
In this section, we will derive the field equations, explain the localization mechanism, and derive the localization conditions that must be satisfied.
Field equation
Consider the action of a 5-dimensional scalar field $\Phi(x^M)$ with a nonminimal coupling, in which the second term is a nonminimal derivative coupling that contains the Einstein tensor $G^{MN}$, with $\xi$ a coupling constant. A braneworld metric with extra coordinate $z$ is given by a line element in which $e^{2A(z)}$ is a warp factor that relates the brane to the extra dimension. The $M, N$ indices represent the 5-dimensional bulk coordinates, and the $\mu, \nu$ indices represent the 4-dimensional brane coordinates. In this article, two cases of braneworld will be reviewed, namely thin and thick branes. The thin brane case considered is the MRS model with the warp function $A(z) = -k|z|$.
In the case of the thick brane, the smooth warp factor $e^{A(z)}$ is used as the gravitational solution for the braneworld system. From metric (2), some geometrical quantities can be derived, including the Ricci tensor components (where a prime denotes a derivative with respect to $z$), the Ricci scalar, and the Einstein tensor components. By varying the action of the scalar field with NMDC in (1) with respect to $\Phi(x^M)$, the equation of motion for the scalar field $\Phi(x^M)$ can be derived. The 5-dimensional scalar field can be decomposed as $\Phi(x^M) = \phi(x^\mu)\chi(z)$. By considering a spacetime with the metric (2) and this field decomposition, the field equation can be rewritten accordingly. The following is a comment on Taufani et al [23] regarding the separation of the 4-dimensional and extra-dimensional field equations. In [23], the 4-dimensional Klein-Gordon equation still depends on the extra coordinate through the term $\alpha^{\mu\nu} \equiv \sqrt{g}\,(g^{\mu\nu} + \xi G^{\mu\nu})$. By considering the conformally flat metric with a static warp factor, $e^{A(z)}$, the Einstein tensor depends on the extra coordinate $z$ only. So, the standard 4-dimensional Klein-Gordon equation is the proper form for reducing the 5-dimensional equation to 4 dimensions. If the metric is supposed to contain a dynamic warp factor, $e^{A(t,z)}$, then the Einstein tensor depends on time and on the extra coordinate, so it is possible to obtain a reduced NMDC term on the brane. In the case of a static warp factor, to satisfy the 4-dimensional Klein-Gordon equation, $\partial^\mu\partial_\mu\phi + m^2\phi = 0$, the 5-dimensional field equation (7) reduces to an extra-dimensional field equation. By explicitly substituting the Einstein tensor of equation (5), the extra-dimensional field equation (8) is obtained. For any thin brane model represented by the warp function $A(z)$, the field solution will be found from this equation of motion.
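For orientation, a minimal LaTeX sketch of the setup described in this section is given below; the overall normalization of the action and the sign conventions are assumptions rather than the paper's exact expressions:

```latex
% Assumed metric ansatz (conformally flat, static warp factor)
ds^2 = e^{2A(z)}\left(\eta_{\mu\nu}\,dx^{\mu}dx^{\nu} - dz^2\right)
% Schematic form of the scalar action with the NMDC term named in the text
% (overall normalization assumed, not taken from the original)
S \sim \int d^5x \,\sqrt{g}\,\Big[\tfrac{1}{2}\,g^{MN}\partial_M\Phi\,\partial_N\Phi
      + \tfrac{\xi}{2}\,G^{MN}\partial_M\Phi\,\partial_N\Phi\Big]
% Separation ansatz and the 4D equation the reduction must reproduce
\Phi(x^M) = \phi(x^{\mu})\,\chi(z), \qquad
\partial^{\mu}\partial_{\mu}\phi + m^2\phi = 0
```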
Localization conditions
From metric (2), action (1) can be rewritten in reduced form. The dependence of the Einstein tensor $G^{MN}$ on the $z$ coordinate only enables the reduction of the 5-dimensional action (1) to the 4-dimensional scalar field action, subject to the localization conditions (12) and (13). It turns out that the coupling constant $\xi$ plays the role of shifting the integrands in the localization conditions. If the solution of the scalar field $\chi(z)$ in (9) satisfies the localization conditions in (12) and (13), then the field $\Phi$ is said to be localized on a brane characterized by the warp function $A(z)$.
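As a point of reference (an assumption, not the paper's equations): for a minimally coupled scalar ($\xi = 0$) on the conformally flat metric above, the standard normalization condition would read as follows, and the NMDC contribution shifts this integrand as stated in the text:

```latex
% Baseline with \xi = 0 (minimal coupling) on the conformally flat metric:
% the 4D kinetic term carries the weight e^{3A}, so the normalization condition is
N = \int dz\; e^{3A(z)}\,\chi^2(z) < \infty
% The NMDC terms proportional to \xi shift this integrand (and the corresponding
% mass integral) by contributions built from G^{MN}.
```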
Localization on Modified Randall-Sundrum
As stated earlier, the MRS model demonstrates better field localization compared to the RS model. Therefore, for the field localization test in thin branes, we opt for the MRS model [3].
Field equation
The MRS model is a thin brane model with the warp function $A(z) = -k|z|$, where $k$ is a constant. With this warp function, the scalar field equation with NMDC in equation (9) takes the form of equation (15). Since the MRS thin brane model is $Z_2$-symmetric, the integration limits in the localization condition equations can be chosen from zero to infinity [3], so that equations (12) and (13) can be written as (16) and (17). For a massless scalar, the mass term should be $m^2 = 0$.
Field localization
The localization of the field on the MRS brane for both massless and massive modes is provided as follows.
3.2.1. Massless mode. For the case of a massless scalar field, $m = 0$, the field equation (15) can be solved to give a general solution involving erfi, the imaginary error function. An examination is then conducted to determine whether this solution satisfies the normalization condition (16) and the mass equation (17). If the solution satisfies the conditions $N = 1$ and $m^2 = 0$, then the massless scalar field is localized on the MRS thin brane. The simplest case is $c_1 = 0$; the corresponding solution $\chi_0$ is in accordance with the mass equation (17) and gives $m^2 = 0$.
Massive mode
For the case of a scalar field with $m \neq 0$, the solution of equation (15) involves the Kummer confluent hypergeometric function, which is defined by a series built from the Pochhammer symbol $(a)_n = a(a+1)(a+2)\cdots(a+n-1)$ and is convergent for all finite $z$. The simplest solution corresponds to choosing $a = 0$, and thus $m^2 = -(6k + 4)$. The corresponding function is $\chi(z) = B e^{-2z}$, with the constant $B$ determined for $k > -\frac{4}{3}$.
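For reference, the standard series defining Kummer's confluent hypergeometric function, with the Pochhammer symbol as above, is:

```latex
M(a,b,z) = {}_{1}F_{1}(a;b;z)
  = \sum_{n=0}^{\infty}\frac{(a)_n}{(b)_n}\,\frac{z^n}{n!},
\qquad (a)_n = a(a+1)(a+2)\cdots(a+n-1), \quad (a)_0 = 1
```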
Localization on a scalar thick brane
To compare the localization of the scalar field with NMDC in thick and thin brane scenarios, we will investigate a thick brane generated from a bulk scalar.
Scalar thick brane
Consider a thick brane generated by a bulk scalar $\phi(x^M)$. The action is given in terms of the bulk scalar and its potential $V(\phi)$. The braneworld metric is defined by the line element $ds^2 = e^{2A(y)}\,\eta_{\mu\nu}dx^{\mu}dx^{\nu} - dy^2$, where $A(y)$ is the warp function, which depends on the extra coordinate $y$. In this case, the bulk scalar is assumed to be a function of $y$ only, $\phi = \phi(y)$. By varying the action with respect to the scalar and the metric tensor, the equations of motion for the bulk scalar field and the gravitational field can be derived.
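For the line element above with $\phi = \phi(y)$, the bulk scalar equation of motion takes the standard form below (a sketch; the normalization of the accompanying gravitational equations is not reproduced here):

```latex
% Scalar equation of motion in the background
% ds^2 = e^{2A(y)}\eta_{\mu\nu}dx^\mu dx^\nu - dy^2,
% assuming \phi = \phi(y) only (primes denote d/dy)
\phi'' + 4A'(y)\,\phi' = \frac{dV}{d\phi}
```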
added: 2024-04-11T15:10:47.864Z | created: 2024-03-01T00:00:00.000 | metadata:
{
"year": 2024,
"sha1": "0e13bce870da80e4bc67b987c2d5cea03911dad4",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/2734/1/012071/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b01d6ce9b15cf07378c9b9be2c881d7148611140",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}

id: 226302169 | source: pes2o/s2orc | version: v3-fos-license
Physiological and Biochemical Changes in Sugar Beet Seedlings to Confer Stress Adaptability under Drought Condition
The present study was conducted to examine the adaptability of 11 sugar beet cultivars grown under drought stress in a controlled glasshouse. The treatment was initiated on 30-day-old sugar beet plants; drought stress was imposed by withholding the water supply for 10 consecutive days, while control plants were watered as required. It was observed that drought stress markedly reduced plant growth, photosynthetic pigments, and photosynthetic quantum yield in all the cultivars, but comparatively better results were observed in the S1 (MAXIMELLA), S2 (HELENIKA), S6 (RECODDINA), S8 (SV2347), and S11 (BSRI Sugarbeet 2) cultivars. Besides, osmolytes such as proline, glycine betaine, total soluble carbohydrate, total soluble sugar, total polyphenol, total flavonoid, and DPPH free radical scavenging activity were remarkably increased under drought conditions in the MAXIMELLA, HELENIKA, TERRANOVA, GREGOIA, SV2348, and BSRI Sugar beet 2 cultivars. In contrast, the activities of enzymes such as superoxide dismutase (SOD), catalase (CAT), and peroxidase (POD) were significantly decreased in all cultivars, while the cultivars SV2347, BSRI Sugar beet 1 and BSRI Sugar beet 2 showed increased ascorbate peroxidase (APX) activity under drought conditions. In parallel, polyphenol oxidase (PPO) increased in all cultivars except HELENIKA. Overall, the cultivars HELENIKA, RECODDINA, GREGOIA, SV2347, SV2348, BSRI Sugar beet 1, and BSRI Sugar beet 2 were found best adapted to the given drought condition. These findings will help the further development of stress-adaptive sugar beet cultivars in breeding programs for drought-prone regions.
Introduction
To overcome the threats to agriculture and ecosystems, top priority has been given to evaluating plant responses under hostile environments, such as heat, drought, cold, toxic metals, and nutrient deficiency [1,2]. These harsh conditions are collectively referred to as abiotic stress, which is directly interlinked with plant growth, development, and overall crop productivity [3,4]. Among the various detrimental factors, drought is one of the most damaging and extensively impinges on crop growth, yield, and production. The yield and quality of a crop greatly depend on environmental conditions, including soil moisture and rainfall pattern [5]. It was estimated that drought caused a
Determination of Plant Height, Fresh Weight, and Dry Weight
On the 10th day after the onset of the drought treatment, five seedlings from each pot were randomly selected and their plant height and fresh weight were measured. Afterwards, the plants were placed in an oven at 60 °C for 48 h to determine the dry weight.
Chlorophyll (Chl) and Carotenoid (Car) Determination
For the determination of photosynthetic pigments, freeze-dried leaves (50 mg) were extracted in 10 mL of 80% acetone and kept at room temperature for 15 min. The extract was transferred into a tube and centrifuged at 4000 rpm for 10 min. The absorbance was read at 647, 663 and 470 nm using a spectrophotometer (UV-1800 240 V, Shimadzu Corporation, Kyoto, Japan). Chlorophyll a, chlorophyll b, total chlorophyll and carotenoid were determined according to the formulae proposed by Lichtenthaler (1987) [48] and expressed as mg·g−1 DW (a sketch of the calculation is given after this paragraph). The photosynthetic quantum yield (Fv/Fm) of photosystem II (PSII) was measured using the Fluor Pen FP 100 (Photon Systems Instruments, Czech Republic) by measuring the OJIP transient after dark adaptation for at least 20 min [49].
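Since the pigment equations themselves were lost during extraction, the minimal Python sketch below shows how the absorbance readings are typically converted to pigment contents. The coefficients are the widely cited Lichtenthaler (1987)-type equations for 80% acetone and should be verified against the original reference; the extract volume (10 mL) and sample mass (50 mg) follow the protocol above, and the absorbance values in the example are hypothetical.

```python
def pigments_mg_per_g_dw(a663, a647, a470, extract_ml=10.0, sample_mg=50.0):
    """Pigment contents (mg per g dry weight) from 80% acetone extract absorbances."""
    # Lichtenthaler (1987)-type equations for 80% acetone; concentrations in µg/mL
    chl_a = 12.25 * a663 - 2.79 * a647
    chl_b = 21.50 * a647 - 5.10 * a663
    car = (1000.0 * a470 - 1.82 * chl_a - 85.02 * chl_b) / 198.0
    to_mg_per_g = extract_ml / sample_mg  # µg/mL x mL / mg  ->  µg/mg == mg/g
    return {
        "chl_a": chl_a * to_mg_per_g,
        "chl_b": chl_b * to_mg_per_g,
        "total_chl": (chl_a + chl_b) * to_mg_per_g,
        "carotenoid": car * to_mg_per_g,
    }

# Hypothetical absorbance readings for one extract
print(pigments_mg_per_g_dw(a663=0.65, a647=0.28, a470=0.49))
```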
Leaf Relative Water Content
Leaves from three seedlings of each genotype from the control and water-stress treatments were collected and immediately weighed (FW). The leaves were then placed in deionised water and weighed again (g) after 48 h to determine the leaf turgid weight (TW). Finally, the dry weight (DW) was determined after drying in an oven at 60 °C for 48 h. The relative water content (%) of the leaves was calculated from FW, TW and DW [50], and the drought tolerance index given by Fernandez [51] and adjusted for sugar beet by Ober et al. (2004) [52] was calculated from plant dry matter (both expressions are sketched below), where PDMd is the plant dry matter under drought stress, PDMw is the plant dry matter of the control plants for each genotype, and PDMTd and PDMTw are the average values of plant dry matter for all genotypes under drought stress and control conditions, respectively.
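The two expressions referred to above are not reproduced in the source. The RWC formula below is the standard one from the cited methodology, while the DTI expression is an assumption based on the form of Fernandez's stress tolerance index with the variables defined in the text:

```latex
\mathrm{RWC}\,(\%) = \frac{FW - DW}{TW - DW} \times 100
\qquad
\mathrm{DTI} = \frac{PDM_{d} \times PDM_{w}}{PDMT_{d} \times PDMT_{w}}
```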
The Membrane Stability Index (MSI)
The membrane stability index (MSI) was measured following the methodology developed by Sairam et al. (2002) [53]. Leaf samples (0.1 g each) were cut into uniform disks and placed into test tubes with double-distilled water (10 mL) in two sets. One set was kept in a hot water bath (40 °C for 30 min) and its conductivity was recorded (C1) using a conductivity meter (HI 5321, HANNA Instruments Inc., Woonsocket, Rhode Island, USA). The second set was placed in a hot water bath (100 °C for 15 min) to record its conductivity (C2). The membrane stability index was calculated as MSI = [1 − (C1/C2)] × 100.
Determination of Lipid Peroxidation
Lipid peroxidation was determined by estimating the malondialdehyde (MDA) content in the leaves of sugar beet seedlings. MDA was measured following the method of Heath and Packer (1968) [54] with slight modification. The freeze-dried leaf sample (25 mg) was macerated in 0.1% trichloroacetic acid (5 mL). The homogenate was centrifuged at 10,000× g for 5 min. After that, 4 mL of trichloroacetic acid (TCA; 20%) containing thiobarbituric acid (0.5%) was added to a 1 mL aliquot of the supernatant in a test tube. The test tube was warmed in a water bath at 95 °C for 30 min and then cooled quickly on an ice bath. The resulting mixture was centrifuged again at 10,000× g for 15 min and the absorbance was read at 532 nm. To correct for unspecific turbidity, the absorbance at 600 nm was subtracted from that at 532 nm, and an extinction coefficient of 155 mM−1·cm−1 was used to calculate the concentration (see the sketch below).
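A minimal sketch of the Beer–Lambert conversion used here, assuming a 1 cm path length. The volumes and sample mass follow the protocol above, but whether the result is scaled back to the whole 5 mL extract and normalised per g dry weight is an assumption about how the data were expressed; the absorbance readings in the example are hypothetical.

```python
def mda_umol_per_g_dw(a532, a600, assay_vol_ml=5.0, aliquot_ml=1.0,
                      extract_vol_ml=5.0, sample_dw_g=0.025,
                      epsilon_mM_cm=155.0, path_cm=1.0):
    """Turbidity-corrected MDA content (µmol per g dry weight)."""
    conc_mM = (a532 - a600) / (epsilon_mM_cm * path_cm)   # mM == µmol/mL in the assay tube
    umol_in_assay = conc_mM * assay_vol_ml                # µmol in the 5 mL reaction mixture
    umol_in_extract = umol_in_assay * (extract_vol_ml / aliquot_ml)  # 1 mL aliquot -> whole extract
    return umol_in_extract / sample_dw_g

# Hypothetical readings for one sample
print(mda_umol_per_g_dw(a532=0.42, a600=0.05))
```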
Estimation of Hydrogen Peroxide (H2O2)
The estimation was done according to Singh et al. (2006) [55]. Freeze-dried leaves (25 mg) were extracted in an ice bath with 5 mL of 0.1% (w/v) TCA and centrifuged at 12,000× g for 15 min. To every 0.5 mL aliquot of the supernatant, 0.5 mL of 10 mM potassium phosphate buffer (pH 7.0) and 1 mL of 1 M KI were added, followed by incubation in darkness for 1 h. The absorbance was measured at 390 nm, with 0.1% TCA used as the blank in place of leaf extract. A standard H2O2 curve was prepared to calculate the concentration of H2O2 in the sample.
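Several of the assays in this section (H2O2, proline, total soluble carbohydrate, sucrose, total phenolics, flavonoids) rely on reading concentrations off a standard curve. The sketch below shows the generic linear-calibration step in Python; the standard concentrations and absorbance readings are hypothetical illustration values, not data from this study.

```python
import numpy as np

def fit_standard_curve(concentrations, absorbances):
    """Least-squares fit of A = m*C + b for a dilution series of standards."""
    slope, intercept = np.polyfit(concentrations, absorbances, 1)
    return slope, intercept

def concentration_from_absorbance(a_sample, slope, intercept):
    """Invert the calibration line to recover the sample concentration."""
    return (a_sample - intercept) / slope

# Hypothetical H2O2 standards (µmol/mL) and their A390 readings
standards = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
readings = np.array([0.02, 0.11, 0.20, 0.39, 0.77])
slope, intercept = fit_standard_curve(standards, readings)
print(concentration_from_absorbance(0.25, slope, intercept))
```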
Free Proline Content
Proline concentration was determined following the method of Bates et al. (1973) [56]. Approximately 25 mg of freeze-dried plant material was homogenised in 10 mL of sulfosalicylic acid (3%) and filtered through Whatman filter paper. Two millilitres of the filtrate were mixed with the same amount (2 mL) of acid-ninhydrin and glacial acetic acid. The mixture was heated for 1 h at 100 °C and cooled immediately on an ice bath. The reaction mixture was extracted with toluene (4 mL). The upper chromophore layer was aspirated and cooled to room temperature. The absorbance was read at 520 nm with a UV-Vis spectrophotometer (UV-1800 240 V, Shimadzu Corporation, Kyoto, Japan) and the concentration was calculated using an appropriate proline standard curve.
Glycine Betaine Content
A freeze-dried sample (25 mg) was extracted with methanol (80%) and sonicated for 2 h at room temperature. The solution was filtered through a syringe filter (0.45 µm, Millipore, Bedford, MA, USA). A stock solution of betaine was prepared in ultrapure water (10 mg/mL) and serially diluted with the mobile phase to prepare 10, 8, 6, 4, 2, and 1 mg·mL−1 standards. An improved HPLC method developed by Xu et al. (2018) [57] was used to determine GB in plant leaves. The HPLC system (CBM 20A, Shimadzu Co., Ltd., Kyoto, Japan) with a diode array detector (DAD) and a 5 µm C18 column (25 cm × 4.6 mm) was used. The mobile phase consisted of water and acetonitrile 10:90 (v/v) mixed with 0.2% phosphoric acid. The sample (2 µL) was injected at a flow rate of 0.5 mL·min−1, the oven temperature was 35 °C and the detection wavelength was 195 nm.
Total Soluble Carbohydrate (TSC) and Sucrose Content
Each sample (25 mg, freeze-dried) was homogenised in 5 mL of ethanol (95%). The insoluble fraction of the extract was washed with 5 mL of ethanol (70%), centrifuged at 3500 rpm for 10 min, and the supernatant was kept in a refrigerator (4 °C) for the determination of TSC and sucrose content. TSC content was determined according to Khoyerdi et al. (2016) [58]. Briefly, 0.1 mL of the aliquot was mixed with 1 mL of anthrone reagent (200 mg anthrone in 100 mL of 72% sulfuric acid). The mixture was heated at 100 °C for 10 min and then cooled. Total soluble carbohydrate was estimated using a standard curve, with detection at 625 nm, and the results were expressed as µg·g−1 dry weight.
For the sucrose content, 0.2 mL of the supernatant was mixed with 0.1 mL of KOH (30%) and heated at 100 °C for 10 min. After cooling to room temperature, 3 mL of anthrone reagent (150 mg anthrone in 100 mL of 70% sulfuric acid) was added. Ten minutes later, the samples were cooled and the absorbance was read at 620 nm. The sucrose concentration was calculated using a standard curve and the results were expressed as µg·g−1 dry weight [59].
Determination of Antioxidant Enzyme Activities
Leaf samples were collected and immersed immediately in liquid nitrogen, freeze-dried, and stored at −80 °C until use. A 25 mg sample was homogenised in 5 mL of 50 mM sodium phosphate buffer (pH 7.8) using a pre-chilled mortar and pestle, then centrifuged at 15,000× g for 20 min at 4 °C. The enzyme extract was stored at 4 °C for analysis [60]. The activity of superoxide dismutase (SOD; EC 1.15.1.1) was estimated by the method of Giannopolitis and Ries (1977) [61] with slight modification. A 2 mL reaction mixture contained 50 mM sodium phosphate buffer with 0.1 mM EDTA, 12 mM methionine, 75 µM nitro blue tetrazolium chloride (NBT), 50 mM Na2CO3, and 100 µL of enzyme extract; for the blank reaction mixture, 100 µL of buffer was used instead of enzyme extract. After that, 200 µL of 0.1 mM riboflavin was added to each mixture. The tubes were shaken and irradiated under fluorescent light (15 W) for 15 min. The absorbance of each solution was measured at 560 nm. One unit of SOD was defined as the quantity of enzyme that caused 50% inhibition of NBT reduction under the experimental conditions. The peroxidase (POD; EC 1.11.1.7) and catalase (CAT; EC 1.11.1.6) activities were determined according to Zhang (1992) [62]. The 3 mL reaction mixture for POD consisted of 100 µL of enzyme extract, 100 µL of guaiacol (1.5%, v/v), 100 µL of H2O2 (300 mM), and 2.7 mL of 25 mM potassium phosphate buffer with 2 mM EDTA (pH 7.0). The rate of increase in absorbance was measured spectrophotometrically at 470 nm (ε = 26.6 mM−1·cm−1). The assay mixture for CAT contained 100 µL of enzyme extract, 100 µL of H2O2 (300 mM), and 2.8 mL of 50 mM potassium phosphate buffer with 2 mM EDTA (pH 7.0). CAT activity was assayed by following the decline in absorbance at 240 nm (ε = 39.4 mM−1·cm−1).
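The POD and CAT assays convert an absorbance change per minute into an activity via the stated extinction coefficients, and one SOD unit is defined by 50% inhibition of NBT photoreduction. The sketch below illustrates both conversions; the reaction and extract volumes follow the protocol above, while the absorbance rates and the normalisation to dry weight are hypothetical choices for illustration rather than the authors' exact calculation.

```python
def rate_mM_per_min(delta_a_per_min, epsilon_mM_cm, path_cm=1.0):
    """Beer-Lambert conversion of an absorbance change rate to mM (µmol/mL) per minute."""
    return delta_a_per_min / (epsilon_mM_cm * path_cm)

def activity_umol_per_min_per_g(delta_a_per_min, epsilon_mM_cm, reaction_vol_ml,
                                extract_vol_ml, total_extract_vol_ml, sample_dw_g):
    umol_per_min = rate_mM_per_min(delta_a_per_min, epsilon_mM_cm) * reaction_vol_ml
    # scale from the assayed aliquot to the whole extract, then normalise by dry weight
    return umol_per_min * (total_extract_vol_ml / extract_vol_ml) / sample_dw_g

def sod_units(a_blank_560, a_sample_560):
    """One SOD unit = amount of enzyme giving 50% inhibition of NBT photoreduction."""
    inhibition_pct = 100.0 * (a_blank_560 - a_sample_560) / a_blank_560
    return inhibition_pct / 50.0

# Hypothetical rates: POD followed at 470 nm, CAT at 240 nm (25 mg sample in 5 mL buffer)
pod = activity_umol_per_min_per_g(0.050, 26.6, reaction_vol_ml=3.0,
                                  extract_vol_ml=0.1, total_extract_vol_ml=5.0, sample_dw_g=0.025)
cat = activity_umol_per_min_per_g(0.030, 39.4, reaction_vol_ml=3.0,
                                  extract_vol_ml=0.1, total_extract_vol_ml=5.0, sample_dw_g=0.025)
print(pod, cat, sod_units(0.80, 0.46))
```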
The polyphenol oxidase (PPO; EC 1.10.3.2) activity was determined following the method described by Tagele et al. (2019) [64]. The reaction mixture contained 2 mL of extract, 3 mL of 0.1 M sodium phosphate buffer (pH 7.0), and 1 mL of catechol (0.01 M). The mixture was incubated for 5 min at 28 °C and the absorbance was then read immediately against a substrate blank at 495 nm at one-minute intervals. The activity of PPO was expressed as enzyme activity per mg dry weight per minute [65].
Determination of Total Phenolic Content, Total Flavonoid Content, and Antioxidant Capacity
The freeze-dried sample (25 mg) was dissolved in 10 mL of ethanol (80%, v/v in water) and sonicated at 35 °C for 60 min. Afterwards, the extracts were filtered (Advantech 5B filter paper, Tokoyo Roshi Kaisha Ltd., Saitama, Japan) and kept in a refrigerator (4 °C) for further analysis.
The Folin–Ciocalteu method was used to estimate the total phenolic content (TPC) of the sample [66]. The reaction mixture contained 1 mL of sample, 200 µL of phenol reagent (1 N), and 1.8 mL of distilled water. The mixture was vortexed and, 3 min later, 400 µL of Na2CO3 (10%, v/v in water) was added. After that, 600 µL of distilled water was added to reach the final volume (4 mL), and the mixture was incubated for 1 h at room temperature. The absorbance was read at 725 nm, and the phenolic content was calculated from a standard calibration curve of gallic acid and expressed as µg·g−1 dry weight.
The total flavonoid content (TFC) was estimated following the method described by Adnan et al. (2019) with modifications [67]. In brief, 500 µL of the extract was mixed with 100 µL of Al(NO3)3 (10%, w/v) and 100 µL of potassium acetate (1 M) solution, and finally 3.3 mL of distilled water was added to adjust the volume to 4 mL. The reaction mixture was vortexed and incubated at room temperature for 40 min, and the absorbance was measured at 415 nm with a UV-Vis spectrophotometer. The total flavonoid content was expressed as mg/g quercetin equivalent on a dry weight basis.
DPPH (2,2-diphenyl-1-picrylhydrazyl) was used to assess the antioxidant capacity of sugar beet leaf extract following the method described by Braca et al. (2003) [68]. First, DPPH powder (5.914 mg) was dissolved in methanol (100 mL) to prepare a stock solution whose absorbance was kept between 1.1 and 1.3. Then, 1 mL of extract was mixed with 3 mL of DPPH solution, shaken vigorously and kept in the dark for 30 min at room temperature. The blank was prepared by mixing distilled water (1 mL) with the DPPH solution instead of extract. The absorbance was read at 517 nm with a UV-Vis spectrophotometer (UV-1800 240 V, Shimadzu Corporation, Kyoto, Japan). The scavenging capacity of the samples was calculated as a percentage (%) using the expression sketched below.
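The scavenging expression referred to above was dropped during extraction; the standard form of the DPPH calculation, with the blank and sample absorbances read at 517 nm as described, is:

```latex
\text{DPPH scavenging}\,(\%) = \frac{A_{\mathrm{blank}} - A_{\mathrm{sample}}}{A_{\mathrm{blank}}} \times 100
```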
Statistical Analysis
All results are expressed as mean ± SD (standard deviation) in tables and mean ± SE (standard error) in graphs, with three replications. All graphs were prepared with GraphPad Prism 5 (San Diego, CA 92108, USA). A two-factor analysis of variance followed by Duncan's multiple range test (DMRT) was performed with Statistix 10 (Tallahassee, FL 32312, USA). The correlation test was done with the SPSS statistical software package (Ver. 23.0, SPSS Inc., Chicago, IL, USA). Least significant differences (LSD) were calculated to compare the means of different treatments at the 5% probability level.
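For readers who prefer an open-source route, a minimal Python sketch of the two-factor (genotype × treatment) analysis of variance is shown below using statsmodels; Duncan's multiple range test is not available there, so Tukey's HSD is used as a stand-in for the post-hoc mean separation. The file and column names are hypothetical, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format table: one row per replicate with genotype, treatment and a trait value
df = pd.read_csv("seedling_traits.csv")

# Two-factor ANOVA with interaction (genotype x treatment)
model = ols("proline ~ C(genotype) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc mean separation within the drought treatment (Tukey HSD as a DMRT stand-in)
drought = df[df["treatment"] == "drought"]
print(pairwise_tukeyhsd(drought["proline"], drought["genotype"], alpha=0.05))
```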
Response to Drought Stress on Plant Growth Characteristics
Plant height, fresh weight, and dry weight were significantly reduced (p < 0.05) under drought compared to the control condition in most of the genotypes (Table 2). Across all genotypes, plant height ranged from 27.67 to 18.33 cm under control and from 21 to 16 cm under drought. The greatest plant height was recorded for S6 followed by S5, S7, S2, S3, S1, and S4 under control, and for S6 followed by S4, S3, S7, and S1 under drought. Plant fresh weight ranged from 5.94 to 3.38 g under control, with the highest values recorded for S2 followed by S9, S1, S5, and S8. In comparison, drought reduced fresh weight in all genotypes, ranging from 2.5 to 1.11 g, with higher values recorded for S10 followed by S4, S2, S11 and S9. Plant dry weight ranged from 0.37 to 0.17 g under control and from 0.27 to 0.13 g under drought. The highest dry weight was recorded for the genotype S2 followed by S9 and S5 under control, whereas under drought the highest dry weight was recorded for S2 followed by S4, S9 and S10.
In the present experiment, cessation of watering for 10 days imposed severe water stress on all sugar beet genotypes, resulting in a reduction of plant height, fresh weight and dry weight. Sugar beet is a well-known water-stress-sensitive crop [6,69,70]. It is well described that drought stress can suppress seedling growth, leading to an ultimate reduction in biomass production [71-73]. Moreover, photosynthesis rate, partitioning coefficients and the accumulation of photosynthetic products in different organs can be affected by prolonged drought [74]. It was also reported that drought has a negative impact on sugar beet height and dry weight [75]. Our present investigation also found a negative impact of drought stress on sugar beet growth and dry matter accumulation, especially in S1, S3, S5, and S7, which might be due to lower production of plant biomass.
Influence of Drought on Photosynthetic Pigments and Fluorescence
Chlorophyll a, chlorophyll b, total chlorophyll, and carotenoid significantly decreased (p < 0.05) under drought compared to the control condition (Figure 1). Under control conditions, the genotype S3 recorded significantly higher chlorophyll a (23.05 mg·g−1 dry weight), chlorophyll b (10.22 mg·g−1 dry weight), and total chlorophyll content (33.28 mg·g−1 dry weight), while the genotype S11 was notable for higher carotenoid content (5.25 mg·g−1 dry weight). However, under drought, the S10 genotype demonstrated remarkable tolerance, with the highest pigment contents: chlorophyll a (5.27 mg·g−1 dry weight), chlorophyll b (1.88 mg·g−1 dry weight), total chlorophyll (7.15 mg·g−1 dry weight), and carotenoid (1.48 mg·g−1 dry weight). Other cultivars with notably high pigment contents included S2, S5, S7, S10, and S11 under control conditions, whereas S1, S2, and S9 were noteworthy under drought. For the photosynthetic quantum yield, no significant differences were observed among the genotypes under control conditions, while it varied significantly (p < 0.05) under drought, where S11, S7, S6, S4, S2, and S5 performed better than the others.
It is well known that photosynthetic pigments decrease considerably under drought stress, which impairs plant growth and yield [76,77]. A low photosynthetic rate is a common issue under drought due to reduced green pigment synthesis [77]. In this study, chlorophyll a, chlorophyll b, total chlorophyll, and carotenoid contents significantly decreased in all the genotypes under drought conditions. However, the maximum reduction of pigments was observed in S3, S5, and S4, respectively. Importantly, the activities of chlorophyllase and peroxidase are enhanced under stress conditions and are involved in chlorophyll breakdown, which may have a crucial role in the reduction of chlorophyll pigments [78]. Apart from chlorophyll content and photosynthetic rate, drought also has a negative impact on different chlorophyll fluorescence parameters [79]. In line with this, quantum yield (Fv/Fm) in our study was significantly reduced in most of the genotypes (except S3, S7, and S11) under the 10-day drought condition. It has been reported that both the photochemical activity of photosystem II (PS II) and the electron requirement for photosynthesis can be influenced by drought stress, resulting in over-excitation and photo-inhibition damage to PS II reaction centres [80,81]. An earlier study found a significant decrease of Fv/Fm under drought compared with well-irrigated conditions [19]. The positive correlations (Table 3) among the photosynthetic pigments, Fv/Fm and growth parameters in the present study also comply with these previous findings.
Effect of Drought on Drought Tolerance Index, Relative Water Content (%), and Membrane Stability Index (%)
Drought tolerance index (DTI) varied among the sugar beet genotypes, ranging from 1.55 to 0.69 (Figure 2). The highest DTI was recorded for the S10 genotype followed by S4 and S11. Relative water content (RWC %) was higher in control (70.01 to 85.26) than in drought (43.47 to 57.88) (Figure 3). Despite low statistical dissimilarities among most of the genotypes, S10 followed by S8, S7, S4, and S9 in control, and S5 followed by S9, S11, S4, and S2 in drought, showed higher RWC. Besides, the membrane stability index (MSI %) was higher in control than in drought for all the cultivars, but significant (p < 0.05) differences were noted for S5, S10, and S11. Overall, the S9 genotype recorded the highest MSI (%) in both the control (92.24) and drought conditions (88.17) (Figure 3).
Evidence indicates that cultivars with higher DTI under drought have a better carbon assimilation rate than those with lower DTI, but they may grow slowly and their productivity may also be hampered [52]. In our study, however, the genotypes with higher DTI manifested relatively higher fresh and dry weight at the seedling stage. Strictly speaking, this phenomenon is treated as "drought delay" rather than "drought tolerance", but it may still be considered a good index for early selection of suitable sugar beet genotypes under drought stress.
RWC is one of the most important and reliable physiological indices for assessing whether a particular genotype is drought tolerant or not [69,82]. As shown in Figure 4, despite the reduction of RWC in all the genotypes under drought stress, the genotypes S2, S4, S5, S9, and S11 retained higher RWC with minimum loss (30%, 34%, 19.35%, 31.4%, and 25.4%, respectively) compared to the others. This higher RWC may be caused by stress-induced stomatal closure, resulting in reduced transpiration [83,84]. To maintain cell integrity and proper functioning, the stability of the cell membrane lipidome is an important index in a growing plant [85]. Like RWC, a higher membrane stability index (MSI) in tolerant genotypes may be due to reduced transpiration and efflux of solutes involved in the osmoregulation process of plant leaves [86]. In the present experiment, the genotypes S3, S6, and S9 demonstrated strong MSI under drought compared to control. Evidence showed that protecting the cell membrane helps to limit the reduction of sugar beet root yield under drought stress in different genotypes [87]. Thus, the genotypes with higher MSI under drought may be considered tolerant plants.
The estimated MDA level is commonly used to measure the extent of lipid peroxidation, "an exponent of cell membrane damage", under drought [88]. MDA is a valuable indicator of the degree of damage at the cellular level caused by abiotic stress [89]. In the present study, the lowest increments of MDA were recorded from S6 (1.98%) followed by S5 (20.24%), S9 (23.50%) and S3 (31.67%), and the lowest increments of H2O2 were recorded from S6 (−8.22%) followed by S8 (8.3%), S1 (8.7%), and S9 (13.15%). It has been stated that the oxidative burst induced by drought stimulates an increase in the MDA level, which has been regarded as an index of lipid peroxidation, membrane permeability, and ultimately cell injury [90]. A previous study reported that MDA was twofold higher than the control in sugar beet under drought stress [91], which is in agreement with our present experiment. In contrast, over-accumulation of H2O2 may cause cell damage, but depending on the strength of the ROS scavenging mechanisms it may act as a signalling molecule [92]. Similar experiments on Amaranthus tricolor [93] and fescue genotypes [29] showed that both MDA and H2O2 increased under drought and augmented progressively with increasing drought stress. Thus, a higher generation of MDA and H2O2 can aggravate cell membrane damage more in sensitive genotypes than in tolerant genotypes. In our study, MDA and H2O2 showed a positive correlation with each other (Table 3). On the other hand, MDA manifested a positive correlation with proline, TPC and TFC, and a negative correlation with growth, photosynthetic pigments, Fv/Fm, MSI, and RWC. In this connection, higher MSI was observed in S2, S6, and S9, and higher RWC in S6 and S9, together with lower MDA and H2O2 accumulation.
Effect of Drought on Osmotic Adjustment Molecules
The sugar beet genotypes varied with respect to individual osmolytes in their leaves (Table 4). Proline was significantly higher (p < 0.05) in all the genotypes under drought than under the control condition. A similar result was also observed for glycine betaine (GB), total soluble carbohydrate (TSC), and sucrose in most cases. It was observed that S4, S5, S7, and S11 produced more proline than the others (p < 0.05), while S1, S2, S4, S6, and S11 produced more GB, and S1, S5, and S10 produced both more TSC and sucrose under drought. The accumulation of proline is a useful indicator of stress in sugar beet; it can act as a signalling molecule modulating mitochondrial functions, influencing cell proliferation and the expression of specific stress-tolerance genes [94,95]. In our experiment, the maximum proline increment (%) was observed in S7 (>2600%) followed by S11 (>2150%), S5 (1800%) and S3 (>1200%). Proline is considered one of the most important organic solutes: it can play a key role in regulating ROS and hydroxyl radicals and can be an important component of cell wall proteins, which is why genotypes accumulating more proline are considered more resistant to drought stress [58,96].
Glycine betaine is also an important osmoprotectant produced in response to abiotic stress, contributing to metabolic pathways such as osmotic adjustment and osmoregulation and consequently reducing their negative impact [97,98]. Under osmotic stress, proline generally accumulates in the cytosol, where it plays an important role in osmotic adjustment in the cytoplasm, whereas glycine betaine accumulates mainly in the chloroplast and takes part in maintaining photosynthetic efficiency by adjusting and protecting the thylakoid membrane [99,100]. Glycine betaine can also mediate osmoregulation, scavenge free radicals, maintain membrane integrity, and protect macromolecules [101]. In our study, the maximum GB increment (%) was recorded in S6 (67%) followed by S7 (51%), S11 (46%), and S9 (42%).
In our study, we observed that ROS (MDA and H2O2) increased significantly under drought stress. Generally, the plant mounts a defensive mechanism to scavenge ROS, which in this regard involves producing higher levels of osmolytes. All the osmolytes maintained a positive correlation among themselves (Table 3). Besides, proline showed a positive correlation with MDA and H2O2. We also observed that GB, TSC, and sucrose were negatively correlated with photosynthetic pigments, Fv/Fm, and RWC. In a previous study, it was proposed that TSC and sucrose accumulation under stress depends on a lower photosynthesis rate and carbohydrate consumption in plants [103]. From the overall results, the genotypes S2, S6, and S11 produced more osmolytes together with higher photosynthetic traits, MSI, and RWC, with minimal cell damage from reactive oxygen species.
Effect of Drought Stress on Antioxidant Enzyme Activities
The activities of antioxidant enzymes varied with genotype and treatment (Figure 5). Superoxide dismutase (SOD) ranged from 2187.6 to 1888.1 (units·g−1 dry weight) in control and from 2237.5 to 1636.9 (units·g−1 dry weight) under drought. Higher SOD was recorded for the genotypes S10 followed by S11, S4, S8, S9, and S7 in control, and for S9 followed by S11, S10, S6, S8, and S4 under drought. SOD increased in S9 (8.38%) but decreased in all other genotypes after 10 days of withholding water. The smallest reductions of SOD under drought were 4.14%, 4.53%, 6.37%, and 9.57%, recorded for the genotypes S11, S5, S6, and S8, respectively.
Catalase (CAT) ranged from 1.79 to 0.57 (mM·g−1 dry weight) in control and from 1.13 to 0.28 (mM·g−1 dry weight) under drought. Higher CAT was recorded for S10 followed by S11, S1, S8, and S7 in control, and for S7 followed by S9 and S8 under drought. CAT activity was reduced in all genotypes under drought stress compared to the control condition; the smallest reductions were 9.89%, 12.83%, 21.52%, 24.75%, and 36.5%, recorded for the genotypes S5, S7, S9, S3, and S6, respectively.
Ascorbate peroxidase (APX) ranged from 6.68 to 3.63 (mM·g−1 dry weight) in control and from 6.12 to 1.39 (mM·g−1 dry weight) under drought. Higher APX was recorded for S9 followed by S6, S7, S3, S10, and S2 in control, and for S10 followed by S9, S8, S7, and S11 under drought. It was also observed that only the S8, S10, and S11 genotypes showed increased APX under drought compared to the control condition, while the opposite result was observed for the others. In the peroxidase (POD) analysis, S3 (1.11 mM·g−1 dry weight) followed by S6, S7, S9, S4, and S8 demonstrated higher enzyme activity in control, whereas S2 (0.43 mM·g−1 dry weight) followed by S3, S10, S8, and S6 did so under drought. In the case of polyphenol oxidase (PPO), higher PPO was observed in all the genotypes under drought compared to the control condition. PPO ranged from 1.5 to 0.34 (µM·g−1 dry weight) in control and from 3.89 to 0.59 (µM·g−1 dry weight) under drought. Among all the genotypes, higher PPO was recorded for S3 followed by S4 and S8 in control, and for S5 followed by S4, S8, S1, S3, and S6 under drought. The maximum augmentation under drought was 471.5%, followed by 191.2%, 189.2%, and 138% for S5, S6, S1, and S4, respectively.
Antioxidant enzyme defence activities are strongly and positively related to increased abiotic stress [104]. Under drought stress, plants with higher antioxidant enzyme activities are considered to have a better free-radical scavenging ability [105]. In this connection, cellular SOD acts as a first-line scavenger of ROS by catalysing the conversion of the superoxide radical (O2−•) into oxygen and H2O2 [106,107]. It has been stated that prolonged drought stress causes oxidative damage, as exhibited by a general decline in antioxidant enzyme activities and an increase in lipid peroxidation [108]. In our study, higher lipid peroxidation was found with lower antioxidant activities in most cases. Generally, higher SOD activity may result from excess superoxide radical generation [29], which can act as a signal for the induction of antioxidant enzymes, resulting in greater SOD induction [109]. This indicates that the genotypes S9 followed by S11, S10, S6 and S8, with comparatively higher SOD, might have a high capacity to catalyse superoxide radicals under drought. Besides, CAT, APX, and POD are also major enzymes involved in the detoxification of H2O2 (produced through the dismutation of O2−• in peroxisomes and chloroplasts) to H2O and O2 [29,110,111]. In the present experiment, the genotypes S6, S7, S8, S9, and S10 showed higher CAT, APX, and POD activity under drought stress. This may be attributed to the increased H2O2 and MDA accumulation stimulating enzyme activities. A similar response was also observed in previous research on cotton [112].
Usually, high oxidative stress provokes H2O2 generation under drought, which also stimulates high antioxidant enzyme activity. Subsequently, this enzyme activity may minimise the adverse effects of abiotic stresses either by acting as a bio-signal and/or by modulating resistance gene expression [113]. Moreover, it has been reported that PPO can protect against over-reduction of photosynthetic electron transport during environmental stress [114]. In a previous study, PPO activity was found to be significantly high in white clover after 7 days of drought treatment [115]. However, some conflicting results have also been reported, showing that suppression of PPO activity increased drought tolerance in tomato [116]. In our study, we observed a positive correlation of PPO with TPC (Table 3). In this connection, the genotypes S1, S3, S5, S6, and S8 performed better under drought. Besides, PPO can directly influence photosynthesis by acting as an oxygen buffer to facilitate reactive oxygen scavenging [114,117,118]. Although PPO consumes available oxygen to oxidise phenolic compounds to o-quinones and H2O, it can also regenerate o-quinones from o-diphenols by providing a source of reducing power in close association with the photosystems during photosynthesis [118-120].
In general, under stress conditions plant physiological activities and performance decline with a decrease in antioxidant enzyme activities and an increase in ROS, indicating an inability of plant cells to counter ROS. The activities and involvement of different antioxidant enzymes in ROS scavenging vary with factors including plant species and the severity and duration of the stress [108]. Sairam et al. (2000) observed variation in antioxidant enzyme activities such as SOD, CAT, and APX among different wheat genotypes [121]. In the present study, we observed lower activities of SOD in S5, CAT in S2, S4, S5, S6, and S8, APX in S4 and S5, POD in S4, S5 and S11, and PPO in S7 and S11, along with a higher accumulation of MDA. From the overall observations, the genotypes S6, S7, S8, S9, and S10 showed better stress responses through redox homeostasis using different antioxidant enzymes under drought.
Influence of Drought Stress on Antioxidant Activities
Total phenolic content (TPC) was augmented in all the genotypes under drought stress (p < 0.05) in the present experiment (Figure 6). TPC ranged from 170.37 to 32.59 (µg·g−1 as GAE) in control and from 690.37 to 340.74 (µg·g−1 as GAE) under drought. Higher TPC was recorded for S4 followed by S3, S10, S8, S9, and S11 in control, and for S3 followed by S5, S6, S1, S2, and S7 under drought. Total flavonoid content (TFC) was also influenced by drought stress in all the genotypes compared to control (p < 0.05) (Figure 6). TFC ranged from 3.92 to 3.21 (mg·g−1 as QE) in control and from 5.93 to 4.11 (mg·g−1 as QE) under drought. Higher TFC was recorded for S6 followed by S9, S7, and S8 in control, and for S3 followed by S1, S2, S6, S9, S8, and S4 under drought.
Generally, total antioxidant activity is the combined activity of both enzymatic and non-enzymatic antioxidants in plants. In a previous experiment, total antioxidant activity along with total polyphenol and total flavonoid content increased significantly under drought stress in Achillea species [124] and also in soybean [125]. Tolerant plants can protect themselves by augmenting the activity and quantity of antioxidant enzymes and molecules under stress (biotic or abiotic) conditions [93]. A plant with more antioxidants can improve its defensive responses under oxidative stress by scavenging free radicals rapidly [126]. Since drought can trigger an increase in ROS, a higher amount of antioxidants is required for acclimation, improving the tolerance level [127]. Antioxidant activity plays a vital role in plant defence mechanisms by keeping the balance between the synthesis and scavenging of free radicals [128]. In our study, the antioxidants showed a positive correlation with the osmolytes, whereas an inverse relationship was observed with the enzymes (Table 3). Overall, the genotypes S2, S3, S5, S6, S7, and S9 performed better with respect to TPC, TFC, and antioxidant capacity.
Conclusions
Under drought conditions, seedling growth and development, photosynthetic pigments and photosynthetic quantum yield (Fv/Fm) were markedly reduced compared to the control in all sugar beet genotypes. Significant reductions were also observed in the membrane stability index, leaf relative water content, and activities of antioxidant enzymes. Positive correlations were noticed between growth parameters and photosynthetic traits, and between physiological parameters and antioxidant enzyme activities. On the other hand, drought significantly increased reactive oxygen species and osmotic adjustment molecules in sugar beet seedlings. A remarkable increment in secondary metabolites and antioxidant capacity was also observed under drought. Although the enzymes were negatively correlated with the osmolytes and secondary metabolites, the reactive oxygen species maintained a positive correlation with them. Based on the results, higher activities of antioxidant enzymes, osmolytes, membrane stability index and relative water content together with lower reactive oxygen species accumulation were the most important factors for identifying drought-tolerant sugar beet seedlings. After analysing all the data, the cultivars S2 (HELENIKA), S6 (RECODDINA), S7 (GREGOIA), S8 (SV2347), S9 (SV2348), S10 (BSRI Sugarbeet 1), and S11 (BSRI Sugarbeet 2) were found to be best fitted to the drought condition. The findings of this experiment can support a better understanding of the mechanisms underlying sugar beet adaptability to drought and the improvement of drought-tolerant cultivars.
Agricultural shocks and drivers of livelihood precariousness across Indian rural communities
Spatial factors, such as environmental conditions, distance to natural resources and access to services, can influence the impacts of climate change on rural household livelihood activities. But neither the determinants of precarious livelihoods nor their spatial context has been well understood. This paper investigates the drivers of livelihood precariousness using a place-based approach. We identify five community types in rural regions of the Mahanadi Delta, India (exurban, agro-industrial, rainfed agriculture, irrigated agriculture and resource periphery) by clustering three types of community capitals (natural, social and physical). Based on this typology, we characterise the associations between precarious livelihood activities (unemployment or engagement in agricultural labour) with agricultural shocks and household capitals. Results demonstrate that the type of community influences the impact of agricultural shocks on livelihoods, as four of the five community types had increased likelihoods of precarious livelihoods being pursued when agricultural shocks increased. Our research demonstrates that the bundle of locally available community capitals influences households' coping strategies and livelihood opportunities. For example, higher levels of physical capital were associated with a lower likelihood of precarious livelihoods in agro-industrial communities but had no significant impact in the other four. Results also indicate that agricultural shocks drive livelihood precariousness (odds ratios between 1.03 and 1.07) for all but the best-connected communities, while access to household capitals tends to reduce it. Our results suggest that poverty alleviation programmes should include community typologies in their approach to provide place-specific interventions that would strengthen context-specific household capitals, thus reducing livelihood precariousness.
Introduction
Investigating the impacts of climate change on rural livelihoods and rural poverty is a continuing concern within environmental sciences and development studies. Repeated exposure to climatic stresses can undermine current and future coping capacity, which can lead to shifts from transient to chronic poverty (Ahmed, Diffenbaugh, & Hertel, 2009). However, the impacts of climate shocks on rural households depend on coping strategies and livelihood opportunities and cannot be explained by income-based approaches alone (Scoones, 2015). Livelihood approaches reveal that inequalities in access to livelihood capitals and in livelihood opportunities are spatially dependent and that they perpetuate poverty and undermine households' ability to cope with external shocks (de Sherbinin et al., 2008). Understanding the links between multiple stressors and livelihoods is central to achieving sustainable development pathways. However, insufficient work assesses the spatial distribution of livelihoods as a consequence of weather shocks. This paper aimed to bridge this gap by conducting a place-based analysis of the associations between livelihood strategies, agricultural shocks and livelihood capitals. The objective of this paper was to demonstrate how the type of rural community in which households are situated modifies the relationships between livelihood strategies, agricultural shocks and access to livelihood capitals.
Our research demonstrates that the bundle of locally available community capitals influences households' coping strategies and livelihood opportunities, thus influencing the drivers of rural poverty. We also argue that agricultural shocks drive livelihood precariousness, while access to capitals tends to reduce it. Our results suggest that poverty alleviation programmes should include community typologies in their approach to provide place-specific interventions that would strengthen context-specific household capitals, thus reducing livelihood precariousness.
Access to community capitals and household livelihood activities
A major theoretical issue that has dominated the field of livelihood studies for many years concerns the use of quantitative methods to characterise rural livelihoods and their dynamics (Jiao, Pouliot, & Walelign, 2017). However, most of these studies have considered that the effect of capitals on livelihood strategies is constant across space, without considering community-level effects (Bhandari, 2013). For example, access to a common agricultural area in the village can have a positive effect on livelihoods as it can create synergies between farmers to invest in agricultural equipment or irrigation infrastructure and it can increase their bargaining power (Agarwal, 2018). Community-level studies that paid particular attention to the spatial component of livelihoods led to descriptive results, such as the creation of indices (e.g. Singh & Hiremath, 2010). Although such indices are a useful mapping tool for policy makers, they fail to break down the different livelihood components and thus to characterise the place-based dimensions of rural poverty.
Overall, despite the recommendations from previous poverty studies (e.g. Palmer-Jones & Sen, 2006) and from livelihood studies (e.g. Angelsen et al., 2014) that have shown the importance of place-based approaches to rural poverty, there have been very few studies that have characterised the place-based sensitivity of livelihood strategies to livelihood capitals and external shocks. To the authors' best knowledge, the only study that looked at the associations between livelihood capitals and livelihood strategies using a place-based approach relied on an arbitrary categorisation of community types based on a total of six settlements (Fang, Fan, Shen, & Song, 2014). In their study, Fang et al. (2014) demonstrated that different settlement types affect how access to capitals influences households' livelihood strategies. However, the interpretation of the results was micro-localised and difficult to reproduce across a larger spatial extent. Our approach helps meet this challenge by identifying how the effects of key determinants of precarious livelihood strategies vary across a broad geographic extent.
Community capitals can be defined as public goods through which people are able to widen their access to resources and to economic opportunities (Gutierrez-Montes, Emery, & Fernandez-Baca, 2009;Lindenberg, 2002). They can include factors such as environmental conditions (e.g. elevation, rainfall, soil quality), distance to natural resources (e.g. forest, wetlands) and access to services (e.g. markets, hospitals, schools). These community capitals vary spatially and can shape differential vulnerabilities and influence the impacts of climate change on rural households (Berchoux, Watmough, Johnson, & Hutton, 2019). These spatial factors form a group of interacting services that cooccur in time and space, creating bundles of community capitals (Turner, Odgaard, Bøcher, Dalgaard, & Svenning, 2014;Yang et al., 2015).
Characterising community capitals using typologies
Typologies are useful tools for policy-makers, planners and other practitioners to improve place-specific understandings of rural heterogeneity and rural change. The heterogeneity of rural areas can be categorised into community typologies that reflect similar combinations of natural resources (i.e., water, cropland, forest), social services (including education, health, governance), and productive infrastructures (Alessa, Kliskey, & Altaweel, 2009;Van Eetvelde & Antrop, 2009). These different combinations of assets reflect different underlying types of communities (van der Zanden, Levers, Verburg, & Kuemmerle, 2016), which influence the drivers of livelihood strategies and rural poverty, and therefore lead to different responses to multiple stressors.
In this paper, we investigate the drivers of livelihood precariousness using a place-based approach. We create a typology of rural communities (defined here as villages derived from national population and housing censuses) by clustering characteristic variables of community capitals, focused on natural resources, social services and productive infrastructures. Based on this typology, we characterise the associations between precarious livelihoods, agricultural shocks and household capitals for each community type. This approach helps to elucidate how the type of community can determine the impact that agricultural shocks can have on household livelihood activities and in particular on the likelihood that households pursue precarious activities.
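The abstract reports these associations as odds ratios; although the model specification is not given in this excerpt, a minimal sketch of how such community-specific odds ratios could be estimated with a logistic regression is shown below. The dataset, variable names and the link between households and community types are hypothetical illustration choices, not the paper's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical household table: precarious (0/1), number of agricultural shocks,
# household capital indices and the community type assigned by the clustering step
households = pd.read_csv("households.csv")

for community_type, group in households.groupby("community_type"):
    model = smf.logit(
        "precarious ~ agri_shocks + natural_cap + physical_cap + financial_cap + human_cap + social_cap",
        data=group,
    ).fit(disp=False)
    odds_ratios = np.exp(model.params)  # odds ratio per unit increase in each predictor
    print(community_type, odds_ratios.round(2).to_dict())
```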
Weather shocks and impacts on livelihood activities
Despite the Government of India's efforts to enhance livelihood security in rural areas, only 53.2% of the working age rural population is able to get work throughout the year (Indian Ministry of Labour and Employment, 2015). While the majority of the employed population depends on agriculture, forestry and the fishing sector for their livelihoods, around 78% of households do not earn any wages. Weather shocks affect agricultural production through frequent floods, droughts, and storm surges with subsequent impacts on rural livelihoods (Birthal, Roy, & Negi, 2015). Households put in place coping strategies to adjust to the loss of wages following a crop failure.
Coping strategies are defined as temporary adjustments made by households in their livelihood systems in response to shocks, which can be external (natural hazards, movements in markets, changes in policy environment) or internal (health problems, changes in household composition, social rituals) (Scoones, 2015). Three different types of coping mechanisms can be highlighted based on their reversibility: (i) reversible mechanisms (temporary activity shift, disposal of protective assets); (ii) erosive mechanisms (disposal of productive assets such as land); and (iii) destitution (unemployment, distress migration). Reversible mechanisms can be observed when some members take wage labour or migrate to find paid work (temporary activity shift) or when using self-insurance mechanisms, such as selling protective assets. Protective assets include any asset held as a store of value and that can be sold if the household faces an external shock, including cash, jewellery or livestock (Chena et al., 2013). Erosive mechanisms are usually implemented in response to heavy shocks or persisting stresses and undermine households' productive capacity. In the case of disposal of agricultural land, this leads to a long-term livelihood change, as households shift from cultivation to other activities, for example, agricultural labour. The last category of coping mechanisms comes as a last resort for the household and indicates its destitution, with household members becoming unemployed or choosing permanent out-migration.
In India, although the percentage of farmers with land access rights declined from 72 to 45% between 1951 and 2011, the percentage of landless agricultural labourers increased from 28 to 55% (Indian Ministry of Labour and Employment, 2015). This considerable rise in landless agricultural labourers is an indication that many households have put in place erosive mechanisms to cope with the impacts of agricultural shocks (Williams et al., 2016). However, the effects of such shocks vary widely across a broad geographic extent, with livelihood opportunities (and, thus, the ability to put in place reversible coping mechanisms) being conditioned by access to community capitals.
Conceptual framework
The approach taken in this paper ( Fig. 1) is based on the household livelihood strategy framework (Nielsen, Rayamajhi, Uberhuaga, Meilby, & Smith-Hall, 2013) and shows the different components used to understand how access to community capitals can influence the associations between precarious livelihoods, agricultural shocks and livelihood capitals.
A livelihood system combines the capabilities, assets and activities of one household to achieve its means of living (Scoones, 2015). Assets are resources that people have access to, which can be private goods (household capitals) or public goods (community capitals). Household assets are grouped into a set of five livelihood capitals: natural (private natural resource stocks), physical (productive assets), financial (liquidities and protective assets), human (capabilities and capacities of the households) and social (networks and kinships). Regarding community capitals, three categories can be differentiated (Flora, Flora, & Gasteyer, 2015): common-pool natural resources, social services (access to social amenities) and productive infrastructures (road networks, markets and industries). Based on their access to community and household assets, households put in place a range of livelihood activities to achieve their basic needs. Livelihood opportunities depend on the household and community capitals that households have access to. The combination of capitals and activities leads to livelihood outcomes, which are reinvested in the system if the household does not face any shocks. In the case of a shock (internal or external), households can implement three types of coping strategies depending on their assets, as well as on public assets from the community they live in: reversible mechanisms (activity shift, sale of protective assets), disposal of productive assets (sale of land) and destitution (unemployment, distress migration).
Methods
Most of the people who live in deltas rely on agriculture to ensure their food security and to generate economic incomes. However, deltas are exposed to multiple stressors arising from both terrestrial processes (such as run-off from rivers) and marine processes (such as storms, waves or sea level from oceans), which are a threat to rural populations relying on agriculture for their livelihoods. Moreover, deltas are one of the ecosystems most exposed to climate change (Ericson, Vorosmarty, Dingman, Ward, & Meybeck, 2006). As a consequence, rural households located in deltas that rely on agriculture are amongst the most vulnerable to climate change, as their main livelihood is highly vulnerable to the projected increase in the frequency of floods and droughts. Despite the ecological services they perform, the economic value they generate and the fact that they are home to around 500 million people (Ericson et al., 2006), little attention has been paid to deltas as a socio-ecological unit. Therefore, we selected the Mahanadi Delta located within the state of Odisha in East India as the study site.
Study site
The Mahanadi Delta in Odisha, India, is a populous delta where livelihood opportunities are affected negatively by environmental stressors, such as floods, droughts, cyclones, erosion and storm surges. The combination of environmental stresses has resulted in a loss of income for rural households who are dependent on agriculture for their livelihoods (68% of the delta's population), due to major crop failures (Duncan, Tompkins, Dash, & Tripathy, 2017). As a consequence of their inability to cope with the impacts of environmental shocks, many households have to sell off their agricultural land. Their members often become unemployed with limited livelihood opportunities to move out of poverty, either migrating or becoming agricultural labourers (Sahu & Dash, 2011).
This research focused on an area covering the five districts of the Mahanadi Delta in Odisha, eastern India: Bhadrak, Jagatsinghpur, Kendrapara, Khorda and Puri (Fig. 2). Given that communities are statutory units in India with a definite boundary and separate land records, we used the administrative boundaries provided by the General Registrar and Census Commissioner (2011) for our analysis. In total, 9829 rural communities were considered.
Local perceptions of the drivers of livelihood strategies
Fieldwork was conducted between February and May 2016 to identify indicators that stakeholders, experts and local residents perceive as representative and robust to examine the effects of community and household capitals on their livelihoods. A Rapid Rural Appraisal (RRA) was used for data collection to highlight the perceptions and opinions of rural dwellers (Supplementary Material S1). This method enables local people to share their knowledge and discuss their situation using their own terms (Chambers, 1994). In total, ten communities were selected by using stratified random sampling based on their access to community capitals and on the main livelihood activities conducted by households (Fig. 2).
A variety of additional activities were used to cross-check the data acquired from the RRA. First, a focus group was held to identify general information about the village and the evolution of its infrastructure. The focus group also investigated differences in livelihood assets and strategies within the community, which were combined into a series of categories by the participants. The proportion of households falling into each livelihood category was subsequently quantified by the participants. The last activity was a participatory photography workshop using the photovoice methodology (Wang & Burris, 1997) on the theme of "Key assets to achieve your livelihoods"; a theme broad enough to let the participants themselves highlight the different roles that community and household capitals play in their decision to pursue an economic activity.
Developing community typologies
Every community has common-pool resources (e.g. roads, markets, forests, lakes) that can provide services for rural dwellers' livelihoods. For example, a road can provide farmers with alternative outlets for their agricultural production, while a forest can give the opportunity for households to collect and sell non-timber forest products. Such common-pool resources appear together repeatedly in the landscape, creating bundles of community capitals (Bański & Mazur, 2016). We used cluster analysis on 18 variables derived from open source data to generate community typologies. Indicators were selected based on participatory rural appraisals conducted in ten communities located across the Mahanadi delta. Participants argued that remoteness plays an important role in their access to community capitals, and thus in their choice of livelihood strategy. As a consequence, we used travel time to key amenities rather than amenity availability to reflect community remoteness in the cluster analysis. Euclidean distances are inappropriate for this purpose as the Mahanadi delta has several water bodies, which act as boundaries to travel. We thus estimated accessibility to key amenities by creating a least accumulative cost surface to estimate time (in hours) to travel from each community to the nearest amenity of interest, using the R package "gdistance" (van Etten, 2017).
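As an illustration only, the accumulated-cost step could be sketched as follows in R, assuming a travel-speed raster of the kind assembled in the next subsection; all object and file names are hypothetical.

# Accumulated travel time from every cell to the nearest amenity (illustrative sketch)
library(raster)
library(gdistance)

speed <- raster("speed_m_per_min.tif")                              # hypothetical raster: travel speed in metres per minute
tr <- transition(speed, transitionFunction = mean, directions = 8)  # conductance between neighbouring cells
tr <- geoCorrection(tr, type = "c")                                 # divide by inter-cell distance so costs accumulate as time

amenity_xy <- matrix(c(85.9, 20.3), ncol = 2)                       # hypothetical coordinates of one amenity (lon, lat)
acc <- accCost(tr, amenity_xy)                                      # accumulated travel time (minutes) to that amenity

communities <- read.csv("community_centroids.csv")                  # hypothetical centroid table with lon/lat columns
communities$tt_hours <- extract(acc, communities[, c("lon", "lat")]) / 60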
Estimating accessibility
We downloaded road data from OpenStreetMap, using the R package "osmdata" (Padgham, Rudis, Lovelace, & Salmon, 2017). Roads were converted to a raster with 30 m spatial resolution and merged with 30 m spatial resolution land cover data from 2010 GlobeLand30 (Chen et al., 2014). Based on a previous study in India (Watmough, Atkinson, Saikia, & Hutton, 2016), average speeds were assigned to each land cover class (Table 1), assuming travel by foot across land covers and footpaths and travel by motorised vehicles on other forms of road and track.
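The road extraction and speed assignment might be sketched as below; the bounding box, land cover codes and speed values are illustrative assumptions, not the study's exact parameters.

# Build a travel-speed raster from OpenStreetMap roads and land cover (illustrative sketch)
library(osmdata)
library(sf)
library(raster)

roads <- opq(bbox = c(85.0, 19.5, 87.0, 21.0)) |>      # hypothetical bounding box around the delta
  add_osm_feature(key = "highway") |>
  osmdata_sf()

landcover <- raster("globeland30_2010_30m.tif")        # hypothetical 30 m land cover raster
roads_r <- rasterize(as(roads$osm_lines, "Spatial"), landcover, field = 1)

# Convert per-class speeds (Table 1, min per km) into metres per minute and assign them to
# land cover codes; codes 10/20/60 and the speeds used here are placeholders only
speed <- reclassify(landcover, rcl = cbind(c(10, 20, 60), 1000 / c(12, 20, 15)))
speed[!is.na(roads_r)] <- 1000 / 1.5                   # faster motorised travel on road cells
writeRaster(speed, "speed_m_per_min.tif", overwrite = TRUE)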
Variables for community typologies
In total, 18 variables were chosen to be included in the cluster analysis (Table 2). These were selected to represent the diversity of drivers that were highlighted by participants during the participatory rural appraisals. They can be grouped into three categories: natural resources, social services and productive infrastructure. Locations of the main amenities were extracted from the Village Amenities tables of the 2011 Indian National Population and Household Census and from OpenStreetMap data. We used 2010 MODIS data at 250 m spatial resolution to obtain a land cover dataset detailing the different types of cropping systems found in the delta (Gumma et al., 2014). Travel costs to the nearest amenity of interest were computed from the least accumulative cost surface dataset mentioned earlier. In situations where multiple indicators for the same service were found (type of education or health facility), we favoured the indicator that exhibited the greatest variation among communities. Based on the results from the RRA (S1), travel times to six types of amenities were chosen to reflect access to social services: public services and polling stations, secondary schools, banks and credit cooperatives, hospitals, worship temples and recreational areas, such as sports centres and playgrounds. Three amenities were used to reflect access to productive infrastructures: travel time to communication services, agricultural outlets and industrial areas. Availability of public transport was also chosen to represent productive infrastructures, as it can be used by smallholders to access agricultural markets. Eight variables were chosen to reflect the natural resources from which most households derived their incomes, seven of which were derived from satellite sensor data and one from OpenStreetMap data (Table 2). The variables reflecting the natural resources included: the area of forest, the area of cropland available per household, the type of agricultural system (based on the proportion of each cropping pattern within the community) and the travel time from each community to the nearest aquaculture ponds. These variables were chosen since the number of growing seasons and the availability of irrigation systems can be a determinant for livelihood outcomes.
Fig. 2. The study area covers all five districts (Bhadrak, Jagatsinghpur, Kendrapara, Khorda and Puri) located within the Mahanadi Delta. Rapid Rural Appraisals were conducted in ten communities (C01-C10).
Table 1. Estimated travel speeds for different land cover types (based on Watmough et al., 2016). Pedestrian movement was assumed where no roads exist, travel by motor vehicles was assumed where roads are available and travel by boat was assumed on waterways. Speeds were then used to generate travel cost to nearest amenities. Columns: Class; Estimated speed (min km−1).
Clustering method
We used a model-based clustering method to avoid the limitations of deterministic procedures, such as hierarchical and k-means clustering algorithms. As demonstrated by Raykov, Boukouvalas, Baig, and Little (2016), these two popular clustering methods rely on restrictive assumptions that lead to severe limitations in accuracy and interpretability. In particular, these algorithms cluster data points based on geometric closeness to the cluster centroid, without taking cluster densities into account. Therefore, they implicitly assume that each cluster must contain the same number of data points, which is a biased assumption for building community typologies. On the contrary, model-based clustering considers that the data come from a distribution that is a mixture of two or more clusters, and assigns to each data point a probability of belonging to each cluster (Fraley & Raftery, 2002). Each cluster is modelled by a Gaussian distribution and is characterised by its mean vector, covariance matrix and the probability of each point belonging to this cluster. These parameters are estimated using the Expectation-Maximisation algorithm, which is initialised by hierarchical model-based clustering. The covariance matrix determines the geometric shape of each cluster, the latter being centred at the mean, around which there is an increased density of points. The model with the greatest integrated likelihood, or Bayesian Information Criterion (BIC), is considered the best-fitting model. We used the R package "mclust" (Fraley, Raftery, Murphy, & Scrucca, 2012) to implement the model-based clustering algorithm, which estimated the best finite mixture model according to different covariance structures and different numbers of clusters.
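A minimal version of this clustering step, using the mclust defaults and hypothetical object names, is sketched below.

# Model-based clustering of the 18 community indicators (illustrative sketch)
library(mclust)

indicators <- read.csv("community_indicators.csv")   # hypothetical table: one row per community, 18 indicator columns
indicators_scaled <- scale(indicators)               # put travel times, areas and proportions on a common scale

fit <- Mclust(indicators_scaled, G = 2:9)            # Gaussian mixtures over several covariance structures and cluster numbers
summary(fit)                                         # best model (covariance structure, number of clusters) selected by BIC
plot(fit, what = "BIC")                              # BIC profile across candidate models

community_type <- fit$classification                 # hard assignment of each community to a cluster
membership_prob <- fit$z                             # posterior probability of belonging to each cluster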
Quantifying livelihood capitals
The quantification of livelihood capitals was based on register data at the village level from a subset of the 2011 Indian National Population and Household Census. The variables selected to quantify livelihood capitals are proxies for the participants' views, regarding the capitals that they perceived as determinant for their livelihood opportunities (Table 3 and Supplementary Material S2). Given the high correlation amongst the selected variables, a principal component analysis was used to circumvent the problem of multicollinearity and to derive a single factor score for each capital. Multiple factors were not combined as this would have distorted what the component represents and would have made interpretation difficult (McKenzie, 2005). After ensuring that the factor loadings corresponded with the conceptualisation of each capital based on the RRA activity, the first factor score was selected to represent each capital. Low loading factors (|λ| ≤ 0.2) were kept as excluding them would have distorted the views from RRA participants. Moreover, McKenzie (2005) showed that low loading factors should be included when measuring inequality, especially when the variable is a known (or perceived) determinant of poverty.
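Deriving a single score per capital from its census proxies could be sketched as below, assuming a data frame whose columns have already been grouped by capital; the variable names are illustrative.

# One factor score per livelihood capital via principal component analysis (illustrative sketch)
human_vars <- census[, c("literacy_rate", "working_age_share", "dependency_ratio")]  # hypothetical proxies

pca <- prcomp(human_vars, center = TRUE, scale. = TRUE)
round(pca$rotation[, 1], 2)            # check first-component loadings against the RRA-based conceptualisation
census$human_capital <- pca$x[, 1]     # first factor score used as the human-capital index
summary(pca)                           # share of variance captured by the first component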
Quantifying precarious livelihoods
The census indicators comprise population enumeration including cultivators, agricultural labourers, entrepreneurs and unemployed. Detailed examinations of poverty structures in rural India show that households engaged in agricultural labour or the unemployed are the poorest of the rural poor (Ravi & Engler, 2015). We, thus, defined precarious livelihoods as the proportion of working-age people (15-59) who are engaged in agricultural labour or unemployed, as defined in the Census of India. The census defines a person as an agricultural labourer if they work on another person's land for wages in money or kind or share, with no right of lease or contract on the land on which they work, while a person is defined as a non-worker if they do not engage in any economically productive activity for more than 6 months per year.
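Constructed from the census counts, the response used in the later regressions is a simple village-level proportion; a toy computation with hypothetical column names is shown below.

# Share of working-age people engaged in precarious livelihoods (illustrative sketch)
census$working_age  <- census$pop_15_59                                          # hypothetical census columns
census$precarious_n <- census$agricultural_labourers + census$non_workers_15_59
census$precarious_p <- census$precarious_n / census$working_age                  # proportion; the denominator is kept for the binomial model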
Proxying climate shocks
Extreme events, such as heat waves, droughts, floods and cyclones are becoming more frequent and both their frequency and intensity are likely to increase in the future (Baker et al., 2018). Extreme weather events can result in agricultural losses, which can lead to shifts from transient to chronic poverty (Krishnan & Dercon, 2000). Decreases in agricultural production can be identified from remotely sensed satellite sensor data in the form of abrupt changes in vegetation greening (Liu, ...). We therefore used the Wide Dynamic Range Vegetation Index, computed as WDRVI = (α ρNIR − ρred) / (α ρNIR + ρred), where ρNIR is the near-infrared reflectance, ρred the red reflectance and α a weighting parameter selected by the user. A weighting of α = 0.20 was used, as it has been found to be the optimum value to monitor phenological processes when using computer-intensive algorithms (Testa, Soudani, Boschetti, & Borgogno Mondino, 2018). We used band 1 (ρred, 620-670 nm) and band 2 (ρNIR, 841-876 nm) from MODIS.
Table 2. Variables used for community typologies. Indicators for social services are based on travel times to the closest service found by using a least accumulative cost surface dataset, computed from road networks and land cover data. Natural services are derived from agricultural-relevant metrics from land cover data.
Table 3. List of variables used for the quantification of household livelihood capitals. The associated factor loading retrieved from the PCA represents the weight for each variable in the construction of their associated livelihood capital. The justification for the inclusion of each variable is based on participants' views from participatory rural appraisals.
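As a sketch, the index defined above is a one-line computation on the two reflectance bands; the α value follows the text, while the raster file names are assumptions.

# Wide Dynamic Range Vegetation Index from red and near-infrared reflectance (illustrative sketch)
library(raster)

wdrvi <- function(nir, red, alpha = 0.20) {
  (alpha * nir - red) / (alpha * nir + red)
}

red <- raster("modis_band1_red.tif")         # hypothetical red reflectance raster
nir <- raster("modis_band2_nir.tif")         # hypothetical near-infrared reflectance raster
wdrvi_layer <- overlay(nir, red, fun = wdrvi)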
Detecting breaks in crop production
The Breaks For Additive Season and Trend (BFAST) technique was used to detect changes in time-series of WDRVI to identify crop failures. This method was used to determine the number, type, and timing of trend and seasonal changes within historical time-series (Verbesselt, Hyndman, Newnham, & Culvenor, 2010). It estimates the dates, the magnitude and direction of change without setting a threshold or defining a reference period, and thus can be used to characterise changes occurring in seasonal and trend components. The general decomposition model fits a piecewise linear trend Tt and a seasonal model St, and is of the form: Yt = Tt + St + et, with t = 1, …, n. The ordinary least squares (OLS) residuals-based MOving SUM (MOSUM) test is used to detect whether one or more breakpoints are occurring. If breaks are occurring, the number and position of breaks are determined by minimising the residual sum of squares and by minimising an information criterion, such as the Bayesian Information Criterion (BIC). The intercept and slope of consecutive linear models are used to characterise the magnitude and direction of abrupt changes in the trend. Fig. 3 presents the outputs from the break detection in the WDRVI time-series, where only negative breaks were considered. The algorithm was run on pixels that were used for agricultural production throughout 2000-2011. Pixels that changed land use during that period (i.e. specifically if they were converted to urban) were not included to prevent the detection of false breaks due to land use changes. Thanks to the linear correlation that exists between WDRVI and crop yield over the Mahanadi Delta (Duncan et al., 2015), breaks in WDRVI time-series represent abrupt changes in crop production, and negative breaks are thus considered to represent crop failures. Moreover, Watts and Laffan (2014) showed that breaks in vegetation indices detected by BFAST corresponded with the timing of known floods in the study region for between 68% and 79% of breaks detected across the sample pixels. Taken together, these studies indicate that the BFAST method is able to detect abrupt changes in vegetation greening caused by climatic hazards. We thus consider negative breaks in the WDRVI time-series as proxies of weather shocks that had a negative impact on crop production.
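Applied to a single pixel, the break detection could be sketched as below; the series length, frequency and minimum segment size are illustrative settings rather than those used in the study.

# Detect abrupt changes in a per-pixel WDRVI time-series with BFAST (illustrative sketch)
library(bfast)

# wdrvi_vals: hypothetical numeric vector of 16-day WDRVI composites for one pixel, 2000-2011
wdrvi_ts <- ts(wdrvi_vals, start = c(2000, 1), frequency = 23)

fit <- bfast(wdrvi_ts, h = 0.15, season = "harmonic", max.iter = 2)
plot(fit)                                  # decomposition into trend, seasonal and remainder components with fitted breaks

no_trend_break <- fit$nobp$Vt              # TRUE when the OLS-MOSUM test finds no break in the trend component
if (!no_trend_break) {
  last <- fit$output[[length(fit$output)]]
  break_times <- time(wdrvi_ts)[last$bp.Vt$breakpoints]   # timing of the detected trend breaks
  # The direction of change across each break (negative = loss of greenness) is then derived from
  # the fitted trend segments; only negative breaks would be retained as crop-failure proxies.
}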
Statistical modelling
Multilevel regression techniques were used to control for contextual factors, by allowing the model to vary at the Tehsil level. To characterise how community typologies affect the associations between livelihood capitals, crop failures and precarious livelihood activities, we fitted separate models for each one of the village types identified through model-based clustering. Access to livelihood capitals is mediated by overarching systems of power, demographic pressure and the local political context, which have been shown to be among the main causal determinants of poverty in India (Lerche, 2009). To avoid inferring any definite causal relationship, we controlled for these mediating factors by using the respective proxy variables: proportion of scheduled castes and tribes, population density and District. For each community type, a two-level random-intercept logistic model was fitted using the R package "R2MLwiN" (Charlton, ...), of the form logit(πij) = β0 + Σk βk xkij + uj, where πij refers to the probability of being engaged in precarious livelihoods (unemployment and agricultural labour) for village i in Tehsil j, and uj is the Tehsil-level random intercept. Each level-1 unit (village) had an associated denominator ni, which was the total number of people of working age (every person aged 15-59). Two sets of explanatory variables were considered: livelihood capitals and the number of breaks in the WDRVI time-series, as a proxy of the number of crop failures. As the response variable is binomial, we used a linearisation method to transform the discrete (binomial) response model into a continuous response model (Goldstein, 2003), with a maximum likelihood approximation method to estimate the unknown parameters of interest.
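An equivalent two-level random-intercept binomial model can be sketched with lme4, shown here as a stand-in for the R2MLwiN/MLwiN call used by the authors (lme4 uses a Laplace approximation rather than the quasi-likelihood linearisation described above); the data frame and column names are assumptions.

# Two-level random-intercept binomial model for one community type (illustrative stand-in)
library(lme4)

d <- subset(villages, community_type == "rainfed_agricultural")    # hypothetical analysis table

m <- glmer(
  cbind(precarious_n, working_age - precarious_n) ~                # proportion with its denominator
    human + financial + physical + natural + social +              # capital factor scores
    n_shocks + sc_st_share + pop_density + district +              # shock count and control variables
    (1 | tehsil),                                                   # Tehsil-level random intercept
  family = binomial(link = "logit"), data = d
)

exp(fixef(m))                                          # coefficients expressed as odds ratios
exp(confint(m, parm = "beta_", method = "Wald"))       # approximate 95% confidence intervals on the odds-ratio scale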
Typology of rural communities
The clustering of 18 variables in three domains (natural resources, social services and productive infrastructures) resulted in five distinct clusters being identified. These formed the basis for five community typologies that could be used to investigate the place-based relationships between livelihood precariousness, agricultural shocks and household capitals. The five community types were spatially clustered in the landscape (Fig. 4) and each was named based on the type of services available to the community and on the dominant land cover class.
Exurban communities
This cluster reveals a clear geographic profile, with a total of 2,245 communities (total population of 1,928,232) located in the near vicinity of main roads. It reveals characteristics that are ascribed to communities well connected to urban and peri-urban areas, defined as exurbs. This cluster is characterised by a high availability of public transport and close proximity to markets (19 min average travel time) and industries (1 h 29 min average travel time). Communities also have high levels of access to social services such as education (10 min average travel time) and health facilities (45 min average travel time) and are located near local official institutions (average travel time of 8 min). The main agricultural systems are a combination of freshwater aquaculture, irrigated rice crop grown once (22.8% of cropland area on average), twice (19.0% of cropland area on average) and thrice (22.1% of cropland area on average) per year. However, although the total area of land devoted to agriculture is lower than for other clusters (average of 91 ha), the average farm size is 1.07 ha per cultivator.
Rainfed agricultural communities
This cluster represents a total of 2,563 agricultural communities (total population of 2,511,527) mainly located in the south-western and north-eastern parts of the delta. These communities are characterised by low access to social services (average travel times to secondary schools, hospitals and public offices are 56 min, 2 h 14 min and 32 min respectively) and to productive infrastructures, such as markets (average travel time of 1 h 21 min) and industries (average travel time of 3 h 03 min). The main agricultural system is single rice crop (38.3% of cropland area on average) or single mixed crops (14.6% of cropland area on average) grown in rainfed conditions. The total cultivated area in each community is 101 ha on average, with an average farm size of 1.00 ha per cultivator.
Agro-industrial communities
The 2,174 communities (total population of 2,122,436) of this cluster are located in the northern part of the delta and to the south of the Bhubaneswar-Cuttack axis. They have high access to worship amenities and relatively high access to other social services (average travel times to secondary schools, hospitals and public offices are 51 min, 2 h 05 min and 30 min respectively), combined with greater proximity to industrial areas (1 h 51 min average travel time) and markets (1 h 14 min average travel time) compared to the other agricultural communities. The main agricultural system is irrigated rice crop grown once (36.5% of cropland area on average) or twice (20.0% of cropland area on average) per year. The communities within this cluster have an average cultivated area of 97 ha, for an average of 0.96 ha per cultivator.
Irrigated agricultural communities
The 2,438 agricultural communities (total population of 2,422,307) of this cluster are located in the central part of the delta and near Chilika Lake. They share similar characteristics with agro-industrial communities in terms of their access to social services (average travel times to secondary schools, hospitals and public offices are 53 min, 2 h 04 min and 30 min respectively) but with lower access to productive infrastructures (average travel times to markets and industries are 1 h 17 min and 2 h 57 min respectively). However, unlike rainfed communities, the irrigated agricultural communities are characterised by a high share of irrigated rice crop grown twice (24.1% of cropland area on average) and thrice (23.5% of cropland area on average) per year. The area of cropland is on average 98 ha in total and 0.99 ha per cultivator in the cluster.
Resource periphery communities
The 409 resource periphery communities (total population of 362,797) are located in remote areas, far from market towns and urban centres. These communities are characterised by very low access to social services (average travel times to secondary schools, hospitals and public offices are 1 h 06 min, 3 h 18 min and 41 min respectively) and to productive infrastructures (average travel time to industries: 4 h 10 min; to markets: 1 h 40 min). Due to the lack of irrigation infrastructures, the main agricultural systems are single mixed crops (34.7% of cropland area on average) and single rice crop grown in rainfed conditions (26.5% of cropland area on average). The communities within this cluster are characterised by the dominance of natural resources, such as forests (area of 0.92 ha on average), proximity to aquaculture ponds and a large cropland area, with an average of 1.11 ha per cultivator for a total cultivated area of 112 ha on average.
Fig. 4. Community typologies as identified by model-based clustering. Types of communities were identified based on their access to natural resources, social services and productive infrastructures. Five clusters were identified: communities with great access to productive infrastructures and social services (exurbs), production communities with low agricultural infrastructures (rainfed agricultural) and with irrigation infrastructures (irrigated agricultural), production communities with industries (agro-industrial) and remote communities with high natural resources (resource periphery).
Statistical modelling
Odds ratios were used to quantify the relationships between the response variable (proportion of people engaged in precarious livelihood activities) and the explanatory variables (livelihood capitals and number of agricultural shocks), controlling for district and population density effects, but also for the effects of class and caste (Table 4). An odds ratio above one indicates that, as the explanatory variable increases, the odds of being engaged in precarious livelihood activities also increase. When explanatory variables are categorical (e.g. "District"), odds are interpreted by comparing the variable level to a reference (district "Bhadrak"). For example, in rainfed agricultural communities, an odds ratio of 0.74 for Jagatsinghpur can be interpreted as: the likelihood of being engaged in precarious livelihood activities for communities located in Jagatsinghpur is 26% lower compared to communities located in Bhadrak.
The models show that it is more likely that households will engage in precarious livelihood activities when the number of agricultural shocks increases, except for exurban communities (OR = 0.99, 95% CI = 0.97, 1.01) and agro-industrial communities (OR = 1.02, 95% CI = 1.00, 1.05) where associations between shocks and livelihoods are not significant. Fig. 5 shows the predicted probability of being engaged in precarious livelihood activities depending on the number of agricultural shocks faced by the community during the ten previous years. From these data, we can see that the probability of precarious livelihoods strongly increases with the number of agricultural shocks in agricultural-based communities with low access to productive infrastructures, such as rainfed agricultural, irrigated agricultural and resource periphery. However, we found that the number of agricultural shocks does not have a significant effect on precarious livelihoods in exurban and agro-industrial communities.
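Predicted probabilities of the kind plotted in Fig. 5 could be generated from a fitted model as sketched below, continuing the hypothetical lme4 stand-in from the methods and holding the remaining covariates at representative values.

# Predicted probability of precarious livelihoods against the number of shocks (illustrative sketch)
nd <- data.frame(
  n_shocks = 0:6,                                                    # plausible range of shocks over ten years
  human = 0, financial = 0, physical = 0, natural = 0, social = 0,   # capital scores held at their means
  sc_st_share = mean(d$sc_st_share), pop_density = mean(d$pop_density),
  district = "Bhadrak"                                               # reference district
)
nd$p_precarious <- predict(m, newdata = nd, type = "response", re.form = NA)  # population-level predictions
plot(nd$n_shocks, nd$p_precarious, type = "b",
     xlab = "Number of agricultural shocks (2000-2011)",
     ylab = "Predicted probability of precarious livelihood")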
Results for the control variables indicated that there was a significant and negative effect of population density on the odds of being engaged in precarious livelihoods only in exurban communities: an increase in population density is associated with a decrease in the odds of being an agricultural labourer or unemployed (OR = 0.94, 95% CI = 0.90, 0.97). It is also apparent that belonging to disadvantaged groups (scheduled castes and tribes) increases the odds of being engaged in precarious livelihoods only in exurban (OR = 1.13, 95% CI = 1.04, 1.22) and agro-industrial communities (OR = 1.14, 95% CI = 1.06, 1.22). Households located in Puri and Jagatsinghpur have lower odds of engaging in precarious activities, compared to those located in Bhadrak, especially in rainfed agricultural communities (OR Puri = 0.78, 95% CI = 0.72, 0.83; OR Jagatsinghpur = 0.74, 95% CI = 0.69, 0.80).
Table 4. Results of the logistic models for each community type. The dependent variable represents the odds of engaging in precarious activities (agricultural labourers and unemployed) for people who are within the legal working age. The explanatory variables represent the capitals that households have access to and the number of agricultural shocks that the community faced between 2000 and 2011.
Discussion
This paper presents a geographical perspective on livelihood systems and on the impact of agricultural shocks on livelihood activities. The results suggest that multiple agricultural shocks increase the probability of households engaging in precarious livelihood activities in most rural communities, except for those located near main roads and with higher levels of productive infrastructure. Another important finding is that access to human capital and to financial capital is associated with more stable livelihoods, such as cultivation, self-employment and salaried employment. Self-employment, defined as household industry work in the census of India, is considered here as a more desirable livelihood compared to agricultural labour and joblessness, as it is associated with greater returns to capital and skills (Falco & Haywood, 2016). Our findings also indicate that access to physical capital significantly reduces the likelihood of being engaged in agricultural labour or being unemployed only in agricultural communities with irrigation infrastructures and located near industrial areas (agro-industrial landscapes). We found that an increase in natural capital is associated with a decrease in the likelihood of having a precarious livelihood in rainfed agricultural and agro-industrial landscapes. Importantly, our findings show that this trend is reversed in exurban communities.
Climate change impacts on livelihoods and poverty
Our findings showed no significant associations between agricultural shocks and the likelihood of engaging in precarious livelihood activities in exurban communities, and only weak associations in agro-industrial communities, when compared to more remote clusters. These results suggest that investments in infrastructure, such as connections to market centres and social services, provide households with greater flexibility and agency to cope with climate shocks. Overall, an increase in the variance of climate will probably lead to greater variability in agricultural productivity and to a greater number of crop failures (Challinor et al., 2014). The findings from this study support the idea that such changes are likely to drive households into precarious livelihood strategies, thus exacerbating rural poverty, especially in remote rural agricultural communities. Although the probability of being an agricultural labourer or unemployed in resource periphery communities is lower than in other clusters in the absence of shocks, we found that it is the cluster where households' livelihoods are the most likely to be negatively impacted by crop failure. Arguably, the most important result from this research is that rural typologies should be included in the design of climate change assessments to take into account the differential vulnerability of communities to crop failure.
Spatial dimensions of livelihoods
Rural poverty is spatially distributed, with factors such as institutional linkages and access to and control over resources affecting livelihood opportunities. Previous studies showed that the sensitivity of on-farm and off-farm livelihood strategies to livelihood capitals exhibits different patterns depending on the type of settlement considered (Fang et al., 2014). Our findings demonstrate that the probability of engaging in precarious livelihoods depends on households' access to capitals, and that the type of community in which households live modifies this association. For example, financial capital has a weaker effect on livelihoods in remote communities than in exurban communities, natural capital is associated with more precariousness in exurban communities but reduces the likelihood of precarity in single rice crop agricultural systems, and physical capital is a determinant only in agro-industrial communities.
Fig. 5 (caption fragment). The range of values on the x-axes is constrained to the number of shocks that are likely to be observed in the area over 10 years. The envelope includes the mean plus or minus one standard error.
In remote communities that did not benefit from the technological packages of the green revolution, such as rainfed agricultural and agroindustrial communities, farmers have kept traditional single rice cropping systems (Gumma et al., 2014). We found that in these communities, access to natural capital has a positive effect on stable livelihood strategies, notably because of the increased probability to engage in cultivation. This finding was also reported by van den Berg (2010) who showed that lack of access to natural resources in rural areas can drive households into more precarious on-farm activities such as daily-wage labour. However, access to natural capital is associated with precarious livelihoods in exurban communities. A similar finding is likely to be related to the connection of such communities to urban centres: proximity to market increases the pressure on farm holdings, encourages smallholders' land dispossession and thus leads to the cornering of natural resources by a few large-scale farmers (Manjunatha, Anik, Speelman, & Nuppenau, 2013). Previous research has demonstrated that a larger average of cropland per household was associated with fewer large-scale farms owning the natural resources (Levien, 2013). This hypothesis is further supported by the descriptive statistics presented earlier, showing that the area of cropland per cultivator in exurban communities is amongst the largest of all clusters, despite having the lowest average of cropland area. It shows that smallholders in exurban communities are more likely to be driven out of agriculture than in the other types of rural communities.
The findings show that access to human and financial capitals has a positive effect on the probability of engaging in stable livelihood strategies. Access to financial services and workforce availability enable households to lower the barriers to engaging in more remunerative on-farm activities, but also to engage in off-farm livelihood strategies (Jansen, Pender, Damon, Wielemaker, & Schipper, 2006). Our typology of rural communities shows that the effect of financial capital is weaker in remote communities with rainfed agricultural systems (rainfed agricultural, resource periphery). These differences can be explained in part by the physical lack of access to job opportunities in remote communities: although access to financial services helps households to lower the barriers to engaging in stable activities, the lack of livelihood opportunities reduces the positive impact of access to financial capital (Zenteno, Zuidema, de Jong, & Boot, 2013). We found that access to physical capital reduces the probability of engaging in precarious activities, but only in agro-industrial communities. This result highlights the link between physical capital and off-farm strategies: private means of transportation enable households to reach more livelihood opportunities.
The overarching influence of social and cultural norms on lowest castes' access to decent employment depends on the proximity to productive infrastructures and markets. People who belong to disadvantaged groups are more likely to be engaged in precarious labour in exurban and agro-industrial communities, confirming that people with higher caste status have better endowments required for absorption in the nonfarm market (Chandrasekhar & Mitra, 2018). On the contrary, it appears that the effect of caste is not the most significant driver to explain the causes of precarious livelihoods in more remote communities. This surprising result can be explained by the prevalence of culturally homogeneous communities in Odisha's remote areas, thus reducing its influence on access to land ownership and assets (Lakerveld, Lele, Crane, Fortuin, & Springate-Baginski, 2015).
Policy relevance
The above findings suggest several courses of action for public policies in India to reduce rural outmigration and rural poverty. The National Rural Livelihood Mission (NRLM), which aims to enable the poorest households to access self-employment and skilled wage employment opportunities, seems well targeted to help reduce livelihood precarity. This research supports the scheme's main focus of strengthening human (skill building), financial (access to credit) and physical (access to markets) capitals for the poorest households, through their participation in strong and sustainable grassroots institutions (Self-Help Groups). However, important changes would need to be made to ensure that it plays a role in long-term poverty alleviation. We would argue that the NRLM should include community typologies in its approach to provide an opportunity for place-specific activities to strengthen the livelihoods of the rural poor. In exurban communities, such activities could focus on human capital (skills) to ensure that households are able to adapt their livelihoods to off-farm strategies. In agro-industrial communities, schemes focusing on strengthening household physical capital, especially through the ownership of private means of transportation, would enable households to diversify their livelihood opportunities. In remote agricultural communities, in addition to activities strengthening human and financial capitals, the NRLM should work hand in hand with the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) to ensure work stability throughout the year, especially during the lean season. Finally, agricultural tenancy laws should be implemented and enforced to regulate rents and offer security of tenure to tenants. Interventions in property rights would prevent land grabbing by agro-industries, increase smallholders' bargaining power and secure their productive assets, thus reducing livelihood precarity.
Overall, the findings demonstrate that conducting place-based analyses of the determinants of livelihood strategies is necessary to design effective policies for poverty alleviation and rural development. Community typologies based on selected key indicators are an effective way to implement such analyses in order to highlight the different drivers of precarity within the landscape.
Conclusion
This research makes several contributions to the current literature. First, it defined a set of indicators that adequately capture the multidimensional and multi-attribute nature of rural communities and household capitals. Two different methods were used to obtain the final results: a deductive binning of indicators into different categories based on participatory rural appraisals, followed by an inductive indicator method constructed via model-based clustering for community typologies and via principal components analysis for household capitals.
Second, the community typologies show a distinct spatial pattern, highlighting profiles of rural communities with similar bundles of capitals. It was demonstrated that the type of rural community in which households live modifies the associations between livelihood capitals and precarious livelihoods. Access to physical capital reduces the likelihood of being engaged in precarious activities only in communities located near industrial areas, where people can find alternative livelihood opportunities. In rural communities, access to natural capital has a positive effect on stable livelihood strategies, notably because of the increased probability of engaging in cultivation, while it has a negative effect in exurban communities, showing that smallholders in these places are more likely to be driven out of agriculture than in the other agricultural communities. Our results also demonstrate that lack of access to financial services and workforce unavailability prevent households from taking advantage of local job opportunities that would enable them to engage in more sustainable livelihoods. Finally, people who belong to disadvantaged groups are more likely to be engaged in precarious labour in exurban and agro-industrial communities, confirming that people with higher caste status have better endowments required for absorption in the off-farm market and for land ownership where agricultural land is scarce.
Third, the paper demonstrated quantitatively that the type of rural community in which households live modifies households' opportunities for coping strategies. The findings show that recurrent weather shocks are a driver of precarious livelihoods, except in exurban communities, where the number of crop failures faced by the community does not influence livelihood opportunities. This result is explained by the availability of off-farm livelihood opportunities in well-connected communities: households can engage in off-farm daily wage activities as a coping strategy, preventing them from having to sell their productive assets and thus from becoming agricultural labourers or unemployed.
A final caveat is that this paper did not address the persistent difficulty in quantifying livelihood dynamics in the long-term, including questions of asset trade-off and migration. Nevertheless, such a quantitative analysis has a wider application for rural development policies seeking to make livelihoods more resilient to climate hazards and to reduce poverty. Identifying typologies of rural communities is useful for assessing needs and targeting intervention or mitigation programs. It provides an approach for policy makers to take into account the contextual factors that drive livelihood precarity and thus to target more strategically anti-poverty programmes to maximise their effect rather than equally distributing them across all places. Interventions should focus on strengthening human and physical capitals in well-connected communities to ensure that households are able to diversify their livelihoods to off-farm strategies, while they should be targeted on providing financial capital and complementary livelihood opportunities during lean season in remote areas.
Diagnostic value of cavernous sinus swelling and extrusion sign in cavernous sinus hemangioma
Background and purpose: To examine the diagnostic value of imaging features in cavernous sinus hemangioma (CSH). Materials and Methods: The clinical and imaging data of patients with pathologically confirmed CSH, cavernous sinus meningioma, trigeminal schwannoma and pituitary adenoma invading the cavernous sinus between May 2017 and May 2022 were retrospectively analyzed. The cases were divided into the CSH and non-CSH groups to summarize the magnetic resonance imaging (MRI) characteristics of CSH. Univariate χ2 analysis was performed to assess five indexes, including signal intensity on T2WI, homogeneity of T2WI, enhancement of enhanced T1, enhanced T1 with dural tail sign, and cavernous sinus swelling and extrusion sign. Results: There were significant differences in four features, including hyperintensity on T2WI, homogeneity of T2WI, T1-enhanced without meningeal tail sign, and cavernous sinus swelling and extrusion sign between the CSH and non-CSH groups, with cavernous sinus swelling and extrusion sign showing the most pronounced distinction, with a sensitivity of 100%, a specificity of 93.02%, and an accuracy of 94.23%. The four features could be jointly used as diagnostic criteria, with a sensitivity of 94.44%, a specificity of 100.00%, and an accuracy of 99.04%. Conclusion: Cavernous sinus swelling and extrusion sign is a reliable imaging index for CSH diagnosis. Homogenous hyperintensity or marked hyperintensity on T2WI, enhanced T1 without dural tail sign, and cavernous sinus swelling and extrusion sign could be jointly used as diagnostic criteria, which may improve the accuracy of CSH diagnosis.
Introduction
Cavernous sinus hemangioma (CSH) is the commonest extracerebral cavernous hemangioma [1]. However, CSH is a rare benign mass, accounting for only 2-3% of benign tumors or tumor-like lesions in the cavernous sinus area [2]; it is often misdiagnosed as meningioma, schwannoma, or invasive pituitary tumor and treated surgically. Currently, the total resection rate for CSH microsurgery is relatively low, with heavy intraoperative bleeding, an elevated disability rate, various complications and a certain mortality.
Gamma Knife, the preferred treatment option for CSH, safely and effectively controls CSH development and significantly relieves symptoms. Accurate preoperative diagnosis of CSH, allowing Gamma Knife treatment to be performed, could avoid the serious complications and risks associated with misdiagnosis as other tumors. Thus, accurate preoperative CSH diagnosis is of vital importance. In this study, the MRI features of 104 cases with cavernous sinus lesions were retrospectively examined to improve the accuracy of preoperative diagnosis of CSH.
Material and methods
The clinical, imaging and surgical data of 104 patients who underwent cavernous sinus lesion resection in the Department of Neurosurgery of Tianjin Huanhu Hospital from May 2017 to May 2022 were retrospectively analyzed. Inclusion criteria were: (1) a lesion invading the cavernous sinus; (2) pathological diagnosis of CSH, meningioma, schwannoma or pituitary adenoma; (3) complete brain MRI scan and MRI enhancement data. Exclusion criteria were: (1) no lesion invading the cavernous sinus; (2) no pathological diagnosis of CSH, meningioma, schwannoma or pituitary adenoma; (3) incomplete brain MRI data.
A GE 450w 3.0 T scanner was used for preoperative and postoperative imaging. The conventional head MRI protocol was performed: plain scanning of conventional axial, sagittal and coronal T1WI and T2WI sequences, plus a sagittal T1WI sequence; enhanced scanning was performed on the axial, sagittal and coronal planes. Gd-DTPA, a paramagnetic contrast agent, was used at 0.15 mmol/kg. The MRI parameters were as follows. Axial T2WI: slice thickness 5.0 mm, FOV read 240 mm, FOV phase 80.0%, NEX 1, TR 4060 ms, TE 93.0 ms. Enhanced axial T1: slice thickness 5.0 mm, FOV read 230 mm, FOV phase 100.0%, NEX 1, TR 300 ms, TE 2.46 ms. Based on a review of the literature, a list of potentially related imaging features was made, including signal intensity on T2WI, homogeneity of T2WI (defined as no significant signal difference within the tumor, or a visible signal difference involving less than 10% of the tumor), enhancement on enhanced T1, enhanced T1 with dural tail sign, and cavernous sinus swelling and extrusion sign. Data-extraction tables were designed. The data were extracted and recorded separately by two neuroimaging experts with more than 20 years of work experience, blinded to the pathological diagnosis. In case of disagreement, the two experts discussed to reach a consensus.
Signal intensity on T2WI: according to the Elster scoring criteria, 4 and 5 points were defined as mild and marked hyperintensity, respectively (Table 1).
Cavernous sinus swelling and extrusion sign was defined as the anterior boundary filling the superior orbital fissure in axial images, with the posterior boundary not exceeding Meckel's cave.
Statistical analysis: IBM SPSS Statistics 26.0 was used in this study. All cases were divided into the CSH group and the non-CSH group. The chi-square test was performed to compare categorical variables between the two independent samples. P < 0.05 was considered statistically significant.
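For transparency, the group comparison for a single feature can be reproduced outside SPSS; the sketch below uses R (not the software used in this study) on the 2 × 2 counts implied by the reported sensitivity and specificity of the cavernous sinus swelling and extrusion sign.

# Chi-square test of one imaging feature between CSH and non-CSH groups (illustrative sketch)
tab <- matrix(c(18,  6,
                 0, 80),
              nrow = 2, byrow = TRUE,
              dimnames = list(feature = c("sign present", "sign absent"),
                              group   = c("CSH", "non-CSH")))

chisq.test(tab)     # Pearson chi-square (with Yates continuity correction for a 2 x 2 table)
fisher.test(tab)    # exact alternative, often preferred when expected counts are small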
Results
In total, 104 cases of cavernous sinus lesions were enrolled from May 2017 to May 2022, all of whom underwent craniotomy microsurgery or endoscopic transnasal tumorectomy. In terms of pathological results (Table 2), there were 18 and 86 cases in the CSH and non-CSH groups, respectively. The non-CSH group included 22 cases of cavernous sinus meningioma, 6 of trigeminal schwannoma and 58 of pituitary adenoma invading the cavernous sinus. The commonest type in the CSH group was Type B CSH. In the non-CSH group, the commonest cavernous sinus meningiomas and pituitary adenomas invading the cavernous sinus were meningothelial meningiomas and gonadotroph cell adenomas, respectively.
In terms of demographic features, there were 6 males and 12 females in the CSH group, aged from 42 to 74 years with a mean age of 57.56 years. There were 22 cases (male to female ratio of 5:19) with cavernous sinus meningioma, aged 29-76 years with a mean age of 55.23 years. There were 6 cases (male to female ratio of 1:2) with trigeminal schwannoma, aged 29-80 years with a mean age of 57.5 years. There were 58 cases (male to female ratio of 18:11) with pituitary adenoma invading the cavernous sinus, aged from 28 to 81 years with a mean age of 56.44 years.
Univariate χ2 analysis of the five MRI features showed that there were significant differences in four features, including hyperintensity on T2WI, homogeneity of T2WI, enhanced T1 without meningeal tail sign, and cavernous sinus swelling and extrusion sign, between the CSH and non-CSH groups (Table 3). The cavernous sinus swelling and extrusion sign can be used alone as a diagnostic tool, with a sensitivity of 100%, a specificity of 93.02%, and an accuracy of 94.23%. Marked hyperintensity on T2WI, homogenous hyperintensity or marked hyperintensity on T2WI, enhanced T1 without dural tail sign, and cavernous sinus swelling and extrusion sign could be jointly used as diagnostic criteria, with a sensitivity of 50.00%, a specificity of 100.00%, and an accuracy of 91.35%. Homogenous hyperintensity or marked hyperintensity on T2WI, enhanced T1 without dural tail sign, and cavernous sinus swelling and extrusion sign could be jointly used as diagnostic criteria, with a sensitivity of 94.44%, a specificity of 100.00%, and an accuracy of 99.04% (Table 4).
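The reported diagnostic metrics follow directly from the corresponding 2 × 2 confusion table; a short sketch in R, using counts consistent with the swelling and extrusion sign results above, is given below.

# Sensitivity, specificity and accuracy from a 2 x 2 confusion table (illustrative sketch)
tp <- 18    # CSH cases showing the sign
fn <-  0    # CSH cases without the sign
fp <-  6    # non-CSH cases showing the sign
tn <- 80    # non-CSH cases without the sign

sensitivity <- tp / (tp + fn)                       # 1.000
specificity <- tn / (tn + fp)                       # 0.930
accuracy    <- (tp + tn) / (tp + fn + fp + tn)      # 0.942
ppv         <- tp / (tp + fp)                       # positive predictive value
npv         <- tn / (tn + fn)                       # negative predictive value
round(c(sensitivity = sensitivity, specificity = specificity, accuracy = accuracy, ppv = ppv, npv = npv), 4)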
Discussion
CSH has a low incidence, accounting for 2%-3% of benign tumors or tumor-like lesions in the cavernous sinus region [2-4] and 0.4%-2% of all intracranial cavernous hemangiomas [5]. According to previous studies, age at CSH onset is 40-60 years, and the incidence of CSH is higher in women than in men. In the CSH group, there were 6 males and 12 females, aged 42-74 years, with a mean age of 57.56 years, which corroborates previous studies. Both CSH and cavernous vascular malformations (CMs) consist of blood sinuses surrounded by a single layer of endothelial cells, but their biological features, as well as their treatment options, are completely different. CM is characterized by lesion enlargement through a hemorrhage-absorption-re-bleeding cycle, as a vascular malformation with high endothelial proliferation, migration and tubular activity, and is generally removed by craniotomy. However, CSH rarely causes bleeding and generally develops slowly. CSH is currently more likely to be considered a benign neoplastic lesion, and treatment options encompass surgery and Gamma Knife.
Goel et al. retrospectively analyzed 45 CSH cases treated surgically from 1992 to 2020, including 39 cases treated by the complete epidural approach. The results showed 3 deaths due to excessive blood loss and 36 cases with worsening or no recovery of oculomotor dysfunction. With a mean follow-up of 110 months, 3 cases relapsed [6]. Of the 18 CSH cases in this study, 6 had an intraoperative blood transfusion above 800 ml, with the largest volume being 2800 ml, and no deaths occurred. After the operation, 3 cases had worsened ocular movement dysfunction. Besides, ocular movement dysfunction in the others was not relieved. In conclusion, surgical treatment of CSH carries a high risk and causes massive bleeding, with a low response rate of symptoms. Compared with the elevated disability rate and certain mortality of craniotomy, radiation therapy has overt advantages in the management of CSH. Pan Li retrospectively analyzed the follow-up data of 53 CSH cases treated with Gamma Knife, with an average tumor volume of 13.2 cm3. The peripheral dose of Gamma Knife averaged 13.3 Gy. The average durations of imaging and clinical follow-up were 24 months and 34 months, respectively, and the response rate of the tumors was 100%. After 6 months of GKS treatment, the tumors shrank significantly, and their average volume decreased by 60.2%. At the final follow-up, the volume had decreased by 79.5% on average. In 33 cases the symptoms disappeared or improved, while 2 cases showed worsened symptoms [7]. Xin et al. retrospectively analyzed the follow-up data of 54 cases with large (20 cm3 < tumor volume ≤ 40 cm3, diameter of 3-4 cm) and huge (tumor volume > 40 cm3, diameter > 4 cm) CSH [8]. The radiotherapy dose at the target margin was 50 Gy divided into 25 fractions. All patients had tumor shrinkage within 3 months of radiotherapy, with an average tumor reduction of 79.7% (range, 48.4-98.5%). No patients had tumor progression or recurrence. All cases had symptom improvement within 1-12 months after radiotherapy. During the whole follow-up period, no patients developed any form of permanent complications or symptomatic radiotoxicity.
In conclusion, radiation therapy could be considered the preferred treatment option for small, large and huge CSH cases. However, accurate preoperative diagnosis is the prerequisite for optimal treatment. Due to the low incidence, the rate of CSH misdiagnosis is as high as 38.9%. To improve the diagnostic accuracy of CSH and avoid misdiagnosis and the associated surgical risk, studies have proposed different views. He Kangmin et al. revealed significant differences in marked hyperintensity on T2WI, homogeneity, dumbbell-like appearance and sellar infiltration between the CSH and non-CSH groups [9]. Francisca Montoya et al. suggested that the combination of hypointensity on T1WI, hyperintensity on T2WI and FLAIR, no diffusion restriction, and homogenous enhancement should place CSH at the top of the differential diagnostic list, especially in case of a "fill" mode in dynamic or delayed imaging [10]. Using a literature review, we generated a list of potentially relevant image features, including signal intensity on T2WI, homogeneity of T2WI, enhancement of enhanced T1, enhanced T1 with dural tail sign, and cavernous sinus swelling and extrusion sign. As shown above, there were significant differences in four features, including hyperintensity on T2WI, homogeneity of T2WI, enhanced T1 without meningeal tail sign, and cavernous sinus swelling and extrusion sign, between the CSH and non-CSH groups, of which the cavernous sinus swelling and extrusion sign was the most distinctive feature, with a sensitivity of 100%, a specificity of 93.02%, and an accuracy of 94.23%. Homogenous hyperintensity or marked hyperintensity on T2WI, enhanced T1 without dural tail sign, and cavernous sinus swelling and extrusion sign could be jointly used as diagnostic criteria, with a sensitivity of 94.44%, a specificity of 100.00%, and an accuracy of 99.04%. Thus, it is believed that CSH could be diagnosed directly using these three features, without surgical pathology, allowing direct radiotherapy and avoiding craniotomy.
To the best of our knowledge, a role for the cavernous sinus swelling and extrusion sign, defined as the anterior boundary filling the superior orbital fissure with the posterior boundary not exceeding Meckel's cave, in the diagnosis of CSH has not been reported to date. Because CSH originates in the cavernous sinus, it grows by balloon-like expansion, which makes it more likely to squeeze into the weak area of the cavernous sinus (e.g., the superior orbital fissure) and fill it. Besides, the posterior boundary is limited by the posterior wall of the cavernous sinus, preventing CSH from crossing Meckel's cave into the posterior cranial fossa. Thus, regardless of CSH size, the tumors never cross the dural junction of the cavernous sinuses [6]. In the present study, except for 6 patients with invasive pituitary tumors showing the cavernous sinus swelling and extrusion sign, the non-CSH group did not show this sign.
All (100%) of the CSH cases in He Kangmin et al.'s study had marked hyperintensity on T2WI, which is considered the most distinctive feature of CSH, but only 50.00% of CSH cases in the current study had marked hyperintensity on T2WI. Marked hyperintensity on T2WI therefore had low sensitivity, which is inconsistent with the study by He Kangmin et al. This may be explained by differences in the definition of marked hyperintensity, which in this study indicated a case scoring 5 points according to the Elster scoring criteria, i.e., a T2WI signal nearly as bright as that of the cerebrospinal fluid.
Francisca Montoya et al. [10] believed that homogenous enhancement on MR is also helpful in the diagnosis of CSH, but 6 of the 18 cases in the CSH group had homogenous enhancement and 12 did not. Univariate analysis revealed no significant difference in homogenous enhancement between the CSH and non-CSH groups. CSH can be classified as spongy (Type A) and mulberry (Type B) according to pathological features [11,12] (Fig. 1). Type A CSH shows rapid filling and homogenous enhancement, while Type B shows early heterogeneous enhancement and progressive enhancement after delay [9]. In the current study, enhancement homogeneity in CSH cases was consistent with the pathological classification, i.e., all 6 and 12 cases with and without homogenous enhancement were Type A and B CSH, respectively. Thus, homogenous enhancement cannot be used as an MR feature for the detection of CSH, whose enhancement homogeneity is directly related to its pathological subtype.
The base of cavernous sinus meningioma, generally originating in the visceral layer of the lateral wall of the cavernous sinus, can invade the cavernous sinus as the tumor grows; however, the tumor body grows anteriorly and externally along the sphenoid crest, grows posteriorly and externally along the anterior and posterior bedded ligaments, or crosses the tentorium cerebelli into the posterior cranial fossa (Fig. 2A, B, C and D). Unlike CSH, cavernous sinus meningiomas show heterogeneous T2WI signal with rare marked hyperintensity. Morphologically, cavernous sinus meningiomas generally show the dural tail sign without the cavernous sinus swelling and extrusion sign, and sometimes grow posteriorly into the posterior cranial fossa.
Schwannomas are located between the visceral and parietal layers of the lateral wall of the cavernous sinus, generally originating from the semilunar segment of the trigeminal nerve. The tumors grow forward but cannot invade the superior orbital fissure (Fig. 2E and F). In addition, these tumors grow backward and can enter the posterior cranial fossa via Meckel's cave. Thus, schwannomas show no cavernous sinus swelling and extrusion sign. Meanwhile, schwannomas rarely show homogenous, marked hyperintensity on T2WI, which easily distinguishes them from CSH.
Pituitary adenomas invading the cavernous sinus generally grow from the intrasellar region into the cavernous sinus (Fig. 2G, H, I and J). A few pituitary adenomas invading the cavernous sinus may show the cavernous sinus swelling and extrusion sign. In this study, the only 6 non-CSH cases showing the cavernous sinus swelling and extrusion sign were all pituitary adenomas invading the cavernous sinus. However, pituitary adenomas rarely show homogenous, marked hyperintensity on T2WI, which easily distinguishes them from CSH.
Conclusion
The cavernous sinus swelling and extrusion sign, which has not been reported in previous studies, is a reliable marker for the diagnosis of CSH. Homogenous hyperintensity or marked hyperintensity on T2WI, enhanced T1 without dural tail sign, and cavernous sinus swelling and extrusion sign could be jointly used as diagnostic criteria, which may help improve the accuracy of CSH diagnosis.
Limitations
The research was retrospective and single-center, so there may be confounding and selection bias; further prospective, multicenter data are needed to test the validity of our prediction model.
Fig. 1. Imaging characteristics of Type A (A, B and C) and Type B (mulberry type) (D, E and F) CSH, revealing a significant cavernous sinus swelling and extrusion sign without dural tail sign. (A) Homogenous hyperintensity on T2WI. (B) Hyperintensity on FLAIR. (C) Heterogenous enhancement of enhanced T1. (D) Homogenous marked hyperintensity on T2WI close to the cerebrospinal fluid signal. (E) Homogenous hyperintensity on FLAIR. (F) Homogenous enhancement of enhanced T1.
Fig. 2. Imaging characteristics of cavernous sinus meningioma (A, B, C, D), schwannoma (E, F) and invasive pituitary adenoma (G, H, I, J). Morphologically, the dural tail sign of cavernous sinus meningioma was obvious, with no cavernous sinus swelling and extrusion sign. (A) Heterogenous mild hyperintensity on T2WI. (B) Heterogenous hyperintensity on FLAIR. (C, D) Heterogenous enhancement of enhanced T1. Lesions of schwannoma protrude from the cavernous sinus to the posterior cranial fossa. (E) Heterogenous mild hyperintensity on T2WI. (F) Heterogenous enhancement of enhanced T1, without the dural tail sign of cavernous sinus meningioma or the cavernous sinus swelling and extrusion sign. (G, H) Case 1 with heterogenous mild hyperintensity on T2WI and heterogenous enhancement of enhanced T1. (I, J) Case 2 with homogenous mild hyperintensity on T2WI and homogenous enhancement of enhanced T1.
Table 1
Elster scoring criteria on signal intensity on T2WI.
Item                  Score   T2WI signal
Hypointense signal      1     Markedly hypointense to gray matter, nearly as dark as cortical bone
                        2     Markedly hypointense to cortical gray matter
Isointense signal       3     Nearly isointense with cortical gray matter
                        4     Mildly hyperintense to gray matter
Hyperintense signal     5     Markedly hyperintense to gray matter, nearly as bright as the cerebrospinal fluid
Table 2
Pathological types of cavernous sinus lesions in 104 cases.
Table 3
Univariate and multivariate analyses of MRI features between the CSH and non-CSH groups.
Table 4
Univariate and multivariate analyses of sensitivity, specificity, positive predictive value, and negative predictive value for CSH detection.
Effect of packaging method and storage temperature on the sensory quality and lipid stability of fresh snakehead fish (Channa striata) fillets
The objective of the present study was to investigate the effect of packaging methods (i
Introduction
Fish and seafood products are nutritionally valuable as they contain proteins, lipids, minerals, and vitamins that are beneficial for health. Consumption of fish and seafood products is linked to a lower incidence of cardiovascular diseases and obesity (Tacon & Metian, 2013). Fish consumption accounts for approximately 70% of animal protein intake in Vietnam (Sinh et al., 2014). Snakehead fish is considered to be a noteworthy candidate for global aquaculture as fish farmers choose it for a higher profit over other species (Nen et al., 2018). Snakehead fish is one of the most cultivated fish species in Asia and Vietnam at large. In the Mekong delta, Vietnam, snakehead fish production increased from 5,300 tons in 2004 to 40,000 tons in 2010 (Sinh et al., 2014).
However, fish is prone to oxidation and development of off-flavors resulting from improper handling, incorrect storage, and temperature abuse. Freshness is one of the vital attributes when assessing fish quality and consumer acceptance (Alasalvar et al., 2001). Therefore, the extension of the shelf life of fish is vital to prevent food waste and allow transportation to distant locations. The need to extend the shelf life of fish has led to the optimization of handling, packaging, and storage practices to ensure the freshness and quality of fish. Preserving the natural quality parameters of fish requires the use of temperature, packaging, and chemical additives to delay, reduce, or inhibit spoilage reactions. Low-temperature processing is extensively used to extend the shelf life of aquatic food through its interference with normal physiological processes in bacteria and enzymes (Georlette et al., 2004). The use of low temperatures to preserve fish can be achieved through chilling, superchilling, and freezing. Chilling maintains fish freshness but does not kill or eliminate microorganisms, nor does it stop enzymatic activity. Chilling brings the temperature close to but not below the freezing point of the fish muscle. This can be achieved using ice flakes, slurry ice, and cooling air. The shelf life of rainbow trout (Oncorhynchus mykiss) increased to 13 and 16 days using flow ice (a mixture of 40% ice and 60% water) alone or with injected ozone, respectively, from 8 days when stored on normal ice (Ortiz et al., 2008). Superchilling brings the temperature of the fish just below the initial freezing point, which is usually between -0.5 °C and -2.8 °C (Kaale et al., 2011). The low temperature and ice crystal formation inhibit bacterial growth and enzymatic activities. Superchilling has been employed to extend the shelf life of seafood like cod (Gadus morhua) and salmon (Salmo salar) (Duun & Rustad, 2007). Chilling and superchilling have also been combined with different packaging methods to extend the shelf life of fish. Stamatis & Arkoudelos (2007) found lower bacterial counts in sardines stored in modified atmosphere packaging compared to those in air and vacuum packaging at 3 °C. Wang et al. (2008) found that a combination of modified atmosphere packaging and superchilled storage extended the shelf life of fresh cod loins (Gadus morhua) from 9 to 21 days, while chilled modified atmosphere packaging extended the shelf life to 14 days. Duun & Rustad (2008) found that a combination of vacuum packaging and superchilled storage extended the shelf life of iced salmon samples to 21 days at -1.4 or 3.6 °C. Chowdhury et al. (2017) assessed the quality changes in air-packaged and vacuum-packaged Asian Sea-Bass fillets under 5 ± 1 °C storage temperature. Microbial, biochemical, textural, and sensory properties investigations after 15 days showed that vacuum packaging had better preservative effects. The influence of the packaging technique and storage temperature on the sensory properties and quality of the fish differs from species to species, as each has its own spoilage patterns and indicators (Sampels, 2015).
Freshness is one of the most important quality indicators of fresh fishery products. The principal method to evaluate the freshness of fishery products is sensory evaluation. The Quality Index Method (QIM) and the Torry score are reliable methods and have been used for the assessment of the freshness of different fishery products (Martinsdottir et al., 2001; Cyprian et al., 2013; Nguyen et al., 2021; Nga & Diem, 2019). The Quality Index Method (QIM) is based on significant, well-defined characteristics of appearance, odor, and texture attributes changing through storage time. A score from 0 to 3 demerit (index) points is given for each quality parameter according to the specific parameter descriptions. The scores are summarized to give an overall sensory score, the Quality Index (QI). The QI is linearly correlated with the storage time at a specific storage condition; thus, the QI can be used to estimate the past and remaining shelf life of the fishery products (Martinsdottir et al., 2001; Cyprian et al., 2013). The Torry score is a systematic scoring system based on an objective sensory method to assess the state of the fish or the freshness of the cooked fish (Shewan et al., 1953). It is a descriptive 10-point scale that has been developed for lean, medium-fatty and fatty fish species. An average Torry score of around 5.5 indicates that the product is approaching the end of its shelf life (Martinsdottir et al., 2001).
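As an illustration of how the linear QI-storage time relationship can be turned into a shelf-life estimate, the following minimal sketch fits a straight line to calibration data and inverts it. The calibration points, the rejection limit, and the observed QI are invented for demonstration and are not data from this study or from any cited study.

```python
# Minimal sketch: fit QI against storage days, then estimate elapsed and
# remaining shelf life. All numbers below are invented for illustration only.
import numpy as np

days = np.array([0, 2, 4, 6, 8, 10], dtype=float)   # storage time (days)
qi   = np.array([0, 2, 4, 6, 8, 10], dtype=float)   # summed demerit points (QI)

slope, intercept = np.polyfit(days, qi, 1)           # QI ≈ slope * days + intercept

def estimate_days_in_storage(observed_qi: float) -> float:
    """Invert the calibration line to estimate elapsed storage time."""
    return (observed_qi - intercept) / slope

shelf_life_days = 12.0     # assumed sensory rejection point for this sketch
observed = 7.0
elapsed = estimate_days_in_storage(observed)
print(f"estimated elapsed: {elapsed:.1f} d, remaining: {shelf_life_days - elapsed:.1f} d")
```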
The objective of this present study was to compare the shelf life and lipid stability of snakehead fish during storage under chilled or superchilled conditions and air or vacuum packaged conditions. The changes in sensory quality of fresh snakehead fish fillets were evaluated by QIM and Torry schemes. The lipid quality of snakehead fish fillets was assessed by determinations of lipid content, lipid hydrolysis (FFA and PL) as well as lipid oxidation (PV and TBARS).
Materials
All snakehead fish samples used in this research were bought from a local fish farm in Nha Trang city, Khanh Hoa province, Vietnam. The average weight of the snakehead fish was 700-800 g. The fish were transported to the laboratory alive in water containers.
Chemicals
All chemicals used were of analytical grade (Sigma-Aldrich, USA) and purchased from Asia Laboratory Instruments Company Limited, 594/23 Au Co Street, Tan Binh District, Ho Chi Minh City, Vietnam.
Sample preparation and sampling
At the laboratory, fish were rested for 2 h before bleeding and filleting following the procedure described by Nguyen et al. (2021). The fillets were divided into four groups (40 fillets in each group) for different packaging methods and storage temperatures (Table 1). Each fillet was placed on a Styrofoam tray and packaged in a Polyethylene bag for air-packaging and in a Polyamide bag for vacuum-packaging. After packaging, fillets were chilled to 2 °C and superchilled to -2.5 °C in an air blast freezer at a temperature of -35 °C. Chilled samples were stored in a refrigerator at a temperature of 3 ± 1.0 °C. Superchilled samples were stored in a refrigerator at the temperature of -2.5 ± 0.5 °C. The temperature of the refrigerators was controlled using a temperature controller (Conotec FOX-1004, Korea). Five fillets in each group were randomly taken for sensory evaluations (QIM and Torry) and chemical determinations. All measurements were done in triplicate.
Sensory analysis using QIM scheme
Sensory analyses were carried out by five panelists from the Faculty of Food Technology of Nha Trang University. These panelists were selected for their expertise in the descriptive analysis of food sensory parameters. Before the main evaluation, training sessions were conducted to train the panelists on how to use the QIM and Torry Schemes developed for the freshness analysis of snakehead fish fillets (Nguyen et al., 2021).
The QIM, based on a total of 12 demerit points, described five attributes: color of the skin side, color of the flesh side, texture, odor, and stickiness of the fillets. Each panelist assessed the fillets from day 0 until they were spoiled.
Evaluation of cooked snakehead fillets with Torry scheme
For the analysis of the odor and flavor of the snakehead fish fillets, six slices (2 × 6 cm) were cut from two fillets, wrapped in aluminum foil paper, placed in a perforated stainless-steel pan, and steam-cooked for 10 minutes at 95-100 °C. After cooking, the samples were blind coded with a 3-digit random number and served to the panelists for evaluation and grading using the Torry Scheme developed by Shewan et al. (1953), with some modifications made by Martinsdottir et al. (2001) for medium-fatty fish. An average score of ≤ 5.5 was used as the sensory rejection point. The Torry Scheme ranged from 3 to 10, with higher scores reflecting premium quality.
Lipid oxidation measurements
Lipid hydroperoxide (PV) was determined by the ferric thiocyanate method of Shantha & Decker (1994). The results were expressed as µmol lipid hydroperoxides per g of sample (µmol CPO/g).
Thiobarbituric acid-reactive substances (TBARS) were determined according to the method of Lemon (1975) with adjustments described by Nguyen & Phan (2018). The results were expressed as µmol malondialdehyde per kg (µmol MDA/kg), calculated using a standard curve constructed from MDA equivalents in tetraethoxypropane (TEP).
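A minimal sketch of the standard-curve arithmetic behind such a TBARS determination is given below. The standard concentrations, absorbance readings, extract volume, sample mass, and measurement wavelength are placeholders assumed for illustration; they are not values taken from the methods above.

```python
# Sketch of the standard-curve step for TBARS: fit absorbance against known
# MDA-equivalent concentrations (from TEP dilutions), then convert a sample
# absorbance to µmol MDA per kg. All numbers are placeholders, not study data.
import numpy as np

std_conc_um = np.array([0.0, 2.5, 5.0, 10.0, 20.0])    # µM MDA equivalents in standards
std_abs     = np.array([0.01, 0.11, 0.22, 0.44, 0.87])  # absorbance (wavelength assumed)

slope, intercept = np.polyfit(std_conc_um, std_abs, 1)  # absorbance ≈ slope*conc + intercept

def tbars_umol_per_kg(sample_abs, extract_volume_l, sample_mass_kg):
    conc_um = (sample_abs - intercept) / slope           # µmol/L in the extract
    return conc_um * extract_volume_l / sample_mass_kg   # µmol MDA per kg of sample

print(round(tbars_umol_per_kg(0.30, extract_volume_l=0.01, sample_mass_kg=0.005), 2))
```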
Statistical analysis
Microsoft Excel 2021 was used to generate graphs and tables (Microsoft Corporation, USA). The results were analyzed using one-way ANOVA, and the means were compared using Duncan's multiple range test (DMRT) for multiple comparisons, with the level of significance set at P < 0.05. All the statistical analyses were carried out using the SPSS (version 26) software (SPSS Inc., Chicago, Illinois). The results were presented as means ± SD.
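For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows an equivalent one-way ANOVA in Python. Duncan's multiple range test has no standard SciPy implementation, so Tukey's HSD is used here purely as a stand-in post hoc comparison, and the group values are fabricated placeholders rather than data from this study.

```python
# Illustrative stand-in for the SPSS workflow: one-way ANOVA followed by a
# post hoc pairwise comparison (Tukey's HSD instead of Duncan's test).
# The TBARS-like values below are fake and only demonstrate the mechanics.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

cap = [4.1, 4.4, 4.3]   # hypothetical triplicate measurements per treatment group
cvp = [8.2, 8.6, 8.7]
sap = [3.9, 4.0, 4.1]
svp = [2.4, 2.5, 2.6]

f_stat, p_value = stats.f_oneway(cap, cvp, sap, svp)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([cap, cvp, sap, svp])
groups = ["CAP"] * 3 + ["CVP"] * 3 + ["SAP"] * 3 + ["SVP"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```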
Quality Index Method (QIM)
Changes in the freshness of air-packaged and vacuum-packaged snakehead fish fillets at chilled and superchilled storage evaluated using the QIM scheme are shown in Figure 1.
Sensory quality in all the treatment groups reduced with storage time. At the beginning of storage (day 0), the quality of the fish was characterized by a uniform white color on both sides of the flesh and skin of the fillets with a fresh seaweed odor and elastic texture with no shredded meat stickiness. These fresh quality attributes were given a score of zero from the QIM scheme. During storage, the QIM scores (QI) for the chilled air-packaged fillets were relatively low from day 0 to day 6. However, spoilage was evidenced by a change in the color of the skin side and odor of the air-packaged group after day 9. The uneven pale-yellow color, rancid odor, and viscous texture of the fillets on day 10 rendered the chilled air-packaged fillets unfit for human consumption. Therefore, the shelf life of the air-packaged fillets stored at 3 ± 1 °C was 9 days. Change in the odor of the chilled air-packaged fillet was in correlation (r² = -0.964) with its Torry score. Unlike the air-packaged fillet group, changes in the attributes of vacuum-packaged fillets at chilled storage were comparatively slow up to day 9. On day 10, a pale-yellow color was observed on the flesh side of the vacuum-packaged fillets. This was not conclusive of the end of storage life as subsequent evaluation on day 11 could not verify spoilage. However, from day 11 onward, fluctuation in most of the attributes continued above the critical limit. The panelists finally judged the CVP fillet group to be unfit for consumption on day 14. The packaging method had a significant effect (P < 0.05) on the quality of the snakehead fish fillets stored at chilled conditions. Vacuum packaging at chilled temperatures extended the shelf life of the fillets from 9 to 13 days. Similar results have been reported in previous studies. A shelf life of 9 days for air-packaged and 15 days for vacuum-packaged Silver Pomfret (Pampus argenteus) fillets stored at 4 °C was revealed in an earlier study by Chowdhury et al. (2017). It is reported by Frau et al. (2021) that vacuum packaging of artisanal goat cheeses represents the possibility of preserving the cheeses for a longer time and thus increasing their shelf life. The shelf life of vacuum-packaged fish balls prepared from Capoeta trutta was extended to two weeks on average during chilled storage at 4 °C (Özpolat, 2022).
The end of storage for the superchilled air-packaged group was signaled by the appearance of a pale-yellow color on the flesh side, a soft texture, and an off-odor on day 15. A similar pattern of spoilage was also noticed in the superchilled vacuum-packaged fillets. A sharp increase in all the attribute scores from day 0 to day 3 was observed. After day 3, the attributes remained comparatively stable until an off-odor and an opaque yellowish color were observed on the flesh side on day 17. An early indication of spoilage frequently observed in the odor and color parameters is thought to be due to the degradation of certain proteins that produce volatile compounds in the fish muscles. The accumulation of secondary lipid oxidation products, such as aldehydes, ketones, and carbonyls, is responsible for off-odor and off-flavor development as well as discoloration (Yin et al., 2014).
From regression analysis (Figure 1), a strong positive correlation was found between the QI for each fillet group and storage time. The correlation coefficients (r² = 0.9871 for CAP, 0.9605 for CVP, 0.9797 for SAP, and 0.9132 for SVP) indicate the adequacy of the QIM scheme used to examine quality changes in the raw snakehead fish fillets at chilled and superchilled temperatures. The results of this study show that the shelf life of raw air-packaged and vacuum-packaged snakehead fish fillets is 9 and 13 days at chilled storage (3 ± 1 °C) and 15 and 25 days at superchilled storage (-2.5 ± 0.5 °C), respectively. Vacuum packaging extended the shelf life of the fresh snakehead fish fillet by 66.7% in both chilled and superchilled storages. This is higher than the 6-day and 10-day shelf life reported for air-packaged and vacuum-packaged Salmo trutta macrostigma stored at 5 ± 1 °C by Karakaya & Duman (2016) but slightly lower than the 28-day shelf life reported for common carp (Cyprinus carpio) stored at -12 °C (Raj et al., 2016). The results of the present study were not in agreement with Oliveira et al. (2022), who reported that vacuum packaging did not influence the shelf life of Brazil nut kernels. This might be attributed to the differences in chemical compositions and properties of different products. Storage temperature is considered vital to the efficacy of the packaging method, as the superchilled fillet group had a longer shelf life than its counterpart in chilled storage.
Total lipid determination
Lipids of fish muscle were extracted from a sample with methanol/chloroform/0.88% KCl (/1/0.5, v/v/v) according to the method of Bligh & Dyer (1959). The lipid content was determined gravimetrically after evaporation of all chloroform and the results were expressed as a percentage of the wet-weight samples.
Lipid hydrolysis determinations
Free fatty acid (FFA) content was determined on lipid extract according to the method of Bernárdez et al. (2005), based on complex formation with cupric acetate-pyridine, followed by absorbance reading at 715 nm (Libra S50 UV/VIS spectrophotometer, Biochrom, UK). The results were expressed as grams FFA/100g of lipid using a standard curve prepared from oleic acid.
Phospholipid content of the fish muscle was determined according to the method of Stewart (1980), based on the complex formation of phospholipid with ammonium ferrothiocyanate, followed by absorbance reading at 488 nm (Libra S50 UV/VIS spectrophotometer, Biochrom, UK). The results were expressed as a percentage of total lipid content and calculated using a standard curve prepared from phosphatidylcholine.
Torry score of cooked snakehead fish muscle
Changes in the odor and flavor of the snakehead fish fillets during chilled (3 ± 1.0 °C) and superchilled (-2.5 ± 0.5 °C) storages are presented in Figure 2. The mean Torry scores for all the fillet groups generally declined in a linear pattern throughout the chilled and superchilled storage time. The average Torry score of 5.5 has been used as the limit for human consumption. Torry scores dropped below 5.5 on days 10 and 14 for air- and vacuum-packaged fillets stored at chilling temperature, whereas at superchilled storage, Torry scores remained well above 5.5 up to days 15 and 25 for the air- and vacuum-packaged fillet groups. Initially (from day 0 to day 6), air-packaged fillets showed a higher Torry score than vacuum-packaged fillets in both storages.
The odor and flavor of all the fillet groups on day 0 were characterized by a fresh oily and boiled milk smell. As storage time progressed, deviation from the fresh state of the fish was observed, especially in the air-packaged group, which presented a rancid odor and a bitter taste that marked the end of its shelf life on day 10 in chilled storage and day 15 in superchilled storage. Variability between fillet groups of the same temperature history is said to be due to a fast rate of chemical deterioration in the fish muscles in the air-packaged group. A similar observation was recorded from the analysis of air-packaged redfish fillets that deteriorated after 11 days of chilled storage at 2 °C (Githu, 2013). A rancid flavor was noticed in air-packaged Nile tilapia (Oreochromis niloticus) fillets after 9 days of chilling at 1 °C in a study by Cyprian et al. (2013), similar to the present study. There was no significant difference (P > 0.05) in the Torry scores between groups of packaged fillets (chilled vs. superchilled storage) as well as within groups (air-packaged vs. vacuum-packaged). A similar result was reported by Nga & Diem (2019), when the Torry Scheme did not differentiate between Nile Tilapia fillet groups stored at 1, 4, 9, 15, and 19 ± 1 °C. However, more fluctuations were observed in the Torry scores of fillets from chilled storage than their superchilled counterparts regardless of the packaging methods. The relative stability of scores from the superchilled storage may be attributed to the preservative effect of low temperature with ice crystal formation. Torry scores from the superchilled fillet groups slightly outstripped fillets from chilled storage only at the end of storage time, with 5.8 and 5.6 recorded for SAP and SVP as compared to 4.8 and 5.2 for CAP and CVP, respectively. This indicated that superchilled fillet groups were still in an acceptable range for consumption at the end of storage. The Torry score used in this work could not vividly distinguish the effect of the packaging method on the flavor and odor of cooked snakehead fish fillets.
Lipid hydrolysis
Changes in the free fatty acid (FFA) and phospholipid (PL) contents of snakehead fish fillets as affected by the packaging method and storage temperature are shown in Figure 4A, 4B, respectively. The FFA content of all the samples significantly increased at the end of storage except SVP samples ( Figure 4A). At the end of the storage period, a higher FFA content was observed in fillets stored at chilled conditions and was significantly different (P < 0.05) from that of superchilled fillets. A similar increase in the FFA content was reported by Ozyurt et al. (2009) for red mullet and goldband goatfish stored in ice for 11 days. The FFA content at the end of storage was 1.12% oleic acid for red mullet and 1.40% oleic acid for goldband goatfish. The PL content of the samples was significantly decreased at the end of the storage period ( Figure 4B). The PL content of the chilled samples was significantly lower than that of the superchilled samples. Vacuum packaging in combination with superchilled storage was effective at retarding lipid hydrolysis as evidenced by the lower FFA content and higher PL content in comparison with other groups. The increased FFA content was in negative correlation with the decrease in PL content, showing that phospholipids are preferentially hydrolyzed. During storage, lipids in fish are hydrolyzed by microbial enzymes, natural lipases in fish muscles, and spontaneous lipid hydrolysis (Hwang & Regenstein, 1993). The increase in FFA content and decrease in PL content with time observed in this study can be used to indicate loss of freshness in snakehead fish fillets.
Lipid oxidation
The extent of lipid oxidation in both chilled and superchilled storage is shown in Figure 5. Generally, the hydroperoxide value (PV) for all the fillet groups significantly (P < 0.05) increased at the end of storage (Figure 5A). The PV detected in the fresh fish on day 0 was 0.05 µmol CPO/g of the sample. At the end of storage, PV increased to 0.16 and 0.13 µmol CPO/g on days 10 and 14 in fillets stored at chilled conditions for CAP and CVP, respectively. Low PV of 0.10 and 0.07 µmol CPO/g were obtained in SAP and SVP from superchilled storage, respectively. In the same way, a significant increase (P < 0.05) was observed in the TBARS value from 1.1 µmol MDA/kg on day 0 to 4.3, 8.5, 4.0, and 2.5 µmol MDA/kg in CAP, CVP, SAP, and SVP, respectively, at the end of the storage period (Figure 5B). TBARS of snakehead fish fillets stored at superchilled conditions were lower than those for chilled storage. At the same storage temperature, vacuum-packaged snakehead fish fillets had lower TBARS values compared to air-packaged samples. A similar increase in lipid PV and TBARS values was reported from the frozen storage of cobia fillets (Nguyen & Phan, 2018). The lowest value of TBARS, recorded for the superchilled vacuum-packaged snakehead fish fillets in our results, was comparable to that obtained for superchilled golden rainbow trout (Kitanovski et al., 2017). Özpolat (2022) reported that the TBA content of vacuum-packaged fish balls was lower than that of air-packaged fish balls during chilled storage at 4 °C for 35 days. The elevation of the peroxide value in CAP and CVP indicated a higher extent of lipid oxidation that occurred in these fillet groups during chilled storage. Lipid hydroperoxide is an unstable primary lipid oxidation product; its content in fish muscle depends on the rate of formation and decomposition. The decomposition of hydroperoxide in the CAP and CVP was indicated by their increasing values in TBA-reactive substances. The accumulation of aldehydes in fish muscle can further damage protein and release rancid flavors (Taheri & Motallebi, 2012). The spoilage indication due to increased TBARS value in the snakehead fish fillets stored at chilled temperature is in agreement with the rancid odor and off-flavor indicated by the QIM scheme and Torry scores. Based on this result, it can be judged that vacuum packaging in combination with superchilling temperature enhanced the fresh quality of the snakehead fish more efficiently than air packaging at chilled or superchilled storage temperature.
Lipid content
Changes in the lipid content of snakehead fish fillets during storage for the four treatment groups are shown in Figure 3. The lipid content of all the samples slightly decreased at the end of storage. At the end of the storage period, the lipid content of the fillets stored at superchilled temperatures was significantly (P < 0.05) higher than that of the fillets stored at chilled temperatures. There was no significant difference in the lipid content at the end of storage of fillets under different packaging methods stored at the same temperature. Storage temperature had a greater effect on the final lipid content than the packaging method. In a similar manner, the lipid content of chilled snakehead fish fillets at the end of storage was significantly (P < 0.05) lower than the initial lipid content at day 0. The decrease in the lipid content of snakehead fish fillets at chilled storage was in accordance with the higher lipid oxidation products (i.e., PV and TBARS) obtained in these fillets. The decrease in the lipid content is thought to be due to lipid oxidation.
Conclusion
The results of the present study indicated that storage temperature and packaging method strongly affected the sensory quality and lipid stability of fresh snakehead fish fillets during storage. Vacuum packaging and superchilled storage significantly improved the sensory quality and retarded the lipid degradation of fresh snakehead fillets. Based on the QI scores and Torry scores, the shelf-lives of air-packaged and vacuum-packaged fillets at chilled storage of 3 ± 1.0 °C were 9 days and 13 days, respectively. Meanwhile, the shelf-lives of air-packaged and vacuum-packaged fillets stored at superchilled temperature of -2.5 ± 0.5 °C were 15 and 25 days, respectively. Vacuum packaging in combination with superchilled storage is an effective means to extend the shelf life of snakehead fish fillets. This new technique has the potential to extend the intrinsic freshness of snakehead fish fillets and ensure sustainable production.
Thromboprophylaxis using combined intermittent pneumatic compression and pharmacologic prophylaxis versus pharmacologic prophylaxis alone in critically ill patients: study protocol for a randomized controlled trial
Background Venous thromboembolism (VTE) remains a common problem in critically ill patients. Pharmacologic prophylaxis is currently the standard of care based on high-level evidence from randomized controlled trials. However, limited evidence exists regarding the effectiveness of intermittent pneumatic compression (IPC) devices. The Pneumatic compREssion for preventing VENous Thromboembolism (PREVENT trial) aims to determine whether the adjunct use of IPC with pharmacologic prophylaxis compared to pharmacologic prophylaxis alone in critically ill patients reduces the risk of VTE. Methods/Design The PREVENT trial is a multicenter randomized controlled trial, which will recruit 2000 critically ill patients from over 20 hospitals in three countries. The primary outcome is the incidence of proximal lower extremity deep vein thrombosis (DVT) within 28 days after randomization. Radiologists interpreting the scans are blinded to intervention allocation, whereas the patients and caregivers are unblinded. The trial has 80 % power to detect a 3 % absolute risk reduction in proximal DVT from 7 to 4 %. Discussion The first patient was enrolled in July 2014. As of May 2015, a total of 650 patients have been enrolled from 13 centers in Saudi Arabia, Canada and Australia. The first interim analysis is anticipated in July 2016. We expect to complete recruitment by 2018. Trial registration Clinicaltrials.gov: NCT02040103 (registered on 3 November 2013). Current controlled trials: ISRCTN44653506 (registered on 30 October 2013).
Background
Venous thromboembolism (VTE), including both deep vein thrombosis (DVT) and pulmonary embolism (PE), is a common complication of critical illness and is associated with increased morbidity and mortality [1]. Supported by high-quality evidence, pharmacologic thromboprophylaxis is recommended for critically ill patients [2].
The evidence regarding the effectiveness of mechanical prophylaxis including an intermittent pneumatic compression device (IPC) and graduated compression stockings (GCS) for thromboprophylaxis is less clear. A systematic review by Limpus et al. included two randomized controlled trials that compared IPC to low-molecular-weight heparin (LMWH) in critically ill trauma patients [3,4] and found no statistically significant difference in DVT rates between IPC and LMWH (risk ratio 2.37, 95 % CI 0.57-9.90) [5]. As such, clinical practice guidelines recommend that IPC be reserved for patients with contraindications to pharmacologic thromboprophylaxis [6]. In a cohort study using a propensity score-adjusted analysis, we found that incident VTE was lower with the use of IPC (versus no IPC) but not GCS (versus no GCS) [7]. This association of lower VTE with IPC was consistent regardless of the use and type of prophylactic heparin (unfractionated or LMWH) and the type of admission (trauma or non-trauma, surgical or non-surgical). These findings suggest that IPC may provide thromboprophylaxis if used as an alternative to unfractionated heparin (UFH) or LMWH, and also when used as an adjunct to pharmacologic thromboprophylaxis. The largest trial to date on the effectiveness of IPC is the CLOTS 3 (Clots in Legs Or sTockings after Stroke) trial, which randomized 2876 stroke patients in 94 UK centers to IPC versus no IPC; the indication for pharmacologic prophylaxis was left to the discretion of the treating team. The primary outcome was proximal vein DVT on screening ultrasound or any symptomatic DVT in the proximal veins, confirmed on imaging, within 30 days of randomization. The primary outcome occurred in 8.5 % of patients allocated to IPC and in 12.1 % of patients allocated to no IPC, with an absolute risk reduction of 3.6 % (95 % CI 1.4-5.8) [8]. Of note, fewer than 25 % of patients in CLOTS 3 received pharmacologic thromboprophylaxis. Nevertheless, the protective effect of IPC was observed whether pharmacologic thromboprophylaxis was or was not given. Regarding GCS, two trials (CLOTS 1 and CLOTS 2) documented a lack of effectiveness of GCS in thromboprophylaxis in stroke patients [9,10].
The thromboprophylactic effect of IPC is thought to be related to enhancing venous blood flow in the lower extremities, increase in endogenous fibrinolysis, stimulation of vascular endothelial cells and a reduction in venous caliber [4]. In a study on normal volunteers the use of IPC was associated with increased endogenous fibrinolysis; tissue factor pathway inhibitor and plasminogen activator activity both increased after applying pneumatic compression for 2 h [11]. The IPC may have several hemodynamic effects; as it has been shown to augment venous return, increase central venous pressure and pulmonary arterial pressure [12] and increase cardiac output in healthy volunteers [13]. While the clinical implications of the hemodynamic effects of IPC remain unknown, one manufacturer (Tyco) lists cardiogenic pulmonary edema as a contraindication for IPC. Cutaneous complications are a concern with IPC; in CLOTS 3, lower extremity skin lesions were reported in 3 % of IPC patients versus 1 % of control patients (p = 0.002) [8].
The lack of clear evidence for IPC and GCS has been reflected in the wide variation in the use of these devices in surveys from Canada, France, Australia and Germany [5,[14][15][16][17], and more importantly, in the current practice guidelines. The American College of Physicians (ACP) guidelines for non-surgical patients recommend against the use of GCS, and suggest IPC as an alternative to pharmacologic thromboprophylaxis if the patient has a contraindication; but make no recommendation about its adjunct use to pharmacologic thromboprophylaxis [6]. In contrast, The American College of Chest Physicians' (ACCP) 2012 guidelines recommend the use of GCS or IPC, although preference is given to IPC as an alternative but not as an adjunct to pharmacologic thromboprophylaxis in non-surgical critically ill patients [2].
Study objectives
The primary objective of the PREVENT trial is to assess the superiority of adjunct use of IPC to pharmacologic thromboprophylaxis compared to pharmacologic thromboprophylaxis alone on the incidence of proximal DVT in critically ill patients.
Secondary objectives
1. To study the effect of adjunct use of IPC on the incidence of pulmonary embolism and of distal lower extremity DVT
2. To study the effect of the adjunct use of IPC on intensive care unit (ICU), hospital and 90-day mortality and hospital length of stay (LOS)
3. To study the effect of adjunct use of IPC on hemodynamic status in terms of the need for vasopressor therapy
4. To study the effect of adjunct use of IPC on patients with heart failure in terms of ventilator-free days
5. To study the effect of adjunct use of IPC on VTE in the following subgroups: trauma, patients with central venous catheters in the femoral veins, stroke, postoperative patients, heart failure and shock
6. To examine whether there are differences in the effect of IPC based on whether UFH or LMWH is used
7. To examine whether there are differences in the thromboprophylactic effect between sequential and non-sequential IPCs
8. To examine whether there are differences in below-knee and above-knee sleeves
9. To examine whether IPC applied to the lower extremities reduces non-lower extremity thrombosis
10. To examine if there is a dose-effect relationship of the IPC duration and incident DVT risk
11. To examine whether IPC increases the risk of skin ulcers or lower extremity ischemia and affects mobilization practice
Tertiary objective
To test whether there are differences among different LMWHs (enoxaparin, dalteparin and others) in VTE prophylaxis compared to UFH.
Design overview
The PREVENT trial is an international multicenter trial approved by the Institutional Review Board (IRB) of the Ministry of National Guard Health Affairs (MNGHA) in which eligible patients will be randomized to IPC or no IPC. The trial is registered in ClinicalTrials.gov: NCT02040103 and Current controlled trials: ISRCTN44653506. The study is sponsored by King Abdulaziz City for Science and Technology (AT 65 -34) and King Abdullah International Medical Research Center (RC12/045/R), Riyadh, Saudi Arabia.
Eligibility and enrollment
All patients admitted to the ICU will be screened for eligibility within the first 48 h of ICU admission. To enter the study, patients must fulfill the inclusion criteria and meet no exclusion criteria as detailed in Table 1. Eligible patients or their substitute decision-maker will be approached for written informed consent. No compensation is provided for enrollment in the trial.
Informed consent
The study is conducted according to Good Clinical Practice guidelines. The study protocol as well as the informed consent have been approved by the IRB of King Abdullah International Medical Research Center, King Abdulaziz Medical City, Riyadh and the respective IRBs of all the other centers. The research coordinator and/or physician investigator explains the objectives of the trial, and its potential risks and benefits, to the patient when possible or more commonly to his/her surrogate decision-maker. A witnessed written consent is obtained thereafter. The patient or surrogate decision-maker can withdraw from the study at any time without penalty or impact on patient care. A record is kept of all the patients who meet the inclusion criteria but are not randomized or withdrawn from the study.
Trial interventions
The intervention group will receive IPC in addition to pharmacologic prophylaxis ordered by the treating team; the control group will receive pharmacologic prophylaxis only. All IPC devices intended for DVT prophylaxis can be used in the study. Sequential devices (with multichamber cuffs) are preferred, but non-sequential (with single-chamber cuffs) are acceptable. The type of device will be documented.
The use of IPC will follow the manufacturer's instructions and local policies. IPC will be applied to both legs. We will preferably use thigh-length sleeves, but knee-length sleeves are acceptable. Foot pumps may be used in addition to the thigh- or knee-length sleeves. The ability to use IPC on both legs is one of the inclusion criteria. However, if during the study period the IPC could not be used on one leg, it should be continued on the other side and the use of a foot pump on the contraindicated leg considered if available. We aim to apply IPC continuously.

Stopping guidelines for the intervention

Table 2 outlines the stopping guidelines for the intervention and the subsequent action. The intervention may be stopped in the following situations:
1. Suspected or confirmed DVT: if there is suspicion of DVT, the IPC should be stopped until ultrasound excludes the diagnosis. If DVT is excluded, the IPC should be re-started
2. Suspected or confirmed PE: if there is suspicion of PE, the IPC should be stopped until the necessary work-up, such as spiral computed tomography (CT) scan, excludes the diagnosis. If PE is excluded, the IPC should be resumed
3. Pressure ulcer or ischemia that prevents the use of IPC
4. Change in focus of care to palliation
5. Patient is discharged from the ICU or has been in the trial for 28 days. At this point, the use of IPC is at the discretion of the treating team
6. Patient becomes fully mobile and no longer requires thromboprophylaxis in the judgment of their treating team
Randomization
We will use a central, computer-based randomization system with variable blocks to conceal allocation. We will stratify based on center and pharmacologic thromboprophylaxis regimen (UFH or LMWH).
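As an illustration of the allocation scheme described above, the sketch below implements stratified randomization with variable permuted blocks. The block sizes, arm labels, stratum key format, and seed are assumptions for demonstration only and are not the trial's actual configuration.

```python
# Sketch of stratified permuted-block randomization with variable block sizes.
# Block sizes, stratum keys, and the seed below are illustrative assumptions.
import random

BLOCK_SIZES = (4, 6)            # assumed variable block sizes
ARMS = ("IPC", "no IPC")

class StratifiedBlockRandomizer:
    def __init__(self, seed=2014):
        self.rng = random.Random(seed)
        self.queues = {}        # stratum -> remaining assignments in the current block

    def allocate(self, center: str, heparin: str) -> str:
        stratum = (center, heparin)          # e.g., ("Site 01", "UFH")
        queue = self.queues.setdefault(stratum, [])
        if not queue:                        # start a new permuted block for this stratum
            size = self.rng.choice(BLOCK_SIZES)
            block = [ARMS[i % 2] for i in range(size)]   # balanced within the block
            self.rng.shuffle(block)
            queue.extend(block)
        return queue.pop()

r = StratifiedBlockRandomizer()
print([r.allocate("Site 01", "UFH") for _ in range(6)])
```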
Duration of the intervention
The study interventions will continue for the duration of the ICU stay or up to 28 days after randomization, after which the use of IPC will be at the discretion of the treating team (Fig. 1).
Minimizing bias

Blinding
We aim to blind the radiologist interpreting the scans to detect DVTs and the study statistician. However, because of the nature of the intervention, the treating team and the ultrasonographer/technician performing the venous leg ultrasound will not be blinded to the intervention.
Minimizing contamination
Research coordinators will ensure enrollment of patients as quickly as possible after ICU admission. We will create a weekend on-call rota for the research coordinators in as many centers as possible. Patients receiving early IPC for more than 24 h will be excluded to eliminate early contamination.
Co-interventions
The ICU team will have full, independent control of patient management and as such, management other than IPC will not be influenced by the allocated intervention. The trial patients may have other risk factors that may modify the risk for development of DVT.
Research coordinators will record all such risk factors. These may include drugs such as antiplatelet agents (aspirin, ticlopidine, clopidogrel) and anticoagulants that are started after randomization (warfarin, therapeutic-dose LMWH, therapeutic heparin administered intravenously, other anticoagulants). The use of these medications will be at the discretion of the clinical team. The use of GCS will not be permitted; if used the reason will be documented.
Outcomes and follow-up

Primary outcome
The primary outcome is incident proximal lower extremity DVT detected after the third calendar day of enrollment. Certified ultrasonographers will perform routine twice-weekly bilateral proximal lower extremity venous ultrasound, with additional scans if DVT is clinically suspected. The first ultrasound is performed within 48 h of enrollment. Prevalent DVT is defined as DVT documented within the first three calendar days of enrollment, and is considered to reflect a baseline characteristic that is less likely to be related to the study intervention. Patients with prevalent DVT will be included in the main analysis, but a prevalent thrombosis is not considered a primary outcome. We focus on proximal DVTs because they are much more reliably detected by ultrasound and are generally regarded as clinically more important. We will use the methodology used in the PROTECT trial to screen for proximal lower extremity DVTs [18]. The venous system will be examined at 1-cm intervals, documenting compressibility at the following six sites: common femoral, proximal superficial femoral, mid superficial femoral, distal superficial femoral, popliteal veins and trifurcation. DVT in any of these six sites qualifies as the primary outcome.
We will follow the same definitions used for VTE/DVT in the PROTECT trial [18]. We define DVT if there is a partially or completely incompressible venous segment. Venous wall thickening is not considered diagnostic of DVT. If a venous segment is not well visualized and is never well visualized on subsequent ultrasounds, the test is considered indeterminate; such events will be recorded but are not considered trial outcomes. Ultrasonographers may scan the distal leg veins at their discretion [18]. Distal DVT will be documented as a secondary endpoint.
DVTs are considered chronic if a test prior to enrollment reveals evidence of thrombus in the same or a contiguous venous segment. DVTs and other VTE events are labeled as incident if they occur more than 3 calendar days after randomization. We define a thrombus as catheter-related if a catheter had been in situ in the same or a contiguous venous segment within 3 calendar days of the diagnosis. This definition is consistent with the definition used in the PROTECT protocol.
If clinicians suspect any VTE event, they will perform tests as clinically indicated.
Data management
Data are entered into a password-protected electronic database through an online portal and are stored on a secure server at King Abdullah International Medical Research Center, Riyadh, Saudi Arabia. The database includes multiple logic checks for double entry and range checks for data values. Several procedures to ensure data quality and protocol standardization are undertaken, including: (1) training sessions for research coordinators from participating centers prior to study commencement, (2) a detailed Study Instruction Manual which outlines each step of the protocol, and (3) startup meetings for all sites, either by a physical conference or via videoconferencing. Patient personal data are de-identified. Each site investigator and coordinator have access to patients' data from their site; the PI and the main coordinator have access to the data from all sites.
Data analysis

Sample size calculation
Based on our previous observational study [7], the VTE risk (including DVT and PE) with no IPC was 7.2 % and with IPC it was 4.8 %. However, in this study no surveillance ultrasounds were performed. The PROTECT trial [13] documented a baseline risk for proximal DVT of 5.8 % in patients receiving unfractionated heparin and 5.1 % in patients receiving dalteparin. In addition, the PROTECT trial had a prevalent DVT rate at initial screening of 3.5 %. In the CLOTS 3 trial, the risk of DVT without IPC was 12.1 % and with IPC 8.5 %. We anticipate a higher baseline DVT rate than PROTECT because of the wider inclusion criteria and the inclusion of trauma patients; and lower than CLOTS 3 in which no pharmacologic prophylaxis was used routinely. Therefore, we anticipate a baseline risk of 7 % and absolute risk reduction of 3 %. Thus, a sample size of 1000 subjects in each group (accounting for a 5 % prevalent DVT rate and 5 % loss to follow-up) will have 80 % power to detect an absolute risk reduction of 3 %.
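A quick way to sanity-check the stated sample size is shown below. A two-sided alpha of 0.05 is assumed here because the protocol text above does not state it; under that assumption the result (roughly 890 per group) is consistent with the planned 1000 per group once the 5 % prevalent DVT rate and 5 % loss to follow-up are factored in.

```python
# Rough check of the planned sample size (about 1000 per group to detect a drop
# in proximal DVT from 7 % to 4 % with 80 % power). A two-sided alpha of 0.05
# is an assumption; it is not stated explicitly in the protocol text above.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.07, 0.04)   # Cohen's h for the two proportions
n_per_group = NormalIndPower().solve_power(effect_size=effect, power=0.80,
                                           alpha=0.05, ratio=1.0,
                                           alternative='two-sided')
print(round(n_per_group))  # ~890; inflating for 5 % prevalent DVT and 5 % loss gives ~1000
```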
Statistical analysis
We will compare incident proximal lower extremity DVT between groups using the chi-square test. The unadjusted Cox proportional hazards model will be used to test the null hypothesis and will be used as a secondary analysis tool. A detailed statistical analysis plan will be published separately.
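For illustration, the sketch below runs the planned primary comparison as a chi-square test on a 2 × 2 table; the event counts are invented placeholders, not trial data.

```python
# Sketch of the planned primary comparison: chi-square test of incident proximal
# DVT between arms. The counts below are invented purely for illustration.
import numpy as np
from scipy.stats import chi2_contingency

#                  DVT   no DVT
table = np.array([[ 40,   960],    # IPC + pharmacologic prophylaxis (hypothetical)
                  [ 70,   930]])   # pharmacologic prophylaxis alone (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```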
Research governance
The study Steering Committee members will be responsible for overseeing the conduct of the trial, for upholding or modifying study procedures as needed, addressing challenges with protocol implementation, formulating the analysis plan, reviewing and interpreting the data and preparing the manuscript. This will be achieved through meetings (in-person or by conference calls) at least quarterly. The study Data Safety and Monitoring Board (DSMB) will provide independent input regarding the safety and/or efficacy of the intervention. Upon completion, the results of the trial are planned to be published in a peer-reviewed journal. Authorship will follow the Uniform Requirements for Manuscripts Submitted to Biomedical Journals [19].
Discussion
To our knowledge, this is the first RCT that compares adjunct IPC and pharmacologic prophylaxis versus pharmacologic prophylaxis alone in critically ill patients.
To enhance the external validity of our findings, patients will be enrolled from more than 20 hospitals internationally. Central randomization with concealed allocation, blinded radiologist interpretations and adherence to the intention-to-treat principle will limit potential sources of bias. In addition, trial interventions and outcome monitoring will ensure that "loss to follow-up" for the primary outcome is minimal or absent. The issue of the safety of critically ill patients is a prime concern in this randomized trial. Several measures have been taken to minimize, observe and document any potential safety concerns ( Table 3).
The main limitation to our study is the inability to blind patients, their caregivers and the ultrasonographer/technician performing ultrasound with regard to allocation due to the nature of the intervention. We gave careful consideration to applying sham IPC in the control group. After discussion and deliberation, we decided not to include a sham IPC in the control group due to the potential harm of placing a cuff on the legs without inflation.
We used twice-weekly surveillance ultrasound to assess the primary outcome, rather than relying on DVT clinically suspected by clinicians, because the latter tends to underestimate the true incidence of DVT. We used a pragmatic time window of 48 h to perform the first (baseline) ultrasound, and we considered the DVTs identified on this ultrasound as prevalent DVTs. Because we are interested in examining whether IPC reduces the incidence of DVT, our primary endpoint is DVTs that occur in patients who have a normal initial ultrasound but develop DVT as documented on subsequent exams; i.e., after the third calendar day (incident DVT).
Our primary endpoint is proximal DVT by surveillance ultrasound. We will not screen for distal DVTs because of the debated clinical relevance [20] and because the diagnosis by Doppler ultrasound is technically challenging and results may be inconsistent as reported by the CLOTS 3 trial [8]. However, we will document distal DVT if diagnosed by ultrasound obtained by the treating team.
We will also capture data on other outcomes relevant to IPC use including lower extremity ischemia, pressure ulcers and mobility. We will follow patients for development of safety concerns (lower extremity ischemia and pressure ulcers) as CLOTS 3 documented lower extremity skin lesions in 3 % of patients allocated to IPC and in 1 % of those allocated no IPC (p = 0.002) [8]. We will document the incidence of lower extremity ischemia, although there is no evidence that IPC causes ischemia. Additionally, based on clinical concerns that IPC use may negatively impact mobilization, we will evaluate whether IPC application reduces the chances of patients being mobilized.
We followed a pragmatic approach in selecting IPC devices and sleeve lengths, permitting treating teams at participating centers to use their own devices (sequential or non-sequential) and to select the sleeves (knee-length versus thigh-length), because at present there is no evidence for superiority of any of these choices over others.
The results of this study will contribute to a better understanding of the effectiveness of IPC in critically ill adults. In addition, the PREVENT trial will likely contribute to future clinical practice guidelines and patient safety initiatives by providing evidence that will inform practice regarding the best thromboprophylaxis for critically ill adult patients.
Trial status
The first patient was enrolled in July 2014. As of May 2016, a total of 650 patients have been enrolled from 13 centers in Saudi Arabia, Canada and Australia. The first interim analysis is anticipated in July 2016. We expect to complete recruitment of 2000 patients by 2018.
Abbreviations ACCP, American College of Chest Physicians; DVT, deep vein thrombosis; GCS, graduated compression stockings; ICU, intensive care unit; IPC, intermittent pneumatic compression; LMWH, low-molecular-weight heparin; LOS, length of stay; PE, pulmonary embolism; RCT, randomized controlled trial; UFH, unfractionated heparin; VTE, venous thromboembolism data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. AD: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. AO: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. FH: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. KB: drafting of the manuscript, acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. MA: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. HL: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. AB: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. SM: drafting of the manuscript, acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. AA: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. YM: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. GM: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. SF: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. SA: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. LA: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. MD: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published. MS: acquisition, analysis or interpretation of data, critical revision of the manuscript for important intellectual content, and approval of the final version to be published.
The Challenge of Overcoming Antibiotic Resistance in Carbapenem-Resistant Gram-Negative Bacteria: “Attack on Titan”
The global burden of bacterial resistance remains one of the most serious public health concerns. Infections caused by multidrug-resistant (MDR) bacteria in critically ill patients require immediate empirical treatment, which may not only be ineffective due to the resistance of MDR bacteria to multiple classes of antibiotics, but may also contribute to the selection and spread of antimicrobial resistance. Both the WHO and the ECDC consider carbapenem-resistant Enterobacteriaceae (CRE), carbapenem-resistant Pseudomonas aeruginosa (CRPA), and carbapenem-resistant Acinetobacter baumannii (CRAB) to be the highest priority. The ability to form biofilm and the acquisition of multiple drug resistance genes, in particular to carbapenems, have made these pathogens particularly difficult to treat. They are a growing cause of healthcare-associated infections and a significant threat to public health, associated with a high mortality rate. Moreover, co-colonization with these pathogens in critically ill patients was found to be a significant predictor for in-hospital mortality. Importantly, they have the potential to spread resistance using mobile genetic elements. Given the current situation, it is clear that finding new ways to combat antimicrobial resistance can no longer be delayed. The aim of this review was to evaluate the literature on how these pathogens contribute to the global burden of AMR. The review also highlights the importance of the rational use of antibiotics and the need to implement antimicrobial stewardship principles to prevent the transmission of drug-resistant organisms in healthcare settings. Finally, the review discusses the advantages and limitations of alternative therapies for the treatment of infections caused by these “titans” of antibiotic resistance.
Introduction
Growing antimicrobial resistance (AMR) poses a major threat to human and animal health [1]. In 2019, the US Centers for Disease Control and Prevention (CDC) published a report showing that antimicrobial resistance causes more than 2.8 million infections and 35,000 deaths in the US every year [2]. The steady rise of AMR is truly alarming, and it is predicted that if this trend is not halted, 10 million people could die from multidrug-resistant diseases by 2050, at an annual cost of well over USD 1 trillion [3]. This makes antimicrobial resistance a leading cause of death, behind only ischemic heart disease and stroke [4]. Over the past 25 years, AMR has been recognized as a serious threat to global health, as evidenced by several high-level policy initiatives, such as the European Antimicrobial Surveillance System established in 1998, and more recently the adoption of the Global Action Plan (GAP) by the WHO in 2015 as a blueprint for combating AMR [5,6]. To combat AMR, the action plan provides member states and other international stakeholders with 83 recommendations grouped into five main objectives. It emphasizes the importance of all member states developing effective national antimicrobial resistance strategies by 2017 [7]. Although 140 countries have developed their own national action plans, implementation is at different stages in different countries [7]. Moreover, only 77 of the 140 plans have been officially approved and are available in the WHO library [7]. In 2016, the USA reported the first case of a patient with an infection that was resistant to all available antimicrobials [8]. Carbapenem-resistant Klebsiella pneumoniae carrying the New Delhi metallo-β-lactamase gene (NDM-1) was isolated from the wound of a patient. One month later, the patient went into septic shock and died [8]. The emergence of carbapenem-resistant bacteria is only the latest event in a process which has been observed for almost all the antimicrobials developed over the years [9]. This is particularly worrying because these antibiotics are considered one of the last resorts in the treatment of multi-resistant Gram-negative infections [10,11]. Millions of lives have been saved around the world since the introduction of antibiotics in the 1940s [11,12]. Antibiotics are widely used to treat and prevent infections, including those that can occur after solid organ transplants, chemotherapy, or heart surgery [13][14][15]. In addition, in just over 100 years, the introduction of antibiotics is estimated to have contributed significantly to the increase in average human life expectancy of more than two decades [14]. Antibiotics have also played an important role in developing countries, helping to reduce morbidity and mortality from food-borne and other poverty-related infections [16,17]. However, as antibiotics have become more widely used, the problem of antibiotic resistance has begun to grow, putting humans at increasing risk [18]. In the past, resistance to penicillin and other antibiotics was successfully overcome through the discovery and development of new antibiotics. However, over time, this strategy has proved ineffective against resistant bacteria [19]. There has been a steady decline in the number of new antibiotics developed and approved over the past 30 years, likely due to economic and regulatory barriers, leaving fewer options to treat resistant bacteria [14,20]. The last original class of antibiotics was discovered in 1987. In recent years, only a few new antibiotics with limited clinical benefit have been approved [21].
[21].The lack of antibiotics against Gram-negative bacteria is particularly worrying.Several factors contributed to the failure of the pharmaceutical industry to develop new antibiotics [22].These include the following factors: (1) The development of new antibiotics is an extremely expensive endeavor, with a lengthy regulatory process and minimal revenues.This is because antibiotics are used for relatively short periods of time and are often curative, unlike drugs used to treat chronic diseases such as diabetes, asthma, or gastrointestinal disorders.(2) The relatively low cost of antibiotics compared to drugs used to treat neuromuscular diseases or cancer chemotherapy.(3) Lack of know-how: The research on antibiotics carried out in academia has been scaled back as a result of a lack of financial incentives due to the economic crisis.(4) Resistance can develop quickly, making it difficult to use the drug and resulting in a low return on investment for the company developing the drug [18,22].
A possible solution to the lack of antibiotic development may be to reduce the cost of the preclinical phase of the drug development process. Because there is no guarantee that a candidate molecule will have the desired efficacy and safety, this is the most expensive and risky phase for pharmaceutical companies [23].
Alternative methods of treating bacterial infections have also been investigated, including the use of bacteriophages and antibacterial peptides [24]. Despite their promise, these technologies suffer from major limitations which have so far prevented them from being used in medical products [24,25]. For now, they may serve as a valuable addition to the antibiotics already available [25].
The aim of this review was to evaluate the literature on how these 'titans' contribute to the global burden of AMR. The review also highlights the need for the implementation of antimicrobial stewardship principles to prevent the transmission of drug-resistant organisms in healthcare settings. Finally, it discusses the benefits and limitations of alternative treatments for infections caused by these titans of AMR.
Causes and Effects of Antimicrobial Resistance
More than 65% of all antibiotics are produced by saprophytic bacteria (about 50% by actinomycetes) and 20% by filamentous fungi [26]. The others are semi-synthetic, such as clindamycin (a semi-synthetic derivative of lincomycin), or synthetic, such as the sulfonamides. Naturally, in line with Darwin's theory of selection, microbes have evolved defense mechanisms against these antimicrobial substances [27]. The increase in antimicrobial resistance is due to many factors, including (1) overuse of antimicrobials, (2) inappropriate prescription of antibiotics, (3) use of antibiotics as feed additives for faster growth in livestock and poultry, (4) release of antibiotics into the environment, and (5) limited development of new antibiotics [28-30] (Figure 1). The direct link between overuse of antibiotics and the emergence and spread of antimicrobial resistance in pathogenic bacteria has been demonstrated in several studies [31,32]. While some bacteria are naturally resistant to certain antibiotics (intrinsic resistance), inappropriate use of antibiotics (e.g., taking an antibiotic for a viral infection) can promote antibiotic-resistant properties in non-pathogenic bacteria or allow resistant pathogens to multiply and replace non-pathogenic ones [33,34]. Unlike most animals, which inherit genes from their parents, bacteria can also acquire genes from their neighbors through a process known as horizontal gene transfer (HGT) [35]. By enabling the exchange of antimicrobial resistance genes between different species of bacteria, HGT is the main mechanism of the spread of bacterial resistance [36,37]. Despite international guidelines strongly discouraging the overuse of antibiotics, overprescribing continues worldwide [38,39]. In addition, several studies have shown that a significant percentage of prescribed therapies are inappropriate, mainly because the agent chosen or the dose/duration of treatment does not follow guidelines [40,41]. There is also growing evidence that subinhibitory concentrations of antibiotics can enhance gene transfer, biofilm formation, and quorum sensing [42,43], and that they select for fast-growing mutants [44]. Prolonged exposure of bacterial cells to low concentrations of antibiotics can also accelerate HGT between phylogenetically distant bacteria, as well as between non-pathogenic bacteria and pathogens [45]. A direct link between subinhibitory concentrations of antibiotics in the environment and bacterial resistance acquired through selective pressure has been demonstrated in a number of studies [18,46,47]. Numerous environmental studies have shown that fertilization and irrigation with sewage sludge introduce significant amounts of antibiotics and/or their degradation products and bioactive metabolites into water and agro-ecosystems [48-50]. Highly water-soluble, these compounds spread rapidly in aquatic and terrestrial ecosystems [48]. They are also a source of nutrients for some microorganisms. For example, P. aeruginosa, which is known to be common in domestic and hospital wastewater, produces a stronger biofilm after exposure to sub-inhibitory doses of erythromycin or sulfamethoxazole [47,50].

In livestock and poultry, antibiotics are widely used as feed additives to promote faster growth. This makes the food industry a major consumer of antibiotics and a major contributor to antibiotic resistance [51,52]. It is estimated that about 70% of the medically important antibiotics on the US market are intended for use in animals [53]. For many years, antibiotics were given to animals on intensive livestock and poultry farms to help them grow faster. Although it is no longer legal to use antibiotics for this purpose, antibiotics are still used therapeutically to prevent the spread of disease among animals in close contact after the diagnosis of clinical disease (metaphylaxis) [54]. Antibiotics used on farms can be ingested by humans through meat products. The transfer of resistant bacteria from farm animals to humans is the result of a series of events: (1) the use of antibiotics in food-producing animals kills susceptible bacteria, leading to the development of resistant bacteria; (2) antibiotic-resistant bacteria can spread to humans both directly, through contact with infected animals, and indirectly, through the food chain; (3) multidrug-resistant (MDR) bacteria can then cause severe infections with poor outcomes in humans [2,51]. Antibiotic resistance is an old problem, dating back to the beginning of antibiotic use [12]. However, the phenomenon is now reaching crisis proportions, as the emergence of antibiotic-resistant human pathogens is far outpacing the discovery of new drugs that can provide alternative treatments [21].
Carbapenem Resistance in Gram-Negative Bacteria
Carbapenems, together with penicillins and cephalosporins, belong to the beta-lactam antibiotics. However, they differ from these two classes of beta-lactams in that they have an unsaturated, sulphur-free beta-lactam ring [9]. The carbapenems (imipenem, ertapenem, meropenem, and doripenem) are considered the treatment of last resort for infections caused by MDR organisms, defined as those not susceptible to at least one agent from three or more classes of antimicrobial agents [9,55]. Carbapenems have a unique structure that confers protection against most beta-lactamases, and they exhibit concentration-independent bactericidal activity [34]. Carbapenem resistance in Gram-negative bacteria is the main contributor to multidrug resistance and is usually the last step before pan-drug resistance [56,57]. Over the past two decades, carbapenems have been overused in clinical practice on all continents to combat infections caused by an increasing number of bacterial species producing extended-spectrum beta-lactamases (ESBLs), which are able to hydrolyze almost all beta-lactam antibiotics except carbapenems [57].

Carbapenems enter Gram-negative bacteria via porins [9,56]. By binding to various penicillin-binding proteins (PBPs), they inhibit peptide crosslinking during cell wall synthesis, causing cell death [9]. Carbapenem resistance in Gram-negative bacteria can be attributed to several main mechanisms [58], including carbapenemase production, expression of efflux pumps, loss of porins, and alteration of PBPs [9]. Gram-negative bacteria such as Serratia spp., Pseudomonas spp., and Acinetobacter spp. have been found to carry carbapenemase genes on their chromosomes; these bacteria are thought to have begun producing carbapenemases under the selective pressure of antibiotics [47]. Overexpression of efflux pumps allows the bacteria to pump carbapenems out of the cell [56]. The transfer of carbapenemase genes carried by mobile genetic elements (plasmids, transposons) has allowed the horizontal spread of resistance genes even between different genera [35].

According to the Ambler molecular classification, which is based on conserved and variable amino acid motifs of the proteins, carbapenemases belong to classes A, B, and D [9,59]. Class A and D enzymes have a serine residue in the catalytic site. Class B enzymes are known as the metallo-β-lactamases (MBLs), because they use a metal ion (usually zinc) as a cofactor for the nucleophilic attack on the β-lactam ring. Class D enzymes are oxacillinases [59]. Class A includes enzymes encoded on the chromosome (SME, NmcA, SFC-1, BIC-1, PenA, FPH-1, SHV-38), on plasmids (Klebsiella pneumoniae carbapenemase (KPC), GES, FRI-1), or both (IMI) [56]. In general, class A carbapenemases can degrade beta-lactams (for which they have a high affinity) as well as carbapenems. The most important and clinically relevant class A carbapenemases are KPC and, to a lesser extent, IMI and GES [60]. Among these, the best-known, the KPCs, have spread to all regions of the world [4]. These enzymes are generally expressed by clinically relevant organisms such as P. aeruginosa and A. baumannii. Class A enzymes are inhibited by beta-lactamase inhibitors (clavulanate, sulbactam, tazobactam, avibactam) [61]; KPC is also inhibited by boronic acid and EDTA [9]. Class B carbapenemases are metallo-β-lactamases (MBLs) with the highest carbapenemase activity and, unlike class A carbapenemases, members of this group are not inhibited by β-lactamase inhibitors [59]. EDTA and sodium mercaptoacetate can inhibit the carbapenemases in this class but cannot be used therapeutically because of their toxicity [62]. The most clinically relevant MBLs are the Verona integron-encoded MBL (VIM), imipenemase (IMP), and the New Delhi MBL (NDM) [60]. Because these MBLs are usually encoded on class 1 integron-containing gene cassettes, they spread easily among bacteria [63] and can also integrate resistance genes against other classes of antimicrobial agents [56,63]. So far, 60 IMP-type carbapenemases have been described in Enterobacteriaceae, Acinetobacter spp., and Pseudomonas spp. VIM enzymes are among the most widespread MBLs, with more than 50 VIM variants reported [64]; among them, VIM-2 is the most commonly reported MBL globally [65]. The blaNDM-1 gene, encoding New Delhi metallo-beta-lactamase 1 (NDM-1), is commonly found on plasmids carrying resistance genes against many other antibiotics, including fluoroquinolones, aminoglycosides, macrolides, and sulfamethoxazole, resulting in extensive drug resistance [1,66]. Class D enzymes are called oxacillinases and include all OXA-type carbapenemases (e.g., OXA-48, OXA-72, and OXA-244) [9]. OXA-48 and its variants are the most important class D carbapenemases in clinical practice. Some of these enzymes can hydrolyze carbapenems (e.g., OXA-23 from A. baumannii) and third-generation cephalosporins (e.g., OXA-11 from P. aeruginosa) [67]. These enzymes are not inhibited by the classical inhibitors and play an important role in the acquired resistance of A. baumannii to carbapenems [68]. OXA-type enzymes are notoriously difficult to detect because they often confer only low levels of resistance to carbapenems in vitro; nevertheless, they are among the most common carbapenemases in Gram-negative bacteria and are associated with carbapenem treatment failure [69].
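The Ambler scheme above lends itself to a compact structured summary. The following Python snippet is a minimal, illustrative lookup table built only from the statements in this section (catalytic mechanism, representative enzymes, and inhibitor susceptibility per class); the field names and the helper function are hypothetical and are not part of any published tool.

```python
# Illustrative summary of the Ambler carbapenemase classes described above.
# Field names and the lookup helper are assumptions; the values reflect only
# the statements made in the text.

AMBLER_CARBAPENEMASES = {
    "A": {
        "mechanism": "serine residue in the catalytic site",
        "examples": ["KPC", "IMI", "GES", "SME", "NmcA", "SFC-1"],
        "inhibited_by": ["clavulanate", "sulbactam", "tazobactam",
                         "avibactam", "boronic acid (KPC)"],
    },
    "B": {
        "mechanism": "metallo-beta-lactamase (zinc cofactor)",
        "examples": ["VIM", "IMP", "NDM-1"],
        "inhibited_by": ["EDTA", "sodium mercaptoacetate (too toxic for therapy)"],
    },
    "D": {
        "mechanism": "serine oxacillinase",
        "examples": ["OXA-48", "OXA-23", "OXA-72", "OXA-244"],
        "inhibited_by": [],  # not inhibited by the classical inhibitors
    },
}


def classes_producing(enzyme: str) -> list[str]:
    """Return the Ambler classes whose example list contains the given enzyme."""
    return [cls for cls, info in AMBLER_CARBAPENEMASES.items()
            if enzyme in info["examples"]]


if __name__ == "__main__":
    print(classes_producing("NDM-1"))  # ['B']
```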
The Emergence of Carbapenemase-Producing Enterobacteriaceae
According to reports from the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), carbapenemase-producing Enterobacteriaceae (CPE) are by far the most pressing antimicrobial resistance threat [70]. CPE have become a major global public health threat because CPE infections in healthcare patients are difficult to treat [71]. Enterobacteriaceae are a large family of Gram-negative bacteria that includes many species commonly found as part of the normal human intestinal flora [72]. Some members of this family, including Escherichia coli, Klebsiella spp., and Enterobacter spp., are commonly isolated from clinical cultures because of their ability to cause serious nosocomial or community-acquired infections (including septicemia, pneumonia, meningitis, and urinary tract infections) [72]. About half of all cases of sepsis and more than 70% of urinary tract infections are caused by these microorganisms [4,73]. They are also the most common cause of opportunistic infections and are known to cause surgical site infections, abscesses, pneumonia, and meningitis [4]. A number of studies have shown that the main reservoirs of carbapenem-resistant Enterobacteriaceae (CRE) in healthcare facilities are colonized or infected patients, biofilms on medical devices, sink taps, and wastewater [74,75]. Prolonged intensive care unit (ICU) stay, open wounds, indwelling catheters, solid organ or stem cell transplantation, severe and prolonged granulocytopenia after cancer chemotherapy in critically ill patients, and prior antimicrobial therapy are also significant risk factors for acquisition of MDR Enterobacteriaceae [74,76]. Previous reports have shown an increasing frequency of AMR in E. coli and K. pneumoniae strains isolated from a variety of sources, including healthcare facilities, the community, and the environment [74].

There are several mechanisms by which these Enterobacteriaceae species can develop resistance to antibiotics; the most important are the production of ESBLs and AmpC enzymes, the synthesis of carbapenemases, and the loss of porins [9,39]. Three main mechanisms are responsible for Enterobacteriaceae resistance to carbapenems: carbapenemase production, efflux pump overexpression, and porin channel mutations [77]. Of these, the production of β-lactamases capable of hydrolyzing carbapenems is the most important mechanism of resistance [58]. Mutations in porins reduce or prevent carbapenem uptake (e.g., altered expression of OmpK35 and OmpK36 in K. pneumoniae and loss of OmpF and OmpC in E. coli confer high and reduced resistance to ertapenem, respectively) [78]. By recognizing antibiotics and reducing their intracellular concentration to sub-toxic levels, drug efflux pumps play a central role in the development of multidrug resistance in Enterobacteriaceae [79]. Among the various efflux systems, the resistance-nodulation-division (RND) family is an important mechanism of multidrug resistance in Enterobacteriaceae [80]. In particular, the AcrAB-TolC pump, a member of the RND family, is one of the main mechanisms of multidrug resistance in E. coli and K. pneumoniae [81]. It is worth noting that inhibitors targeting this efflux pump have been shown to reverse antibiotic resistance in Enterobacteriaceae by restoring the efficacy of several drugs [81]. However, the main mechanism of carbapenem resistance in CRE worldwide is the production of carbapenemases such as KPC, NDM, and the OXA-48 types [82]. Because these enzymes are encoded by genes carried on plasmids or other mobile genetic elements, they can be horizontally transferred to other bacterial species, making this resistance mechanism the greatest threat [35].

NDM-1 was first identified in 2008 in a strain of K. pneumoniae from a Swedish patient who had been hospitalized in New Delhi [83]. Since then, NDM carbapenemases have been found in Enterobacteriaceae isolates all over the world. Epidemiological studies indicate that intercontinental travel to endemic areas, such as India, Pakistan, and Sri Lanka, promotes the worldwide spread of clinical strains, especially K. pneumoniae and E. coli, harboring the blaNDM-1 gene [60,84]. The presence of NDM-producing Enterobacteriaceae has already been reported in several European countries and worldwide [84]. KPC-producing Enterobacteriaceae have also been reported in many regions of the world; epidemiological studies have shown that the United States and Europe, particularly Italy and Greece, are endemic areas for KPC-producing Enterobacteriaceae [85]. A total of 12 blaKPC gene variants exist globally [86], and these variants have been implicated in outbreaks in China and the Middle East [87]. According to numerous reports, OXA-48-like carbapenemases produced by Enterobacteriaceae are currently spreading very rapidly worldwide [88]. However, the incidence of OXA-48-producing CPE is probably underestimated, because most clinical microbiology laboratories do not test for these oxacillinases, which hydrolyze carbapenems only weakly and do not confer cephalosporin resistance [89]. Detection of OXA-48-producing CPE must be optimized to reduce their spread, for at least two important reasons: these enzymes are not inhibited by metal ion chelators or clavulanate, and high-level carbapenem resistance is observed (in the absence of class A and B carbapenemases) when OXA-like enzymes are combined with other resistance mechanisms such as ESBL and AmpC production [56,60].

E. coli and K. pneumoniae are two of the most common causes of CRE infections [11]. Their carbapenem resistance is often linked to the NDM gene, either alone or in combination with OXA-48 [82]. Class A carbapenemase producers have historically been susceptible to polymyxins, tigecycline, or aminoglycosides (especially gentamicin) [90]; however, resistance rates to all of these drugs are steadily increasing [4]. Fortunately, the combination of ceftazidime-avibactam is effective against OXA-48-producing strains. This combination has good activity against the KPC and OXA-48 enzymes, but it lacks activity against the MBLs [91]. Therefore, the aztreonam-avibactam combination is required in the presence of the NDM resistance mechanism. A key strategy for overcoming beta-lactam resistance conferred by metallo-beta-lactamases in Enterobacteriaceae responsible for nosocomial infections is the combination of ceftazidime-avibactam and aztreonam [1,92].
The Emergence of Carbapenem-Resistant Acinetobacter baumannii (CRAB) Infections
A. baumannii is a Gram-negative coccobacillus that can be found throughout the environment, especially in soil and water [93]. It can also be found on the skin and in the respiratory and gastrointestinal tracts of healthy people [93]. Carbapenem-resistant A. baumannii (CRAB) is an opportunistic pathogen that causes serious infections in healthcare settings [94]. The ability of A. baumannii to survive for long periods of time on living and non-living surfaces and its resistance to several antibiotics have made it a major public health problem worldwide, with the World Health Organization listing CRAB as a priority 1 pathogen for which new therapies are urgently needed [9,95]. CRAB rarely causes community-acquired infections, but it is emerging as a leading cause of healthcare-associated infections worldwide, including bloodstream, lung, wound, and urinary tract infections [94]. However, because humans can be colonized with this microorganism, distinguishing colonization from infection is challenging [96]. For the same reason, it is difficult to determine whether poor clinical outcomes are due to suboptimal antibiotic therapy or to underlying host factors (e.g., in patients with acute kidney injury) [96]. In addition, CRAB infections are difficult to treat because resistance to carbapenem antibiotics is usually associated with resistance to most other antibiotics expected to be effective against the wild-type strain [97]. The main mechanisms of drug resistance in A. baumannii are reduced membrane permeability (decreased porin permeability or increased efflux), modification of drug targets, and enzymatic inactivation of the drug by hydrolysis or formation of inactive derivatives [98]. Beyond the mechanisms described above, A. baumannii, like other MDR pathogens, exhibits remarkable genetic plasticity that allows it to rapidly mutate and re-adapt [99]. The ability of A. baumannii to form mature biofilms on medical devices contributes significantly both to its survival under adverse environmental conditions and to its exceptional antibiotic resistance [95]. Recent studies have shown that available therapies are only partially effective in reducing mortality in patients with invasive CRAB infection, which is the fourth leading cause of death attributable to antimicrobial resistance worldwide [100]. The susceptibility rates of A. baumannii to the carbapenems vary according to geographical region and are highest in Asia, Eastern Europe, and Latin America [101]. As mentioned above, although antibiotic resistance in CRAB is mediated by complex mechanisms, carbapenem resistance is commonly associated with horizontal transfer of genes encoding oxacillinases (OXA-24/40, OXA-23) and sometimes also metallo-β-lactamases and serine carbapenemases [102]. Previous studies have shown that the rate of resistance to key antibiotics, such as ampicillin-sulbactam and colistin, is increasing worldwide as a result of the spread of predominant CRAB clonal types [100]. In addition, the majority of CRAB infections are pneumonias, which often fail to respond to antimicrobial agents that are active against CRAB in vitro but inactive in vivo because of poor lung penetration and dose-dependent toxicities [100].

Although the guidelines for the treatment of invasive CRAB infections differ between organizations (the European ESCMID and ESICM guidelines and the American IDSA guidelines), they agree on the following points: (a) combination therapy with at least two agents (e.g., high-dose ampicillin-sulbactam in combination with another agent such as tigecycline or a polymyxin) is recommended for the treatment of CRAB infections; (b) combination therapy with polymyxins and meropenem is not recommended; (c) the use of cefiderocol, a new FDA-approved beta-lactam with in vitro activity against CRAB isolates, should be limited to CRAB infections refractory to other antibiotics and should be used as part of a combination regimen; (d) meropenem or high-dose imipenem-cilastatin, as well as rifamycins or nebulized antibiotics, are not recommended for the treatment of CRAB infections; and (e) because polymyxin resistance develops rapidly during monotherapy, polymyxin B must be used in combination with at least one other agent for the treatment of CRAB infections [100,103]. Several adjunctive therapies have been proposed for the treatment of CRAB infections, but there are currently limited data to determine whether these therapies provide clinical benefit in patients with CRAB infections [96,104].
Emergence of Pseudomonas aeruginosa with Difficult-to-Treat Resistance

P. aeruginosa is an aerobic, non-fermenting, Gram-negative bacillus that is commonly associated with nosocomial infections [105]. It can cause infections at many anatomical sites, including the urinary tract, respiratory tract, soft tissues, gastrointestinal tract, and blood, especially in patients with weakened immune systems, such as those with cancer, cystic fibrosis, burns, tuberculosis, or AIDS [106]. In recent years, the prevalence of MDR P. aeruginosa isolates with limited treatment options has increased worldwide [107]. With regard to antimicrobial therapy, the definitions applied to resistant P. aeruginosa have evolved in recent years. In 2008, MDR P. aeruginosa was defined as non-susceptibility to at least one antibiotic in at least three classes, and carbapenems were still the treatment of choice. By 2012, resistance to carbapenems had increased, and isolates were classified as extensively drug-resistant (XDR) or pan-drug-resistant (PDR) when they also showed resistance to polymyxins, especially colistin, and to tigecycline [108]. Finally, in 2018, the definition of P. aeruginosa with "difficult-to-treat" resistance (DTR) was adopted, referring to isolates showing resistance to all of the following antibiotics: piperacillin-tazobactam, ceftazidime, cefepime, aztreonam, meropenem, imipenem-cilastatin, ciprofloxacin, and levofloxacin [109]. P. aeruginosa infections are often a major therapeutic challenge because of both innate and acquired resistance to many antibiotics. The former involves overexpressed efflux pumps and low outer membrane permeability, while the latter is due to the acquisition or mutation of genes that confer resistance to several classes of antibiotics, including beta-lactams, aminoglycosides, and fluoroquinolones [110]. In recent years, there has been a gradual increase in P. aeruginosa isolates from healthcare-associated infections showing resistance to carbapenems, which have been one of the main treatment options for serious P. aeruginosa infections for about a decade [111]. Carbapenem resistance in P. aeruginosa can also occur in the absence of carbapenemases, through the activation of other mechanisms, such as loss of the outer membrane porin OprD associated with overexpression of efflux pumps or of AmpC [112]. While in the USA the resistance of P. aeruginosa to carbapenems is mainly due to chromosomal changes affecting the porin OprD and the presence of specific efflux pumps, outside the USA the resistance of this pathogen to carbapenems is mostly due to the acquisition of carbapenemases [60,113]. In addition to carbapenemase-mediated resistance, a growing concern is the emergence of extensively drug-resistant (XDR) P. aeruginosa infections associated with "high-risk epidemic clones" circulating in hospitals around the world. These clones carry transmissible genetic elements that contain multiple resistance determinants, including those encoding selected carbapenemases and ESBLs that confer resistance to ceftolozane/tazobactam, considered the treatment of last resort for infections caused by XDR P. aeruginosa [112].
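To make the successive resistance definitions above concrete, the short Python sketch below classifies a hypothetical susceptibility profile according to the 2018 DTR criterion quoted in the text (resistance to all eight listed agents). The function name, data format, and example antibiogram are illustrative assumptions, not part of any published guideline or software.

```python
# Hedged illustration of the 2018 "difficult-to-treat resistance" (DTR)
# definition for P. aeruginosa described above. The input format and helper
# name are hypothetical; the drug list is taken from the text.

DTR_PANEL = [
    "piperacillin-tazobactam", "ceftazidime", "cefepime", "aztreonam",
    "meropenem", "imipenem-cilastatin", "ciprofloxacin", "levofloxacin",
]


def is_dtr(susceptibility: dict[str, str]) -> bool:
    """Return True if the isolate is resistant ('R') to every drug in the DTR panel."""
    return all(susceptibility.get(drug) == "R" for drug in DTR_PANEL)


if __name__ == "__main__":
    # Hypothetical antibiogram: resistant to everything except levofloxacin.
    isolate = {drug: "R" for drug in DTR_PANEL}
    isolate["levofloxacin"] = "S"
    print(is_dtr(isolate))   # False -- not DTR
    isolate["levofloxacin"] = "R"
    print(is_dtr(isolate))   # True  -- meets the DTR definition
```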
New Weapons in the War against "Titans"
In recent years, only one new antibiotic, cefiderocol, an injectable siderophore cephalosporin, has been approved to treat complicated urinary tract infections and pneumonia caused by the WHO's most critical superbugs, including A. baumannii, P. aeruginosa, and Enterobacteriaceae [114]. The rapid spread of resistance to new antibiotics and the slow rate of discovery of new classes of antibiotics therefore highlight the need for innovative therapeutic options. To reduce the risk of inducing bacterial resistance, several alternatives and adjuncts to antimicrobials are being evaluated, including nanoparticles (NPs), antimicrobial peptides (AMPs), bacteriophages, the CRISPR/Cas system, and probiotics (Table 1 and Figure 2).

Nanoparticles

Nanoparticles are small particles between 1 and 100 nm in size. They are being investigated for a wide range of medical applications, from drug delivery systems and imaging agents to therapeutics [115]. Based on their composition, NPs are generally classified into three classes: organic, carbon-based, and inorganic. Of these, metallic NPs appear to be the most promising [115]. They can act directly as antibacterial agents (e.g., titanium dioxide (TiO2), zinc oxide (ZnO)) or as drug delivery systems (e.g., liposomes) [116]. The use of nanoparticles as a delivery system to target drug-resistant bacteria makes it possible to address MDR by exploiting the antibacterial activity of both the carried antibiotic and the NPs themselves [116,117]. Induction of oxidative stress, release of metal ions, and non-oxidative mechanisms are the main antibacterial mechanisms of NPs. The activation of multiple mechanisms of action by NPs broadens their spectrum of antimicrobial activity and hinders the development of bacterial resistance. Previous studies have shown that NPs are effective against WHO critical priority pathogens [117].
AMPs
AMPs are small, positively charged, amphipathic molecules, typically consisting of 12-50 amino acids. Their rapid bactericidal action, low propensity to induce resistance, and multifunctional mechanisms of action make them one of the most promising alternatives to antibiotics [118]. The bactericidal action of AMPs involves two main mechanisms: depolarization and permeabilization of the bacterial membrane, or inhibition of essential intracellular functions without membrane rupture (e.g., by nucleic acid binding) [119]. A large number of antimicrobial peptides have been identified, each with a unique spectrum of activity and mechanism of action [119]. Nevertheless, clinical use of AMPs remains limited by their poor stability and high susceptibility to protease degradation; other obstacles include the high cost of their extraction, their low bioavailability, and their cytotoxicity [120].
Phage Therapy
As evidenced by many previous publications on the subject, phage therapy, i.e., the use of bacteriophages as a precision therapy for the treatment of bacterial infections, has received increasing attention over the last two decades [121]. The use of bacteriophages as antimicrobials dates back more than 100 years, and phage therapy was used worldwide until the Second World War, after which the rise of antibiotics gradually restricted the use of phages [122]. Increasing antibiotic resistance has led to renewed interest in bacteriophage therapy; in particular, phage therapy has been approved by the US Food and Drug Administration for the treatment of infections caused by multidrug-resistant bacteria that do not respond to available antibiotics [122]. Because bacteria can rapidly develop phage resistance, phage cocktails are strongly preferred in phage therapy and have been used successfully in the treatment of life-threatening infections in humans [122]. However, further clinical research is needed to support the use of phage therapy in routine clinical practice.
CRISPR/Cas System
The CRISPR-Cas gene editing system is used in research laboratories to target and eliminate plasmids that carry antibiotic resistance genes, thus preventing the spread of antibiotic resistance [123]. This technology has shown immediate promise in eliminating resistance in a wide range of bacteria and has the potential to be revolutionary in the future [123]. Using phage- or plasmid-based delivery vehicles, it has been successfully used to remove plasmids encoding gentamicin resistance genes from target bacteria [124].
Probiotics
Several studies have shown promising results for a range of probiotics used to reduce the risk of infection and the use of antibiotics [125]. However, despite a large body of evidence showing the promising antimicrobial activity of probiotics, further studies are needed to define the doses, clinical efficacy, safety, and mechanisms of action of probiotics in humans [125,126]. This may help to reduce the development of multi-resistant bacteria.
Conclusions
Antimicrobial resistance is ranked by the WHO as one of the top 10 global public health threats. A large and comprehensive study using data from 204 countries estimated that about 1.75 million people died from drug-resistant infections in 2019, out of 4.95 million deaths related to antimicrobial resistance, making drug-resistant infections more deadly than HIV/AIDS or malaria. This number could increase dramatically in the coming years, with AMR killing up to 10 million people a year by 2050. AMR also poses economic challenges, as it could reduce GDP by at least USD 3.4 trillion annually and push 24 million more people into extreme poverty over the next decade.

Carbapenem-resistant Enterobacteriaceae (CRE), A. baumannii (CRAB), and P. aeruginosa (CRPA) have been identified by the WHO as critical priority bacteria for which novel therapeutics are urgently needed. These bacteria, along with those on the WHO's priority pathogen list (vancomycin-resistant Enterococcus faecium, third-generation cephalosporin-resistant Enterobacter spp., and methicillin-resistant Staphylococcus aureus), belong to the group of pathogens known as ESKAPE, which are responsible for the majority of life-threatening hospital-acquired infections. CRE, CRAB, and CRPA are emerging as a major public health threat, causing healthcare-associated infections and high mortality rates due to their resistance to a wide range of antibiotics. Widespread genome sequencing has revealed that resistance is often acquired through mobile genetic elements, including insertion sequences (IS), transposons, and conjugative plasmids. Some strains have innate resistance to carbapenems; others carry mobile genetic elements that drive the production of carbapenemases, which hydrolyze a broad variety of β-lactams, including carbapenems, cephalosporins, penicillins, and aztreonam. In addition, the co-localization of carbapenemase genes with other resistance genes in these pathogens further limits the treatment options for patients.

Although the development of resistance in microorganisms occurs naturally, the misuse and overuse of antimicrobials in human health, food animal production, and agriculture are the main drivers of antimicrobial resistance. In addition, several studies have shown that consumption of contaminated food and improper food handling can expose humans to antimicrobial resistance. There have been numerous calls for strategies designed to prevent the further development and spread of AMR, including the World Health Organization's (WHO) approval of the Global Action Plan on Antimicrobial Resistance in 2015; in 2019, AMR was identified as one of the top 10 public health threats facing humanity.

Given the high prevalence, associated morbidity and mortality, and limited treatment options, we have focused on CRE, CRAB, and CRPA infections, which also have the potential to cause hospital outbreaks worldwide and to contribute to the spread of resistance. These infections have been shown to be preceded by colonization in almost all cases; therefore, early diagnosis of colonization with CRE, CRAB, or CRPA could help to identify the patients most at risk of developing infection. Surveillance is essential and consists of identifying carbapenem resistance in CRE, CRAB, and CRPA isolates to prevent transmission of these pathogens to other patients.

There is a need for alternative treatment options for bacterial infections; we cannot rely on antibiotics alone. It is important to understand that the discovery of one or a few new antibiotics will not be the 'solution' to antibiotic resistance. However, since we will still need antibiotics in the short and medium term, it is important to learn from past mistakes in order to preserve every new antibiotic that comes onto the market. While it is true that there is potentially no antibiotic to which bacteria cannot become resistant, it is also true that careful use will slow down the process. This means that as new antibiotics come onto the market, they must be used wisely, or they will quickly lose their effectiveness as bacteria develop resistance.
Figure 1. Transmission pathways for resistant bacteria between food animals, humans, and the environment. The inappropriate use of antibiotics (1) in humans (2) (due to inappropriately prescribed therapy) and animals (3) (used as feed additives) can create a selective pressure favoring antibiotic resistance properties in non-pathogenic bacteria or allow resistant pathogens to proliferate and replace non-pathogenic ones in animals and humans (4). Resistant bacteria and their genes can reach the environment (5), which acts as a reservoir where mobile genetic elements carrying resistance genes are exchanged with the environmental bacterial flora by horizontal gene transfer and spread to other human and animal hosts by various routes (soil, water, air). Resistant bacteria can also spread from animals to humans through the food chain (6).
Table 1. Novel control strategies to tackle AMR pathogens.
Histone Deacetylase Inhibitors Induce Cell Death Selectively in Cells That Harbor Activated kRasV12: The Role of Signal Transducers and Activators of Transcription 1 and p21
Histone deacetylase (HDAC) inhibitors (HDACi) show potent and selective antitumor activity despite the fact that they induce histone hyperacetylation in both normal and tumor cells. In this study, we showed that the inducible expression of kRasV12 in nontransformed intestinal epithelial cells significantly lowered the mitochondrial membrane potential (MMP) and sensitized cells to HDACi-induced apoptosis. Consistent with our finding that colon cancer cell lines with mutant Ras have reduced expression of signal transducers and activators of transcription 1 (STAT1), we showed that inducible expression of mutant Ras markedly decreased both basal and inducible expression of STAT1, a transcription factor with tumor suppressor activity. To investigate whether reduced expression of STAT1 in cells that harbor mutant Ras contributes to their increased sensitivity to HDACi, we silenced the expression of STAT1 in HKe-3 cells with small interfering RNA. Despite the fact that silencing of STAT1 was not sufficient to alter the MMP, STAT1 deficiency, like Ras mutations, sensitized cells to apoptosis induced by HDACi. We showed that the induction of p21 by HDACi was significantly impaired in HKe-3 cells with silenced STAT1 expression and showed that the ability of butyrate to activate p21 transcription was diminished in STAT1-deficient HKe-3 cells. Finally, we used cells with targeted deletion of p21 to confirm that p21 protects cells from butyrate-induced apoptosis, strongly suggesting that in these cells STAT1 deficiency promotes butyrate-induced apoptosis through impaired induction of p21. Our data therefore establish that Ras mutations, and consequent reduction in the expression of STAT1, underlie the increased susceptibility of transformed cells to undergo apoptosis in response to treatment with inhibitors of HDAC activity. [Cancer Res 2007;67(18):8477–85]
Introduction
Treatment of tumor cells with histone deacetylase (HDAC) inhibitors (HDACi), drugs that remodel chromatin and thereby modulate the expression of several genes, has been shown to result in growth arrest, differentiation, and/or apoptosis of many cancer cell lines (1,2). In contrast, normal cells seem to be relatively resistant to HDACi both in vitro and in vivo. The basis for the selective toxicity of HDACi for transformed cells remains unclear. We previously found that mutations in k-Ras, a common genetic change in human tumors, sensitize colon cancer cells to butyrate, an inhibitor of HDAC activity (3). Oncogenic k-Ras has recently been shown to be phosphorylated by protein kinase C, which promotes its translocation to mitochondria, a central organelle in apoptosis, and its association with BCL-x, a potent modulator of programmed cell death (4). In this study, we addressed the significance of constitutive Ras signaling, and of Ras downstream target genes, in the responsiveness of colon cancer cells to HDACi.
We reported earlier that a subset of colon cancer cell lines that harbor mutant k-Ras has reduced expression of signal transducers and activators of transcription 1 (STAT1) and of STAT1 target genes and showed that targeted deletion of the mutant Ras allele in HCT116 cells was sufficient to restore the expression of STAT1 (5). We showed that expression of mutant Ras inhibited the basal activity of the STAT1-driven reporter gene and markedly inhibited its responsiveness to IFN-γ (5), showing that activated Ras interferes with STAT1-dependent transcription. This is likely to underlie the decreased expression of IFN-dependent genes in cells harboring an activated k-Ras mutation.
Consistent with our results, genome-wide analysis of mast cells transformed with the H-Ras oncogene revealed strong downregulation of several IFN-inducible genes (6), and constitutive signaling by phosphatidylinositol 3-kinase and mitogen-activated protein kinase in cells harboring B-Raf mutations has been shown to down-regulate Janus-activated kinase/STAT signaling (7). These findings are significant because, in contrast to STAT3 and STAT5, which are frequently found constitutively activated in leukemias and in solid tumors (8,9), levels of STAT1 are often found reduced in primary tumors and in established cancer cell lines (5,10,11). Although our data showed that Ras mutations are sufficient to inhibit STAT1 expression, STAT1 and its target genes have also been shown to be epigenetically silenced by methylation after cellular immortalization (10), pointing to multiple mechanisms of STAT1 down-regulation in transformed cells.
STAT1 is a transcription factor that regulates the expression of several genes involved in proliferation, apoptosis, and differentiation (12), including the cyclin-dependent kinase (cdk) inhibitor p21, which harbors conserved STAT1-responsive elements in its promoter region (13). In many cell lines, the ability of STAT1 to induce the expression of p21 seems to be fundamental for STAT1-mediated growth arrest. For example, IFN-γ failed to inhibit growth of STAT1-deficient U3A cells but regained antiproliferative properties on STAT1 reintroduction (13). In addition, hypermethylation of the STAT-responsive element located within the CpG island in the p21 promoter was associated both with decreased constitutive expression of p21 and with IFN-γ-induced activation of p21 in rhabdomyosarcoma cell lines (14). STAT1-deficient mice are prone to develop epithelial tumors, confirming the tumor suppressor properties of STAT1 (15), and we showed that a deficiency in p21 promotes formation of intestinal tumors initiated by mutation in the APC tumor suppressor gene (16).
We have shown earlier that colon cancer cell lines that harbor Ras mutations have reduced levels of STAT1 (5) and that constitutive Ras signaling modulates the responsiveness of cells to the chemopreventive agent butyrate (3). In this study, we used nontransformed intestinal epithelial cells (IEC) with inducible expression of oncogenic kRasV12, as well as a colon cancer cell line with silenced STAT1 expression, to dissect the role of mutant Ras and STAT1 in the responsiveness of cells to inhibitors of HDAC activity. We showed that silencing of STAT1 expression, like Ras mutations, promotes apoptosis in response to inhibitors of HDAC activity, suggesting that Ras modulates apoptosis, at least in part, through down-regulation of STAT1 expression.
Inhibitors of HDAC activity are promising chemotherapeutic compounds, and several are in clinical trials for a number of malignancies (2). An important characteristic of HDACis is that they induce apoptosis preferentially in transformed cells. Our data show that mutations in k-Ras, and the subsequent down-regulation of STAT1 and perturbed activation of p21 in STAT1-deficient cells, may constitute the molecular basis for the selectivity of HDACis.
Materials and Methods
Cell culture and Western blot analysis. The HCT116 colorectal carcinoma cell line and its clonal derivative HKe-3, which lacks the mutant k-Ras allele (17), were cultured under standard conditions in MEM supplemented with 10% FCS and antibiotics. IEC-iKRas cells, a generous gift from Raymond DuBois (Vanderbilt University, Nashville, TN), were grown in DMEM supplemented with 10% FCS; stimulation with isopropyl-β-D-thiogalactopyranoside (IPTG; 5 mmol/L) was done for 24 h. Western blot analysis was done using standard procedures. Briefly, 50 μg of total cell lysates were fractionated in 10% SDS-polyacrylamide gels and transferred to nitrocellulose membrane. The membranes were incubated with antibodies for 1 h at room temperature or overnight at 4 °C, and enhanced chemiluminescence (Amersham) was used for visualization of immune complexes.
Immunofluorescence. Cells were grown on chamber slides, serum starved for 16 h, and either left untreated or treated as indicated. Cells were fixed in ice-cold methanol-acetic acid solution (95:5, v/v) for 20 min at −20 °C. Incubation with an antibody that recognizes activated caspase-3 (Cell Signaling) was done for 1 h at 37 °C. Slides were washed with PBS and incubated with a secondary anti-rabbit antibody conjugated to FITC for 45 min at 37 °C. Samples were examined with a fluorescent microscope; images were acquired with a SPOT CCD camera and analyzed by SPOT software.
Transient transfections and reporter gene assays. Cells were transfected with a pool of small interfering RNA (siRNA) specific for STAT1 or IRF1 (Dharmacon) using the calcium phosphate method (ProFection Mammalian Transfection System, Promega) as we described before (18). Transient transfection experiments using the 2.4-kb genomic fragment containing the p21 promoter cloned upstream of the LUC reporter gene (19) were done in 12-well plates in the presence or absence of STAT1-specific siRNA.
Apoptosis assay. Cells were resuspended in hypotonic buffer (0.1% Triton X-100, 0.1% sodium citrate) and stained with propidium iodide (50 μg/mL) for 4 h at 4 °C as described before (20). Samples were filtered through a nylon mesh (40-μm pore size) and analyzed by flow cytometry. Cell cycle distribution and the extent of apoptosis (cells with a sub-G1 DNA content) were analyzed by the ModFit software. Mitochondrial membrane potential (MMP) was determined by flow cytometry using the fluorescent dye JC1 (Invitrogen). Cells were stained with 1 μmol/L JC1 for 1 h at 37 °C, washed with PBS, and analyzed by fluorescence in the FL2 channel.
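The sub-G1 quantification described above is normally performed in dedicated software (the authors used ModFit). Purely as an illustration of the underlying arithmetic, the NumPy sketch below counts the fraction of events whose DNA-content signal falls below a chosen G1 threshold; the threshold value, array names, and simulated data are hypothetical and are not part of the authors' analysis.

```python
# Illustrative only: estimate the apoptotic (sub-G1) fraction from a
# propidium iodide DNA-content histogram. This is not ModFit; the G1
# threshold is an arbitrary placeholder that would normally be set from
# the G0/G1 peak of untreated control cells.
import numpy as np


def sub_g1_fraction(dna_content: np.ndarray, g1_threshold: float) -> float:
    """Fraction of events with DNA content below the G1 threshold."""
    dna_content = np.asarray(dna_content, dtype=float)
    return float(np.mean(dna_content < g1_threshold))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated fluorescence values: a G1 peak around 200, a G2/M peak
    # around 400, and a sub-G1 tail from fragmented apoptotic nuclei.
    events = np.concatenate([
        rng.normal(200, 15, 7000),   # G1
        rng.normal(400, 25, 2000),   # G2/M
        rng.uniform(20, 150, 1000),  # sub-G1 (apoptotic)
    ])
    print(f"sub-G1 fraction: {sub_g1_fraction(events, g1_threshold=160):.2%}")
```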
Results
Activation of oncogenic Ras in nontransformed epithelial cells is sufficient to prime cells to undergo apoptosis in response to inhibitors of HDAC activity. We recently reported that the extent of apoptosis in response to butyrate, an inhibitor of HDAC activity, is increased in colon cancer cell lines that carry mutant Ras (3). To determine whether oncogenic Ras plays a general role in apoptosis induced by HDACi, we used nontransformed IEC-iKRas intestinal cells with inducible expression of oncogenic k-Ras (21). As described for the IEC-iKRas cells (21), expression of oncogenic Ras by addition of IPTG was extensive and tightly regulated, and as expected, cells with activated Ras displayed enhanced proliferation (Supplementary Fig. S1A and B).
To determine whether induction of oncogenic Ras alters the response of cells to HDACi, we treated cells with 3 mmol/L butyrate or 1 μmol/L suberoylanilide hydroxamic acid (SAHA) for 24, 48, and 72 h in the presence or absence of IPTG (5 mmol/L). The results were expressed as growth index, which is the ratio of the number of cells in treated cultures to the number of control cells. As shown in Fig. 1A, both butyrate and SAHA induced death preferentially in cells that were induced with IPTG to express mutant Ras. Consistent with these data, both SAHA and butyrate reduced the proportion of cells in S phase preferentially in cells expressing oncogenic Ras (Fig. 1B). Induction of Ras also sensitized cells to apoptosis induced by trichostatin A (TSA) and apicidin, two structurally unrelated inhibitors of HDAC activity (data not shown).
These findings are consistent with our published report that HCT116 cells, which harbor an activating mutation in k-Ras, respond to butyrate with increased apoptosis when compared with HKh2 and HKe-3 cells, two clones derived from HCT116 cells in which the mutant Ras allele has been deleted (3). Therefore, our findings show that activation of Ras during transformation of colonic epithelial cells is sufficient to sensitize cells to apoptosis induced by several HDACis.
Because mutant k-Ras has recently been shown to localize to mitochondria (4), we tested whether the induction of kRasV12 modulates mitochondrial functions, such as the MMP, a key indicator of cell viability. As shown in Fig. 2A, the induction of mutant Ras significantly reduced the MMP in IEC-iKRas cells but not in the parental IEC6 cell line, excluding the possibility that the decrease in MMP was caused by the addition of IPTG. In addition, the decrease in the MMP was progressive, becoming apparent only 24 h after addition of IPTG (Fig. 2B), which coincided with the kinetics of induction of mutant Ras in these cells (data not shown; ref. 21). Consistently, we showed that HCT116 cells, which carry mutant Ras, have a lower resting MMP when compared with HKe-3 cells, the isogenic clone with a targeted deletion of the mutant Ras allele (data not shown).
We next examined whether the presence of oncogenic Ras perturbs changes in the MMP in response to HDACi in IEC-iKRas cells. As shown in Fig. 2C, treatment of cells with butyrate or SAHA increased the MMP in the absence of IPTG but decreased it in cells induced by IPTG. Because dissipation of the MMP is an initial step in triggering an apoptotic cascade, this observation directly links the expression of mutant Ras to the ability of HDACis to induce apoptosis. Moreover, these data are consistent with our findings that differences in the intrinsic MMP are linked to the biological responsiveness of cells to butyrate (22).
Silencing of STAT1 in a colorectal cancer cell line promotes butyrate-induced apoptosis. We have previously reported that the expression of STAT1 is reduced in colon cancer cell lines that harbor oncogenic Ras mutations (5). Using IEC-iKRas cells with inducible activated Ras (Supplementary Fig. S1), we confirmed that the inducible expression of mutant Ras by IPTG is sufficient to down-regulate both basal and IFN-γ-inducible STAT1 expression (Fig. 3A). We validated functional Ras signaling on IPTG induction by showing phosphorylation of extracellular signal-regulated kinase (ERK) 1/ERK2 in IPTG-treated cells (Fig. 3B). These results therefore confirmed that Ras-induced transformation of epithelial cells leads to down-regulation of STAT1 expression, establishing STAT1 as a potentially important effector of Ras signaling.
To determine whether the reduced levels of STAT1 in Ras-transformed cells contribute to their enhanced sensitivity to butyrate, we silenced STAT1 expression. However, silencing of STAT1 in IECs was not very efficient and was, in addition, transient, which did not allow us to use this system to show the role of STAT1 in HDACi-induced apoptosis. Therefore, we silenced STAT1 expression in HKe-3 cells, a cell line derived from HCT116 cells by targeted deletion of the mutant Ras allele (17). STAT1 expression was silenced by RNA interference using a pool of siRNAs directed against STAT1 as we described before (23). We achieved 80% to 90% inhibition of both the basal and IFN-γ-inducible expression of STAT1 at 25 nmol/L STAT1 siRNA (Fig. 4A).
Silencing persisted for at least 136 h (data not shown), which allowed us to investigate the biological significance of STAT1 deficiency in HKe-3 cells. We showed that silencing of STAT1 in HKe-3 cells was not sufficient to alter the MMP (Fig. 4B). Consistent with our data shown in Fig. 2, HCT116 cells, which harbor mutant Ras, have a lower MMP compared with the HKe-3 cells with targeted deletion of the mutant Ras allele (Fig. 4B), showing that complex changes in Ras signaling are required to alter the MMP.

Figure 1. Oncogenic activation of RasV12 sensitizes cells to apoptosis induced by HDACi. A, cells were treated with 3 mmol/L butyrate (Bu) or 1 μmol/L SAHA and the number of cells was determined 24, 48, and 72 h after treatment. Growth index (GI) represents the ratio between the number of cells in treated and untreated (CTRL) cultures. Pictures were taken 48 h after treatment. B, the proportion of cells in S phase in cells treated with butyrate or SAHA for 24 h was determined by flow cytometry. Bars, calculated from three independent experiments.
We next compared the extent of butyrate-induced apoptosis in cells transfected with nontargeted siRNA or siRNA specific for STAT1. We previously reported that HKe-3 cells are, due to a targeted deletion of the mutant Ras allele, relatively resistant to butyrate and that they express relatively high levels of STAT1 (3). As shown in Fig. 4C and D, silencing of STAT1 was sufficient to sensitize HKe-3 cells to butyrate-induced apoptosis. These data suggest that reduced expression of STAT1 in Ras-transformed cells is at least in part responsible for enhanced apoptosis of these cells in response to butyrate.
Butyrate, through its ability to induce growth arrest, differentiation, and apoptosis in transformed cells, acts as a physiologic chemopreventive agent. Next, we determined whether STAT1 also regulates the responsiveness of cells to an important pharmacologic chemopreventive agent, sulindac. Control cells, or cells transfected with STAT1 siRNA, were treated with sulindac sulfide
for 24 h and the extent of caspase-3 activation was determined by immunofluorescence using an antibody that specifically recognizes cleaved, activated caspase-3. As shown in Supplementary Fig. S2A, the extent of activation of caspase-3 was significantly higher in cells with silenced STAT1 expression. Consistently, the amount of cleaved poly(ADP-ribose) polymerase (PARP), a caspase substrate, was enhanced in STAT1-deficient cells (Supplementary Fig. S2B).
These data established that STAT1 can protect cells from apoptosis not only in response to HDACi but also to a pharmacologic inducer of programmed cell death. However, our data also revealed that silencing of STAT1 is not sufficient to lower the MMP (Fig. 4B). Therefore, how does STAT1 protect HKe-3 cells from apoptosis in response to HDACi?
Silencing of STAT1 in HKe-3 cells interferes with induction of p21 in response to butyrate. p21 is known to play an important role in butyrate-induced growth arrest as well as in butyrate-induced apoptosis (24,25). Because STAT1 has been shown to regulate transcription of the p21 gene (13), we determined whether STAT1 deficiency perturbs the induction of p21 in response to butyrate and sulindac sulfide in IECs. HKe-3 cells, which have elevated levels of STAT1 compared with HCT116 cells because the mutated Ras allele has been deleted, were transfected with nontargeting siRNA or siRNA specific for STAT1 and were either left untreated or treated with 3 mmol/L butyrate or 150 μmol/L sulindac sulfide. The levels of cleaved PARP, a marker of apoptosis, and the levels of p21 were determined by immunoblotting 24 and 48 h after treatment. Consistent with results shown in Fig. 4 and Supplementary Fig. S2, STAT1-deficient cells underwent enhanced apoptosis in response to both butyrate and sulindac sulfide, as shown by enhanced cleavage of PARP in cells with silenced expression of STAT1 (Fig. 5A). In addition, we showed that p21 induction in response to butyrate and sulindac sulfide was significantly impaired in STAT1-deficient cells, showing that STAT1 plays a crucial role in p21 induction in response to both butyrate and sulindac sulfide.
To determine whether STAT1 is required for transcriptional activation of p21 in response to butyrate, we transfected cells with a p21 promoter reporter construct in the presence of nontargeting siRNA or siRNA specific for STAT1. Butyrate induced the activity of the p21 promoter in a dose-dependent manner, and we showed that silencing of STAT1 severely impaired the transcriptional activation of the p21 promoter in response to butyrate (Fig. 5B). This result established that STAT1 plays an important role in transcriptional activation of p21 in response to butyrate.
One of the important downstream effectors of STAT1 is IRF1, a transcription factor that has been shown to regulate the activity of the p21 promoter (26) and that is expressed, as we showed, in a STAT1-dependent manner (23). To determine whether IRF1 mediates the ability of STAT1 to regulate p21, we silenced IRF1 in HKe-3 cells and examined the responsiveness of IRF1-deficient cells to butyrate. Our data revealed that, in contrast to STAT1, silencing of IRF1 did not interfere with p21 induction in response to butyrate (Supplementary Fig. S3A and B) and did not modulate butyrate-mediated apoptosis (data not shown). These data exclude the possibility that STAT1 regulates p21 activation through induction of IRF1 and support our hypothesis that STAT1 may be a direct regulator of p21 transcription. Indeed, our preliminary data suggest that overexpression of STAT1 activates p21 promoter activity (data not shown).
p21 protects cells from apoptosis induced by butyrate and other inhibitors of HDAC activity. The best understood biological activity of butyrate is inhibition of HDAC activity (27). Therefore, we next determined whether two structurally unrelated inhibitors of HDAC activity, TSA and SAHA, also require STAT1 for the induction of p21 and whether STAT1 deficiency modulates their biological activity. Cells were transfected with nontargeting siRNA or siRNA specific for STAT1 as described earlier and treated with butyrate (3 mmol/L), TSA (0.5 or 1 μmol/L), or SAHA (1 or 2 μmol/L) for 24 h. As shown in Fig. 6A, all three inhibitors of HDAC activity induced significantly higher levels of apoptosis in STAT1-deficient cells and they all failed to induce p21 in STAT1-deficient cells. In contrast, the levels of another inhibitor of cdk activity, p27, were not affected by STAT1 deficiency.
These data suggested that STAT1 protects cells from apoptosis through its ability to contribute to p21 induction in response to treatment of cells with HDACi. We confirmed the protective role of p21 by showing that HCT116 cells with deletion of the p21 gene respond with increased apoptosis to butyrate (Fig. 6B). In contrast, deficiency in p53 did not affect butyrate-induced apoptosis (data not shown), excluding the role of p53 in p21 induction.
Altogether, our data suggest that STAT1 protects IECs from apoptosis through its ability to support p21 induction in response to a variety of stimuli and that signaling by oncogenic Ras promotes apoptosis in response to HDACis at least in part through its down-regulation of STAT1 expression and consequent loss of p21 induction.
Discussion
There is growing evidence that epigenetic changes that occur during transformation are as crucial for the progression of malignant disease as genetic alterations. Inhibitors of HDAC activity (HDACi), such as butyrate, TSA, and SAHA, are drugs that induce histone acetylation, modulate the expression of several genes, and thereby normalize the epigenetic state in tumor cells. HDACis exert their potent and selective antineoplastic activity through their ability to inhibit growth and to induce apoptosis of cancer cells but also through inhibition of angiogenesis, induction of differentiation, and activation of the host immune response (2). In addition, HDACis have been shown to exert anti-inflammatory properties through suppression of important inflammatory cytokines (28) or inhibition of cytokine signaling (29-31), and their ability to inhibit inflammation is likely to contribute to their antitumorigenic activity.
Although acetylation of histones is generally thought to reactivate gene expression, genome-wide expression analysis has revealed that a comparable number of genes were repressed and induced in cells that were exposed to structurally unrelated inhibitors of HDAC activity (1,32). Another intriguing feature of HDACi is that, despite the fact that they induce similar hyperacetylation of histones in both normal and tumor cells, HDACis show remarkable specificity for transformed tumor cells (1,2). The molecular basis for this selectivity has not been revealed. For example, despite the fact that HDACis have been shown to activate a death receptor pathway in leukemic cells, but not in normal hematopoietic progenitors, the expression of oncogenic fusion proteins AML1/ETO and PML/RAR was not sufficient to confer HDACi sensitivity (33,34).
Here, we present data that show that inducible expression of oncogenic kRasV12 in nontransformed IECs significantly lowers the MMP and that acquisition of the mutant Ras is sufficient to sensitize cells to HDACi-induced apoptosis.
We identified STAT1 as a downstream target of signaling by mutant Ras, whose silencing, like activation of oncogenic Ras, sensitizes cells to HDACi-induced apoptosis. Our work established that, in STAT1-deficient HKe-3 cells, HDACis failed to induce p21 expression, showing that in these cells STAT1 is required for p21 induction in response to HDACi. STAT1-null mouse embryonal fibroblasts and STAT1-deficient fibrosarcoma cells exhibit significantly lower expression of p21, showing that STAT1 is required for basal expression of p21 (35). Likewise, the basal expression of p21 in the intestine of STAT1-deficient mice was lower, and STAT1-null mice also displayed a lower level of inducible p21 in response to intestinal injury (36). Another group has recently reported that oncogenic H-Ras also promotes HDACi-induced apoptosis and that induction of p21 in response to HDACi is impaired in cells that harbor mutant H-Ras (37).
Although we showed that the ability of HDACis to activate transcription of p21 is severely impaired in STAT1-deficient cells, the mechanism of STAT1-dependent activation of p21 remains to be established. As we reported in HKe-3 cells, STAT1 is constitutively phosphorylated on serine but not on tyrosine (31). Although tyrosine phosphorylation remains a principal mechanism whereby
STAT1 dimerizes and translocates to the nucleus, STAT1 has been shown to regulate gene expression in its monomeric, unphosphorylated form (38). We showed that treatment of cells with butyrate results in up-regulation of STAT1 expression (Fig. 5) but does not modulate the activation of STAT1 (data not shown). Similarly, phosphorylation of STAT1 has been shown to be dispensable for its ability to regulate oxysterol-induced apoptosis via the p21/caspase-3-dependent pathway (35).
The regulatory region of p21 harbors multiple STAT1-binding sites (13), suggesting that STAT1 may act as a direct regulator of p21 transcription in response to HDACi. Transient transfection studies using a p21 promoter reporter construct have shown that induction of p21 by TSA requires an Sp1 site located in the p21 promoter region (19,39). Sp1 and STAT1 have been shown to physically interact and to synergistically regulate the expression of the intercellular adhesion molecule 1 in response to IFN-γ (40). Whether Sp1 and STAT1 also cooperate in the induction of p21 in response to treatment with HDACi is not yet resolved.
We showed that both STAT1-deficient and p21-deficient cells respond to HDACi with an increased extent of apoptosis. We hypothesize that, in the absence of STAT1, like in the absence of p21, cells fail to undergo growth arrest in response to HDACis but are instead driven to apoptosis (Fig. 6C). We showed that the expression of STAT1 is markedly reduced in IECs on activation of oncogenic Ras, therefore identifying a biological situation with perturbed expression of STAT1.
We showed that STAT1 deficiency also interfered with p21 induction in response to camptothecin (L. Klampfer, unpublished data), a commonly used chemopreventive and chemotherapeutic agent, establishing STAT1 as a transcription factor that plays a critical role in p21 induction in response to a variety of stimuli. Because we reported that p21 determines the responsiveness of colon cancer cells to camptothecin (41), it is likely that STAT1, like p21, will regulate the responsiveness of cells to camptothecin. Experiments to address this question are under way.
STAT1 was cloned as a transcription factor required for signaling by IFN (42). Its role in apoptosis, however, is not without precedent. Although STAT1 is required for the basal expression of caspases (38), we did not observe a significant effect of STAT1 deficiency on the expression of caspase-3, caspase-7, caspase-8, or caspase-9 in the HKe-3 cells (data not shown). Elevated levels of STAT1 have been shown to protect head and neck squamous carcinoma cells from radiation-induced apoptosis (43), and silencing of STAT1 sensitized prostate carcinoma cell lines to docetaxel-induced apoptosis (44). Recently, STAT1 has been shown to be acetylated in cells treated with HDACi, and its expression was induced selectively in melanoma cell lines that were sensitive to HDACi (45). In this report, the authors showed that STAT1 binds to and sequesters the NF-κB p65 subunit from the nucleus, thereby interfering with the antiapoptotic activity of NF-κB and consequently sensitizing cells to HDACi-induced apoptosis (45). It therefore seems that the interaction of STAT1 with other signaling pathways may dictate the role of STAT1 in HDACi-induced apoptosis.
In summary, our data provide a mechanistic explanation for the selective toxicity of HDACi for tumor cells. We have shown that the acquisition of oncogenic k-Ras, which occurs in >50% of all human tumors, is sufficient to sensitize colon cancer cells to apoptosis in response to HDACi. Furthermore, we identified STAT1 as a downstream target of Ras signaling in HCT116 cells, whose deregulation is sufficient to confer sensitivity to tumor cells through its requirement to support the induction of p21 in response to HDACis.
Figure 2. Induction of mutant k-Ras decreases the MMP. A, IECs and IEC-iKRas cells were treated with IPTG for 24 h. B, IEC-iKRas cells were either left untreated or treated with IPTG for 2, 7, or 24 h. C, cells were treated with 3 mmol/L butyrate or 1 μmol/L SAHA in the presence or absence of IPTG for 24 h. Cells were stained with JC1 for 1 h and analyzed by fluorescence in the FL2 channel. Bars, calculated from three independent experiments.
Figure 3. Activation of RasV12 interferes with the basal and IFN-induced expression of STAT1. Cells were treated with IFN-γ in the presence or absence of IPTG as indicated and the levels of total STAT1, phosphorylated ERK1/ERK2 (pERK1/ERK2), or β-actin were determined by immunoblotting.
Figure 4. STAT1 protects cells from butyrate-induced apoptosis. Cells were transfected with nontargeting (NSP) siRNA or siRNA specific for STAT1. The amount of STAT1 was determined by immunoblotting (A) and MMP was determined by JC1 staining (B). Cells were treated for 24 h with 3 mmol/L sodium butyrate. The photographs were taken with a Tripix advanced digital camera system (C) and the amount of apoptosis was determined by propidium iodide staining 24 h after treatment (D).
Figure 5. STAT1 is required for p21 induction in response to butyrate and sulindac sulfide. A, cells, transfected with nontargeting or STAT1-specific siRNA, were treated with butyrate or sulindac sulfide (SSD) as indicated for 24 or 48 h. The amounts of STAT1, cleaved PARP, and p21 were determined by immunoblotting. NSB, nonspecific band. B, cells were transfected with a p21 promoter in the presence of nontargeting or STAT1-specific siRNA and either left untreated or treated with 0.5 or 2 mmol/L butyrate. Data represent the average of three independent experiments.
Figure 6. Induction of p21 in response to HDACis protects cells from apoptosis. A, HKe-3 cells were transfected with nontargeting or STAT1-specific siRNA and treated with butyrate, TSA, or SAHA as indicated. The amount of apoptosis was calculated and the levels of p21 and p27 were determined by immunoblotting. B, the amount of butyrate-induced apoptosis was determined in HCT116 p21+/+ and HCT116 p21−/− cells as indicated. The experiment was repeated thrice. C, schematic representation of the role of mutant k-Ras in apoptosis in response to HDACi. WT, wild-type.
Novel Nonablative Radiofrequency Approach for the Treatment of Anal Incontinence: A Phase 1 Clinical Trial
Objective: We aimed to describe the action, impact on quality of life, and side effects of perianal nonablative radiofrequency (RF) application in the treatment of anal incontinence (AI) in women. Methods: This was a pilot, randomized clinical trial conducted between January and October 2016. We enrolled women who consecutively attended the Attention Center of the Pelvic Floor (CAAP) with complaints of AI for more than six months. Nonablative RF was applied to the perianal region of the participants using Spectra G2 (Tonederm®, Rio Grande do Sul, Brazil). A reduction in or complete elimination of the need for protective undergarments (diapers and absorbents) was considered a partial therapeutic response. Results: Nine participants reported treatment satisfaction, while one reported dissatisfaction with the nonablative RF treatment of AI based on the Likert scale. No patient interrupted treatment sessions because of adverse effects, although adverse effects occurred in six participants. However, the clinical and physical examination of the participants with burning sensations showed no hyperemia or mucosal lesions. Conclusions: This study showed a promising reduction of fecal loss, participant satisfaction with treatment, and improved lifestyle, behavior, and depression symptoms with minimal adverse effects.
Introduction
Anal incontinence (AI) is defined by the International Continence Society (ICS) as the involuntary elimination of gas or of stool in liquid or solid form [1,2]. Worldwide, AI shows a similar prevalence in both sexes, varying from 18% to 47% [3], while in Brazil the prevalence in women is 10% [4]. This variation in prevalence is mainly attributable to the terminology and criteria used in the definition of AI [5].
Despite the high prevalence, less than half of the patients seek therapy because of the associated embarrassment in describing the symptoms, which invariably is accompanied by negative psychological experiences [6]. Therefore, AI causes discomfort, social isolation, and low self-esteem, which lowers the quality of life (QoL) index in these patients [7].
The available treatments are varied, and their success rates vary from 18% to 70%, depending on the severity and specific cause of the condition [8,9]. These include surgery, indicated in severe cases, and pharmacological and physiotherapeutic treatment, indicated in moderate and mild cases [10]. The pathophysiology of AI is multifactorial, and perianal muscle damage is a known risk factor [11]. Thus, one treatment option for controlling the loss of stool is based on pelvic floor rehabilitation, which consists of exercises to improve the tone and strength of the pelvic floor muscles [12].
However, pelvic floor muscle training does not benefit all patients with AI since its pathophysiology may also involve the smooth muscle of the internal sphincter, which does not improve its function with voluntary contractions [13]. Based on this observation, the use of ablative radiofrequency (RF) has emerged, which is targeted at increasing collagen production and, consequently, improving the passive continence of the anal sphincter [7,8]. However, this technique requires the use of antibiotics, analgesics, and insertion of needles in the anal canal, and is associated with adverse effects such as sphincter lesions, anal mucosa necrosis, pain, and bleeding [14].
Nonablative RF has been used on the perianal area of swine, and histological examinations of the internal anal sphincter have demonstrated the hypertrophy of smooth muscle and an increase in the ratio of collagen type I to III [15]. However, there are no human studies of nonablative RF in the treatment of AI and, currently, no data on non-invasive treatments using RF to remodel the structure of the internal anal sphincter in humans have been reported. Despite these shortcomings, the safety of the technique and clinical benefits were shown in the genital region in the treatment of urinary incontinence in women [1]. In that study, the treatment of urinary incontinence with nonablative RF showed a few adverse effects and reduced urinary loss.
Therefore, we hypothesized that the application of nonablative RF to the perianal site may reduce stool loss and gas with few side effects. Thus, the objective of this study was to investigate the action, impact on the QoL, and side effects of nonablative RF applied to the perianal site for the treatment of AI in women.
Inclusion and exclusion criteria
Eligible participants were all women who consecutively attended the Attention Center of the Pelvic Floor (CAAP) with complaints of AI for more than six months. We excluded women who were pregnant; used intrauterine devices, pacemakers, or an implantable cardioverter defibrillator; had a diagnosis of anal fissure, fistula, anal abscess, rectal prolapse, or neurological conditions or alterations that resulted in fecal incontinence; or presented any of the following symptoms: pain, burning, pruritus, sensation of moisture or aeration in the anal region, and bleeding or lesion of the anal mucosa.
Subject evaluation
Patients were initially required to complete a questionnaire to provide demographic data and clinical history, including obstetric history, previous surgery, hormonal status, and presence of stool/gas loss. The subjects then completed two self-administered questionnaires: the Fecal Incontinence Severity Index (FISI) to analyze stool loss and the Fecal Incontinence Quality of Life (FIQL) to assess the impact on QoL.
Description of procedure
Nonablative RF was applied to the perianal region by a single professional, using the Spectra G2 device (Tonederm®, Rio Grande do Sul, Brazil). The electromagnetic wave generator with a frequency of 0.5 MHz was connected to a handle with a 0.5 cm diameter metal electrode (active electrode) and metal plate (passive electrode). Each nude participant was placed in the left lateral decubitus position with their lower limbs flexed and the passive plate engaged in the left hip region. The active electrode was placed in contact with the perianal region, and circular, continuous movements were made, after the beginning of the wave passage. A digital infrared thermometer (ICEL TD-925, ICEL, Manaus, Brazil) was used to monitor the temperature up to 39-41°C, after which the movements were maintained for two minutes using a stopwatch coupled to the radiofrequency equipment used in the intervention [16].
The procedure lasted for five minutes at most. This protocol consists of five sessions, performed weekly, and a similar procedure was used on the urethral meatus and female external genitalia [17,18]. During the treatment, no other therapeutic procedures were performed. Furthermore, women who had continuously used other medication (such as hormones or laxatives on a regular basis) were instructed to maintain their usual dose over the study period.
Evaluation of therapeutic response
The assessments were performed seven days after the completion of treatment and then one month later. The FISI was used to evaluate the objective response based on the frequency of episodes of incontinence ranging from zero to 61 points, which depicted the scale from the lowest to highest severity of the loss [19]. FISI scores above 30 are more likely to be associated with impaired QoL than scores of 30 and below [20].
A reduction in or complete elimination of the need for protective undergarments (diapers and absorbents) was considered a partial therapeutic response [21], since it suggests less fecal loss and improved symptoms, which also results in cost reduction.
To evaluate the impact of nonablative RF on AI, we assessed the FIQL parameters: lifestyle, behavior, depression, and embarrassment. The scores varied from one to four, where one is the worst state and four indicated the best QoL [22].
The subjective assessment was verified based on the level of patient satisfaction, using the question, "What is your level of satisfaction with nonablative RF treatment?" and the response was quantified using a fivepoint Likert scale, which classified the response to treatment as (1) unsatisfied, (2) not satisfied, (3) unchanged, (4) satisfied, and (5) very satisfied. To describe the adverse effects, the participants were questioned after each RF session about the following symptoms: pain, burning, pruritus, a sensation of moisture, or aeration in the anal region. The procedure was considered unsafe when bleeding and/or lesion of the anal mucosa were presented and/or when the session needed to be interrupted by any of the signs and/or symptoms described.
Statistics
The Statistical Package for the Social Sciences (SPSS) version 17.0 for Windows (SPSS Inc., Chicago, IL) was used for the descriptive and numerical analysis of the data. The normality of the variables was verified using the Kolmogorov-Smirnov test, descriptive statistics, and graphical analysis. The results are presented in tables and figures.
In the descriptive analysis, the categorical variables, including the number of pregnancies, urogynecological surgery, hormonal status, type of loss (solid, liquid, or missing), and protective garments (cloth, diaper, or absorbent), were expressed in absolute values, while the numerical variable (age) was expressed as mean ± standard deviation (SD) because it had a normal distribution.
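For readers who want to reproduce this kind of descriptive workflow outside SPSS, a minimal Python sketch is shown below; the data frame, column names, and the 0.05 cut-off are illustrative assumptions rather than the trial's actual dataset or code.

```python
import pandas as pd
from scipy import stats

# "df" is a hypothetical participant table with a numeric "age" column and
# categorical columns such as "hormonal_status" or "protective_garment".
def describe_numeric(series: pd.Series) -> str:
    """Report mean ± SD when a normality check passes, otherwise median (IQR)."""
    z = (series - series.mean()) / series.std(ddof=1)
    stat, p = stats.kstest(z, "norm")  # Kolmogorov-Smirnov test against N(0, 1)
    if p > 0.05:
        return f"{series.mean():.1f} ± {series.std(ddof=1):.1f} (KS p = {p:.2f})"
    q1, q3 = series.quantile([0.25, 0.75])
    return f"{series.median():.1f} (IQR {q1:.1f}-{q3:.1f}) (KS p = {p:.2f})"

def describe_categorical(series: pd.Series) -> pd.Series:
    """Absolute counts for a categorical variable, as in the trial's tables."""
    return series.value_counts()
```

The pattern mirrors the convention described above: a normality check decides whether a numeric variable is summarized as mean ± standard deviation, while categorical variables are reported as absolute counts.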
Results
The mean age of the participants was 51.90 ± 11.50 years, and their clinical characteristics are shown in Table 1. None of the participants used laxatives. Hormones and other drugs used are described in Table 1. The immediate clinical response to perianal nonablative RF in relation to the severity of AI was measured using FISI, and eight participants exhibited a decrease in the AI severity. Seven participants (participant 1, participant 4, participant 5, participant 7, participant 8, participant 9, and participant 10) had a reduction of at least four points on the FISI score. Furthermore, five participants achieved a score that indicated a low severity of AI. Participant 1 was the only one who did not return for all evaluations after treatment, so we presented her post-treatment result for the FISI score as missing data (Figure 1). Of the seven participants who used protective undergarments (such as diapers, absorbents, and cloth), four participants stopped using them after treatment with nonablative RF. That was considered a partial therapeutic response.
The FIQL questionnaire administered after five sessions of nonablative RF revealed that six participants showed improvement of the QoL in lifestyle, behavior, and depression. In the embarrassment parameter, five participants showed improved QoL scores, and the results are described in Table 2. The Likert scale revealed that nine participants reported being satisfied with the treatment while one reported dissatisfaction with the nonablative RF treatment in the perianal region.
During the performance of the technique, there were reports of adverse effects in six participants. Four participants reported a burning sensation, including one who also reported a "sandy" discomfort in the anal region and another who reported a moist anal sensation. However, the clinical or physical examination of the participants with burning sensations showed no hyperemia or mucosal lesions. No patient interrupted the session or protocol because of the adverse effects. Another adverse effect reported by two participants was pruritus. One of the participants who experienced pruritus reported a feeling of vaginal heaviness and constipation during treatment and was advised to consult their physician who recommended the use of Muvinlax® laxative.
Discussion
The study investigated an innovative technique of nonablative RF for the treatment of AI and we obtained promising results. There was a reduction in fecal losses, as evidenced by decreased use of protection and improved FISI scores. We hypothesized that the clinical effect was mediated by the induction of collagen production by controlled heating at a temperature ranging from 39°C to 41°C. The heating causes an acute inflammatory process with activation of the fibroblasts and a consequent increase in the production of collagen and elastin [23,24]. This phenomenon may induce morphological alterations of the sphincteric muscular structure, which contributes to the mechanism of AI.
In an experimental study, the anal sphincters of pigs were treated with nonablative RF, and the histological response was analyzed in tissue samples. It was verified that even after three months of applying the technique, the function of the sphincter muscles was preserved and histological changes occurred in the muscle fibers with hyperplasia and hypertrophy due to the increase in type I collagen relative to type III [15].
Although the technique was not tested in humans in the perianal region, this mode of application was analyzed in the external urethral meatus [16], with a reduction in urine losses. Furthermore, the application of this technique to the female external genitalia improved the appearance and had a positive effect on sexual function [18].
Fecal incontinence is known to have an important effect on patients because of the inconvenient need to use protective or absorbent undergarments, which do not usually eliminate the odor and cause embarrassment. Elimination of the need for protection in four out of seven participants indicates a promising therapeutic response to the applied technique, which was confirmed by the effects on lifestyle, behavior, and depression parameters. We believe that the positive result of the fecal incontinence-related QoL in the current study was associated with a decrease in fecal losses.
Considering the above, the fecal incontinence-related QoL improvement after the RF sessions may have influenced the increased involvement of participants in daily activities, such as reduced fear of going out, visiting relatives and friends, and using public transportation, and improved their psychological state. The psychological state and QoL of participants likely improved mainly because they no longer feared episodes of fecal incontinence with the associated unpleasant or undesirable odors. We inferred that the perception of reduced fecal loss following treatment with nonablative RF helped participants confront restrictive and negative thoughts and fostered a sense of well-being.
The analysis of the ablative RF technique in a study with eight patients with AI revealed that seven women and one man, with a mean age of 59 years, showed improvements in the embarrassment parameter [25]. Another study with 16 patients (15 women and one man), with a mean age of 72.8 years, revealed that the depression parameter did not show improvement [26]. Studies conducted by Efron et al. [27] and Walega et al. [28], in which the majority of participants were 30 to 80 years old, showed immediate satisfactory results in all items of the FIQL.
Another effect highlighted in this pilot study was the satisfactory clinical response in the group of patients studied, based on the improvement of the AI severity. In a 2009 study, the improvement in severity was considered not statistically significant [28]. Data from another study using the ablative RF technique in eight participants showed no improvement in AI severity and no resolution of fecal losses. In addition, the study also noted that ablative RF is applied at a frequency of 465 kHz, a power of 2-5 W, and a temperature of 85°C [25]. In this application of ablative transfer RF, needles are inserted in the anal region, which promotes thermal lesioning of the mucosa or anal sphincter. The lesions in the anal region may be associated with complications such as local hematoma, fever, and necrosis of the sphincter and anal mucosa [9,25-28].
The evaluation of the objective data and adverse effects indicated patient satisfaction. It should be noted that evidence-based medicine advocates the evaluation of satisfaction as a clinical response to the treatment performed. Based on the results of previous studies, AI loss was improved by ablative treatments, but it did not improve satisfaction. Our study demonstrated a high level of satisfaction since most participants who were treated with nonablative RF in the perianal region were satisfied. In other studies, using ablative transfer RF, this was not verified [9,25]. The associated side effects of treatment of AI with ablative RF, such as hemorrhage, anal pain, lesion, and anal bleeding, have led to the hypothesis that it does not show positive results in terms of satisfaction. Furthermore, in a study of eight participants (seven women and one man) aged 28-73 years, five reported dissatisfaction with ablative RF treatment because of complications such as pain and bleeding [25]. Another study with 10 women aged 44-74 (55.9 ± 9.2) years did not present any complications, but the authors identified through follow-up interviews that the satisfaction reported by the participants did not represent a significant score [9].
One patient in our study (participant 9) reported being dissatisfied, but it is important to emphasize that this patient presented with a depressive diagnosis and was on controlled medication. We believe that this pathology (depression) may have contributed to the patient's negative response to and unsatisfied perception of nonablative RF treatment. However, we did not investigate whether AI caused depression or vice versa, since depression may have developed because of incontinence. According to the Diagnostic and Statistical Manual of Mental Disorders V (DSM-V), depression is characterized as a disease that affects various aspects of a person's life, including mood, thoughts, health, and behavior. Satisfaction corresponds to a weighted judgment of experiences, resulting from cognitive processes integrated with affective elements [29].
The adverse effects observed after the nonablative technique indicated its safety as no analgesics or antibiotics were required, and other effects were considered minimal compared to the adverse effects of ablative RF. Ablative RF is a strategy used in the treatment of AI, but an intra-anal electrode with high temperatures is used, which causes microlesions in the anal sphincter. Furthermore, local anesthetics are required, and antibiotics are used during the recovery period. This method increases pain, bleeding, and discomfort during application [9,25-28].
Previous studies using the ablative RF technique to treat AI have reported different results from this present study [9,25-28]. The results of studies using ablative RF showed adverse effects such as bleeding, pain, diarrhea, and edema in the anal region and the use of medication by the participants [9,25-28]. The adverse effects reported in this study using nonablative RF treatment such as burning sensation, pruritus, and moist and sandy sensation in the anal region could be explained by the thermal effect, which facilitated the return of fluids outside the interstitial space and increased vascularization of the tissues and intradermal medium [14,30]. It is important to note that even with the reports of adverse effects in this study, the nonablative RF sessions did not need to be terminated. However, larger studies are needed to verify the adverse effects reported.
Analysis of risk factors for lateral lymph node metastasis in T1 stage papillary thyroid carcinoma: a retrospective cohort study
Background The occurrence of cervical lymph node metastasis in T1 stage papillary thyroid carcinoma (PTC) is frequently observed. Notably, lateral lymph node metastasis (LLNM) emerges as a critical risk factor adversely affecting prognostic outcomes in PTC. The primary aim of this investigation was to delineate the risk factors associated with LLNM in the initial stages of PTC. Methods This retrospective analysis encompassed 3,332 patients diagnosed with T1 stage PTC without evident LLNM at the time of diagnosis. These individuals underwent primary surgical intervention at West China Hospital, Sichuan University between June 2017 and February 2023. The cohort was divided into two groups: patients manifesting LLNM and those without metastasis at the time of surgery. Additionally, T1 stage PTC patients were subdivided into T1a and T1b categories. Factors influencing LLNM were scrutinized through both univariate and multivariate analyses. Results The incidence of LLNM was observed in 6.2% of the cohort (206 out of 3,332 patients). Univariate analysis revealed significant correlations between LLNM and male gender (P<0.001), tumor localization in the upper lobe (P<0.001), maximal volume of the primary tumor (P<0.001), largest tumor diameter (P<0.001), multifocality (P<0.001), and bilaterality (P<0.001), with the exception of age (P=0.788) and duration of active surveillance (AS) (P=0.978). Multivariate logistic regression analysis identified male gender (P<0.001), upper lobe tumor location (P<0.001), maximal primary tumor volume (P<0.001), and multifocality (P<0.001) as independent predictors of LLNM. However, age categories (≤55, >55 years), maximum tumor diameter, bilaterality, and surveillance duration did not exhibit a significant impact. Comparative analyses between T1a and T1b subgroups showed congruent univariate results but revealed differences in multivariate outcomes. In the T1a subgroup, gender, tumor location, and multifocality (all P<0.05) were associated with elevated LLNM risk. Conversely, in the T1b subgroup, tumor location, dimensions, and multifocality (all P<0.05) were significant predictors of LLNM risk, whereas gender (P=0.097) exerted a marginal influence. Conclusions The investigation highlights several key risk factors for LLNM in T1 stage PTC patients, including gender, upper lobe tumor location, larger tumor size, and multifocality. Conversely, prolonged AS and younger age did not significantly elevate LLNM risk, suggesting the viability of AS as a strategic option in selected cases.
Background
Papillary thyroid carcinoma (PTC) constitutes the most prevalent histological variant among thyroid malignancies, accounting for approximately 89.1% of all cases. This statistic, however, shows a marginal decline between 2014 and 2018 (1). Patients diagnosed with PTC typically demonstrate favorable prognostic outcomes and low mortality rates. Nonetheless, early-stage metastasis to the cervical lymph nodes is not uncommon. Prior research (2-5) indicates that lymph node metastases are present in about 20% to 90% of PTC cases. Although central lymph node metastasis does not markedly alter the prognosis for PTC patients (6), the emergence of lateral lymph node metastasis (LLNM, N1b) often necessitates more complex and prolonged surgical procedures, potentially impacting patient prognosis adversely (7,8).
A study by Sapuppo et al. (7) categorized PTC patients based on their postoperative pathologic N status. Findings indicated that individuals classified at the N1b stage showed an increased incidence of structural diseases, including locoregional lymph node and/or distant metastases, compared to those in the N0 and N1a stages. At their final follow-up, N1b stage patients exhibited a higher likelihood of persistent or recurrent disease relative to those in the N1a category. Moreover, those with lateral LN metastasis demonstrated reduced disease-free and 10-year disease-related survival rates (9,10). A notable study observed a 3.0% mortality rate among N1b patients, significantly higher than that in N1a and N0 patients (11), suggesting that LLN positivity is a strong prognostic indicator for poor outcomes in PTC.
The American Thyroid Association (ATA) management guidelines for differentiated thyroid cancer (DTC) recommend central and/or lateral lymph node dissection when metastasis is clinically or radiographically evident, while cautioning against routine prophylactic dissection of lateral lymph nodes (12). The efficacy of prophylactic level VI (central) neck dissection in cN0 disease remains a topic of debate (12). Confirmed LLNMs necessitate additional lateral lymph node dissection, extending the surgery's complexity and duration. Such procedures also increase the likelihood of postoperative complications, including chyle leakage, hemorrhage, nerve injury, shoulder discomfort, and restricted mobility (13). Consequently, the development of prognostic methods for LLNM is essential in managing node metastasis and recurrence in PTC.
Despite extensive research into LLNM risk factors in PTC, findings have been inconsistent (8,14-20). Our study explored major risk factors such as patient age, gender, primary tumor location, tumor diameter, multifocality, and bilaterality. Additionally, we hypothesized that primary tumor volume and duration of active surveillance (AS) post-diagnosis could also contribute to LLNM risk. These variables were thus comprehensively integrated into our analysis.
AS involves monitoring cancer patients without immediate surgical or radiation intervention unless the disease progresses. In 1993, Dr. Akira Miyauchi proposed delayed surgical intervention as an alternative to immediate surgery for papillary thyroid microcarcinoma (PTMC, tumor diameter <1 cm) at a symposium hosted at Kuma Hospital, Japan. Subsequent trials in 2003 and 2010 corroborated the feasibility of this approach (21,22), and numerous patients in Korea have been studied (23), showcasing AS as a promising alternative for PTMC treatment. Reported data show no significant mortality risk difference between 1.0- and 2.0-cm thyroid tumors. However, a tumor diameter of >2 cm independently correlates with an increased risk of cancer-related death. Therefore, for all T1 stage (<2 cm) tumors, AS may be a feasible alternative to immediate surgery (24,25). AS also appears as a potential therapeutic option for recurrent lymph node metastasis in DTC (26,27).
During AS, most low-risk PTC patients did not develop new lymph node metastases (28-30). However, the direct link between AS and the occurrence of LLN metastasis has not been comprehensively documented.
Objective
This study aimed to delineate the risk factors associated with LLNM in patients with T1 stage PTC.
Data collection
Clinical and pathological data were collated from electronic medical records and pathology reports.
Patient demographic information (gender and age) and the time of the initial fine-needle aspiration biopsy (FNAB) confirming PTC were retrieved from the electronic medical records. The ultrasonography report provided tumor characteristics (maximum diameter, volume, location) and clinical details (multifocality, bilaterality, and LLNM status). The tumor volume (in mm³) was computed employing the ellipsoid volume equation: π/6 × length × width × height. Lymph nodes were considered suspicious for malignant involvement if they showed at least one of the following features: (I) microcalcifications; (II) partially cystic appearance; (III) increased peripheral or diffuse vascularity; (IV) hyperechoic tissue resembling thyroid. Lymph nodes were considered indeterminate if the lymphatic hilum had disappeared and at least one of the following characteristics was present: round shape; increased short axis (≥8 mm in level II and ≥5 mm in levels III and IV); absence of central vascularization (12,31,32). We conducted lymph node assessment according to these criteria. For a few indeterminate lymph nodes, after discussion with the patient, we opted for either lymph node fine-needle aspiration with thyroglobulin washout fluid testing or immediate surgery. For the LLNM group, the period of AS before surgery was demarcated as the interval from the initial FNAB confirming PTC to the first detection of LLNM on preoperative ultrasound, subsequently corroborated by postoperative pathology. For the non-LLNM cohort, this duration was defined as the time span between the initial FNAB diagnosis of PTC and admission for surgery. Quantitative data, including age, the primary tumor's maximum diameter and volume, and waiting time for surgery, were transformed into qualitative categories based on predetermined cut-off values.
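As a worked illustration of the ellipsoid formula above, the short Python sketch below converts the three ultrasound dimensions (in millimetres) into a volume; the nodule dimensions used in the example are hypothetical.

```python
import math

def ellipsoid_volume_mm3(length_mm: float, width_mm: float, height_mm: float) -> float:
    """Ellipsoid tumor volume in mm^3: pi/6 * length * width * height."""
    return math.pi / 6.0 * length_mm * width_mm * height_mm

# Example: a hypothetical 12 x 9 x 8 mm nodule
volume = ellipsoid_volume_mm3(12.0, 9.0, 8.0)
print(f"Estimated tumor volume: {volume:.1f} mm^3")  # about 452.4 mm^3
```

For orientation, the 603 mm³ volume highlighted later in the Conclusions corresponds roughly to a sphere of about 10.5 mm diameter under the same formula.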
Statistical analysis
Continuous variables were converted into categorical data for analysis using SPSS version 25. The Chi-squared test was employed to compare demographic and tumor characteristics, including gender, age, tumor size, tumor location, and AS duration between the LLNM and non-LLNM groups. Multifactorial analysis was performed using binary logistic regression. A P value of less than 0.05 (two-tailed) was considered indicative of statistical significance.
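Although the authors performed these steps in SPSS, the same univariate-then-multivariate workflow can be sketched in Python; the data frame and column names below are placeholders introduced for illustration and are not the study's variables or code.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# "df" is a hypothetical patient table with 0/1-coded columns such as
# "llnm", "male", "upper_lobe", "multifocal", and "large_volume".
def univariate_chi2_p(df: pd.DataFrame, factor: str, outcome: str = "llnm") -> float:
    """Chi-squared test of association between one categorical factor and LLNM."""
    table = pd.crosstab(df[factor], df[outcome])
    chi2, p, dof, expected = chi2_contingency(table)
    return p

def multivariate_odds_ratios(df: pd.DataFrame, factors: list, outcome: str = "llnm") -> pd.DataFrame:
    """Binary logistic regression; returns odds ratios with 95% confidence intervals."""
    X = sm.add_constant(df[factors].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    out = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    out.columns = ["OR", "CI 2.5%", "CI 97.5%"]
    return out.drop(index="const")
```

Each candidate factor is first screened with a chi-squared test, and the factors are then entered jointly into a binary logistic model to obtain adjusted odds ratios, mirroring the univariate and multifactorial analyses reported by the authors.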
Patient characteristics and group analysis
Among 3,332 PTC patients meeting inclusion and exclusion criteria, 206 presented with LLNM. The clinical and pathological characteristics of all participants are delineated in Table 1. The average AS time of the LLNM group was 137.9±145.7 days. The LLNM group had a significantly higher proportion of males at 37.4% (77/206) compared to the non-LLNM group at 23.5% (P<0.001). The mean age in the metastasis cohort was 41.6±10.9 years, which did not significantly differ from that of the non-metastasis group (P=0.729). Tumor location was categorized as upper lobe or non-upper lobe (inclusive of middle and lower lobes and the isthmus). A higher proportion of upper lobe tumors was observed in the LLNM group (38.8%) compared to the non-LLNM group (25.0%) (P<0.001). Additionally, significant differences were noted in the maximum tumor diameter (11.5±4.1 mm in the LLNM group versus 9.0±3.5 mm in the non-LLNM group, P<0.001) and maximum tumor volume (603.1±569.4 mm³ in the LLNM group versus 318.6±377.2 mm³ in the non-LLNM group, P<0.001). Higher incidences of multifocal and bilateral tumors were observed in the LLNM group (29.1% and 18.0%, respectively) compared to the non-LLNM group (P<0.05 for both).
Univariate and multivariate analyses
Univariate analysis identified male gender, upper lobe tumor location, larger tumor diameter and volume, multifocality, and bilaterality as factors associated with LLNM (Table 1). In multivariate logistic regression, male gender, upper lobe tumor location, maximal primary tumor volume, and multifocality remained independent predictors of LLNM. Gender-specific analyses (Table 3) did not reveal a significant correlation between AS duration and LLNM risk.
Subgroup analysis: T1a and T1b groups
Supplementary material presents clinical and pathological data for the T1a and T1b subgroups, respectively. Univariate analysis (Tables S1, S2) indicated common LLNM risk factors: male gender, upper lobe tumor location, larger tumor size, and multifocality. Multifactorial regression analysis (Table 4) highlighted that gender was a significant risk factor for LLNM in the T1a group (P<0.001) but not in the T1b group (P=0.097). Furthermore, no significant differences were observed in age and AS duration between LLNM and non-LLNM groups in both subgroups (Tables S3, S4).
Discussion
In this extensive retrospective analysis of 3,332 patients, we observed a positive association between factors such as male gender, upper lobe tumor location, larger tumor volume, and multifocality and the risk of LLNM. This finding underscores the necessity of routine follow-up and careful consideration of the optimal timing for surgical intervention, particularly when these factors coexist. Consistent with previous reports (14), our study reaffirms male sex as a risk factor for LLNM in T1 stage PTC patients, highlighting a higher propensity for LLNM in men. This gender-based disparity in LLNM incidence aligns with several studies (15,33), although it remains a subject of debate in other research (34,35). Notably, the adverse prognosis associated with PTC tends to be more pronounced in men, despite its higher prevalence in women (36). This suggests the need for rigorous evaluation of immediate thyroid surgery in male patients, particularly in T1a stage PTC.
Despite the traditional belief that younger age (<55 years) is a risk factor for lymph node metastasis (17,34), our study found no significant age-related differences in LLNM risk, supporting the potential role of AS in selected cases.
Primary tumor location significantly affects lymph node dissemination, as supported by our findings in agreement with prior research (19,37-39). Given the complex drainage patterns and higher postoperative complication risks associated with upper thyroid tumors, accurate LLNM assessment in clinical practice is paramount.
While PTMC is generally considered low-risk for LLNM (12), our study suggests that larger tumor diameters do not necessarily predict LLNM, diverging from some previous research (40). By incorporating tumor volume in our analysis, we found a strong association between greater tumor volumes and an increased risk of LLNM. This finding is supported by research emphasizing tumor volume as a more reliable prognostic marker than diameter (41). These results highlight the importance of careful surgical planning for patients with larger tumor volumes.
Our analysis identified multifocality as an LLNM risk factor, in contrast to some studies that suggest both multifocality and bilaterality increase LLNM risk (39). Multifocality's prognostic significance, especially in tumors larger than 1 cm, is well-established (42). However, our findings do not support the hypothesis that bilaterality, an indicator of tumor invasiveness, heightens LLNM risk.
Our study delves into the potential impact of AS duration on LLNM risk. With the emerging role of AS in managing T1a and potentially T1b stage PTC (25,43), there is increasing focus on understanding the association between surveillance duration and LLNM risk. As shown in Table 3, we performed separate analyses employing surveillance duration thresholds of 6, 12, and 24 months (all P values >0.05). The selection of these threshold values was informed by clinical practice. These analyses reveal that, for those with stage T1 PTC, there were no notable differences in the distribution of time under AS between the LLNM and non-LLNM groups. In the T1a and T1b subgroups, we encountered equivalent results. Hence, our findings suggest that short-term AS (≤24 months) may not considerably increase the risk of LLNM in T1 stage PTC patients, which is in accordance with the conclusions of previous studies (44,45). Further research, particularly focusing on long-term surveillance, is warranted to substantiate these findings.
LLNM is associated with a poor prognosis in patients (46). Nevertheless, the current indication for performing cervical lateral lymph node dissection in PTC cases remains subject to debate. According to the ATA management guidelines, patients are recommended for LLN dissection when clinical or radiographic evidence supports the presence of lymph node disease (12). Ultrasonography is the primary tool for diagnosing cN1b, but its sensitivity was found to be only 0.70 (95% CI: 0.68-0.72; I² = 96.7%) (47). Building on our earlier discussion, preoperative patients presenting with male gender, upper lobe tumor location, larger tumor dimension, and multifocality are indicative of a heightened risk of LLNM. These high-risk factors discourage AS and instead favor lateral lymph node dissection surgery. The identification of high-risk factors for predicting LLN metastasis contributes to decisions regarding total thyroidectomy and lateral lymph node dissection in PTC patients, as well as guides surgeons in evaluating and treating cervical lateral lymph nodes during postoperative follow-up. Postoperative adjuvant therapy for PTC typically comprises TSH suppression therapy and radioactive iodine treatment (48). For high-risk thyroid cancer patients, TSH levels are generally maintained below 0.1 mU/L (12). Moreover, patients with suspected or confirmed lymph node metastasis and extrathyroidal tumor extension might necessitate an increased radioactive iodine dosage to further diminish the risk of recurrence (49). Consequently, for postoperative patients meeting all the aforementioned risk factors, it seems justifiable to adopt a proactive approach in deciding on TSH suppression therapy and radioactive iodine treatment to minimize the risk of PTC recurrence after surgery.
This study, being retrospective and single-center, has its limitations, including the lack of uniformity in tumor location assessment and the absence of a separate analysis for different types of lymph node metastasis. Future multicenter, prospective studies with larger sample sizes and extended follow-up periods are necessary to address these gaps and further explore the nuances of LLNM in PTC.
Conclusions
In summary, this retrospective analysis has identified several critical risk factors for LLNM in patients with T1 stage PTC. These include male gender, tumor location in the upper third of the thyroid gland, a maximum tumor volume exceeding 603 mm³, and the presence of multifocal tumors. In clinical practice, patients exhibiting this constellation of risk factors should give serious thought to surgical intervention due to their increased vulnerability to lateral nodal metastasis.
On the other hand, for patients not displaying these risk factors, the consideration of a short-term AS strategy may be appropriate. Our findings indicate that extended periods of AS, especially in younger patients, do not significantly escalate the risk of LLNM. This observation offers a viable justification for adopting AS in certain patient cohorts, particularly when the risk factors mentioned above are absent.
It is also noteworthy that in patients with T1a stage PTC, male gender should trigger a careful evaluation of the need for immediate surgical intervention. However, in the context of T1b stage PTC, the influence of gender appears less pronounced. Rather, an increase in tumor size emerges as a pivotal factor in elevating the risk of LLNM, underscoring the necessity for a more assertive treatment approach in these cases.
Table 1
Clinical and pathological characteristics for LLNM risk of T1 stage PTC patients by univariate analysis (n=3,332). Data are reported as n (%), unless noted otherwise. P values represent the statistical difference between the groups with and without LLNM, unless noted otherwise. LLNM, lateral lymph node metastasis; PTC, papillary thyroid carcinoma; OR, odds ratio; CI, confidence interval; SD, standard deviation; AS, active surveillance.
Table 3
Univariate analysis of active surveillance time of T1 stage PTC patients. PTC, papillary thyroid carcinoma; AS, active surveillance; LLNM, lateral lymph node metastasis; OR, odds ratio; CI, confidence interval.
Table S1
Clinical and pathological characteristics for LLNM risk of T1a PTC patients by univariate analysis (n=2,318). Data are reported as n (%), unless noted otherwise. P values represent the statistical difference between the groups with and without LLNM, unless noted otherwise. LLNM, lateral lymph node metastasis; PTC, papillary thyroid carcinoma; OR, odds ratio; CI, confidence interval; AS, active surveillance.

Table S2 Clinical and pathological characteristics for LLNM risk of T1b PTC patients by univariate analysis (n=1,014). Data are reported as n (%), unless noted otherwise. P values represent the statistical difference between the groups with and without LLNM, unless noted otherwise. LLNM, lateral lymph node metastasis; PTC, papillary thyroid carcinoma; OR, odds ratio; CI, confidence interval; AS, active surveillance.
Hyperinsulinemia: An Early Indicator of Metabolic Dysfunction
Abstract Hyperinsulinemia is strongly associated with type 2 diabetes. Racial and ethnic minority populations are disproportionately affected by diabetes and obesity-related complications. This mini-review provides an overview of the genetic and environmental factors associated with hyperinsulinemia with a focus on racial and ethnic differences and its metabolic consequences. The data used in this narrative review were collected through research in PubMed and reference review of relevant retrieved articles. Insulin secretion and clearance are regulated processes that influence the development and progression of hyperinsulinemia. Environmental, genetic, and dietary factors are associated with hyperinsulinemia. Certain pharmacotherapies for obesity and bariatric surgery are effective at mitigating hyperinsulinemia and are associated with improved metabolic health. Hyperinsulinemia is associated with many environmental and genetic factors that interact with a wide network of hormones. Recent studies have advanced our understanding of the factors affecting insulin secretion and clearance. Further basic and translational work on hyperinsulinemia may allow for earlier and more personalized treatments for obesity and metabolic diseases.
circulating insulin in relationship to its usual level relative to blood glucose), which does not cause hypoglycemia. Dysregulated insulin secretion and/or clearance resulting in chronically elevated insulin without hypoglycemia is common in obesity and metabolic disorders, and it is referred to herein as hyperinsulinemia. Fasting insulin rises from normal glucose tolerance to impaired glucose tolerance (IGT) to T2D [13]. In subjects with obesity but without diabetes or hypertension, hyperinsulinemia and insulin hypersecretion are more prevalent than insulin resistance [14] and hence may precede and contribute to insulin resistance. Furthermore, cohort studies have shown that different subjects with similar degrees of insulin sensitivity may exhibit a range of insulin secretion. For example, in the Relationship between Insulin Sensitivity and Cardiovascular Disease (RISC) study, individuals with insulin hypersecretion tended to be older and have higher percent fat mass, worse lipid profiles, and higher liver insulin resistance indices compared with the rest of the cohort [15]. In the RISC study, preexposure to hyperinsulinemia stimulated a greater insulin-induced secretory response independently of insulin sensitivity [16]. Hence, hyperinsulinemia is self-perpetuating and is more likely to be a primary defect rather than a compensation for insulin resistance in the general population.
There are racial and ethnic differences in insulin sensitivity and b-cell function [17], and recent research provides insights into their underlying mechanisms. Here, we discuss genetic and environmental factors associated with insulin secretion and clearance and the metabolic consequences of hyperinsulinemia (Fig. 1).
Methods
We searched PubMed/MEDLINE for English articles with the search terms: hyperinsulinemia, diabetes, race, and obesity. We limited our review primarily to human studies with exceptions when studies have relevance to translational research. We selected mostly recent pertinent publications but did not exclude high-impact older papers. We reviewed the references from key papers to identify additional articles.
Methods to Assess Hyperinsulinemia
Insulin has a similar diurnal pattern in subjects with obesity and in lean subjects but is consistently regulated at a higher concentration [18]. The 24-hour urinary c-peptide excretion is a reflection of the area under the curve (AUC) of insulin and has been shown to be negatively correlated with insulin sensitivity assessed by hyperinsulinemic-euglycemic clamp in healthy individuals [19]. Owing to logistic difficulties in obtaining repeated blood or urine samples, most studies assessed only fasting insulin. Fasting insulin has good repeatability within 4 to 8 weeks in the same subjects [20]. Hence, fasting insulin is an important metabolic parameter that is associated with the diurnal insulin exposure and insulin sensitivity and remains fairly stable over time.
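For illustration, total insulin exposure during an OGTT or a 24-hour profile is commonly summarized as the area under the insulin–time curve; the sketch below shows that calculation with the trapezoidal rule. The sampling times and insulin values are hypothetical, not data from any study cited here.

```python
import numpy as np

# Hypothetical OGTT sampling times (minutes) and plasma insulin (uU/mL).
time_min = np.array([0, 30, 60, 90, 120])
insulin_uU_mL = np.array([12.0, 85.0, 70.0, 55.0, 40.0])

# Trapezoidal area under the insulin-time curve (uU/mL x min).
insulin_auc = np.trapz(insulin_uU_mL, time_min)
print(f"Insulin AUC(0-120 min): {insulin_auc:.0f} uU/mL*min")
```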
Methods to assess insulin sensitivity and b-cell function include the hyperinsulinemic-euglycemic clamp, frequently sampled IV glucose test (FSIVGTT), insulinogenic index, and homeostatic model assessment (HOMA), which each have strengths and limitations. Hyperinsulinemic-euglycemic clamps are the gold standard to measure insulin sensitivity, but due to the logistical difficulties in performing these clamps, indices derived from an oral glucose tolerance test (OGTT) or fasting glucose and insulin are widely used. Hyperglycemic clamps provide an accurate assessment of insulin secretion capacity in response to glucose but not insulin sensitivity [21]. HOMA estimates b-cell function and insulin sensitivity based only on fasting glucose and insulin concentrations. The disposition index gives a representation of insulin secretion adjusted for insulin sensitivity.
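As a concrete illustration of the fasting-based surrogate indices mentioned above, the sketch below computes HOMA-IR and HOMA-B from fasting glucose and insulin using the widely used HOMA formulas, and a disposition index as the product of an insulin-sensitivity measure and the acute insulin response; all input values are hypothetical.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """HOMA insulin-resistance index: (glucose [mmol/L] x insulin [uU/mL]) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def homa_b(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """HOMA beta-cell function index (%), defined for fasting glucose above 3.5 mmol/L."""
    return 20.0 * fasting_insulin_uU_ml / (fasting_glucose_mmol_l - 3.5)

def disposition_index(insulin_sensitivity: float, acute_insulin_response: float) -> float:
    """Insulin secretion adjusted for insulin sensitivity; units depend on the inputs."""
    return insulin_sensitivity * acute_insulin_response

# Hypothetical fasting values: glucose 5.4 mmol/L, insulin 14 uU/mL.
print(f"HOMA-IR: {homa_ir(5.4, 14.0):.2f}")
print(f"HOMA-B:  {homa_b(5.4, 14.0):.0f}%")
# Hypothetical clamp-derived sensitivity and FSIVGTT acute insulin response.
print(f"Disposition index: {disposition_index(4.2, 350.0):,.0f}")
```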
These methods assess insulin-mediated glucose disposal, but not the ability of glucose to enhance its own disposal (independently of insulin), which is known as glucose effectiveness. Glucose effectiveness accounts for about half of overall glucose disposal, so it is relevant to hyperinsulinemia, and yet its determinants remain poorly understood [22]. Glucose effectiveness can be assessed by FSIVGTT or by pancreatic clamps, which are more difficult to perform [23].
In hyperinsulinemic-euglycemic clamp studies, whole-body insulin clearance and hepatic insulin clearance can be estimated [24]. In contrast, FSIVGTT does not differentiate between hepatic insulin clearance and whole-body insulin clearance [25,26].
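A rough sketch of how whole-body insulin clearance is often approximated from steady-state clamp data is given below. It assumes that endogenous insulin secretion is fully suppressed during the clamp, which is a simplification rather than a description of the methods in the studies cited above, and the numbers are hypothetical.

```python
def metabolic_clearance_rate(infusion_rate_mU_min: float,
                             steady_state_insulin_uU_ml: float,
                             basal_insulin_uU_ml: float) -> float:
    """Whole-body insulin clearance (mL/min), assuming suppressed endogenous secretion.

    Clearance is approximated as the exogenous insulin infusion rate divided by the
    rise in plasma insulin. 1 mU = 1000 uU, so the units reduce to mL/min.
    """
    delta_insulin = steady_state_insulin_uU_ml - basal_insulin_uU_ml
    return infusion_rate_mU_min * 1000.0 / delta_insulin

# Hypothetical clamp: total infusion 80 mU/min, basal insulin 10 uU/mL,
# steady-state insulin 90 uU/mL.
print(f"MCR of insulin: {metabolic_clearance_rate(80.0, 90.0, 10.0):.0f} mL/min")
```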
A-1. Environmental factors
Diabetogenic dietary and environmental exposures may interact with hormones from the gastrointestinal tract and stimulate insulin hypersecretion under fasting conditions, leading to chronic basal hyperinsulinemia through mechanisms that remain unclear (Table 1) [27][28][29][30][31][32][33]. For example, air pollution has been associated with adverse lipid changes and higher fasting glucose and insulin [30] as well as higher childhood body mass index (BMI) trajectories [31]. This association has been hypothesized to be due to chronic ozone exposure and subsequent activation of the HPA axis and hormonal changes [34]. Acute bisphenol A exposure, an endocrine-disrupting chemical, at the maximal daily dose determined to be safe by the U.S. Food and Drug Administration was associated with changes in insulin and c-peptide response to an OGTT and a hyperglycemic clamp [32]. A study of Ghanaians in several countries highlighted the importance of environmental and cultural factors on insulin and glucose metabolism, BMI, insulin sensitivity, and fasting blood glucose [33].
A-2. Associations between hyperinsulinemia and race and ethnicity may be partially mediated by differences in body composition
Ethnic differences in insulin sensitivity may be underappreciated owing to the widespread use of OGTT-based surrogate measures and HOMA of insulin resistance (HOMA-IR). HOMA-IR has been validated as a marker of insulin sensitivity in European populations but showed poor correlation with insulin sensitivity assessed by FSIVGTT or clamp in Jamaican adults without diabetes [35]. A similar discrepancy was seen between OGTT-derived surrogate markers and the hyperglycemic clamp parameters in Asian Americans, blacks, whites, and Mexican Americans [36]. Hence, widely used surrogate markers of insulin sensitivity and b-cell function may not be accurate in non-European populations. The Insulin Resistance and Atherosclerosis Study (IRAS) was a multicenter cross-sectional study of insulin sensitivity assessed by FSIVGTT and cardiovascular risk factors in white, black, and Hispanic patients in the United States [37]. This study provided further insights into the genetic variations underlying the racial and ethnic differences in hyperinsulinemia and diabetes risk. Fasting insulin was higher in subjects with IGT compared with subjects with normal fasting glucose and glucose tolerance [38]. Waist circumference was positively correlated with fasting insulin in white and black patients even after adjusting for glucose tolerance [39]. A 5-year follow-up study showed that the disposition index and glucose effectiveness were independent predictors of incident T2D after adjusting for traditional risk factors [40].
The National Health and Nutrition Examination Survey also demonstrated differences between whites, Hispanics, and blacks in hyperinsulinemia and BMI. There was a significant increase in fasting insulin levels from 1988 to 1994 and 1999 to 2002, which persisted despite adjusting for BMI and waist circumference [41]. Additionally, racial disparities in the prevalence of obesity increased, with blacks having greater increases in BMI and waist circumference from 1988 to 2004 than did whites or Hispanics [42]. However, National Health and Nutrition Examination Survey data are cross-sectional and lack detailed metabolic assessments of insulin sensitivity and secretion.
Racial differences in hyperinsulinemia are apparent at a young age. The Bogalusa Heart Study of 377 children and adolescents who underwent an OGTT demonstrated that blacks had significantly higher insulin responses than did whites when assessed by the AUC and insulin/glucose ratios at 30 and 60 minutes [43]. These differences were consistent across Tanner stages I to V. Consistent with this, blacks had significantly higher first- and second-phase insulin secretion during a hyperglycemic clamp than did whites [44].
Racial differences in visceral and subcutaneous adipose tissue distributions in women have been reported, with blacks having less visceral fat than whites but paradoxically lower insulin sensitivity [45]. For example, in the Dallas Heart Study, blacks had less visceral fat and hepatic steatosis than did whites and Hispanics, but more insulin resistance by HOMA-IR [46]. In adolescents with obesity, despite similar total percent body fat, Hispanics had greater intramyocellular lipid deposition and blacks had lower hepatic fat accumulation compared with whites [47]. However, these differences in body composition cannot fully account for the differences in b-cell function. In women without diabetes, blacks had greater insulin secretion compared with whites across a wide range of ages, independently of adiposity and insulin sensitivity, and there were no differences in glucose effectiveness [48].
Racial differences have also been reported in the effects of visceral fat mass on serum triglycerides (TGs) and in the upper and lower body subcutaneous fat distribution in women with obesity [49]. Increased visceral adipose tissue (VAT) is associated with higher fasting insulin and insulin AUC during an OGTT, independently of subcutaneous adipose tissue (SAT), skeletal muscle mass [50], insulin resistance, and inflammation [51][52][53][54][55]. In IRAS, VAT and SAT accounted for 27% of the model R 2 for insulin sensitivity and 16% of the model R 2 for disposition index, adjusting for age, sex, ethnicity, and BMI [56]. VAT contributes to delivery of free fatty acids (FFAs) to the liver, which negatively affects hepatic insulin sensitivity and is associated with reduced insulin clearance [57,58]. Insulin has an antilipolytic effect on VAT that reduces portal FFAs, and this may be a key mechanism whereby insulin regulates hepatic glucose production in addition to its direct effects on the liver [59,60].
Central adiposity is associated with lower adiponectin, an adipokine that is normally associated with improved insulin sensitivity [61]. Adiponectin was also negatively associated with VAT, SAT, pericardial fat, and intrathoracic fat in the Framingham Heart Study [62]. There is a strong negative correlation between fasting insulin and adiponectin in whites and Pima Indians [63].
Detailed metabolic studies have shown the important contribution of hyperinsulinemia in Pima Indians, who have a remarkably high prevalence of T2D of 38% [64]. Hormonal and metabolic studies in this population included oral and IV glucose tolerance tests along with body composition. In Pima young adults, high fasting plasma insulin and higher insulin at 30 and 120 minutes were highly heritable and were all predictors of incident T2D [65,66]. As in IRAS, progression from normal glucose tolerance to IGT to T2D was associated with an increase in fasting insulin levels [67]. Hyperinsulinemia was seen even in Pima prepubertal girls and boys aged 6 to 7. There was no significant difference in the visceral or subcutaneous fat area at L4/L5 in a sample of Pima Indians and whites matched for percent body fat, yet the Pima had significantly higher fasting insulin and lower insulin sensitivity [68]. Hyperinsulinemia was also associated with weight gain and triceps skinfold thickness in the prepubertal years [69].
Racial and ethnic differences in pancreatic fat may account for some of these differences. Fasting and 2-hour insulin during an OGTT were lower in whites than blacks in a study that quantified VAT, pancreatic fat, and hepatic TGs in 100 subjects without T2D. VAT was highest in Hispanics and lowest in blacks [70]. Pancreatic TGs were significantly higher in whites and Hispanics than in blacks [70]. Hepatic TG levels were higher in Hispanics than in whites and blacks [70]. Blacks had the highest disposition index and acute insulin response (AIR) but lowest insulin sensitivity [70]. The effect of a one-unit increase in pancreatic TGs on AIR was largest in blacks compared with whites and Hispanics [70]. However, there was no association between pancreatic fat and b-cell function in another study of young German women [71]. These studies used different methods to estimate b-cell function, which may account for some of these discrepancies.
A systematic review and meta-analysis of studies that measured insulin sensitivity and AIR by the FSIVGTT in Africans, whites, and East Asians confirmed that in subjects with normal glucose tolerance, there is substantially lower insulin sensitivity and higher AIR in African cohorts compared with whites and East Asians, with some subjects exhibiting insulin hypersecretion relative to their degree of insulin sensitivity in each case [72]. Racial and ethnic differences in hyperinsulinemia, as well as glucose and lipid metabolism, are well established [73]. Hence, genetic differences may underlie some of the associations between insulin secretion, insulin resistance, and lipid stores.
A-3. Genetic and epigenetic variants associated with hyperinsulinemia act via several pathways
Genetic differences and epigenetic changes during gestation may underlie the association between in utero exposure to gestational diabetes and increased risk of childhood overweight and obesity [74]. Epigenetic changes in GNAS have also been associated with early-onset obesity [75]. Subjects who had parents with T2D had higher BMI and fasting insulin compared with those who had no family history of diabetes in the RISC study [76].
Studies have implicated several genes involved in obesity and other metabolic outcomes and hyperinsulinemia. The GUARDIAN consortium study of Mexican Americans provided strong evidence for the heritability of insulin sensitivity, AIR, and metabolic clearance of insulin [77]. Distinct clusters of genes have been shown to be associated with b-cell function, body weight, and different diabetes phenotypes [78,79]. In IRAS, a genome-wide association study identified loci associated with insulin sensitivity and b-cell function in blacks and Hispanics [80], and candidate genes for the disposition index and AIR in blacks were identified [81]. In the IRAS Family Study (IRAS-FS), the heritability of insulin sensitivity assessed by FSIVGTT (0.310) was greater than the heritability of fasting insulin (0.171) and HOMA-IR (0.163) [82].
Genetic variants of FTO also influence the risk of obesity and fasting insulin [83]. Paternal transmission of a polymorphism associated with insulin gene expression conferred an 80% greater risk of early-onset obesity [84]. A genome-wide association study in Indian Asians found that a common variant near MC4R was associated with a higher HOMA-IR, increased waist circumference, and features of metabolic syndrome [85]. Finally, a Hispanic cohort study identified genetic loci that regulate insulin clearance, which has a heritability of 73% [86].
Individuals with ≥17 alleles that raised fasting insulin tended to have higher TG levels, more hepatic steatosis, increased risk of T2D, coronary artery disease, and high blood pressure but a paradoxically lower BMI [87]. In contrast, a Mendelian randomization study of subjects from predominantly European ancestry found a strong association between genes associated with higher insulin concentration at 30 minutes after an OGTT and a higher BMI [88]. These discrepancies may be reconciled by the fact that fasting insulin and insulin secretion may have different genetic determinants. These genetic studies underscore that there are many etiologies for abnormalities in insulin secretion and sensitivity, and they reinforce the paradigm that relative insulin hypersecretion can be pathogenic.
B. Factors Affecting Insulin Clearance With a Focus on Race/Ethnicity
Fasting insulin levels are determined by the dynamic balance between insulin secretion, insulin sensitivity, glucose effectiveness, and insulin clearance, each of which may have different determinants [89]. Estimates of the relative importance of insulin secretion and clearance to hyperinsulinemia have varied depending on the study methodology and population, but both are likely important. One study found that 75% of the hyperinsulinemia is due to a reduction in hepatic metabolic clearance of insulin in subjects with normal fasting glucose and obesity [90]. Using different methods and a different study population, Polonsky et al. [91] found that hyperinsulinemia in subjects with obesity was predominantly driven by increased secretion with a minor contribution of reduced hepatic extraction of insulin. These differences may be due to variations in the measurement of insulin clearance or in the demographic groups. Additionally, reduced clearance may contribute to the early stage of hyperinsulinemia whereas hypersecretion may contribute only to the later stage.
Insulin clearance is associated with physical fitness and metabolic health. Aging is associated with reduced metabolic clearance of insulin and hyperinsulinemia, reduced glucose effectiveness, and an increase in metabolic diseases [92]. Likewise, metabolically healthy subjects with obesity have higher whole-body insulin clearance and hepatic insulin extraction compared with age- and BMI-matched subjects who are metabolically unhealthy [93]. Consistent with this, in nonobese Japanese men without diabetes, low insulin clearance was associated with higher total body fat and lower peak oxygen consumption rate [94].
Blacks had higher insulin levels than did whites and lower fasting c-peptide, consistent with impaired insulin clearance in blacks, which could not be explained by differences in BMI, family history, smoking, or other factors [95]. Lower metabolic clearance of insulin may explain the high prevalence of hyperinsulinemia in blacks. Hepatic first-pass insulin extraction has been estimated to be two-thirds lower in blacks compared with whites, whereas extrahepatic insulin clearance was similar [96]. This low first-pass hepatic extraction was also seen in African immigrants [97]. Consistent with this, in women without diabetes, blacks had a higher insulin response than did whites, as well as lower insulin clearance, but they had similar insulin secretion during OGTT, FSIVGTT, and a mixed meal tolerance test [98].
In IRAS-FS, blacks had lower metabolic clearance of insulin than did Hispanics, which was associated with hyperinsulinemia, greater SAT and VAT, lower high-density lipoprotein, and incident T2D [56,99], and lower metabolic clearance of insulin was associated with lower insulin sensitivity, higher insulin secretion during FSIVGTT, and higher BMI across race and ethnicities [100].
Ethnic differences in insulin clearance are present in childhood. Black children had 63% higher first-phase insulin secretion and 14% lower clearance along with a 63% higher disposition index compared with whites with similar body composition and insulin sensitivity as assessed by hyperinsulinemic-euglycemic and hyperglycemic clamps [101]. Both greater insulin secretion and reduced clearance make independent contributions to the greater AIR in black children compared with white children [102]. Hispanic children also have a greater second-phase insulin secretion but have similar hepatic insulin extraction compared with whites [103]. In adolescents with obesity, glucose effectiveness was greater in Hispanics than in blacks independent of total fat mass and visceral fat mass [104].
Insulin clearance in whites was lower in subjects with obesity and insulin resistance than in lean subjects, who were similar to subjects with obesity and normal insulin sensitivity [105]. Hyperinsulinemia in whites with obesity but without insulin resistance was mediated by increases in insulin secretion [106]. Additionally, there is evidence that insulin clearance may be associated with carbohydrate intake [107], body composition [108], liver fat [109], insulin sensitivity [110], acute hyperglycemia [111], and glucose intolerance [112]. Hence, both insulin clearance and secretion underlie the racial and ethnic differences in hyperinsulinemia.
C. Diet, Incretins, and Other Hormones Affect Insulin
Dietary differences may also contribute to hyperinsulinemia in black children. Blacks had a higher ratio of dietary fat intake to carbohydrate intake (determined by 24-hour recall), which was associated with higher FFAs, and reduced insulin sensitivity and insulin clearance, as well as upregulated b-cell function [113]. A high-fat diet was associated with reduced insulin sensitivity and insulin clearance in dogs [114,115]. The short-term effects of lipid infusions on hyperinsulinemia and insulin clearance have shown mixed results [116,117], but chronically higher FFAs have been associated with a decline in insulin secretion (adjusted for sensitivity) and reduced glucose effectiveness [118,119].
During puberty, increases in GH, lipolysis, and insulin resistance contribute to hyperinsulinemia [120]. A longitudinal study showed that black girls had higher fasting insulin and AIR, earlier puberty, higher estradiol levels, higher FSH levels throughout puberty, and more rapid fat deposition after menarche compared with whites [121]. In a prospective cohort study of healthy Australian adolescents, insulin was negatively associated with ghrelin in boys and positively associated with PYY [122]. Incretins may play an important role in the insulin response to glucose, as there were marked differences in glucose and insulin indices derived from OGTT and FSIVGTT in black and Hispanic adolescents with obesity [104]. GLP-1 may have paracrine and neural mechanisms to regulate insulin secretion, and hence its serum levels may provide only limited data on its metabolic effects, which makes it more difficult to study [123].
Fasting insulin was positively correlated with cortisol production rate in a study of 24 healthy men [124]. In adolescent girls with hyperinsulinemia and hyperandrogenism, free testosterone was negatively correlated with insulin resistance [125], although in normogonadal men, free testosterone was not associated with insulin sensitivity or b-cell function independent of its effects on adiposity [126]. A hyperinsulinemic-euglycemic clamp was shown to significantly increase ovarian androgen production in women [127]. Hyperinsulinemia contributes to hyperandrogenism in women with polycystic ovarian syndrome [128]. Hence, insulin secretion and sensitivity are associated with many factors, including HPA axis activation and sex hormones.
D. Reactive Oxygen Species, Redox, and Hyperinsulinemia
In vitro studies have suggested that hyperinsulinemia is associated with increases in reactive oxygen species. Exposing b-cells to excess lipids induces excess insulin secretion by increasing the mitochondrial redox state and production of reactive oxygen species, which in turn modulate the thiol redox state [129]. Supplementation with the antioxidant N-acetylcysteine was associated with an increase in HOMA-IR [130]. In healthy blacks without T2D, serum FFAs are positively associated with protein carbonyls, a marker of oxidative stress, which were in turn negatively associated with insulin sensitivity. This association was not seen in healthy whites, suggesting that blacks may be more sensitive to oxidative stress-induced insulin resistance than are whites [131]. Further studies of the redox state in vivo and its effects on insulin secretion and oxidative stress are needed.
A. Acute Experimental Hyperinsulinemia
In healthy adults, hyperinsulinemia induced by a hyperinsulinemic-euglycemic clamp for 105 minutes increased inflammatory markers and b-amyloid in the cerebrospinal fluid and peripheral circulation [132]. Clamp studies in healthy subjects also demonstrated that chronic euglycemic hyperinsulinemia for 72 to 96 hours is associated with the development of insulin resistance and impaired nonoxidative glucose disposal [133]. The consequences of exposure to hyperinsulinemia may depend on the duration and magnitude of this exposure, as only 24-hour exposure to hyperglycemia and hyperinsulinemia was associated with increased insulin action and glucose effectiveness in healthy males [134].
B-1. Hyperinsulinemia and incident diabetes
In youths with obesity, b-cell first-phase insulin secretion showed a stepwise decline from normal glucose tolerance to IGT to T2D [135]. Fasting insulin was an independent predictor of incident T2D in several cohorts [136,137]. Hence, both postprandial and fasting hyperinsulinemia are associated with incident T2D. The AIR was not associated with subsequent weight gain in a longitudinal study of normoglycemic subjects during a mean time of 26 years [138]. However, the AIR in FSIVGTT does not reflect the incretin effect, and the insulin response to an oral glucose challenge may be a more physiologically relevant outcome. Hyperinsulinemia during an OGTT was associated with an atherogenic lipid profile in a sample of healthy Israelis [139]. Hyperinsulinemia was the most significant predictor of the progression to T2D in a study of 515 normoglycemic men in Israel during a 24-year follow-up period [140,141]. In whites without diabetes, having a first-degree relative with T2D was associated with a loss of the normal relationship between BMI and insulin response to an OGTT and hyperinsulinemia even with a normal BMI [142]. Similarly, in the offspring of two parents with T2D, hyperinsulinemia was associated with the risk of developing T2D during an average follow-up time of 13 years independent of glucose removal rate [143].
Hyperinsulinemia may lead to incident T2D by affecting insulin resistance, fat storage, and/or direct effects on b-cells or other tissues. Normoglycemic women with a history of gestational diabetes are at increased risk of developing T2D and had significantly higher fasting insulin and fasting glucose, lower disposition index and insulin sensitivity, and reduced suppression of FFAs compared with women with no history of gestational diabetes [144]. The association between hyperinsulinemia and incident T2D and body composition has been seen in several other racial and ethnic groups, including Pacific Islanders [145] and Mexican Americans [146].
B-2. Hyperinsulinemia and NAFLD
In a prospective cohort study of 4954 Koreans without diabetes, baseline fasting hyperinsulinemia and increases in fasting hyperinsulinemia during a 5-year period were associated with incident NAFLD [147]. Fasting insulin was associated with hepatic steatosis in a sample of healthy Italians with normal transaminases [148]. Consistent with this, fasting insulin and insulin exposure during an IV glucose tolerance test were positively correlated with intrahepatocellular lipids, and subjects with NAFLD had higher intrahepatic insulin exposure than did healthy controls [149]. Compared with subjects without NAFLD, subjects with NAFLD had reduced hepatic insulin clearance and there was a negative correlation between hyperinsulinemia and both hepatic and whole-body insulin clearance [150].
Black women with obesity have a lower rate of TG turnover in adipose tissue and lower rates of adipose de novo lipogenesis (DNL) compared with white women with obesity [151]. DNL was originally thought to make only minor contributions to hepatic and adipose tissue lipid contents based on small studies in lean subjects [152]. However, technological improvement and studies in other populations have revealed that DNL is increased 2.4-fold from baseline fasting levels by an oral fructose challenge, and this increase in DNL was positively correlated with fasting insulin levels (r = 0.75) [153]. Blacks also tend to have lower intrahepatic TGs than do age- and BMI-matched whites, yet once NAFLD has developed the prevalence of nonalcoholic steatohepatitis may be similar [154].
B-3. Hyperinsulinemia, hypertension, and endothelial cell function
Insulin sensitivity and systolic blood pressure are the dominant determinants of endothelial function in blacks and whites [155]. Subjects with hypertension had higher meal-stimulated c-peptide secretion and lower insulin sensitivity compared with BMI-matched subjects with obesity without hypertension [156]. Similar findings were obtained in a study of Israelis that found that subjects with obesity and hypertension had a higher rise in serum insulin levels during an oral glucose test than did subjects with obesity and without hypertension [157]. Subjects with obesity had diminished endothelium-dependent vasodilation compared with lean controls during a hyperinsulinemic-euglycemic clamp [158]. Hence, hyperinsulinemia is associated with the vascular and lipid abnormalities associated with metabolic syndrome and may underlie its pathogenesis [159]. The exact mechanisms whereby loss of normal insulin pulsatility and hyperinsulinemia can lead to metabolic complications remain under investigation and are summarized in Fig. 2, and the molecular mechanisms for these pathways have been recently reviewed [160].
A. Differences in Insulin Metabolism May Underlie the Variability in the Response to Dietary Interventions
Racial differences in the response to dietary interventions may be mediated by differences in lipoprotein metabolism [161], which may explain the paradox that blacks have lower insulin sensitivity but often have lower TGs than do whites [162,163].
Hyperinsulinemia may also modulate the effects of different dietary interventions. Compared with an isoenergetic high-fat low-carbohydrate diet, high simple carbohydrate diets are associated with higher rates of DNL and increases in TGs in lean subjects without hyperinsulinemia [164]. Subjects with hyperinsulinemia and obesity had significantly higher rates of DNL on a high-fat diet than did both subjects with obesity but without hyperinsulinemia and lean subjects without hyperinsulinemia [164]. Subjects with high insulin secretion may lose more weight on a low-glycemic index diet compared with a high-glycemic index diet [165]. Consumption of fructose has been associated with increases in serum insulin and reductions in insulin sensitivity in persons with overweight and obesity [166]. Isocaloric restriction of fructose for 10 days was associated with a significant decrease in liver fat, VAT, DNL, fasting insulin, fasting glucose, insulin secretion, and increased insulin clearance in blacks and Hispanic children with obesity [167]. The role of dietary fat and carbohydrates and insulin in the development of hyperinsulinemia and obesity is an active area of research and debate [168][169][170].
B. Bariatric Surgery Is Associated With Improvement in Hyperinsulinemia
Because obesity and hyperinsulinemia are often refractory to dietary and lifestyle changes, bariatric surgery is recommended for patients with severe obesity and comorbid conditions. Hyperinsulinemia may underlie the racial differences in bariatric surgical outcomes, such as blacks losing less weight than whites despite adjustment for clinical and behavioral factors [171] and blacks regaining more weight than whites in the years following surgery [172]. Bariatric surgery is associated with a rapid correction of hyperinsulinemia within 1 week of surgery, which may underpin its metabolic and clinical benefits. Unlike the rapid improvement in hyperinsulinemia after bariatric surgery, insulin sensitivity continues to improve between 6 and 24 months postoperatively whereas glucose effectiveness remained constant [173].
C. Exercise Training Is Associated With Improvement in Hyperinsulinemia
Male athletes have lower fasting glucose, lower insulin secretion, increased insulin sensitivity, and increased insulin clearance determined by the insulin/c-peptide ratio following a hyperinsulinemic-euglycemic clamp and arginine stimulation test compared with age- and BMI-matched sedentary males [110]. Consistent with this, exercise training has been shown to acutely lower insulin and gradually increase insulin sensitivity and glucose effectiveness [174,175]. Compared with untrained subjects, endurance-trained subjects had similar nonpulsatile basal insulin secretion, but significantly reduced insulin secreted per secretory burst [176].
D. Pharmacotherapies for Hyperinsulinemia
Hyperinsulinemia is not generally recognized as a primary therapeutic target although this has been debated [27]. Weight loss is associated with improvement in hyperinsulinemia with no change in glucose effectiveness, whereas weight gain is associated with worsening of hyperinsulinemia and reduced glucose effectiveness [177,178]. Treating obesity with lifestyle modifications, dietary changes, pharmacotherapy, or metabolic surgery improves hyperinsulinemia acutely [179]. Liraglutide at 3.0 mg leads to greater weight loss and decreases in fasting insulin along with a reduction in incident diabetes in subjects with obesity but without diabetes [180].
Figure 2. Diagram of potential mechanisms for hyperinsulinemia with altered insulin pulsatility to induce metabolic disease. Chronic hyperinsulinemia of any potential etiology is associated with chronic hyperglucagonemia, which may lead to increased hepatic glucose output. Nutrient excess and hyperlipidemia contribute to adipose tissue expansion and dysfunction with eventual ectopic lipid deposition, which is associated with reduced muscle glucose disposal.
Several other classes of medications can also affect insulin sensitivity and b-cell function. Fenofibrate, a PPARα agonist, increases fat oxidation and decreases insulin clearance and secretion in mice on a high-fat diet and warrants further trials in humans [181]. Bezafibrate, a pan-PPAR agonist, lowers both lipids and insulin [182]. However, the effectiveness of medications directly targeting hyperinsulinemia has been mixed [183][184][185][186]. Further trials of new classes of medications that can attenuate hyperinsulinemia are warranted [187].
Conclusion
Strong evidence implicates hyperinsulinemia as an important precursor to the metabolic diseases associated with obesity. Environmental, genetic, and socioeconomic factors all contribute to the development and progression of hyperinsulinemia. Ethnic and racial differences in hyperinsulinemia are associated with differences in b-cell function and fat distribution. Dietary interventions have differing effects depending on underlying metabolic dysfunction. More research is needed to understand the effects of various genetic and environmental factors associated with hyperinsulinemia to determine which plays a causal role in metabolic disease. Such research in diverse populations will have implications for precision medicine.
The Value of Environmental Base Flow in Water-Scarce Basins: A Case Study of Wei River Basin, Northwest China
In the perennial river, environmental base flow, associated with environmental flow, is the base flow that should be maintained within the river channel throughout the year, especially in the dry season, to sustain basic ecosystem functions and prevent the shrinkage or discontinuity of a river. The functions of environmental base flow include eco-environmental functions, natural functions, and social functions. In this study, we provide a method based on these functions: it estimates the function values per unit area, introduces a scarcity coefficient, multiplies by the corresponding water area, and sums over functions to quantify the value of environmental base flow from 1973 to 2015 in the Wei River Basin, the largest tributary of the Yellow River in Northwest China. We observed a positive correlation between the total value of environmental base flow and its water yield, whereas the benefit per unit discharge of environmental base flow behaved quite differently and was closely associated with the shortage of environmental base flow. This method can thus present the considerable value of environmental base flow in monetary terms in a simple and effective way and lay the foundation for setting reasonable protection levels for environmental base flow.
Introduction
Water is the source of life. Humans have chosen to live near rivers, which are the main birthplaces of human civilization, since ancient times [1]. For example, China's civilization originated in the Yellow River Basin, and that of Egypt originated from the Nile River Basin. At that time, the riverine environment was largely pristine because the ability of humans to develop and utilize water resources was low, and the gross demand was also lower. Since the Industrial Revolution, with continuously burgeoning populations and rapid urbanization, flow extraction from rivers has been increasing for feeding, drinking, irrigation, industry, hydropower, etc. [2]. The remaining water in the river channel, known as environmental flow (e-flow) [3], has consequently been appropriated over the long term and has decreased in amount, with irreversible impacts on riverine ecosystems [4]. This phenomenon is particularly intensified in water-scarce basins.
The concept of e-flow has already been widely accepted [5] and was clearly described in the 2007 Brisbane Declaration as "the quantity, timing, and quality of water flows required to sustain freshwater and estuarine ecosystems and the human livelihoods and well-being that depend on these ecosystems" [6].
Many studies have been conducted in this area [7][8][9][10]; according to statistics from 2003 [11], more than 200 methods to calculate e-flow had been recorded, and these can be broadly divided into three types. (1) Hydrological methods: representative methods include Tennant [12], Texas [13], Q95, 7Q10, the basic flow method (BFM) [14], the flow duration curve (FDC) [15], and the range of variability approach (RVA) [16] using the indicators of hydrologic alteration (IHAs) [17]. Owing to their lower data requirements, the calculations are simple but the results are relatively rough, making them suitable for rivers lacking data. (2) Hydraulic habitat methods: this type of method determines e-flow based on the hydraulic conditions needed for an indicator species [18], such as the wetted perimeter method [19] and the instream flow incremental methodology (IFIM) [20]. However, because only a single indicator species is used, the overall status of the riverine ecosystem is difficult to reflect. (3) Comprehensive analysis methods: in this type of method, the riverine ecosystem is viewed as a whole, and the link between hydrology and ecology is the emphasis [21]; examples are the building block methodology (BBM) [22] and downstream response to imposed flow transformation (DRIFT) [23]. The disadvantage is that the required data are complex, which presents a difficult challenge for application.
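As a simple illustration of the hydrological class of methods, the sketch below applies Tennant-style thresholds, which express recommended flows as fixed percentages of the long-term mean flow. The percentages shown (10% as a poor/minimum condition, 30% as a fair condition) follow commonly cited Tennant values, and the flow series is hypothetical rather than Wei River data.

```python
import numpy as np

def tennant_thresholds(daily_flow_m3_s: np.ndarray) -> dict:
    """Recommended flows as fixed fractions of mean flow (Tennant-style method)."""
    mean_flow = float(np.mean(daily_flow_m3_s))
    return {
        "mean_flow_m3_s": mean_flow,
        "minimum_10pct_m3_s": 0.10 * mean_flow,  # poor / minimum condition
        "fair_30pct_m3_s": 0.30 * mean_flow,     # fair habitat condition
    }

# Hypothetical daily flow record (m^3/s) for one year.
rng = np.random.default_rng(0)
flows = rng.gamma(shape=2.0, scale=60.0, size=365)
print(tennant_thresholds(flows))
```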
The value of e-flow and the cost and related compensation of e-flow protection have been the new leading directions of e-flow research in recent years and are still at the exploration stage, although several researchers have already proposed original viewpoints. Sisto [24] and Pang et al. [25] developed a production loss model to establish the relationship between production losses and agricultural water shortages caused by maintaining e-flow and estimated the appropriate economic compensation for different irrigation stakeholders. Perona et al. [26] proposed a simple economic model and the principle of equal marginal utility to obtain optimal water allocation rules between anthropic water use and e-flow. Akter et al. [27] presented a method for combining hydro-ecological response model outputs and nonmarket economic values of wetland inundation to estimate a unit price of e-flow. Yang et al. [28] quantified ecosystem service values by emergy analysis and used the results to design an e-flow regime for Baiyangdian Lake. The study of e-flow value has thus developed rapidly, but compared with the relatively complete calculation system for e-flow itself, it emerged late and has produced few research achievements; further study is therefore needed to explore this area in depth.
Our team has long been engaged in research on e-flow value and the related compensation system [29,30] and has already adopted the fuzzy mathematics method, the emergy analysis method, and the opportunity cost method to estimate the value of e-flow, aiming to present the considerable value of e-flow in monetary terms and raise public awareness of e-flow protection. In this study, we focused on the e-flow within the river channel, which can be referred to as environmental base flow (EBF). In the perennial river, EBF can be defined as the base flow that should be maintained within the river channel throughout the year, especially in the dry season, to sustain basic ecosystem functions and prevent the shrinkage or discontinuity of the river. Base flow, by contrast, is the part of streamflow discharged from groundwater seeping into the stream, which is a hydrologic concept; the two are distinct. Subsequently, we analyzed the functions of EBF and how they differ between the dry season and non-dry season. Next, we developed a theoretical model to evaluate the EBF value based on its functions. Finally, a case study of the Wei River Basin in Northwest China was used to illustrate this method. Seasonal rivers are not considered in this article.
Study Area
The Wei River, the largest tributary of the Yellow River, originates from Niaoshu Mountain in Gansu Province, flows through Ningxia and Shaanxi Provinces, and joins the Yellow River at Tongguan. The Wei River is a perennial river. The Wei River Basin (Figure 1), of which the northern region is the Loess Plateau and the southern region is the Qinling Mountains, has a length of 818 km and a drainage size of 1.35 × 10⁵ km² and is dominated by semi-arid hydrological characteristics; that is, the climate is cold and dry, rain occurs mainly in summer and autumn, the mean air temperature is 7.8–13.5 °C, mean annual precipitation is between 400–800 mm (decreasing from south to north), mean potential evapotranspiration is 800–1100 mm (decreasing from east to west), and mean runoff is 195 m³/s [31]. The Wei River has two main tributaries on the north side: the Jing River is the largest tributary, which has a length of 455 km and a catchment area (4.54 × 10⁴ km²) that represents 34% of the Wei River Basin, and the Beiluo River is the second largest tributary, which has a length of 680 km and a catchment area (2.69 × 10⁴ km²) that constitutes 20% of the Wei River Basin.
The Mainstream of the Wei River in Shaanxi Province (MSX, Figure 1b), due to its smooth terrain and fertile soil, has always been the industrial and agricultural production base.At present, MSX has become the most densely populated and most developed area in Northwest China.However, its water resources have always been limited: since ancient times, water in the Wei River has been extracted for human needs, such as irrigation, industry, and drinking water supply.The present water utilization rate is even up to 70% in the dry season and results in a shortage of EBF used by vegetation, wetlands, and aquatic organisms and significantly damages the healthy river basin ecosystem [32].Therefore, we used MSX as an example for evaluating the value of EBF, which is the key to protecting EBF.
Function Analysis
First and foremost, as the most important and the most active factor in the ecosystem, water is the carrier of life and the medium for substance cycling and energy exchange. In the perennial river, EBF, which ensures that the river channel always has water throughout the year, plays a key role in sustaining a healthy riverine ecosystem [4,33]. Therefore, the primary function of EBF is to maintain the basic ecological and environmental functions of a river (eco-environmental functions): (1) the most significant eco-environmental function is preventing rivers from drying up, which maintains the water column depths that are critical parameters for many species and prevents the extinction of endangered species [34]; (2) EBF can maintain the normal functions of a wetland ecosystem connected to the natural river channel and guarantee its rich terrestrial-aquatic biological resources and unique climatic environment, such as localized humidity increases and air purification [35]; (3) EBF can increase the capacity of a river to dilute and degrade pollutants and improve a river's self-purification ability and water quality; (4) rivers are the main passageways for nutrient transport in riverine ecosystems, and EBF can maintain the continuity of rivers and promote the circulation and cycling of nutrients in a river; (5) furthermore, with large EBF, the soil in flood plains can also be nourished, and its fertility can be improved.
Second, EBF has natural functions: (1) a certain amount of EBF within a river channel can meet the water demand for evaporation and seepage and maintain the flow conversion between surface water and groundwater; additionally, EBF plays an important role in recharging groundwater throughout the year in the funnel-shaped groundwater area; these characteristics are affiliated with the functioning of the hydrologic cycle [36]; (2) EBF also has the ability to scour the riverbed, carry sediment, maintain the integrity of a natural river channel, and sustain river geomorphology, which are geological functions.
Finally, EBF has a role in social functions [37]: (1) EBF contributes to the growth of fishes and other aquatic organisms in rivers, providing fishery products for humans; (2) EBF can improve the recreational experiences and landscape of a river basin; (3) Humans choose to live near rivers, and cities are built on river banks; EBF can revitalize a polluted/dry river, revive the surrounding areas of a river, and improve the quality of life for humans.
Different Functions between the Dry Season and Non-Dry Season
Generally, a river contains a wet season, a normal season, and a dry season.In the wet and normal seasons, also called the non-dry season, abundant water in the river can simultaneously meet the demands for anthropic water use and EBF, so the guarantee rate of EBF is high.In the dry season, reduced upstream inflow and lower water levels lead to sharp water contractions, coupled with the already high water utilization rates in water-scarce basins, which is why EBF often goes without protection [38].Therefore, the dry season is the critical period for EBF protection in water-scarce basins and is also the emphasis of this study.
Compared to the non-dry season, EBF in the dry season is low; hence, differences exist in the functions between the two, as shown in Table 1.
Recommended EBF
Determining the recommended EBF is the initial, fundamental step in estimating its value, and its annual water yield is represented by "W" in this article. As mentioned earlier, a relatively complete system for calculating recommended EBF has already formed [11][12][13][14][15][16][17][18][19][20][21][22][23], and each method has advantages and disadvantages. How to choose suitable methods and obtain an appropriate recommended EBF for the study area remains a challenge in EBF research. Because this is not the focus of this article, it is only covered briefly.
Table 1. Functions of EBF in the non-dry season and the dry season (Y: the function is present; N: the function is diminished).
Eco-environmental functions
Sustaining wetland ecosystems (Y/Y). Dry season: In the dry season, EBF can ease water shortages for wetlands, sustain normal growth and development of riverine organisms, adjust microclimates (such as temperature and humidity), and improve local air quality.
Water purification (Y/Y). Non-dry season: EBF helps dilute, diffuse, transfer, and purify pollutants, and increases the self-purification capacity of a river. Dry season: In the dry season, EBF can increase river runoff, enhance the ability to dilute, diffuse, transfer, and purify pollutants, and improve the self-purification capacity of a river.
Nutrient transport (Y/Y). Non-dry season: EBF maintains the nutrient cycles of riverine ecosystems and nourishes riverine organisms. Dry season: In the dry season, EBF can ensure the connectivity of a river, sustain the nutrient cycles of a riverine ecosystem, and nourish riverine organisms.
Maintaining soil fertility (Y/N). Non-dry season: In the flood season, EBF moistens and fertilizes the soil of inundation areas, which are the essential nutrient sources for riparian vegetation. Dry season: In the dry season, EBF is low, almost entirely within the river channel; thus, its effect on wetting the soil around a river is diminished.
Natural functions
Hydrologic cycle (Y/Y). Non-dry season: EBF recharges groundwater, satisfies the water requirements of evaporation and seepage, and promotes regional hydrologic cycles. Dry season: In the dry season, EBF can satisfy the water requirements of evaporation and seepage and promote regional hydrologic cycles.
Geology (Y/N). Non-dry season: When EBF is high, it has an effect on scouring the river bank. Dry season: In the dry season, EBF is low and the scouring functions are diminished.
Sediment transport (Y/N). Non-dry season: EBF can increase the capacity of a river to transport sediment and alleviate siltation. Dry season: Due to the low sediment concentration of a river in the dry season, this function of EBF is reduced.
Social functions
Fishery production (Y/Y). Non-dry season: EBF contributes to the normal growth of fishes and other aquatic organisms in a river, which are aquatic products for humans. Dry season: In the dry season, EBF can sustain the survival and multiplication of organisms in a river and guarantee a higher biomass, which is beneficial for the rapid growth of organisms in the wet season.
Recreation and landscapes (Y/Y). Non-dry season: EBF maintains the river level and meets the water requirements for recreation and landscapes. Dry season: In the dry season, the upstream inflow of a river decreases; EBF can raise the river water level and improve the river basin landscape.
Improving quality of life (Y/Y). Non-dry season: EBF revitalizes the river, revives the areas surrounding the river, and improves human quality of life. Dry season: In the dry season, EBF can improve pollution problems and river drought and provide a better life for humans.
Value Method of EBF
At present, the most popular method for calculating the value of an ecosystem is that established by Costanza et al. [39]. On the basis of individuals' "willingness-to-pay" for ecosystem services, the unit value of ecosystem services was estimated using either (1) the sum of consumer and producer surplus, (2) the net rent (or producer surplus), or (3) price times quantity as a proxy for the economic value of the service. The unit values were then multiplied by the surface area of each ecosystem to arrive at a global total. Based on the results obtained by Costanza et al. [39], Xie et al. [40] established value equivalent factors, drawing on responses to ecological questionnaires from Chinese specialists, which yielded the ecosystem service value per unit area of Chinese terrestrial ecosystems; the ecological asset value of China was then estimated.
Therefore, we can also estimate the value of EBF with this method through the following steps: (1) estimate the unit values of EBF in the study area by means of equivalent factors; (2) calculate the corresponding water areas of EBF; and (3) multiply the unit values by the corresponding water areas to obtain the value of EBF.
Value Equivalent Factor of EBF
(1) Standardization equivalent
The standardization equivalent (1 standard unit value equivalent factor of ecosystem service), which is used for representing and quantifying the potential contribution of different ecosystems in ecosystem service functions, is the value of the average grain yield of 1 ha of farmland in the study area [41].The value of the average grain yield in a farmland ecosystem is calculated based on the net profits of rice, wheat, and corn.
where D is the standardization equivalent; S_r, S_w, and S_c are the ratios of the sown areas of rice, wheat, and corn to the total area of these three crops in the study area, respectively; and F_r, F_w, and F_c are the net profits of rice, wheat, and corn per unit area of the study area, respectively.
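A minimal sketch of this calculation, assuming the weighted-sum form implied by the definitions above (an assumption rather than necessarily the authors' exact expression), is D = S_r × F_r + S_w × F_w + S_c × F_c, i.e., the crop net profits per unit area weighted by their sown-area shares.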
(2) Estimation of the value equivalent factor
In order to simplify the evaluation of the functions shown in Table 1, we reviewed research achievements focused on methods for valuing ecosystem functions [42][43][44][45] and consulted various annual public statistical data of China, such as the Chinese Statistical Yearbook, the Chinese Environmental Quality Bulletin, and the National Economy and Society Developed Statistical Bulletin, among others. Based on this literature, we obtained the function capacity per unit area and the value per unit function (Table 2). We can then calculate the function value per unit area and compare it with the standardization equivalent, from which the value equivalent factor of EBF can be obtained [41].
Corresponding Water Area of EBF
The corresponding water area of EBF can be obtained using the simple hydraulic rating method or more sophisticated hydraulic/habitat simulation methods. Owing to the scarcity of local ecological data in MSX, we applied the simple hydraulic rating method to estimate it roughly.
The river channel can be divided into several sections according to its geometric features. We can collect data at generalized point j and fit a curve relating EBF to its corresponding water surface width.
where B_j is the water surface width at generalized point j and Q_j is the flow at generalized point j.
By combining B_j with the length of each section, L_j, we can calculate the corresponding water area S_j.
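A minimal sketch of this procedure is shown below; it assumes a power-law form for the width–discharge rating curve (a common but not mandated choice) and approximates each section's water area as the fitted width times the section length. All input data are hypothetical, not Wei River observations.

```python
import numpy as np

# Hypothetical gauged pairs of flow (m^3/s) and water surface width (m) at point j.
flows_m3_s = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
widths_m = np.array([18.0, 25.0, 33.0, 44.0, 58.0])

# Fit a power-law rating curve B = a * Q**b in log-log space.
b, log_a = np.polyfit(np.log(flows_m3_s), np.log(widths_m), 1)
a = np.exp(log_a)

def water_area_m2(ebf_m3_s: float, section_length_m: float) -> float:
    """Approximate water surface area of one section: fitted width times section length."""
    width = a * ebf_m3_s ** b
    return width * section_length_m

# Example: a recommended EBF of 6 m^3/s over a 12-km section.
print(f"S_j ~ {water_area_m2(6.0, 12_000.0):,.0f} m^2")
```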
Sustaining wetland ecosystems
is the value of sustaining wetland ecosystem per unit area; A dried-up river may result in large-scale destruction of wetlands and degradation of related functions, and constructed wetland is necessary to build to replace its environmental functions, so V wl is the cost per unit area of constructed wetland. ( Water purification V 2 is the value of water purification per unit area; G 1 and G 2 are the reductions in COD and NH 3 -N caused by increased purification ability; P 1 and P 2 are the treatment costs of COD and NH 3 -N. ( Nutrient transport V 3 is the value of nutrient transport per unit area; C m is the nutrient quantity per unit area; W is the water yield of EBF; P 3 is the price for nutrients. ( Hydrologic cycle V 4 is the value of the hydrologic cycle per unit area; W is the water yield of EBF; P 4 is the water price. ( Fishery production V 5 = α × β × V fs V 5 is the value of fishery production per unit area; α is the proportion of water resources in the whole fishery production; β is the conversion coefficient of the river; V fs is the fishery value per unit area. (6) Recreation and landscapes V 6 is the value of recreation and landscapes per unit area; γ is the proportion of the river's tourism income out of the total tourism income; ω is the conversion coefficient of EBF; V tr is the tourism income per unit area.
Improving quality of life: EBF can revitalize a polluted or dry river, improve the city's image, revive the surrounding areas, and improve the quality of life for humans.
Scarcity Coefficient
The scarcer the water resources, the higher the value of water; conversely, the more abundant the water, the lower its value, and the same applies to EBF. When EBF is sufficiently abundant in an area, scarcity has little effect on its value, and as EBF abundance increases the scarcity value declines rapidly, even approaching zero. In contrast, as scarcity worsens, the scarcity value of EBF increases, and this growth is exponential rather than linear [46].
where µ is the scarcity coefficient; w n is the scarcity equivalent factor of EBF, ξ n is the weight of w n and ∑ ξ n = 1, 0 ≤ ξ n ≤ 1, and θ is the regulatory factor of policy.
(1) w 1 : the average per capita water availability,
w 1 = w 11 /w 12 (11)
where w 11 and w 12 are the average per capita water availability in the study area and the nation, respectively.
(2) w 2 : EBF, where w 21 and w 22 are the measured data and baseline of EBF, respectively.
(3) w 3 : precipitation, where w 31 and w 32 are the precipitation rates in the study area and the nation, respectively.
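Because the combination rule for µ is not written out above, the sketch below assumes an illustrative exponential form consistent with the qualitative description in the previous subsection; the w 2 and w 3 ratios are likewise assumed to mirror the structure of Equation (11):

```python
import numpy as np

# Placeholder inputs (not measured values)
w11, w12 = 330.0, 2200.0   # per capita water availability: study area vs. nation (m^3)
w21, w22 = 3.2, 5.0        # EBF: measured vs. baseline (m^3/s)
w31, w32 = 60.0, 160.0     # dry-season precipitation: study area vs. nation (mm)

w = np.array([w11 / w12, w21 / w22, w31 / w32])  # scarcity equivalent factors
xi = np.array([1/3, 1/3, 1/3])                   # equal weights, as in the text
theta = 0.2                                      # policy regulatory factor (upstream value)

# Assumed combination rule: scarcity grows exponentially as the weighted factors
# fall below 1 (this functional form is an illustration only).
mu = theta * np.exp(np.sum(xi * (1.0 - w)))
print(f"scarcity coefficient mu = {mu:.2f}")
```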
EBF Value
The EBF value can be expressed through two indicators: the total value of EBF and the benefit per unit discharge of EBF (the "unit benefit"), i.e., the benefit brought by each unit of EBF discharge.
In consideration of all of the above, the total value of EBF is
V jk = µ jk × D jk × S jk × ∑ i e ijk (14)
where V jk is the total value of EBF in the jth section at the kth period, µ jk is the scarcity coefficient of EBF in the jth section at the kth period, e ijk is the value equivalent factor of the ith function in the jth section at the kth period, D jk is the standardization equivalent in the jth section at the kth period, and S jk is the water area in the jth section at the kth period.
The benefit per unit discharge of EBF is
BD jk = V jk /W jk (15)
where BD jk is the benefit per unit discharge of EBF in the jth section at the kth period; V jk is the total value of EBF in the jth section at the kth period; W jk is the water yield of EBF in the jth section at the kth period.
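A compact numerical sketch of the two indicators follows; the summation form mirrors Equations (14) and (15) as given above, and all inputs are placeholders:

```python
import numpy as np

# Placeholder inputs for one river section j and one period k.
mu_jk = 1.3                       # scarcity coefficient
D_jk  = 3068.0                    # standardization equivalent (RMB/ha)
S_jk  = 250.0                     # water area of EBF (ha)
e_ijk = np.array([3.9, 2.6, 0.5, 0.8, 1.1, 0.7])  # equivalent factors of the six functions
W_jk  = 5.0 * 86400 * 121         # water yield over Dec.-Mar. (m^3) for a 5 m^3/s flow

V_jk  = mu_jk * D_jk * S_jk * e_ijk.sum()   # total value of EBF (RMB)
BD_jk = V_jk / W_jk                         # benefit per unit discharge (RMB/m^3)

print(f"total value = {V_jk / 1e8:.2f} x 10^8 RMB, unit benefit = {BD_jk:.2f} RMB/m^3")
```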
Recommended EBF in MSX
To date, several Chinese researchers have estimated the recommended EBF at the Linjiacun, Weijiabu, Xianyang, Lintong and Huaxian stations in MSX using a variety of approaches, including both classic methods and methods of their own. The results are shown in Tables 3 and 4 [21,47]. Owing to space limitations, the results estimated by hydrological methods at the Weijiabu, Xianyang, Lintong and Huaxian stations are not listed here. (DMAF: driest monthly average flow method; Q90: average flow method in the driest month with 90% frequency; NGPRP: Northern Great Plains Resource Program. The unit is m 3 /s.) Anthropogenic disturbance has seriously affected the Wei River, which suffers from a severe shortage of water resources. In the dry year of 1996, there were 255 days on which EBF in the upstream of the Wei River did not reach 1 m 3 /s. Compared with this reality, some of the calculated results, such as those of 7Q10, the Texas method and the wetted perimeter method, are too high to be satisfied. Therefore, by comparing the results of three types of methods (10% Tennant, the mean depth method and the water quantity/quality method) and taking the actual situation of MSX into account, the recommended range and the short-term baseline of EBF were obtained, as shown in Table 4; these values have gained recognition from Chinese experts. The recommendation will be adjusted once the goal of the recent Wei River rehabilitation is achieved.
Methods
Since the emphasis of this article is the evaluation of the EBF value, we did not recalculate the recommended EBF in MSX but used the short-term baseline of EBF directly.
EBF Value in MSX
The dry season (Dec.-Mar.) is the critical period for EBF protection in MSX, when the conflict between anthropic water use and EBF is most acute, so we primarily focus on the EBF value of MSX in the dry season. Thus, a "year" in this article indicates the period from December of the previous calendar year to March of the labelled year. For example, 1973 represents the period from Dec. 1972 to Mar. 1973.
Value Equivalent Factor in MSX
With the abovementioned method, we can obtain the value equivalent factor of EBF in the Wei River Basin. On this basis, combined with D 2015 in MSX (3068 RMB/ha) (RMB: Ren Min Bi, the Chinese currency), the value per unit area for each of the EBF functions in MSX was estimated in Table 5. Notably, to eliminate price changes and make the evaluation results more comparable, we selected the price of RMB in 2015 as the constant price, written as 2015 RMB.
(1) River sections in MSX: The mainstream of the Wei River in Shaanxi Province (Figure 1b) can be divided into 5 sections: Section I (Tuoshi-Linjiacun), Section II (Linjiacun-Weijiabu), Section III (Weijiabu-Xianyang), Section IV (Xianyang-Lintong), and Section V (Lintong-Huaxian). Section I is upstream (denoted by "U"); Sections II and III are midstream (denoted by "M"); and Sections IV and V are downstream (denoted by "D").
(2) Fitting curve of EBF in MSX: In the Annual Hydrological Report of the People's Republic of China (Yellow River Basin), we found a variety of measured data for the Wei River, including measured flows and their corresponding water surface widths at different times. To establish the fitting relationship between low flow (Q) and water surface width (B) as accurately as possible, we chose only the data from the dry season (Dec.-Mar.), because at this time the measured flow in the river channel can almost be regarded as EBF. The resulting fitting curves at the 6 hydrological stations are shown in Figure 2.
(3) Corresponding water area of EBF in MSX: Based on the data of EBF and the length of each section, we can calculate the corresponding water area S j (Figure 3) in the dry season from 1973 to 2015 by means of Equation (9).
Scarcity Coefficient in MSX
(1) w 1 : the average per capita water availability. The average per capita water availability in China is approximately 2200 m 3 , less than one quarter of the world average. Moreover, the average per capita water availability in MSX is only about 15% of the national average.
(2) w 2 : EBF. As can be seen in Figure 4, the upstream EBF cannot meet the baseline in most years, whereas the relatively large midstream and downstream EBFs satisfy the baseline except in a few years.
(3) w 3 : precipitation. As shown in Figure 5, in the dry seasons from 1973 to 2015, precipitation in MSX was almost always lower than the national average.
(4) ξ n and θ. ξ n : we set equal weights for the w n , i.e., ξ n = 1/3. θ: Considering reasonableness and residents' ability to pay for the EBF value, we used the residential water price as the standard; that is, the annual mean EBF value should not rise too far beyond the residential water price, in preparation for further EBF eco-compensation. According to the EBF situation in MSX, θ U (θ for upstream) is 0.2 due to the severe scarcity of EBF, and θ M (θ for midstream) and θ D (θ for downstream) are 1.
Figure 4.
The measured data and baseline for EBF in the dry season at the upstream, midstream and downstream sections of MSX from 1973 to 2015. "BU" represents the baseline of the upstream section (5 m 3 /s), "BM" represents the baseline of the midstream sections (8 m 3 /s), and "BD" represents the baseline of the downstream sections (16 m 3 /s). The measured data were provided by the Annual Hydrological Report of the People's Republic of China (Yellow River Basin), and the baseline data were from Table 4.
EBF Value in MSX
We entered the above results into Equations (14) and (15) and obtained the total value and the unit benefit of EBF in MSX from 1973 to 2015, respectively (Figure 6). First, as shown in Figure 6a, the total value in the downstream was higher than that in the midstream and upstream, ranging from 7.84 × 10 8 RMB to 36.82 × 10 8 RMB. The total value of EBF rose along with the water yield of EBF; this is because the EBF value is a functional value: the higher the EBF, the larger the service area, and the greater the total value of the service functions.
Second, with regard to Figure 6b, the unit benefit in the upstream was well above that in the midstream and downstream, and the difference between the midstream and downstream was small. The unit benefit increased as EBF decreased: the lower the EBF, the higher the unit benefit, ranging from 1.56 RMB/m 3 to 23.22 RMB/m 3 . The annual mean unit benefit in the upstream is 6.03 RMB/m 3 , slightly higher than the residential water price in MSX (5.64 RMB/m 3 ). The reason for this is that the serious shortage of EBF in the upstream section led to a high unit benefit, whereas the strong security of EBF in the midstream and downstream reduced their unit benefit to approximately 3.70 RMB/m 3 .
Comparison with Similar Studies
We compared our study with two classic articles: the first was The Value of the World's Ecosystem Services and Natural Capital, written by Costanza et al. in 1997 [39], who estimated the economic value of 17 ecosystem services for 16 biomes for the first time; the second was Ecological Assets Valuation of the Tibetan Plateau, written by Xie et al. [40] in Chinese, who established the ecosystem services value unit area for Chinese terrestrial ecosystems for the first time, according to partial global ecosystem services value evaluation results obtained by Costanza et al. [39] along with responses to ecological questionnaires from specialists in China.
There are several differences in research content between our study and these two articles; hence, to conduct a contrastive analysis, we selected and processed their results as follows: (1) EBF is part of a river, and its functions are part of a river's functions. Thus, we only selected the river functions related to EBF for comparison, matching "water purification" to "waste treatment", "nutrient transport" to "nutrient cycling", "hydrologic cycle" to "water regulation", "recreation and landscapes" to "recreation" and "fishery production" to "food production"; (2) We considered that EBF can maintain the normal functioning of wetland ecosystems connected with the natural river channel, so all functional values of a wetland were included, matching "sustaining wetland ecosystems" to "wetland"; (3) We converted all prices to 2015 RMB based on the exchange rate and discount rate to make the evaluations more comparable; (4) Qualitative functions were not compared here.
Regarding the data comparison in Table 6, the values of "fishery production" and "recreation and landscapes" were slightly higher than the results in the other two articles. Both functions are associated with social services in the dry season of water-scarce basins: EBF still provides for humans, supporting recreation and fisheries, even if this is very costly for the ecosystem, which is why the value of these two functions is high. The remaining values are nearly equal between the studies. Overall, the values per unit area of our study are in a reasonable range. Therefore, it can be assumed that our estimated EBF value for MSX is reliable.
Typical Years
Due to the strong security of EBF in the midstream and downstream sections, we only analyzed typical years for the upstream section. On the basis of the EBF in the dry season, combined with the annual inflow, we chose 1975 (25%), 2012 (50%), 2009 (75%), and 1997 (90%) as the typical years in the upstream.
As shown in Table 7, the average EBF in Dec.-Mar. cannot meet the baseline except in the wet year (25%). The total value dropped, but the unit benefit increased, as the scarcity of EBF intensified.
Limitations
In this study, we developed a relatively simple method to evaluate the value of EBF, but some limitations remain.
First, the equivalent factor method allows the value of EBF to be calculated easily. However, a key challenge of this method is obtaining reliable value equivalent factors, because different quantitative approaches and price bases lead to different equivalent factor estimates [39]. The equivalent factors adopted in this study were calculated with Equations (2)-(7); the applicability of these equations to other areas requires further research.
Second, we estimated the corresponding water area of EBF in the Wei River by means of a simple hydraulic rating method. This method has low data requirements and simple calculations, but its results are relatively rough. It has largely been superseded by more sophisticated hydraulic/habitat simulation methods [18], which can obtain more accurate results. However, little ecological data on the Wei River is available at present, which makes hydraulic/habitat simulation difficult; this is a direction for future research.
Conclusions
E-flow improves the human survival environment and is important for ecosystem protection, and this improvement constitutes the value of e-flow. The estimation of e-flow values has become a new leading direction in e-flow research in recent years. It can scientifically measure the benefits that accompany e-flow protection and is also key to resolving the conflict between anthropic water use and e-flow.
In this study, we focused on the e-flow within the river channel, referred to as EBF.The dry season is the critical period for EBF protection in water-scarce basins.During this period, the primary function of EBF is to maintain the basic ecological and environmental functions of rivers (eco-environmental functions), such as avoiding dried-up rivers in a perennial river, sustaining wetland ecosystems, water purification, and nutrient transport.Additionally, EBF also plays a role in the hydrologic cycle, fishery production, recreation and landscapes, and improving quality of life.
Using the Wei River Basin in Northwest China as a case study, we developed a method based on EBF functions to estimate the value of EBF in MSX from 1973 to 2015. The EBF value can be expressed through two indicators: the total value and the unit benefit. For the total value, we determined that the total value in the downstream was higher than that in the midstream and upstream, ranging from 7.84 × 10 8 RMB to 36.82 × 10 8 RMB, and that the value increased along with the water yield of EBF. This is because the value of EBF is a functional value: the more EBF, the larger the service area, and the greater the total value of the service functions. For the unit benefit, the unit benefit in the upstream was well above that in the midstream and downstream, ranging from 1.56 RMB/m 3 to 23.22 RMB/m 3 , and increased along with the reduction in EBF. The reason is that a serious shortage of EBF leads to a high unit benefit, whereas strong security of EBF reduces the unit benefit.
The results of our analysis provide a simple and effective way to present the considerable value of e-flow in monetary terms.Compared with similar studies, it can be assumed that our estimated EBF value of MSX is reliable, and this approach is achievable.However, additional work needs to be completed to advance our approach as a practical management tool.For example, it would be helpful to provide a decision-support tool that enables easier estimation of the reasonable protection levels of EBF with different water uses and hydrologic conditions.In addition, although there are still some functions of EBF that are difficult to evaluate with currency, such as avoiding dried-up rivers and improving quality of life, the values of these functions are irreplaceable in the areas of scientific research, aesthetic art and nurturing spirituality.Future research will attempt to address these problems.
Figure 1.
Figure 1.(a) The digital elevation map of Wei River Basin, which can be divided into the Mainstream, Jing River, and Beiluo River Basins.(b) Overview map of Mainstream of the Wei River in Shaanxi Province (MSX); Linjiacun is the hydrological control station at upstream, Xianyang is the hydrological control station at midstream, and Huaxian is the hydrological control station at downstream.
[21,47]. Tuoshi station was established in 2004; for lack of a long series of hydrologic data, it is viewed only as the starting point of MSX.
Figure 2.
Figure 2. The fitting curve of low flow (Q) and water surface width (B) at 6 hydrological stations.Both Q and B were measured data provided by the Annual Hydrological Report of the People's Republic of China (Yellow River Basin).
Figure 3 .
Figure 3. Corresponding water area of EBF in the dry season from 1973 to 2015.
Figure 5 .
Figure 5.The precipitation in the dry season at upstream, midstream and downstream sections in MSX from 1973 to 2015.The upstream meteorological stations include Longxian and Baoji, the midstream include Fengxiang, Wugong, Qindu and Xi'an, and the downstream include Jinghe, Yaoxian, Tongchuan, Pucheng, Huaxian, and Huashan.The precipitation data from the upstream, midstream and downstream sections are the average values of the abovementioned stations over the same period.The national precipitation data are the long-time average annual values of China.All the precipitation data were provided by the National Meteorological Information Center of China.
Figure 6 .
Figure 6.(a) The total value of EBF in MSX from 1973 to 2015.(b) The unit benefit of EBF in MSX from 1973 to 2015."WU" and "WD" represents the water yields of EBF in the upstream and downstream sections in the dry season, respectively.
Author Contributions: S.Y. and H.L. conceived and designed the model; B.C. and Z.G. contributed the modelling calculations. Funding: This research was funded by the National Natural Science Foundation of China (Grant No. 51479162).
Table 1 .
Different functions of environmental base flow (EBF) between the dry season and non-dry season. "Y" represents the possession of this function; "N" represents the absence of this function. In the dry season, EBF can avoid dried-up riverbeds. The water level required for the normal growth and development of fish and other aquatic organisms is guaranteed, and rare species are protected from extinction.
Table 2 .
The function value of EBF in the dry season.The functions in the non-dry season were not included here.
Table 3 .
The results of recommended EBF estimated by hydrological methods at Linjiacun station, cited from [21].
Table 4 .
The comparisons of recommended EBF estimated by 3 types of methods at 5 stations in MSX, cited from [21]. The unit is m 3 /s.
Table 5 .
The value equivalent factor of EBF in Wei River Basin.The equivalent factor is unitless; the unit of value per unit area is RMB/ha in 2015.
Table 6 .
The comparisons of studies on the value per unit area.USD: United States Dollar; 1994 USD: the price of USD in 1994; 2003 RMB: the price of RMB in 2003.The exchange rate for USD was from 1994, and the discount rate over the years was sourced from the People's Bank of China.
Table 7 .
The total value and the unit benefit of EBF in typical years in the upstream."BU" represents the baseline of the upstream section (5 m 3 /s).
Four new species of Trichoderma with hyaline ascospores from southwest China
Collections of Trichoderma were made from southwest China and examined. Four new species producing hyaline ascospores, T. fructicola, T. medogense, T. palidulum and T. virgineum, were found, and are described and illustrated. Their phylogenetic positions were determined based on sequence analyses of the combined RNA polymerase II subunit b and translation elongation factor 1 alpha genes. Trichoderma fructicola, appearing as a lone lineage among hyaline-ascospored groups, is diagnosed by cortical tissues of textura epidermoidea, a remarkable colour change of the peridium in KOH, and verticillium- to trichoderma-like conidiophores. As a sister of T. voglmayrii, T. medogense is similar to T. voglmayrii in having yellow-brown to purplish red stromata and trichoderma-like conidiophores, but differs in apapillate ostioles, subcortical tissues of textura intricata, narrow phialides, and smaller conidia. Trichoderma palidulum is located in the Viride clade and is distinct from its allies in stroma and colony morphology. Trichoderma virgineum is closely associated with T. henanense and T. odoratum. The three species are similar in having yellowish stromata, monomorphic ascospores, white colonies on three standard media, simple-branched conidiophores, and hyaline conidia, for which a new clade is proposed. Morphological distinctions and sequence divergences between the new species and their close relatives are discussed.
Introduction
Trichoderma Pers. is a hyperdiverse genus with an extraordinarily high number of species. It is cosmopolitan on various substrates across a wide range of geographic distributions (Chaverri & Samuels 2003, Kredics et al. 2010). Some species of the genus have been found to be beneficial and are applied in agriculture, industry and environmental protection (Harman 2006, Bischof et al. 2016, Saravanakumar & Kathiresan 2014, Adnan et al. 2017). Recognizing its biodiversity and continuously exploring new resources of the group attract the attention of many researchers from diverse regions of the world.
Trichoderma species are characterized by perithecia immersed in fleshy stromata of different colours, cylindrical asci with hyaline or green ascospores which disarticulate at the septum, conidiophores forming several types of branch patterns, and hyaline or green conidia (Jaklitsch et al. 2006, Qin & Zhuang 2016c). Bissett et al. (2015) listed 254 species of the genus with either cultures or DNA sequence data available. More recently, about 40 taxa have been added to the genus.
Materials & methods
Specimens investigated were collected from the Tibet Autonomous Region and Yunnan Province, and are deposited in the Herbarium Mycologicum Academiae Sinicae (HMAS). Strains were isolated from ascospores (Jaklitsch 2009) and deposited in the State Key Laboratory of Mycology, Institute of Microbiology, Chinese Academy of Sciences.
Mycelium was harvested to extract genomic DNA following the methods of Wang & Zhuang (2004). Fragments of RPB2 and TEF1 were amplified with the primer pairs and method reported by Jaklitsch (2009) and sequenced at Beijing Tianyihuiyuan Bioscience and Technology, China.
The sequences used in the phylogenetic analysis are listed in Table 1, including sixty Trichoderma species representing fifteen named clades and several independent terminal branches. They were assembled, manually adjusted and aligned using the DNAStar SeqMan program 7.1.0 (DNASTAR Inc., Madison) and ClustalX 1.83 (Thompson et al. 1997).
Maximum parsimony (MP) and Bayesian inference (BI) analyses were conducted to determine the phylogenetic positions of the new species. The MP analysis was carried out using PAUP v. 4.0b10 (Swofford 2002) with the following settings: all characters were equally weighted, gaps were treated as missing data, starting trees were obtained by random taxon addition with 1000 replicates, the branch-swapping algorithm was tree-bisection-reconnection (TBR), and the steepest descent and MulTrees options were not in effect. The BI analysis was implemented with MrBayes v. 3.1.2 (Ronquist & Huelsenbeck 2003), and the best-fit model GTR+I+G was selected by MrModelTest v. 2.3 (Nylander 2004) using the Akaike information criterion (AIC); Metropolis-coupled Markov chain Monte Carlo (MCMCMC) searches were run for 1,000,000 generations, sampling every 100 generations; the first 2500 trees were discarded as the burn-in phase. Statistical support was evaluated by maximum parsimony bootstrap proportions (MPBP) and Bayesian inference posterior probabilities (BIPP). Trees were viewed in FigTree v. 1.4.3 (Rambaut 2016).
Phylogenetic analyses
The partition homogeneity test (P = 0.01) indicated that the individual partitions were not highly incongruent (Cunningham 1997), and thus RPB2 and TEF1 were combined for sequence analyses. The combined dataset contained 64 sequences and 2437 characters (1082 characters for RPB2, 1355 characters for TEF1), of which 62 sequences represented 60 Trichoderma species. In the MP analysis, 1248 characters were constant, 171 variable characters were parsimony-uninformative, and 1018 were parsimony-informative. The MP analysis resulted in two most parsimonious trees, one of which is depicted in Fig. 1 (tree length = 8970, consistency index = 0.2591, homoplasy index = 0.7409, rescaled consistency index = 0.1377, retention index = 0.5314). The BI tree is similar to the MP tree in topology. The phylogenetic tree and DNA sequence alignments are deposited in TreeBASE (no. S21884).
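As an illustration of the concatenation step (the use of Biopython and the file names are assumptions, not stated in the original methods), aligned RPB2 and TEF1 sequences sharing the same taxon labels can be joined into a combined matrix as follows:

```python
from Bio import SeqIO

# Read the two single-gene alignments (already aligned FASTA); file names are placeholders.
rpb2 = {rec.id: str(rec.seq) for rec in SeqIO.parse("rpb2_aligned.fasta", "fasta")}
tef1 = {rec.id: str(rec.seq) for rec in SeqIO.parse("tef1_aligned.fasta", "fasta")}

shared_taxa = sorted(set(rpb2) & set(tef1))

# Concatenate gene by gene for each shared taxon and write the combined matrix.
with open("combined_rpb2_tef1.fasta", "w") as out:
    for taxon in shared_taxa:
        out.write(f">{taxon}\n{rpb2[taxon]}{tef1[taxon]}\n")

print(f"{len(shared_taxa)} taxa, "
      f"{len(next(iter(rpb2.values()))) + len(next(iter(tef1.values())))} characters")
```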
Sixty Trichoderma species representing 15 named clades, a newly proposed clade, and some independent terminal branches were analyzed. All the investigated species clustered together with

Stromata solitary, gregarious or aggregated in small numbers, pulvinate or discoidal, outline circular or elongate to irregular, rounded, broadly attached, pale orange-yellow when fresh, apricot yellow when mature, 0.5-2.5 mm diam., 0.4-0.7 mm thick (n = 35). Surface nearly smooth, the stroma bases surrounded with whitish hyphae. Ostiolar dots dark brown, distinct, densely distributed. Rehydrated stromata deep chrome to orange brown, darkened in 3% KOH.
Colony on CMD 29-33 mm in radius after 72 h at 25 °C, mycelium covering the plate after 9 d. Colony circular, margin well defined; aerial hyphae short and sparsely disposed, longer and denser towards the distal margin. Conidiation noted after 3 d on aerial hyphae. No distinct odor or diffusing pigment observed.
Colony on PDA 27-31 mm in radius after 72 h at 25 °C, mycelium covering the plate after 9 d. Colony nearly circular, zonate; aerial hyphae cottony. Conidiation noted after 1 d on aerial hyphae near the plug. No distinct odor or diffusing pigment observed.
Colony on CMD 36-45 mm in radius after 72 h at 25 °C, mycelium covering the plate after 5 d. Colony circular, greenish; aerial hyphae abundant in the distal part of the colony. Conidiation noted after 3 d on aerial hyphae. Pigment yellowish, no odor formed.
Colony on PDA 37-40 mm in radius after 72 h at 25 °C, mycelium covering the plate after 5 d. Colony nearly circular, slightly zonate, greenish; aerial hyphae short, erect, and downy. Conidiation noted after 3 d on aerial hyphae. Pigment yellowish, no odor formed.
The regional monographic treatments of Trichoderma species having hyaline ascospores were carried out by Jaklitsch (2011) and Jaklitsch & Voglmayr (2015). The hyaline-ascospored species of the genus fall into ten clades, including the recently added Asterineum clade, characterized by ostiolar dots surrounded by stellate cracks. Among the known species, T. virgineum shares phenotypic similarity with T. odoratum and T. henanense, and is closely related to them with high statistical support (MPBP/BIPP = 100%/100%). These three species have in common yellowish stromata, white colonies on three standard media, monomorphic ascospores, acremonium- or verticillium-like conidiophores, and hyaline conidia. The Virgineum clade is here proposed. Among the existing clades, the Hypocreanum clade also forms simple conidiophore branch patterns and hyaline conidia; however, species of that clade have extensively effused stromata and mostly possess vertically parallel and warted hyphae above the stroma surface (Overton et al. 2006). It is remotely related to the Virgineum clade (Fig. 1).
Many new species of Trichoderma have been published based on materials collected from this country in the past four years (Zhu & Zhuang 2014, 2015a,b, Qin & Zhuang 2016a,b,c, 2017, Chen & Zhuang 2017a,b,c). We believe that more taxa will be discovered in the unexplored regions, which will renew our understanding of the species diversity, resources, taxonomy and phylogeny of the genus, and will broaden our knowledge of the application of useful fungal resources.
Figure 1 - Maximum parsimony tree of selected Trichoderma species inferred from the combined partial sequences of RPB2 and TEF1, showing the phylogenetic positions of the new species. MPBP above 50% (left) and BIPP above 90% (right) are indicated at the nodes. New species and the clade proposed are in boldface (TreeBASE no. S21884).
Table 1
Trichoderma species used in phylogenetic analyses.
Axion Star Explosions: A New Source for Axion Indirect Detection
If dark matter is composed of axions, then axion stars form in the cores of dark matter halos. These stars are unstable above a critical mass, decaying to radio photons that heat the intergalactic medium, offering a new channel for axion indirect detection. We recently provided the first accurate calculation of the axion decay rate due to axion star mergers. In this work we show how existing data concerning the CMB optical depth lead to strong constraints on the axion photon coupling in the mass range $10^{-14}\,{\rm eV}\lesssim m_a\lesssim 10^{-8}\,{\rm eV}$. Axion star decays lead to efficient reionization of the intergalactic medium during the dark ages. By comparing this non-standard reionization with Planck legacy measurements of the Thomson optical depth, we show that couplings in the range $10^{-14}\,{\rm GeV}^{-1} \lesssim g_{a\gamma\gamma} \lesssim 10^{-10}\,{\rm GeV}^{-1}$ are excluded for our benchmark model of axion star abundance. Future measurements of the 21cm emission of neutral hydrogen at high redshift could improve this limit by an order of magnitude or more, providing complementary indirect constraints on axion dark matter in parameter space also targeted by direct detection haloscopes.
I. INTRODUCTION
Numerical simulations of axion-like dark matter cosmologies have shown that a solitonic core forms in the center of every dark matter halo; see [1,2] for the first simulations and [3][4][5][6][7][8] for further studies. In Ref. [9] we explicitly calculated the evolution of the number and energy density of axion stars from hierarchical structure formation using semi-analytic models starting from an initial adiabatic perturbation spectrum. Ref. [2] demonstrated a power-law relation, M S ∝ M h α , between the mass M S of an axion star and the virial mass M h of its host halo at redshift z, where α is a power-law exponent; a quantity that enters this relation is M min , the smallest halo mass that could host a soliton of mass M S at a given redshift (given by the Jeans scale), which depends on the axion-like dark matter mass m a [2,10,11]. Ref. [2] found a core-halo mass relation of the form M S ∝ M h 1/3 , while Refs. [4,6,7] found M S ∝ M h 3/5 . Refs. [8,12] suggest that there is an intrinsic diversity in the core-halo mass relation. Importantly, all numerical simulations of axion-like dark matter cosmologies show that the core-halo mass relation is bounded between α = 1/3 and α = 3/5. The Monte Carlo merger tree model we employed in Ref. [9] shows that an average slope α = 2/5 captures this diversity well for both the soliton mass function and, more importantly, the merger rate. In Figure 1 we show isocontours of constant M h for various values of M S and m a for the case α = 2/5; for the other cases we refer the reader to Appendix A. We note that while the simulations of Refs. [1][2][3][4][5][6][7][8] are performed for m a ∼ 10 −22 eV, their results should apply to any axion mass for three reasons: 1) the axion stars appear to form during the gravitational timescale of the halo, 2) the primordial power spectrum expected from inflation is almost scale-invariant, and 3) the evolution of non-relativistic axion dark matter features a scaling symmetry that allows extrapolation to any axion mass, see e.g. [2].
II. AXION STAR EXPLOSIONS
Solitonic cores, also referred to as axion stars, have ultrahigh phase-space occupation numbers that can trigger collective processes which cannot occur in vacuum. Parametric resonance can lead to exponentially fast decays of axion stars into photons [13][14][15][16][17][18][19] or into relativistic axions [20][21][22], meaning that axion stars become unstable above a critical mass M decay S set by the axion-photon coupling g aγγ [13,19] (vector dark matter solitons have an analogous critical mass). Super-critical solitons explode into photons with E γ = m a /2. This happens by a collective process in which photons produced by axion decays stimulate other axions to decay, such that the axion star decay happens on a short timescale given by the light crossing time of the soliton. The energy released in the explosion of a supercritical soliton is E = M S − M decay S . There are, however, two factors that could prevent this explosion: 1) If the axion-like particle possesses self interactions, then axion stars can decay into axions, leading to a "Bosenova" above a critical mass M B−Nova set by the quartic self coupling λ [20][21][22][24][25][26]. For a quartic coupling typical of a cosine instanton potential, λ = m 2 a /f 2 a , and axion-photon coupling g aγγ ≃ α EM /(2πf a ), one finds that M decay S ≃ 600 M B−Nova . This means that for nominal values of g aγγ the axion stars will not actually decay into photons but rather into relativistic axions, with possible limits from such a decay explored in Ref. [27], see also [28]. However, there are many models that feature significantly enhanced g aγγ interactions, see e.g. [29][30][31][32][33][34][35], or suppressed quartic interactions, see e.g. [36,37]. In what follows, we will consider that the relevant effect is decay into photons, and thus our results will apply to scenarios such as those previously mentioned.
2) In the parameter space of interest for our study, axion stars are always hosted by halos of mass M h < 10 5 M ⊙ , which means that they lie below the baryonic Jeans scale and are too light to capture any significant amount of baryons from the intergalactic medium (IGM) [38,39]. This means that on average the axion stars formed will simply be surrounded by a baryon environment with average IGM properties. Nevertheless, the IGM contains ionized particles, which could kinematically block the axion decay through plasma effects. More concretely, the photon acquires an effective mass-squared that is proportional to the number of free electrons in the plasma [40]. Axion star decay is blocked until the plasma frequency drops to ω p (z decay ) < m a /2, resulting in an explosion of all the axion stars with M S > M decay S at a specific redshift z decay (see Appendix C). Once the plasma frequency has dropped such that parametric resonance decay is no longer blocked, supercritical axion stars will explode as soon as they are formed. In Ref. [9] we calculated the cosmological evolution of the number/energy density of critical axion stars using the extended Press-Schechter formalism and Monte Carlo merger trees. We considered the case that an axion star will only explode either if it is plasma blocked until it is super-critical, or if it is formed by a major merger, defined as the merger of two axion stars of comparable mass, such that the mass increase happens rapidly.
Soliton explosions lead to axion dark matter decay, and can inject energy into the IGM comparable to or greater than energy emission from astrophysical processes, such as core collapse supernovae [9], and thus offer a new method of indirect detection of axions.
III. HEATING THE INTERGALACTIC MEDIUM
Figure 2 outlines qualitatively the mechanism just described: axion stars start to form and grow when halos form and merge, which happens appreciably only at z ≲ 100.Once an axion star becomes massive enough and the plasma frequency is low enough, parametric resonance can take place and the star rapidly explodes into low energy photons of E γ = m a /2.These photons are absorbed by the IGM, heat it, and in turn via collisional ionizations lead to an increased ionization fraction of the Universe.
Axion star decay results in the release of a huge number of low energy photons with E γ = m a /2. We will be interested in m a < 10 −8 eV, whose associated low energy photons are efficiently absorbed via inverse Bremsstrahlung on ionized particles in the intergalactic medium, namely via γ e − p → e − p processes [41]. The net result of this absorption is to heat the IGM. In turn, once the temperature of the IGM rises above T ∼ 1 eV, collisional ionization processes e − H → 2e − p ionize the Universe. As a result, the decay of axion stars may result in a period of early reionization, which is strongly constrained by Planck legacy data [42,43]. This forms the basis of the constraints derived in this paper; see also [44][45][46][47][48][49][50][51] for constraints of this type on other dark matter candidates.

FIG. 2. Schematic of reionization caused by axion star explosions. Appreciable formation of dark matter halos hosting sufficiently massive axion stars occurs for z ≲ 100, see [9]. When an axion star decays, it releases a huge number of low-energy photons, which are absorbed by inverse Bremsstrahlung, leading to heating of the IGM. If the IGM becomes hot enough, reionization occurs. Energetically, reionization requires a fraction of around f decay DM ≈ 10 −9 of the dark matter to decay. For the lowest mass axion-like particles, m a ≲ 5 × 10 −13 eV, axion star decay is kinematically blocked at early times, leading to a population of super critical stars which decay all at once in a burst once the plasma frequency falls low enough to allow the decay. This leads to patchy reionization. At higher axion masses, plasma blocking is not efficient at the relevant redshifts, and instead super-critical stars formed by major mergers decay immediately as they form, leading to a more uniform and continuous reionization history.
Importantly, depending upon the absorption length scale of the photons produced by axion stars, the effect on the IGM can be different. Considering the γ e − p → e − p absorption length (see [41] and Appendix D), we find that for m a ≲ 5 × 10 −13 eV the photons are absorbed within very small volumes, and this generates a shockwave very much like the one formed by supernova explosions, as the amount of energy released is very similar [52]. On the other hand, for m a ≳ 5 × 10 −13 eV the absorption length-scale is larger than the inter-separation of axion stars, which results in a homogeneous cosmological increase in the global IGM temperature. It is important to highlight that in principle a fraction as small as f DM ∼ 3 × 10 −9 ≃ 13.6 eV n b /ρ DM of dark matter converted into heat in the IGM would be enough to fully reionize the Universe, see Appendix B.
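As a rough check of this number (taking the standard values Ω c /Ω b ≈ 5.4 and m p ≈ 938 MeV, which are not quoted in the text): f DM ≈ 13.6 eV × n b /ρ DM ≈ 13.6 eV/(5.4 × 938 MeV) ≈ 2.7 × 10 −9 , consistent with the quoted 3 × 10 −9 .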
IV. METHODOLOGY
We model the IGM temperature by including the heating generated by axion star decays and accounting for cooling from Compton processes, collisional excitations and ionizations, and the expansion of the Universe. Collisional excitations and ionizations dominate in our scenario, and we implement them using the fitted rates in [53,54]. We then track the free electron density x e using the effective 3-level atom approximation [55,56], including free protons and He + . We use the same rates and expressions as the recombination code RECFAST, including their fudge factor [57,58], except that we change T b → T γ in the terms in Eqs. (1) and (2) of [58] that account for photon ionization, as the approximation T b ≃ T γ breaks down for z < 200. Finally, in the regime m a ≲ 5 × 10 −13 eV the photons from axion star explosions are absorbed on scales smaller than their typical hosting halo and generate shockwaves which ionize small patches of the Universe. We track such explosions until they are unable to ionize more IGM. We follow standard methods used for supernova remnants [52,[59][60][61], and find results very close to those expected from the Sedov-Taylor solutions [62,63]. We refer the reader to Appendices E and F, where we describe in detail all the relevant heating and ionization processes for the two regimes of interest, respectively.
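A stripped-down sketch of the temperature evolution we track is given below (Python); it keeps only adiabatic cooling, Compton coupling and a generic injected heating rate, neglects helium, and omits the collisional and x e network terms of [53,54,57,58], so it illustrates the structure rather than the actual code used:

```python
import numpy as np

# Physical constants (SI units)
sigma_T = 6.652e-29            # Thomson cross section [m^2]
m_e_c   = 9.109e-31 * 2.998e8  # electron mass times c [kg m/s]
a_rad   = 7.566e-16            # radiation constant [J m^-3 K^-4]

H0 = 2.2e-18                   # Hubble rate today [1/s], indicative
Om, Or = 0.31, 9.0e-5

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4 + (1 - Om))

def T_gamma(z):
    return 2.725 * (1 + z)     # CMB temperature [K]

def evolve_Tb(z_grid, x_e=2e-4, heat_rate=0.0):
    """Euler integration of the gas temperature T_b(z) in K.
    heat_rate is a generic injected heating term in K/s (placeholder for the
    axion-star term); collisional processes and helium are neglected."""
    Tb = T_gamma(z_grid[0])    # start tightly coupled to the CMB
    out = [Tb]
    for z1, z2 in zip(z_grid[:-1], z_grid[1:]):
        z = 0.5 * (z1 + z2)
        dt = (z1 - z2) / ((1 + z) * H(z))        # cosmic time step [s]
        compton = (8 * sigma_T * a_rad * T_gamma(z)**4 * x_e
                   / (3 * m_e_c * (1 + x_e))) * (T_gamma(z) - Tb)
        adiabatic = -2 * H(z) * Tb               # expansion cooling
        Tb += (adiabatic + compton + heat_rate) * dt
        out.append(Tb)
    return np.array(out)

z_grid = np.linspace(100, 10, 400)
print(f"T_b(z=10) without extra heating: {evolve_Tb(z_grid)[-1]:.2f} K")
```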
V. CONSTRAINTS FROM THE CMB OPTICAL DEPTH
We derive conservative constraints on reionization driven by axion star explosions by using the 95% upper limit on the integrated Thomson optical depth to reionization from Planck CMB observations: τ reio < 0.068 [42,43]. In practice, and as customarily done for other types of energy injections [49][50][51], since observations show that the Universe should be fully ionized by redshift z ≃ 6 [73], which implies a minimum contribution of τ reio = 0.0384, we consider a region of parameter space excluded if the contribution to the optical depth to reionization from axion star explosions at 50 > z > 6 is τ reio > 0.03. We note that this is a conservative approach, as it assumes the Universe was not reionized until z = 6 by standard astrophysical sources.

FIG. 3. Parameter space excluded by Planck measurements of the Thomson optical depth. We also highlight the reach of future 21cm surveys and constraints from X/γ-Ray observations [64][65][66][67][68][69] and CAST [70], see [71]. The evolution of the free electron fraction and the baryon temperature is shown for the points highlighted by colored symbols in Figure 4.
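A sketch of how such optical depth numbers follow from an ionization history x e (z) is given below; the cosmological parameters are indicative values and helium is neglected, so this is not the exact Planck-based computation used for the quoted bounds:

```python
import numpy as np

c       = 2.998e8      # m/s
sigma_T = 6.652e-29    # m^2
n_H0    = 0.19         # hydrogen number density today [m^-3], indicative
H0      = 2.2e-18      # 1/s
Om      = 0.31

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om))

def tau_reio(x_e_of_z, zmin=0.0, zmax=50.0, n=5000):
    """Thomson optical depth for an ionization history x_e(z); helium neglected."""
    z = np.linspace(zmin, zmax, n)
    integrand = c * sigma_T * x_e_of_z(z) * n_H0 * (1 + z)**2 / H(z)
    return np.trapz(integrand, z)

# A Universe fully ionized below z = 6 and neutral above gives a value close to
# the minimum contribution of ~0.038 quoted in the text (helium would add a bit).
step_xe = lambda z: np.where(z < 6.0, 1.0, 0.0)
print(f"tau(fully ionized below z=6) = {tau_reio(step_xe):.3f}")
```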
The axion parameter space excluded as a result of this constraint is shown in the two regions in dark red in Figure 3, and the resulting evolution of the ionization fraction and IGM temperature for representative models is shown in Figure 4. Constraints in the region m a ≲ 5 × 10 −13 eV are a result of patchy reionization generated by the decay of all critical axion stars in the Universe once the axion decay is kinematically allowed, namely when ω p (z decay ) = m a /2.For m a ≳ 5 × 10 −13 eV the photon plasma mass is small compared to the axion mass, and the constraints arise instead from supercritical axion stars formed via mergers providing continuous heating and uniform reionization.The intermediate region of parameter space is constrained only at 1σ because in this regime the plasma frequency rises and blocks further axion star decay, see green curve of the upper panel of Figure 4.This region of parameter space is, however, expected to be tested with large scale CMB polarization data from LiteBIRD [74].
We note that the Planck constraints do not extend to arbitrarily large g aγγ -the larger the value of g aγγ , the smaller the mass of the halos that host critical soliton stars.In axion-like dark matter cosmologies structure formation is suppressed at M h < M min , see Eq. ( 2), and couplings g aγγ ≳ 10 −10 GeV correspond to axion stars For the scenarios we show there are no astrophysical sources of heating or reionization.We also show in black the ΛCDM evolution for xe from the Planck best fit cosmology and T b from the fiducial model of [72].In the lower panel we highlight scenarios that can be tested with future 21cm observations, conservatively (long dashed) and optimistically (dashed).
hosted in halos whose abundance is suppressed (which may in turn lead to further cosmological constraints, see e.g.[75,76]).In addition, we notice that the bounds change their shape for m a ≳ 5 × 10 −11 eV.For this region of parameter space the photon absorption efficiency in the IGM is smaller than one and this weakens the constraints.
VI. LYMAN-α AND SPECTRAL DISTORTIONS
We considered two other existing measurements that can limit this scenario for IGM heating.Since axion star explosions occur when Compton scattering is inefficient (z ≲ 10 4 ), they generate y-type distortions of the CMB energy spectrum.The change to the photon energy density from such an effect must satisfy δρ γ /ρ γ ≲ 5 × 10 −5 based on COBE/FIRAS measurements [47,77], which translates to f decay DM ≲ 2 × 10 −7 for decaying DM: significantly weaker than the Planck optical depth constraint.
At redshifts 2 ≲ z ≲ 7 the Lyman-α forest can measure the IGM temperature, see e.g. [78][79][80]. The IGM temperature at these redshifts satisfies T b ≲ 1.5 × 10 4 K, which can also be used to constrain axion star decays. This is shown as the upper contour in Figure 3, with an accompanying T b (z) in Figure 4 (bottom panel). Lyman-α data could be competitive with the Planck optical depth constraint, although a dedicated study including other sources of heat and cooling mechanisms would be needed to set definitive constraints.
VII. 21 CM COSMOLOGY
Measurements of the hyperfine 21cm transition are expected to revolutionize our understanding of the cosmic dawn and the epoch of reionization, see [81,82] for reviews.There have already been several interesting bounds on the emission power of the 21cm line at various redshifts [83,84], and even a putative detection [85] (which is disputed see, e.g.[84]).The observational status of the cosmological 21cm line is still in its infancy but it is expected that ongoing and upcoming experiments such as HERA [86] and SKA [87] among others should be able to robustly detect the 21cm line and provide a view of the thermal state of the Universe at redshifts 4 ≲ z ≲ 30 [88], see [89] for a compilation of experiments targeting this line.
As highlighted in Figure 4, axion star explosions greatly enhance the IGM temperature at the redshifts where the 21cm line will be targeted, z ≲ 35. In particular, the rise of T b at large redshifts z ≳ 20 is a feature that is not easy to achieve astrophysically, see e.g. [90]. The sky-averaged 21 cm brightness temperature can be written in terms of the gas and CMB temperatures [91]. The sensitivity of SKA in this redshift range is expected to be ∆T 21 ∼ 10 mK, which means that SKA should be able to clearly differentiate between a heated and an unheated IGM, and thus future 21cm observations will test scenarios where axion star explosions heat the IGM.
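Since the expression from [91] is not reproduced here, the following sketch uses the standard approximation for the sky-averaged signal, assuming the spin temperature is fully coupled to the gas temperature; all parameter values are indicative:

```python
import numpy as np

def T21_mK(z, T_b, x_HI=1.0, Omega_b_h2=0.0224, Omega_m_h2=0.142, T_cmb0=2.725):
    """Sky-averaged 21cm brightness temperature in mK, in the standard
    approximation and with the spin temperature fully coupled to the gas
    temperature T_b (in K); cosmological parameters are indicative values."""
    T_gamma = T_cmb0 * (1.0 + z)
    return (27.0 * x_HI * (Omega_b_h2 / 0.023)
            * np.sqrt(0.15 / Omega_m_h2 * (1.0 + z) / 10.0)
            * (1.0 - T_gamma / T_b))

# A cold, standard dark-ages IGM at z = 20 gives a deep absorption signal, while
# a strongly heated IGM (T_b of a few hundred K) gives a small emission signal.
for Tb in (10.0, 500.0):
    print(f"T_b = {Tb:6.0f} K  ->  T21 = {T21_mK(20.0, Tb):+7.0f} mK")
```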
In Figure 3 we highlight in light red the region of parameter space that could be tested by an experiment such as SKA by demanding that T 21 < 30 mK at z ≃ 20 (equivalent to T b ≳ 500 K by z ≃ 20).A tighter limit can be arrived at if one could differentiate any model from ΛCDM that causes T b > T γ at z ≳ 10.This is shown by the bottom light line in Figure 3.
VIII. CONCLUSIONS
Parametric-resonance instability of axion stars leads to an enhanced decay rate of axion dark matter into low energy radio photons.We have shown how the production of such photons heats the IGM, leads to reionization, and alters the optical depth of CMB photons.Planck legacy measurements of the optical depth form the basis of our new constraints on g aγγ highlighted in Figure 3, which is the strongest limit on axion dark matter in the relevant mass range by more than an order of magnitude.Future 21cm measurements of the IGM during the dark ages could improve this limit by more than an order of magnitude.The overlap of these indirect probes with the target regions of the DM-Radio haloscope program [93] invites further careful consideration of axion star explosions as a new tool in the hunt for dark matter.
IX. OUTLOOK
Aspects of this model that require further exploration include primarily a detailed study of soliton merger rates in simulations displaying a core-halo diversity in the redshift and particle mass ranges of interest.While our Monte Carlo merger trees in [9] suggest that M S ∝ M 2/5 h accounts well for diversity in the soliton merger rate only simulations in the redshifts of interests can fully support or modify this conclusion.Furthermore, the cosmological simulations of axion stars are typically carried out for m a ∼ 10 −22 eV.In these simulations axion stars appear to form upon halo collapse, and given the almost scale-invariant primordial power spectrum and the fact that the Schrodinger-Poisson equations solved in them features a scaling symmetry, their results can then be extrapolated to any axion mass.Nevertheless, it would be very interesting to perform explicit simulations of axion star formation and growth within the mass range of interest for our study, 10 −14 eV ≲ m a ≲ 10 −6 eV.
On the side of particle physics, the strongest assumption of our analysis is that either axion quartic couplings are suppressed or g aγγ couplings are enhanced compared with canonical expectations.While models of these types exist, construction of further explicit axion-like dark matter models above the QCD line would be required to address how natural such couplings are.Nevertheless, simulations of axion star explosions including both quartic and photon couplings may yet find that Bosenova triggered photon parametric resonance can occur for more standard coupling ratios.Furthermore, determining the true reach of 21cm measurements requires a full simulation of the 21cm anisotropies including anomalous lowenergy photon injection from axion stars.We suspect that anisotropies in this model will differ strongly from ΛCDM, and thus bounds may even be more powerful than our estimates.
In the main text we have shown all results for the benchmark core-halo mass relation in Eq. ( 1) with α = 2/5.Here we show the results for other two benchmark core-halo mass relations: α = 1/3 and α = 3/5.The former was the first one found in the literature [1,2].The case α = 3/5 was found in Refs.[4,6,7].In addition, Refs.[8,12] have shown evidence not for a strict corehalo mass relation but rather for diversity.Importantly, all numerical simulations show that axion stars lie somewhere in between α = 1/3 and α = 3/5.In addition, our Monte Carlo merger tree [9] shows that α = 2/5 captures well the effect of diversity in the merger rate of axion stars and in consequence represents our nominal choice for the core-halo mass relation.For completeness, in this appendix we show the resulting Planck constraints for these other two scenarios.In particular, Figure 5 shows the M S -M h relation for the two cases while Figure 6 shows the Planck CMB constraints for each of them.Planck constraints on the axion parameter space are strongly related to the amount of decaying dark matter that is injected as heat in the IGM.There are two distinct regions of parameter space.For m a ≲ 5 × 10 −13 eV there are three key effects: 1) axion stars cannot decay until the photon plasma mass is low enough, 2) the explosion thus occurs as a burst of energy when ω p (z) < m a /2, and 3) the photons generated from axion star explosions are absorbed on very small volumes and this generates shockwaves and bubbles of ionized IGM.This leads to patchy ionization.In this case, the constraints on the parameter space closely resemble regions of constant decaying dark matter density evaluated at the redshift of decay, z decay .This is explicitly shown in the left panel of Figure 7.There we can see that the region of parameter space excluded by Planck closely follows a region of parameter space where f decay DM ≃ 10 −6 .It is important to note that this is almost three orders of magnitude larger than what would energetically be needed if all the energy were to be injected homogeneously as heat in the IGM.However, since the observable τ is strongly sensitive to the volume of ionized IGM in the Universe this patchy reionization is much less efficient in fully ionizing the entire Universe.This is analogous to what happens in standard reionization where the most efficient sources of the global ionization of the Universe are UV and X-ray photons with mean free paths similar to the size of the observable Universe.
In the region of parameter space with m_a ≳ 5 × 10^−13 eV the absorption of photons from axion star explosions takes place over distances that are larger than the typical inter-separation of axion stars. That means that the process can effectively be seen as homogeneous. In addition, in this case the emission is continuous because the photon plasma mass is small compared to m_a/2. In this regime what matters is how much energy is released into heat and on which timescale the IGM cools. The former we explicitly calculated in Ref. [9]; the latter is simply given by the Compton cooling timescale τ_Compton (see Eq. (E1c)). Thus, on energy grounds we expect that regions of parameter space where df_DM^decay/dt/τ_Compton ≳ 3 × 10^−9 can lead to reionization of the Universe. This is explicitly shown in the right panel of Figure 7, where we see that the Planck exclusion region is parallel and very close to this line. We notice that 21 cm observations will be sensitive to rates that are ∼2 orders of magnitude smaller than those currently tested by Planck measurements of τ_reio.

Comparing the two alternative core-halo mass relations (Figure 5), we can clearly see that for a fixed axion star mass the mass of the halo hosting it is much smaller for the case α = 3/5 than for α = 1/3. This in turn means that significantly smaller g_aγγ couplings can be probed. We highlight in grey the regions of parameter space for which such axion stars cannot possibly have formed by z = 20, as their masses would lie below the effective Jeans mass generated by quantum pressure, see Eq. (2).
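For orientation, the Compton cooling timescale controlling the homogeneous-heating regime discussed above can be estimated with the textbook expression for electrons scattering off the CMB. The paper's Eq. (E1c) may carry additional ionization-fraction factors, so the sketch below (ours) should be read as an order-of-magnitude estimate only.

```python
# Rough Compton cooling time of the IGM against the CMB (textbook form, ours):
# tau_C ~ 3 m_e c / (8 sigma_T a_r T_gamma^4), with T_gamma = T_0 (1 + z).
# The paper's Eq. (E1c) may include extra (1 + f_He + x_e)/x_e factors.
sigma_T = 6.652e-29      # m^2
m_e = 9.109e-31          # kg
c = 2.998e8              # m/s
a_r = 7.566e-16          # J m^-3 K^-4, radiation constant
T0 = 2.725               # K

def tau_compton(z):
    T_gamma = T0 * (1.0 + z)
    return 3.0 * m_e * c / (8.0 * sigma_T * a_r * T_gamma**4)   # seconds

for z in (20, 50, 100):
    print(z, tau_compton(z) / 3.156e7, "yr")   # ~ 6e6 yr at z = 20
```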
Appendix C: Cosmological Photon Plasma Mass
There are always some charged particles in the early Universe, and this changes the global propagation properties of photons in cosmology. In particular, the photon acquires a mass squared proportional to the number of free electrons in the plasma [40], ω_p²(z) = 4π α_EM n_e(z)/m_e, where α_EM is the electromagnetic fine structure constant, n_e is the (free) electron number density and m_e is the electron mass. Photons with energy E_γ < ω_p(z) cannot propagate. This in turn has important implications for the axion decay into photons, because by pure kinematics the stars cannot explode into photons unless m_a > 2ω_p(z). In Figure 8 we show twice the plasma frequency as a function of redshift for the Planck best-fit ΛCDM cosmology (as well as for a fully ionized universe). We see that axions with masses m_a < 2 × 10^−14 eV cannot decay, and thus our constraints are restricted to sufficiently massive axion-like particles.
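A quick way to reproduce the behaviour of Figure 8 is to evaluate the plasma frequency directly from the free-electron density. The snippet below is a rough sketch assuming a simple normalization for the hydrogen density and a user-supplied ionization fraction; it is not the paper's code.

```python
import numpy as np

# Minimum axion mass that can decay at redshift z: m_a > 2 * omega_p(z),
# with omega_p^2 = 4 pi alpha_EM n_e / m_e in natural units. Inputs assumed below.
alpha_EM = 1.0 / 137.036
m_e_eV = 5.11e5                    # electron mass in eV
hbar_c_cm = 1.9733e-5              # eV * cm, to convert cm^-3 to eV^3
n_H0 = 1.9e-7                      # cm^-3, present-day hydrogen density (assumed)

def omega_p_eV(z, x_e):
    """Photon plasma frequency in eV for ionization fraction x_e at redshift z."""
    n_e = x_e * n_H0 * (1.0 + z) ** 3            # cm^-3
    n_e_eV3 = n_e * hbar_c_cm ** 3               # convert to natural units (eV^3)
    return np.sqrt(4.0 * np.pi * alpha_EM * n_e_eV3 / m_e_eV)

# Pre-reionization residual ionization vs. a fully ionized universe at z = 20
print(2 * omega_p_eV(20, 2e-4))   # ~ 4e-14 eV: lighter axion stars cannot yet explode
print(2 * omega_p_eV(20, 1.0))    # ~ 3e-12 eV for a fully ionized IGM
```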
The main cosmological implication of the plasma mass is that axion stars will not be able to explode until ω_p(z) ≤ m_a/2. For the range of redshifts z_reio ≲ z ≲ 800, the relation between the redshift of decay and the axion mass follows from setting ω_p(z_decay) = m_a/2; since the free-electron fraction is roughly constant over this range, ω_p ∝ (1+z)^3/2 and hence (1+z_decay) scales approximately as m_a^2/3. This means that for axion masses in the range 2 × 10^−14 eV ≲ m_a ≲ 10^−12 eV all axion stars that are massive enough will explode and convert all their energy into very low energy photons that can subsequently heat the intergalactic medium. For m_a ≳ 10^−12 eV we instead expect axion stars to explode as soon as they form. In this regime, we expect a continuous emission of energy across redshift.
Effect of Over/Under-densities: We note that the plasma frequency displayed in Figure 8 is the one corresponding to the average electron density in the Universe. However, one could imagine that axion stars are at some point surrounded by either overdense or underdense baryonic environments. First, axion stars do form in the centers of dark matter halos, and thus one could at first sight expect them to be surrounded by an overdense baryonic medium [94]. However, the halos that host the axion stars we consider in our study have masses M_h < 10^5 M_⊙. This in turn means that their potential wells are not deep enough to capture any significant amount of baryonic gas [38,39], and we would thus expect them to be surrounded by the average baryon density. It is also possible that some axion stars form in regions where the baryon density is below the cosmic average. This would in turn mean that one could in principle extend the constraints we discuss to slightly lower axion masses, as has been discussed in the context of light dark photon dark matter in [45,46,95,96]. However, it is important to notice that the explosion mechanism of axion stars is independent of the value of the plasma frequency surrounding the star; the only effect of the plasma frequency would be to potentially delay the explosion itself. Furthermore, since the plasma frequency is only mildly sensitive to the baryon density, ω_p ∝ √n_e, we do not expect a significant effect from considering small over- or under-densities.
Appendix D: Absorption of photons from axion star explosions in the IGM

Axion star explosions generate a huge number of very low energy photons with E_γ = m_a/2. For m_a ≲ 10^−8 eV the most efficient absorption process of these photons in the IGM is free-free absorption, also known as inverse Bremsstrahlung, γ e p → e p. The rate at which these photons are absorbed by the plasma, Γ_abs, is given in Eq. (D2) [41], where σ_T is the Thomson cross section, T_b is the temperature of the electron-baryon fluid, and g_BR is the Gaunt factor, which for E_γ ≪ T_b can be approximated by g_BR = √(log(2.25 T_b/E_γ)). Plugging in numerical values we find Γ_abs ≃ 2.6 × 10^−22 eV when the rate is normalized to the thermodynamic values of T_b and x_e expected pre-reionization in ΛCDM at z = 20. The rate has a strong redshift dependence and scales as m_a^−2 and T_b^−3/2: as the temperature of the baryons rises, free-free absorption becomes less effective. Importantly, it also scales as x_e², which means that as the Universe becomes ionized the absorption becomes highly efficient.
This rate should be compared with the Hubble expansion rate H(z) to see whether these photons will be absorbed; around the redshifts of interest the latter is well described by the matter-dominated expression H(z) ≃ H_0 √Ω_m (1+z)^3/2. Figure 9 explicitly shows the absorption length scale of photons produced from axion star decays for two characteristic baryon temperatures, T_b = 9 K and T_b = 1 eV, which correspond to a cool IGM and to one where the IGM would be fully ionized. From this figure we notice three distinctive regions of parameter space: 1. For m_a ≳ 10^−8 eV the photons have mean free paths larger than the size of the observable Universe and thus axion star explosions will not lead to relevant cosmological implications.
2. For 5 × 10^−13 eV ≲ m_a ≲ 10^−8 eV the photons produced by axion star explosions are absorbed by the IGM on length scales larger than the one it takes for a region to reach T_b ∼ 1 eV (the scale labelled "typical bubble size" in Figure 9). That means that the absorption of photons in this region of parameter space can be seen as homogeneous, so it will take several nearby axion star explosions to heat up the IGM.

3. For m_a ≲ 5 × 10^−13 eV the photons are absorbed on very small length scales. Since a large amount of energy is released per axion star decay, E ∼ M_S ∼ 10^−4 M_⊙, the injection of these photons leads to shockwaves in the IGM very similar to supernova remnants. The lower limit on m_a of 2 × 10^−14 eV arises from the photon plasma mass in the Universe.

FIG. 9. Typical length scale over which photons from axion star decays are absorbed in the IGM via inverse Bremsstrahlung on free electron-proton pairs, for a plasma at two different temperatures, T_b = 9 K and T_b = 1 eV. We show it for the reference value of z = 20 and a pre-reionization free electron fraction x_e = 2 × 10^−4. Three regimes can be appreciated: at m_a ≳ 10^−8 eV the photons have mean free paths larger than the size of the Universe and therefore there are no cosmological signatures; for m_a ≲ 5 × 10^−11 eV the length scale of absorption is always smaller than 1/H, which means that all of the photons will be absorbed even if T_b ∼ 1 eV, approximately the temperature of an ionized IGM; finally, for m_a ≲ 5 × 10^−13 eV the photons from axion stars are absorbed on very small length scales and thus generate intense shockwaves, which in turn produce a sort of patchy reionization.
It is important to note that these considerations have been made with x_e ∼ 2 × 10^−4, as expected pre-reionization in ΛCDM. As soon as x_e grows, photons from axion star decays will be absorbed faster in the IGM, see Eq. (D3).
In what follows, in Appendix E we outline our modeling of energy injections for m_a ≳ 5 × 10^−13 eV and in Appendix F we consider the case of m_a ≲ 5 × 10^−13 eV.

Appendix E: Continuous Axion Star Mergers and Explosions: Homogeneous IGM Heating and Reionization

For m_a ≳ 5 × 10^−13 eV axion stars explode into photons that are absorbed across length scales larger than the typical inter-separation of axion stars. In this regime, the injection of energy can be seen as homogeneous. As discussed in the main text, we track the evolution of the baryon temperature including the new heating source from axion star explosions, and also account for Compton and adiabatic cooling. We also account for the net baryon cooling generated by collisional ionizations, e H → 2e p, as well as collisional excitations, e H → e H* → e H γ. We track the free electron fraction using the effective 3-level atom and follow the evolution of protons (p) and HeII ≡ He⁺, defining x_i ≡ n_i/n_H so that x_e ≡ n_e/n_H = x_p + x_HeII. Their evolution is governed by the rate equations (E1a)-(E1c): Eqs. (E1a) and (E1b) are the rate equations for the free electron fraction contributions from Hydrogen and Helium, with recombination (α), photoionization (β) and collisional (coll) terms. Both recombination and collisional processes depend on the baryon temperature, T_b, which is solved via the rate equation Eq. (E1c), where the first term is simply adiabatic cooling, the second corresponds to Compton cooling, the third to the net gas cooling generated by possible collisions in the plasma, and the last term is the heating by axion star explosions. In the expression for the energy deposited as heat in the IGM by axion star explosions, the Θ function ensures that no emission is generated if the photon plasma mass blocks the decay, the last factor is an efficiency factor that takes into account that photons may not be absorbed if their mean free path is large, and the fraction of dark matter decaying into photons from axion star mergers is obtained from [9].

The relevant energy transitions in the effective three-level model correspond to the ground state (1s), the second level with two quantum states (2s and 2p), and the third level (c), which denotes the continuum. Direct recombination and photoionization are prohibited, and the Lyman-α decay channel is heavily damped by the optically thick plasma during the recombination epoch. Therefore, the only possible route to recombination (the ground state, 1s) is via the 2s → 1s two-photon decay channel. This occurs at a much slower rate compared to Lyman-α, Λ_H,2s1s = 8.22458 s^−1 with a transition energy E_H,2s1s = E_H,2p1s = 10.2 eV for Hydrogen, and Λ_He,2s1s = 51.3 s^−1 with a transition energy E_He,2s1s = 20.62 eV for Helium. This creates an overall effective decay rate for the 2 → 1 transitions, in which the two-photon channel dominates at earlier times. This effective decay rate is encapsulated by the Peebles C-factor, which rescales the rate equations in Eqs. (E1a) and (E1b). The C-factor for Helium, for instance, reads

C_He = [1 + K_HeI Λ_He,2s1s n_H (f_He − x_HeII) e^(−E_2s,2p/T_γ)] / [1 + K_HeI (Λ_He,2s1s + β_HeI) n_H (f_He − x_HeII) e^(−E_2s,2p/T_γ)],   (E5)

with an analogous expression C_H for Hydrogen, where K_H = λ_H,Lyα³/(8π H(z)) with λ_H,Lyα = 121.56 nm, and, for Helium, E_2s,2p = 0.60 eV and K_HeI = λ_HeI,Lyα³/(8π H(z)) with λ_HeI,Lyα = 58.43 nm [58]. The recombination rates for Hydrogen, α_H(T_b), and Helium, α_HeI(T_b), in Eqs. (E1a) and (E1b) are taken from [97,98] and [99], respectively.
Assuming local thermal equilibrium and using the fact that the absorption (α) and emission (β) processes are in detailed balance, the photoionization rates are given by β_H(T_γ) = (2π m_e T_γ)^3/2 α_H(T_γ) e^(−E/T_γ), with an analogous expression for Helium, where f_He = n_He/n_H, and where ⟨σv⟩_H and ⟨σv⟩_HeII are the collisional ionization rates for Hydrogen and Helium, taken from [54]. The net cooling generated by collisional ionizations and excitations involves E_H = 13.6 eV and E_HeII = 24.6 eV: the first two terms arise from cooling generated by ionization, while the last term corresponds to cooling generated by collisional excitations, namely processes of the type e H → e H* → e H γ; that is, collisions that are not able to ionize the neutral atom can nevertheless excite it, and it subsequently decays back to the ground state by emitting a photon. Since these photons are not absorbed by the gas, the whole process cools it. We take these rates from [53].
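To make the structure of the coupled system concrete, the sketch below integrates a heavily simplified hydrogen-only version of the x_e and T_b evolution with an extra heating term. The recombination fit, the cosmological numbers and the constant heating rate are our own assumptions and do not reproduce the detailed rates used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch (ours) of coupled x_e(z), T_b(z) evolution with a heating source,
# illustrating the structure of Eqs. (E1). Hydrogen only; no photoionization or
# collisional terms; the fits and numbers below are assumptions.
k_B = 1.381e-16                      # erg/K
sigma_T = 6.652e-25                  # cm^2
m_e_c = 9.109e-28 * 2.998e10         # g cm / s
a_r = 7.566e-15                      # erg cm^-3 K^-4
T0, n_H0, H0, Om_m = 2.725, 1.9e-7, 2.2e-18, 0.31   # K, cm^-3, s^-1

def H(z):                            # matter-dominated Hubble rate (valid for z >> 1)
    return H0 * np.sqrt(Om_m) * (1 + z) ** 1.5

def alpha_B(T):                      # crude case-B recombination fit, cm^3/s
    return 2.6e-13 * (T / 1e4) ** -0.7

def rhs(z, y, eps_heat=0.0):         # eps_heat: assumed constant heating rate [erg/s per baryon]
    x_e, T_b = y
    n_H = n_H0 * (1 + z) ** 3
    T_g = T0 * (1 + z)
    dxe_dt = -alpha_B(T_b) * x_e ** 2 * n_H                     # net recombination only
    compton = (8 * sigma_T * a_r * T_g ** 4) / (3 * m_e_c) * x_e / (1 + x_e) * (T_g - T_b)
    dTb_dt = -2 * H(z) * T_b + compton + 2 * eps_heat / (3 * k_B)
    dt_dz = -1.0 / ((1 + z) * H(z))                             # convert d/dt to d/dz
    return [dxe_dt * dt_dz, dTb_dt * dt_dz]

sol = solve_ivp(rhs, (400, 10), [2e-4, T0 * 401], args=(1e-30,), method="LSODA", rtol=1e-6)
print(sol.y[:, -1])   # x_e and T_b at z = 10 for this toy heating rate
```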
To cross-check our code we also considered the decaying dark matter scenario studied in Ref. [47], where the authors consider dark matter decaying into very low energy photons, as we do. In their case the decay is modelled by a simple exponential law, and in Figure 10 we show the excellent agreement for the evolution of the free electron fraction when we use their model of dark matter. In this reference the authors did not include the cooling from collisional excitations, and it is in that case that we find good agreement. In our study of axion star explosions we do include it, since it has a relevant impact (as also shown in Figure 10).
Thomson CMB Optical Depth: Finally, after solving Equations (E1) we calculate several integrated quantities. In particular, the Thomson optical depth to recombination, which reads τ = σ_T ∫₀^z_max dz n_e(z)/[(1+z) H(z)]. In practice we choose z_max = 50 and, since we know the Universe was reionized by z = 6, we use x_e = 1 for z ≤ 6 irrespective of the reionization evolution that axion star explosions lead to. Solutions for this reionization evolution are shown in Figure 4.

FIG. 10. Comparison of the free electron fraction evolution in a scenario of very light decaying dark matter that decays homogeneously, following a typical exponential decay law [47]. For the comparison we have used the very same values as in Figure 15 of [47]. We can appreciate the excellent agreement when collisional excitations are not considered, as is the case in [47]. When collisional excitation cooling is considered, the effect on x_e is somewhat reduced.

Spectral Distortions: Additional heating of the IGM due to axion star explosions can generate a y-type distortion of the CMB at the redshifts of interest. The maximum amount of energy that could go into the CMB from axion star explosions is given by Eq. (E11), an integral over redshift of the heating rate dQ/dz weighted by the energy density of CMB photons, ρ_γ. In Eq. (E11) we have chosen to integrate from z_min = 0 to z_max = 400, since z = 400 is a high enough redshift that axion stars have not yet started to decay. With this expression, we have found that for the parameter space currently excluded by the Planck constraint on τ, the y distortion is much smaller than the one currently tested by COBE/FIRAS. In particular, for the red circle benchmark point in Figure 3 we find |y| = 3 × 10^−7, which is two orders of magnitude smaller than the current sensitivity. For the purple diamond point that can be tested by 21cm observations, we find |y| = 1 × 10^−8, which could be within the sensitivity of future CMB observations.
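The optical depth integral above is straightforward to evaluate numerically for any ionization history. The snippet below is a generic sketch with a placeholder step-function x_e(z), not the histories produced by the axion-star model; a y-type estimate could be assembled in the same way by integrating the heating rate against ρ_γ.

```python
import numpy as np
from scipy.integrate import quad

# Thomson optical depth for a given ionization history x_e(z):
# tau = sigma_T * int_0^{z_max} c n_e(z) / ((1+z) H(z)) dz.
# The history below is a placeholder (instantaneous reionization at z = 6).
c = 2.998e10          # cm/s
sigma_T = 6.652e-25   # cm^2
n_H0, H0, Om_m, Om_L = 1.9e-7, 2.2e-18, 0.31, 0.69   # assumed cosmology

def H(z):
    return H0 * np.sqrt(Om_m * (1 + z) ** 3 + Om_L)

def x_e(z):
    return 1.0 if z < 6 else 2e-4

def integrand(z):
    n_e = x_e(z) * n_H0 * (1 + z) ** 3
    return c * sigma_T * n_e / ((1 + z) * H(z))

tau, _ = quad(integrand, 0, 50, points=[6])
print(f"tau = {tau:.3f}")   # ~ 0.04 for this toy history
```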
Appendix F: Patchy Reionization: Growth of Bubbles and Initial Ionization of the Plasma

For m_a ≲ 5 × 10^−13 eV photons generated by axion star explosions in the early Universe are absorbed on very small length scales (see Figure 9). Axion star explosions release a huge amount of energy, M_S ∼ 10^−4 M_⊙ (see Figure 1), and will generate a shockwave very similar to those generated by supernova explosions, which would be very violent and fully reionize the IGM around it. In this appendix we describe how we track the evolution of the bubbles generated by these explosions and calculate the volume-averaged ionization fraction of the Universe. This scenario leads to patchy ionization.
The expansion of post-explosion shock-waves in a dense medium was first studied in the context of nuclear bombs [62,63]. It was later realised that this could also be applied to astrophysical environments such as supernova explosions [100], and the equations were subsequently developed to include the internal and external pressure changes, the heating of the interior of the bubble [101], energy lost to the shockwave due to the emission of radiation [102], the self-gravity of the shell and the expansion of the Universe [52,59,103].
In order to model the behaviour of the expanding bubble with radius R and enclosed mass M, we use two coupled equations, (F1a) and (F1b) (see, e.g., [59-61]), where a dot represents a derivative with respect to time, Ω_m is the total density parameter, Ω_b the baryon density parameter, and H is the Hubble expansion rate; L_tot represents the total luminosity and p is the bubble pressure resulting from this luminosity. The first term in equation (F1a) represents the driving pressure of the outflow, the second is drag due to accelerating the IGM from velocity HR to velocity Ṙ, the third is the self-gravity of the expanding shell, and the fourth is the self-gravity of the entire halo. The first term in equation (F1b) represents the increase in pressure caused by the injection of energy, while the second term is the drop in pressure caused by adiabatic expansion. In particular, the total luminosity is L_tot = L_Explosion − L_Compton − L_Ionization, where L_Explosion is the luminosity generated by the explosion of the axion star into photons, L_Compton is the luminosity lost via Compton cooling against the CMB, and L_Ionization takes into account the energy lost in ionising the IGM swept up by the shockwave. Since the timescale for the explosion is very short compared to any other timescale, see Eq. (3), and so is the one for absorption, see Eq. (D3), in practice we assume that the blastwave expands freely until the mass contained in the bubble is comparable to the energy ejected by the axion star explosion. At that point we start the integration of Eqs. (F1a) and (F1b), neglecting the explosion luminosity but taking into account the fact that a large pressure has been generated by the explosion. This is similar to what is done for supernova explosions, see Section 8.6.1 of [104]. The starting radius, velocity and pressure, Eq. (F3), are R_0 = 0.7 pc, v_0 = 0.5 c, and a pressure set by the explosion energy: we thus assume that the pressure at that point is essentially given by the energy output of the axion star explosion spread over the volume filled by the bubble, and that the bubble is moving relativistically at a speed of v = 0.5 c.
From the initial conditions in Eq. (F3) we can then solve Eqs. (F1) with L_Explosion = 0. The last ingredients needed are the remaining luminosities, which can be written following [59]; in these expressions T_γ is the CMB temperature at a given redshift, n_b is the background baryon number density, E_H ≃ 13.6 eV is the energy it takes to ionize a hydrogen atom, and f_m is the fraction of the baryonic mass kept in the interior of the bubble, which by construction is f_m ≪ 1. In our calculations, for concreteness, we take f_m = 0.1, but we have checked that smaller values yield very similar results. Finally, we stop the integration when the pressure of the interior of the bubble is p ≃ 2 T_crit n_b with T_crit ≃ 15000 K, which roughly corresponds to the temperature at which the IGM becomes fully ionized. For smaller pressures the bubble will not be able to ionize the swept-up IGM, and the radius at this point tells us the region that has been ionized by the axion star explosion. Figure 11 shows the numerical solution to Eqs. (F1) for one energy-injection example. For concreteness, we show the result for z_decay = 20 and for two different axion star masses. There we can clearly see the steep decrease of the pressure as a result of the increase in the bubble radius. We note that we do not find any significant dependence of the results on the redshift of injection.
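A crude order-of-magnitude version of this calculation, ignoring drag, gravity and Compton losses and assuming pure adiabatic decompression (p ∝ R^−5 for γ = 5/3) from the stated initial state down to the stopping pressure, already gives a feel for the final bubble size. The estimate below is ours, not the numerical solution of Eqs. (F1).

```python
import numpy as np

# Order-of-magnitude estimate (ours) of the ionized-bubble radius: adiabatic
# decompression p ∝ R^-5 from the initial state of Eq. (F3) until the stopping
# pressure p_stop ~ 2 n_b k_B T_crit quoted in the text is reached.
k_B = 1.381e-16        # erg/K
M_sun_c2 = 1.8e54      # erg, rest-mass energy of one solar mass
pc = 3.086e18          # cm

M_S = 1e-4             # axion star mass in solar masses (typical value in the text)
R0 = 0.7 * pc          # starting radius from Eq. (F3)
E = M_S * M_sun_c2     # energy released by the explosion
p0 = E / (4.0 / 3.0 * np.pi * R0 ** 3)    # initial pressure ~ E / V

z = 20
n_b = 1.9e-7 * (1 + z) ** 3               # cm^-3, mean hydrogen density (assumed)
p_stop = 2 * n_b * k_B * 1.5e4            # stopping criterion with T_crit ~ 15000 K

R_stop = R0 * (p0 / p_stop) ** 0.2
print(f"R_stop ~ {R_stop / pc:.0f} pc (physical, at z = {z})")   # ~ 40 pc
```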
By numerically solving for all the axion-star masses of interest we find that the comoving size that the bubbles
FIG. 1. In blue dashed: isocontours of halo masses that host a given axion star mass at redshift z = 20, depending upon the value of the axion mass, for the benchmark core-halo mass relation M_S ∝ M_h^2/5. In red we show isocontour lines of critical axion stars (M_S^decay) as a function of several g_aγγ values.
FIG. 4. Evolution of x_e (upper) and T_b (lower) for the six scenarios highlighted in Figure 3 with symbols. For the scenarios we show there are no astrophysical sources of heating or reionization. We also show in black the ΛCDM evolution for x_e from the Planck best-fit cosmology and T_b from the fiducial model of [72]. In the lower panel we highlight scenarios that can be tested with future 21cm observations, conservatively (long dashed) and optimistically (dashed).
Here x_HI is the fraction of neutral hydrogen in the Universe and T_S is the spin temperature. At z ≲ 25 the spin temperature is T_S ≃ T_b [92], and we then see that for T_b > T_γ the 21cm signal could reach a maximum in emission of ∼35 mK at z ≃ 20 provided T_b ≫ T_γ. This is in strong contrast with what is expected in most cold IGM models, where T_b(z = 20) ∼ 7 K, which would lead to a strong absorption feature with T_21 ∼ −200 mK.
Appendix B: Decaying dark matter fraction and axion star merger evolution
FIG. 5. Isocontours of halo masses (dashed blue) that host an axion star mass (y axis) as a function of axion mass (x axis) at a redshift z = 20, see Eq. (1). In red we show the value of a critical axion star above which the star can explode into photons given an axion-photon coupling, see Eq. (3). The left panel corresponds to the Schive relation, with M_S ∝ M_h^1/3, while the right panel corresponds to the case M_S ∝ M_h^3/5.
FIG. 6. Equivalent to Figure 3 but for two other core-halo mass relations, M_S ∝ M_h^1/3 (left) and M_S ∝ M_h^3/5 (right). In light red dashed lines we show the benchmark case with M_S ∝ M_h^2/5 for comparison.
FIG. 7. Left: isocontours of f_DM^decay compared with the constraints obtained from Planck. Right: isocontours of df_DM^decay/dt/τ_Compton. We can clearly appreciate how the constraints follow the same shape and closely track the case 3 × 10^−9 expected from energy arguments. The region m_a ≳ 10^−10 eV deviates from this expectation because the probability of photon absorption in the IGM is smaller than 1. In addition, in dashed red we show regions of parameter space with τ_reio = 0.13, namely with an optical depth that exceeds the Planck bound by a factor of 2.
FIG. 8. Minimum mass of an axion that can decay into photons as a function of redshift, m_a > 2ω_p(z). In red we show the evolution according to the Planck ΛCDM best-fit cosmology and in purple the evolution in a fully ionized universe.
Evaluating Laparoscopic Sleeve Gastrectomy for Morbid Obesity: A Prospective Follow-Up Study
Background: Laparoscopic sleeve gastrectomy (LSG) has become a primary option within bariatric surgery (BS), exhibiting favorable outcomes in terms of weight reduction and improvement of associated health conditions. This study was conducted to assess the outcomes of LSG in morbid obesity (MO) in terms of weight reduction and improvement of comorbidities. Materials and methods: A prospective follow-up study was conducted from January 2021 to January 2023 at the Department of Surgery, 7 Air Force Hospital, Kanpur. The study was approved by the institutional ethical committee with protocol no. IEC/612/2020, including 25 patients diagnosed with MO (BMI >40 kg/m²) who underwent LSG. Patients were followed up at 1, 3, 6, and 12 months after surgery to track improvements in comorbidities and weight loss. Pre- and post-operative photos were taken, and any complications during the follow-up period were noted. Results: Most participants in the study were middle-aged individuals, and 84% of the cohort had common comorbidities such as hypertension (HTN) and diabetes mellitus (DM). LSG led to significant and sustained weight loss, with patients achieving an average reduction of 31.56 kg by the 12th month following the surgery. Moreover, substantial improvements in comorbidities, particularly HTN (76.9%) and DM (80%), were observed. However, not all comorbidities exhibited similar rates of recovery, highlighting the need for tailored management strategies. Using a correlation test, no significant correlation was found between the percentage over ideal body weight (IBW) and the reduction in excess weight, as indicated by a p-value exceeding 0.05. Conclusion: LSG is an effective treatment for severe obesity, delivering significant weight loss and notable improvements in metabolic health and overall quality of life.
Introduction
Morbid obesity (MO), defined as a condition with a BMI >40 kg/m², is a significant global health concern and is associated with numerous debilitating comorbidities and a reduced quality of life [1]. By 2020, it was projected that approximately 30 million adults in India, or about 5% of the adult population, would be obese [2]. Contrary to assumptions, this issue affects both affluent and non-affluent communities across urban and rural landscapes. As non-surgical treatments have their limitations, bariatric surgery (BS) has become an important therapeutic option, especially for those with severe obesity (BMI >40 kg/m²) [3-5].
Laparoscopic sleeve gastrectomy (LSG) has become increasingly popular among bariatric procedures due to its positive results in sustainable weight loss and improving obesity-related comorbidities [6,7]. LSG and laparoscopic Roux-en-Y gastric bypass (LRYGB) are the primary choices for BS in India [8]. LSG has garnered attention for its effectiveness in achieving weight loss, with patients typically experiencing a reduction of 50% to 70% of excess weight within the initial postoperative year [9]. Furthermore, LSG shows beneficial outcomes in addressing obesity-related comorbidities like type 2 diabetes and hypertension, while also presenting a lower risk profile compared to alternative surgical methods [10].
The surge in obesity prevalence underscores the imperative for a rigorous evaluation of BS to inform evidence-based clinical practice. A prospective follow-up study offers the optimal method for examining the long-lasting efficacy and safety profile of LSG in the treatment of MO. By systematically tracking patient outcomes over an extended period, this study aims to provide comprehensive insights into weight loss, metabolic improvements, and postoperative complications associated with LSG. These longitudinal evaluations allow healthcare professionals to analyze the enduring effects of LSG on maintaining weight loss, resolving metabolic syndrome, and enhancing overall patient welfare. Furthermore, the inclusion of a prospective cohort facilitates the exploration of potential predictors of surgical success and the identification of patient subgroups most likely to benefit from LSG.
This study aims to detail the characteristics and outcomes of patients diagnosed with both MO and metabolic syndrome who underwent LSG at a tertiary care hospital in India. Through a comparative analysis of the available literature, our goal is to advance understanding regarding the effectiveness and relevance of LSG in addressing the complex interaction between obesity and metabolic disorders.
Materials And Methods
A prospective follow-up design was utilized in this study to evaluate laparoscopic sleeve gastrectomy (LSG) outcomes in morbidly obese patients. The study was conducted from January 2021 to January 2023 at the Department of Surgery, 7 Air Force Hospital, Kanpur, and the research proposal was approved by the Institutional Ethics Committee of the Surgery Department at 7 Air Force Hospital, Kanpur, with protocol number IEC/612/2020. Subjects aged 18 to 70 years who met the WHO guidelines for obesity surgery and expressed willingness to undergo LSG were enrolled. Only patients who had unsuccessfully attempted conservative management were considered. Individuals below 18 or above 70 years of age were excluded. Since the study focuses on a very specific population subgroup, a small sample size was inevitable due to the limited pool of eligible participants. The sample size calculation assumed a confidence level of 95%, a margin of error of 5%, and a total population size of 25. After rigorous screening, 25 highly motivated patients were selected. Before the surgery, patients were provided comprehensive counseling regarding LSG specifics, associated risks, and potential postoperative complications.
Detailed preoperative assessments were conducted, including patient history, duration of weight gain, associated comorbidities, previous weight loss attempts, medical therapy history, general physical examination, weight record, and optimization of various physiological systems such as endocrine, cardiology, pulmonary, and psychiatry. Additionally, patients underwent specific investigations, including an echocardiogram, pulmonary function test (PFT), serum insulin, serum cortisol (to rule out Cushing's syndrome), and ultrasound (to confirm or exclude gallstone disease). Patients diagnosed with gallstone disease underwent laparoscopic cholecystectomy (LC) during LSG. Incentive spirometry was initiated preoperatively, and patients were advised to adhere to a fluid diet for one week before the procedure.
Under general anesthesia, laparoscopic sleeve gastrectomy (LSG) was carried out using four ports. The patient was placed in a steep anti-Trendelenburg position with legs separated, while the surgeon was positioned between the patient's legs. Additional ports were added if a simultaneous LC was required. Postoperatively, patients wore pneumatic compression stockings, received antibiotics at induction, and received oral fluids on the evening of surgery. Detailed dietary instructions were provided upon discharge on the fourth postoperative day. The weight loss progress of the BS group was measured using the percentage change in BMI and the percentage of excess weight loss. The percentage change in BMI was calculated as 100% × ([initial BMI − current BMI]/initial BMI), where the initial BMI (BMI0) is the BMI at the time of surgery and the current BMI (BMIi) is the BMI at the latest follow-up. The percentage of excess weight loss was determined by 100% × ([initial weight − current weight]/excess weight at the time of surgery), where the initial weight (W0) represents the weight at the time of surgery, the current weight (Wi) represents the weight at the latest follow-up, and the excess weight (EW0) represents the excess weight at the time of surgery. Excess weight was estimated using the formula described by Devine BJ [11].
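To make these definitions concrete, the sketch below computes the Devine ideal body weight and the two weight-loss metrics. The Devine constants (50 kg for men, 45.5 kg for women, plus 2.3 kg per inch of height above five feet) are the standard published values; the example patient data are invented for illustration only.

```python
def devine_ibw_kg(height_cm, sex):
    """Ideal body weight by the Devine formula [11]: 50 kg (men) / 45.5 kg (women)
    plus 2.3 kg per inch of height above 5 feet."""
    inches_over_5ft = max(height_cm / 2.54 - 60.0, 0.0)
    base = 50.0 if sex == "male" else 45.5
    return base + 2.3 * inches_over_5ft

def percent_bmi_change(bmi_initial, bmi_current):
    return 100.0 * (bmi_initial - bmi_current) / bmi_initial

def percent_excess_weight_loss(w_initial, w_current, height_cm, sex):
    excess = w_initial - devine_ibw_kg(height_cm, sex)
    return 100.0 * (w_initial - w_current) / excess

# Hypothetical patient: 165 cm female, 110 kg at surgery, 78 kg at follow-up
print(round(devine_ibw_kg(165, "female"), 1))                        # ~ 56.9 kg
print(round(percent_excess_weight_loss(110, 78, 165, "female"), 1))  # ~ 60.3 %
```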
Follow-up visits were scheduled for participants at 1, 3, 6, and 12 months post-surgery. Documentation included tracking improvements in comorbidities and weight loss and capturing pre- and post-operative images. Any complications during the follow-up period were also recorded. Statistical analysis was carried out using SPSS version 21, employing the paired t-test with a significance level of p ≤ 0.05.
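The analysis itself was run in SPSS; purely as an illustration of the same paired comparison in code, the snippet below applies a paired t-test to made-up baseline and 12-month weights (these numbers are not study data).

```python
from scipy import stats

# Illustrative paired t-test (the study used SPSS 21); weights below are invented.
baseline = [118, 124, 109, 131, 115, 122, 127, 112]   # kg at surgery
month12 = [86, 93, 80, 97, 85, 90, 95, 83]            # kg at 12-month follow-up

t_stat, p_value = stats.ttest_rel(baseline, month12)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")          # p < 0.05 -> significant change
```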
Results
The baseline features of the study participants are summarized in Table 1. Variables included gender distribution, age groups, the presence of comorbidities, and associated diseases. The data are presented as counts and percentages within each category. Out of the total 25 patients who underwent LSG during the study, 5 (20.0%) were male and 20 (80.0%) were female. Participants were divided into different age categories, with the majority (40%) being in the age group of 41-50 years. Among the participants, 4 (16.0%) had comorbidities, while 21 (84.0%) had no comorbid conditions. The study also examined various associated diseases among the participants; hypertension (HTN) was present in the majority (13 or 52.0%), followed by diabetes in 10 (40.0%) and skin infection in 6 (24.0%).
The weight loss progression of individuals who underwent BS over the course of one year is detailed in Table 2. The data are presented at various time intervals following the surgery, starting from baseline and progressing to the 3rd, 6th, 9th, and 12th months. The participants exhibited a statistically significant decrease in weight at each subsequent time point compared to baseline, evidenced by the declining mean weight and p-values below 0.05. The most substantial weight loss was observed by the 12th month after surgery, with an average reduction of 31.56 kg compared to baseline. A p-value <0.05 was considered statistically significant.
Table 3 presents the postoperative outcomes of study participants with various comorbidities one year after undergoing surgery. The outcomes are categorized into three groups: complete recovery, partial recovery, and no change. For comorbidities such as skin infection, hypothyroidism, and polycystic ovary syndrome (PCOS), all participants experienced complete recovery (100%) after 12 months. In conditions like GERD and DVT, there was either no improvement or complete absence of recovery observed in the participants. For HTN and diabetes mellitus (DM), a significant proportion of study participants experienced complete or partial recovery, indicating the positive impact of the surgery on these conditions. Figure 1 shows the correlation between percentage over ideal body weight (IBW) and percentage loss of excess weight. Using a correlation test, no significant correlation was identified between the percentage over IBW and the percentage of excess weight loss.
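The text does not name the specific correlation test used; assuming a Pearson correlation, the equivalent check looks like the sketch below, where the paired values are invented purely for illustration.

```python
from scipy import stats

# Correlation between % over ideal body weight and % excess weight loss.
# Pearson correlation is assumed here; the values are illustrative, not study data.
pct_over_ibw = [85, 102, 93, 120, 76, 110, 98, 88]
pct_ewl = [58, 49, 63, 55, 61, 52, 60, 57]

r, p = stats.pearsonr(pct_over_ibw, pct_ewl)
print(f"r = {r:.2f}, p = {p:.3f}")   # p > 0.05 would indicate no significant correlation
```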
Discussion
The findings of this prospective follow-up study highlight the efficacy and outcomes of LSG in managing MO and its associated comorbidities. The distribution of age groups shows that middle-aged individuals, particularly those aged 41-50 years, comprised a significant portion of the study population. This demographic profile is typical of BS candidates, who often experience a peak in obesity-related health issues during middle age [12,13].
The prevalence of comorbidities and associated diseases among the study cohort underscores the multifaceted nature of morbid obesity and its impact on overall health. HTN and DM were the most prevalent comorbidities, consistent with previous research demonstrating the strong association between obesity and these conditions. Similarly, Shuvo SD et al. reported a comparable prevalence of comorbid conditions among obese patients [14]. Other associated diseases, such as skin infections, osteoarthritis, and hypothyroidism, further highlight the diverse health challenges faced by individuals with morbid obesity [15,16]. The significant reduction in mean weight at each subsequent time point reflects the effectiveness of LSG in achieving sustainable weight loss outcomes. These findings are consistent with existing literature demonstrating the superior efficacy of LSG in promoting significant weight reduction compared to nonsurgical interventions [17-20].
The significant decrease in weight noted as early as the 3rd month after surgery emphasizes the swift and substantial influence of LSG on weight loss. By the 12th month, patients witnessed a noteworthy average weight reduction, underscoring the enduring efficacy of LSG in promoting weight loss maintenance. These results reaffirm LSG's status as a preferred bariatric procedure for achieving considerable and lasting weight loss outcomes in individuals with MO [21,22]. The significant rates of full recovery for comorbidities such as HTN, DM, and skin infections underscore the extensive metabolic and health benefits of LSG beyond simple weight loss. These results align with prior studies showing the positive effects of LSG on resolving comorbidities and enhancing metabolic indicators [23,24].
However, it is noteworthy that not all comorbidities exhibited similar rates of recovery, with conditions like GERD and DVT showing limited or no improvement postoperatively. This underscores the need for continued monitoring and tailored management of comorbidities among BS participants, particularly those with complex medical histories. The absence of a significant correlation between percentage over IBW and percentage loss of excess weight indicates that factors beyond initial excess weight may influence the magnitude of weight loss following LSG. This highlights the importance of individualized patient assessment and consideration of various clinical factors in predicting postoperative outcomes.
FIGURE 1: The correlation between percentage over IBW and percentage loss of excess weight. IBW: Ideal body weight.
Fast, automated measurement of nematode swimming (thrashing) without morphometry
Background: The "thrashing assay", in which nematodes are placed in liquid and the frequency of lateral swimming ("thrashing") movements estimated, is a well-established method for measuring motility in the genetic model organism Caenorhabditis elegans as well as in parasitic nematodes. It is used as an index of the effects of drugs, chemicals or mutations on motility and has proved useful in identifying mutants affecting behaviour. However, the method is laborious, subject to experimenter error, and therefore does not permit high-throughput applications. Existing automation methods usually involve analysis of worm shape, but this is computationally demanding and error-prone. Here we present a novel, robust and rapid method of automatically counting the thrashing frequency of worms that avoids morphometry but nonetheless gives a direct measure of thrashing frequency. Our method uses principal components analysis to remove the background, followed by computation of a covariance matrix of the remaining image frames from which the interval between statistically-similar frames is estimated. Results: We tested the performance of our covariance method in measuring thrashing rates of worms using mutations that affect motility and found that it accurately substituted for laborious, manual measurements over a wide range of thrashing rates. The algorithm used also enabled us to determine a dose-dependent inhibition of thrashing frequency by the anthelmintic drug, levamisole, illustrating the suitability of the system for assaying the effects of drugs and chemicals on motility. Furthermore, the algorithm successfully measured the actions of levamisole on a parasitic nematode, Haemonchus contortus, which undergoes complex contorted shapes whilst swimming, without alterations in the code or of any parameters, indicating that it is applicable to different nematode species, including parasitic nematodes. Our method is capable of analyzing a 30 s movie in less than 30 s and can therefore be deployed in rapid screens. Conclusion: We demonstrate that a covariance-based method yields a fast, reliable, automated measurement of C. elegans motility which can replace the far more time-consuming, manual method. The absence of a morphometry step means that the method can be applied to any nematode that swims in liquid and, together with its speed, this simplicity lends itself to deployment in large-scale chemical and genetic screens.
Background
The nematode, Caenorhabditis elegans, has been used extensively for tackling fundamental questions in biology, including the modeling of aspects of human disease, drug screening and development [1-5]. It was the first complex animal genome to be sequenced [6]. Its high fecundity, short life cycle and the availability of many mutants facilitate the exploration of gene function, while its complete cell lineage and transparency are useful in studies of development. The discovery of the RNA interference method for selective gene knockdown by Fire and Mello [7] and the ease of delivery, via Escherichia coli on which the worms feed, of specifically-targeted double-stranded RNA has paved the way for genome-wide RNAi screens [8]. However, despite the facility with which large-scale genetic and chemical screens can be applied to C. elegans, the lack of automation of phenotyping has limited its use in highly automated screening applications.
To fully harness the potential of the growing number of C. elegans disease models for discovering new drugs, it is therefore necessary to develop automated methods of assaying locomotor phenotypes. For example, screening for new anthelmintics using C. elegans or parasitic worms could be accelerated if automated phenotyping could be deployed in screening chemical libraries. Parasitic nematodes present a major challenge to both human and animal health [9,10] so automation could also assist in accelerating screening for the discovery of new antiparasitic drugs. In addition, automation would enhance the utility of C. elegans models mimicking aspects of human nervous system and neuromuscular diseases.
Assays of C. elegans motility frequently deploy the "thrashing assay". This is performed by placing worms in liquid medium and counting the number of lateral swimming movements (thrashes min⁻¹), thereby quantifying an important aspect of locomotion. The effects on locomotion of genetic manipulations and/or drugs can thus be studied with little training, yielding a numerical output that is easy to relate to behaviour. The manual assay, however, suffers two major disadvantages. First, it is very time consuming. Each measurement takes at least 30 s to perform, making it unsuitable for high-throughput assays. In addition, since it requires a human operator, it can be prone to errors in counting, especially at high thrashing rates (over 4 s⁻¹ in a healthy, wild-type worm) with the added hazard of investigator repetitive strain. These difficulties, when set against the tremendous potential for drug/chemical discovery if C. elegans could be used for automated chemical and genetic screens, point to the urgent need for a simple, automated system of counting thrashes.
Current approaches to automating the thrashing assay follow two general strategies [11]. The first is to emulate the human measurement using computer vision. Typically, the worm is distinguished from the background and its shape estimated. The body angles through which the worms pass are then determined, from which the frequency of thrashing can be measured. This approach provides a readout that can be compared directly with manually-derived values. Only two such analyses, designed specifically for the thrashing assay, have been described. In both cases, several parameters in addition to thrashing rate have been measured [12,13] but the software for these analyses has not been published. Several tools for the analysis of nematode locomotion on agar, rather than in fluid, have been made available which could be applied to analyzing thrashing worms [14][15][16][17][18][19][20]. Although providing extra information additional to thrashing rate, this strategy is subject to several difficulties. For example, its accuracy depends on precise morphometry, which in turn depends upon accurately distinguishing the worm from its background (referred to as "segmentation" in the context of computer vision). These processes are usually highly sensitive to recording conditions and can fail under uneven illumination. Even after accurate morphometric data have been obtained, estimating thrashing frequency from the data presents further problems. The most obvious method, that of counting the peaks in a graph of angle against time, can be confounded by the presence of random fluctuations in the measured angle, and setting the threshold to distinguish real peaks from noise in different experimental trials can be difficult. The alternative method of using Fourier analysis to derive the strongest component in the frequency domain can fail at low frequencies, where the signal components are hard to distinguish from the DC signal.
A second approach replaces the manual assay with an alternative, less direct method of estimating thrashing frequency. For instance, Simonetta and Golombek [21] measured thrashing of several worms simultaneously by recording the interruption of an infrared beam. Although such indirect measures do have potential for automation and high-throughput screening, the output of such an assay is not easily related to manual methods, making the results difficult to validate and to interpret biologically.
Implementation
Here we present a method of automatically measuring thrashes per minute, but without using morphometry or even segmentation of the worm from the background. In our approach, the covariance matrix of film frames of thrashing worms is used to measure the time interval between frames that are statistically similar.
Our algorithm follows three steps (Figure 1). First, the presence of the image background is reduced using Principal Components Analysis [22,23]. To accomplish this, a movie of the worm, consisting of d images of m × n pixels, is loaded into an m × n × d three-dimensional array, which is then transformed into a two-dimensional matrix, with each frame contained in a single column. The first Principal Component of this matrix accounts for more than 99% of the variance. Examination of the image data contained in this component showed that it represents, as would be expected, the background of the images, while the remaining components are attributable to the worm conformation (Figure 1A, B). Thus, subtracting the first Principal Component from the two-dimensional matrix reduces the presence of background, after which the movie can be reconstructed.
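The published implementation is a Matlab script; the following NumPy sketch illustrates the same background-removal step. The function name, the data centring and the use of SVD to obtain the first principal component are our own choices and may differ in detail from the original code.

```python
import numpy as np

def remove_background(frames):
    """Subtract the first principal component (the static background) from a movie.

    `frames` is a (d, m, n) array of d grayscale frames; returns an array of the
    same shape with the dominant background component removed. This mirrors the
    first step of the published method but is our own NumPy sketch, not the
    authors' Matlab code.
    """
    d, m, n = frames.shape
    X = frames.reshape(d, m * n).astype(float).T     # columns = frames, as described
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # First principal component via SVD of the centred data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = U[:, :1]                                    # background direction
    Xb = Xc - pc1 @ (pc1.T @ Xc)                      # project out the first component
    return (Xb + mean).T.reshape(d, m, n)
```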
After reduction of the background, the next step is to calculate the covariance matrix. The covariance matrix of the films after removal of the first Principal Component shows a diagonal band of high values, representing the variance of each frame, with similar, parallel bands of lower peak heights (Figure 1). These parallel bands represent frames with high covariance separated by frames with low covariance. The number of frames separating the peaks corresponds to the interval between similar images and hence to the interval between similar worm conformations.

Figure 1. Covariance analysis of worm swimming facilitates automated phenotyping.
The third step in our algorithm consists of deriving the thrashing frequency from the covariance matrix. We compared two methods of automating the measurement of the interval between these peaks. First, we tried converting the covariance matrix into the frequency domain using the Discrete Fourier Transform. This gave accurate measurements of the peak interval at high thrashing rates, but at low rates the peaks corresponding to thrashes became hard to distinguish without human intervention. We therefore adopted the peakdet.m routine generously released into the public domain by Mr. Eli Billauer (http://www.billauer.co.il/index.html), in which frames separating peaks greater than 50% of the total range of data are counted for each point along the diagonal of the covariance matrix. The thrashing frequency is taken as the product of the median of these values and twice the film acquisition frequency. The median is preferred over the mean, as the former is less sensitive to the effect of outliers. This median is multiplied by twice the acquisition frequency because the strongest covariance is between worm conformations at the same phase of swimming, which represents the movement between two thrashes, as one thrash is counted as a single side-to-side movement. This method has proved robust at all thrashing rates up to about 300 min⁻¹, this upper limit being presumably determined by the film acquisition frequency.
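A compact sketch of the covariance step is shown below. Peak detection here uses scipy's find_peaks rather than the peakdet.m routine, and the conversion from the median peak-to-peak interval to thrashes per second reflects our reading of the description above, so treat it as an illustration rather than a reimplementation of the published script.

```python
import numpy as np
from scipy.signal import find_peaks

def thrash_frequency(frames, fps):
    """Estimate thrashing frequency (thrashes per second) from background-subtracted frames.

    Builds the frame-by-frame covariance matrix, finds peaks along each row
    (conformations recurring at the same swim phase), takes the median
    peak-to-peak interval, and converts it to a frequency. find_peaks with a
    50%-of-range prominence stands in for peakdet.m (an assumption on our part).
    """
    d = frames.shape[0]
    X = frames.reshape(d, -1).astype(float)
    C = np.cov(X)                                     # (d, d) covariance between frames
    intervals = []
    for row in C:
        prominence = 0.5 * (row.max() - row.min())    # peaks above 50% of the data range
        peaks, _ = find_peaks(row, prominence=prominence)
        if len(peaks) > 1:
            intervals.extend(np.diff(peaks))
    if not intervals:
        return 0.0
    median_interval = np.median(intervals)            # frames between similar conformations
    # One full covariance period spans two thrashes (one thrash = one side-to-side movement)
    return 2.0 * fps / median_interval
```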
Worms were synchronously grown to early adult stage and placed in individual wells of a 96-well microtiter plate containing 50 μl M9 with or without drug. After a 10 min exposure period to M9, thrashes were counted at 21°C for 30 s. A single thrash was defined as a complete change in the direction of bending at the mid body. Worms were either counted and filmed at the same time, or filmed and subsequently counted from the movies. Manual counting was performed independently by two trained experimenters. For automated analysis, each experiment was processed as a batch. In accordance with accepted practice, during manual thrash counts the data for a particular worm was dropped if the worm remained still for > 10 s, or if the worm was visibly damaged. In automated assays, all worms were included. Worms were filmed using an XLI 2 Mpixel camera attached to a Nikon SMZ 1000 dissecting microscope and the 640 × 320 pixel images acquired at 10 frames s -1 using XLI imaging software. Before filming, the image was magnified and positioned so that the circular well of the microtitre plate occupied the field of view. Movies were stored in the Microsoft "wmv" format and subsequently analysed using the Matlab software package running on a desktop PC running under Windows XP. To speed computation, images were reduced to 20% of their resolution.
Results
To assess the performance of the system, we carried out a series of thrashing assays on wild-type (N2) worms along with several mutants of the levamisole receptor (unc-29, unc-38, lev1, unc-63 and lev-8), which have a range of effects on motility. We used the x26 allele of the unc-63 mutant because the mutation which opens up the dicysteine loop mimics a mutation producing the same effect in patients with one form of human congenital myasthenia [24]. Plotting the individual machine counts for all these receptor mutants against manual counts revealed a linear relationship between these measurements with a correlation coefficient greater than 0.9 (Figure 2A), suggesting that the machine performance was comparable to manual measurement. When individual scores obtained using machine and manual methods are compared ( Figure 2B), it can be seen that the automated system tends to slightly overestimate the thrashing frequency, but if the median of each trial is taken (the mean is used in manual assays), the effect of these outliers is limited and the system compares very favourably with manual counts ( Figure 2C).
To test the usefulness of our method in analyzing the effects of drugs on motility, we performed a thrashing analysis of the dose-dependence of the actions of the anthelmintic drug, levamisole, on wild-type (N2) worms, comparing counts obtained by a human observer with those obtained using our automated covariance method. Both approaches revealed a similar dose-dependent inhibition of thrashing frequency (Figure 2D).

Because the method makes no assumptions about the shape of the worms, it is adaptable for use with other nematode species. We repeated our measurement of levamisole effects using 3rd instar larvae of the parasitic nematode, Haemonchus contortus. The parasite's swimming behaviour at low rates often involves the animal completely coiling up on itself, which would represent an insurmountable difficulty for most, if not all, machine-vision approaches. The algorithm revealed a dose-dependent action of levamisole closely resembling that measured manually (Figure 2D).
Discussion
Our method of automating thrashing assays has a number of advantages over traditional machine-vision approaches. Avoiding the use of morphometry speeds computation. As implemented in our Matlab script, the method is capable of analyzing a previously recorded 30 s movie off-line in less than 30 s, and can therefore be applied to directly-acquired images without the need to store movies. This is an important consideration in developing high-throughput screens, as moving images place high demands on computer storage space. In addition, errors introduced by failures in accurately distinguishing the worm from its background or in abstracting morphometric parameters are avoided. Finally, because it makes no assumptions about the shape of the worms, it is robustly adaptable for use with other species, as shown here by our study on the parasitic worm, H. contortus.
Figure 2. Comparison of automated thrashing assay with manual measurements. (A) The performance of the machine method on C. elegans is compared with that of two human observers. The variance in the automated system's performance can therefore be compared with the difference in performance between two trained observers. Each point represents the mean of counts for 8 worms, in accordance with the customary thrashing assay. (B) Comparison of results for individual C. elegans with mutations in nicotinic acetylcholine receptors with ranges of motilities. (C) Although the machine frequently underestimates the counts because of outliers, the effect of these outliers in each batch of 8 worms is reduced when the medians for each method are plotted. (D) A comparison of automated and manual thrashing assays on C. elegans or the parasitic nematode, Haemonchus contortus, in the presence of two concentrations of levamisole. Despite the challenges presented by H. contortus, the automated assay produces similar results to manual measurements.
We have implemented this novel method using films of single worms in individual wells of a 96-well plate so that its output with manual assays can be directly compared. Deployed in this way, it can only be applied to one worm at a time, as the presence of more than one worm in the field of view (unless it is significantly smaller) will make it difficult to derive an accurate measurement of thrashing frequency from the covariance matrix. However, adaptations of the same principle could be used to increase the number of worms that can be measured simultaneously. For example, an array of cameras could be used to acquire data from several wells simultaneously. Alternatively, if the worms can somehow be spatially restricted, for example, in plate wells, a large number of worms could be monitored simultaneously with a single camera if the images are split to cover each worm separately.
Conclusion
Here we present a method for automating an established swimming ("thrashing") assay widely used for measuring locomotor phenotypes of the model genetic organism, Caenorhabditis elegans. Measurements of the frequency of thrashing and their reliability compare well with manual counting over the entire range of thrashing rates likely to be encountered in studying C. elegans. Thus, our method of automating worm thrashing assays using covariance is robust, reliable and fast. Deployed in a fully automated system incorporating liquid-handling systems to deliver worms in suspension to microplates and computer-controlled positioning of the plate, this method should facilitate large-scale chemical and genetic screens for effects on motility.
Availability and requirements
Project name: Automated thrashing assay using covariance
Authors' contributions
DBS guided the development of the study and its application to screening, SDB conceived of, developed and tested the software.
Treatment of refractory simple partial status epilepticus by the addition of oral lacosamide in neurosurgical patients. Report of 3 cases
Lacosamide is a new antiepileptic drug that has been successfully used for the treatment of partial seizures. We report three neurosurgical cases of simple partial status epilepticus refractory to multiple antiepileptic medications. The addition of oral lacosamide in doses of 200–400 mg in combination with the existing treatment had successfully controlled the seizures within four days.
Introduction
Simple partial status epilepticus (SPSE) is characterized by partial seizures, without impairment of consciousness or secondary generalization and preservation of neurovegetative regulation.
Lacosamide is a new antiepileptic drug, licensed in the USA and Europe, for adjunctive therapy of partial onset seizures in 2008. In this article, we present three patients with refractory SPSE, which was successfully controlled with the addition of oral lacosamide in their treatment.
1st case
A 52-year-old female underwent a craniotomy in our department for a metastatic tumor in the left parietal lobe. The patient had no complications and was discharged on the 7th postoperative day, under levetiracetam on a dosage of 1500 mg bid and phenytoin on a dosage of 100 mg tid. Two months later, she presented again with focal seizures in her right arm. The blood levels of phenytoin were slightly over 20 μg/ml. The electroencephalogram (EEG) showed spikes and δ waves. Oral lacosamide was added starting with an initial dosage of 100 mg bid and four days after, it was increased to 200 mg bid. On the fourth day after the addition of oral lacosamide, seizures were totally controlled, and phenytoin treatment was interrupted. The patient was discharged home seizure-free ten days after the initiation of lacosamide.
2nd case
A 66-year-old male underwent a craniotomy in our department for an acute subdural hematoma after a head trauma. Postoperatively, he remained in the intensive care unit for one month.
Three days after his transfer to the ward, he presented with myoclonic jerks of his right arm. He was already under phenytoin at a dosage of 150 mg tid. We added levetiracetam with a starting dosage of 1000 mg bid, without complete control of the status. We increased levetiracetam up to 1500 mg bid, but the seizures were refractory to the treatment. We finally added 100 mg bid of oral lacosamide. After a gradual increase of the dosage to 200 mg bid, the seizures were totally controlled three days later, and the patient was discharged seizure-free.
3rd case
An 80-year-old female was admitted to our department because of a small right acute subdural hematoma, with no midline shift, after a head injury. Her neurological examination revealed mild left-sided postictal weakness. We decided to treat her conservatively. Phenytoin was started on a dosage of 125 mg tid. Six hours after her admission, the patient presented focal motor seizures in her left hand. Although we added 500 mg bid of levetiracetam to her treatment, she continued to have seizures. On the second day, we started 100 mg bid of oral lacosamide and three days after, the seizures were completely controlled. The patient was discharged home seizure-free twelve days after her injury.
Discussion
Administration of antiepileptic drugs (AEDs) is the mainstay of treatment for SPSE. Although almost 50% of patients become seizure-free with the first AED regardless of the agent, a substantial proportion of patients still have inadequate seizure control despite treatment with currently available AEDs [1].
Lacosamide has been approved for add-on treatment of partial seizures with or without secondary generalization in patients 16 years of age or older at a daily dose of 200 mg to 400 mg. Lacosamide has a novel dual mechanism of action. The active substance is (R)-2-acetamido-N-benzyl-3-methoxypropionamide. It selectively enhances slow inactivation of voltage-gated sodium channels, resulting in stabilization of hyperexcitable neuronal membranes and inhibition of repetitive neuronal firing without exhibiting effects on physiological neuronal excitability [2].
As lacosamide is a newly introduced pharmacologic agent, until now there have been only a handful of case reports and series in the literature demonstrating the benefits of lacosamide as an adjuvant therapy. Kellinghaus et al. analyzed 39 patients, with a mean age of 63 years, who were admitted for status epilepticus. Of these patients, 16 presented with SPSE. All but two patients previously received benzodiazepines, and lacosamide was started, in an initial dose between 200 and 400 mg, after a median latency of 30 h. Lacosamide successfully terminated the seizures in 44% of the patients [2]. Rantsch et al. concluded that administering lacosamide, as a fourth drug, in doses of 50-100 mg daily, effectively terminated SPSE in 20% of the cases. However, they supported the observation that the outcome at discharge seemed to depend considerably on the age of the patient and not on the therapeutic approach [3].
Chen et al. presented a case of a 57-year-old male with refractory SPSE, already under levetiracetam, who became seizure-free after the initiation of lacosamide at an initial dosage of 50 mg bid and then gradually increased to 200 mg bid [4].
Our series includes three patients, with a mean age of 66 years old, who suffered from SPSE because of a space-occupying lesion. Our experience demonstrates that oral lacosamide, in doses of 200-400 mg daily, can successfully control partial motor status, as an adjuvant to the already established antiepileptic treatment. In all cases, SPSE was totally controlled within four days, and no adverse effects were noted.
Conclusion
Oral lacosamide is a useful antiepileptic drug for refractory SPSE. Its mechanism of action might indicate that the drug could be advantageous in combination therapy. However, further reports are necessary to determine optimal timing, dosing, and duration of lacosamide treatment as well as its effectiveness as a first choice antiepileptic drug.
The VC-dimension and point configurations in ${\Bbb F}_q^2$
Let $X$ be a set and ${\mathcal H}$ a collection of functions from $X$ to $\{0,1\}$. We say that ${\mathcal H}$ shatters a finite set $C \subset X$ if the restriction of ${\mathcal H}$ yields every possible function from $C$ to $\{0,1\}$. The VC-dimension of ${\mathcal H}$ is the largest number $d$ such that there exists a set of size $d$ shattered by ${\mathcal H}$, and no set of size $d+1$ is shattered by ${\mathcal H}$. Vapnik and Chervonenkis introduced this idea in the early 70s in the context of learning theory, and this idea has also had a significant impact on other areas of mathematics. In this paper we study the VC-dimension of a class of functions ${\mathcal H}$ defined on ${\Bbb F}_q^d$, the $d$-dimensional vector space over the finite field with $q$ elements. Define $$ {\mathcal H}^d_t=\{h_y(x): y \in {\Bbb F}_q^d \},$$ where for $x \in {\Bbb F}_q^d$, $h_y(x)=1$ if $||x-y||=t$, and $0$ otherwise, where here, and throughout, $||x||=x_1^2+x_2^2+\dots+x_d^2$. Here $t \in {\Bbb F}_q$, $t \not=0$. Define ${\mathcal H}_t^d(E)$ the same way with respect to $E \subset {\Bbb F}_q^d$. The learning task here is to find a sphere of radius $t$ centered at some point $y \in E$ unknown to the learner. The learning process consists of taking random samples of elements of $E$ of sufficiently large size. We are going to prove that when $d=2$, and $|E| \ge Cq^{\frac{15}{8}}$, the VC-dimension of ${\mathcal H}^2_t(E)$ is equal to $3$. This leads to an intricate configuration problem which is interesting in its own right and requires a new approach.
where for $x \in {\Bbb F}_q^d$, $h_y(x)=1$ if $||x-y||=t$, and $0$ otherwise, where here, and throughout, $||x||=x_1^2+x_2^2+\dots+x_d^2$. Here $t \in {\Bbb F}_q$, $t \not=0$. Define ${\mathcal H}_t^d(E)$ the same way with respect to $E \subset {\Bbb F}_q^d$. The learning task here is to find a sphere of radius $t$ centered at some point $y \in E$ unknown to the learner. The learning process consists of taking random samples of elements of $E$ of sufficiently large size.
We are going to prove that when $d=2$, and $|E| \ge Cq^{\frac{15}{8}}$, the VC-dimension of ${\mathcal H}^2_t(E)$ is equal to $3$.
Introduction
The purpose of this paper is to study the Vapnik-Chervonenkis dimension in the context of a naturally arising family of functions on subsets of the two-dimensional vector space over the finite field with $q$ elements, denoted by ${\Bbb F}_q^2$. Let us begin by recalling some definitions and basic results (see e.g. [2], Chapter 6).

Definition 1.1. Let $X$ be a set and ${\mathcal H}$ a collection of functions from $X$ to $\{0,1\}$. We say that ${\mathcal H}$ shatters a finite set $C \subset X$ if the restriction of ${\mathcal H}$ to $C$ yields every possible function from $C$ to $\{0,1\}$.

Definition 1.2. Let $X$ and ${\mathcal H}$ be as above. We say that a non-negative integer $n$ is the VC-dimension of ${\mathcal H}$ if there exists a set $C \subset X$ of size $n$ that is shattered by ${\mathcal H}$, and no subset of $X$ of size $n+1$ is shattered by ${\mathcal H}$.
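For readers who prefer a computational view of these definitions, the following brute-force sketch checks shattering and VC-dimension for any finite hypothesis class. It is only an illustration of Definitions 1.1 and 1.2 on a toy class of threshold functions; it is not the class ${\mathcal H}^2_t$ studied in this paper.

from itertools import combinations

def shatters(hypotheses, points):
    """True if every 0/1 labelling of `points` is realised by some hypothesis."""
    realised = {tuple(h(p) for p in points) for h in hypotheses}
    return len(realised) == 2 ** len(points)

def vc_dimension(hypotheses, domain, max_d=4):
    """Largest d such that some d-subset of `domain` is shattered (brute force).
    Uses the fact that if no set of size d is shattered, none of size d+1 is."""
    hypotheses = list(hypotheses)
    best = 0
    for d in range(1, max_d + 1):
        if any(shatters(hypotheses, c) for c in combinations(domain, d)):
            best = d
        else:
            break
    return best

# Toy class: threshold functions on {0,...,9} have VC-dimension 1.
domain = range(10)
thresholds = [lambda x, a=a: 1 if x >= a else 0 for a in range(11)]
print(vc_dimension(thresholds, domain))   # -> 1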
We are going to work with a class of functions ${\mathcal H}^2_t$, where $t \not=0$. Let $X={\Bbb F}^2_q$, and define
$$ (1.1) \qquad {\mathcal H}^2_t=\{h_y(x): y \in {\Bbb F}^2_q \}, $$
where $y \in {\Bbb F}^2_q$, and $h_y(x)=1$ if $||x-y||=t$, and $0$ otherwise, where here, and throughout, $||x||=x_1^2+x_2^2$. Let ${\mathcal H}^2_t(E)$ be defined the same way, but with respect to a set $E \subset {\Bbb F}^2_q$. (The second listed author's research was supported in part by the National Science Foundation grant no. HDR TRIPODS -1934962. The fourth listed author's research was supported in part by the 2021 Simons Travel Grant.)

Our main result is the following.

Theorem 1.3. Let ${\mathcal H}^2_t(E)$ be defined as above with respect to $E \subset {\Bbb F}^2_q$, $t \not=0$. If $|E| \ge Cq^{\frac{15}{8}}$ with a sufficiently large constant $C$, then the VC-dimension of ${\mathcal H}^2_t(E)$ is equal to $3$.
Note that the VC-dimension of a finite class of hypotheses never exceeds the base-2 logarithm of its size, so 3 is a clear improvement over this general estimate. It is not difficult to see that the VC-dimension is $<4$, so the real challenge is to establish the bound of 3. Moreover, our result says that in this sense, the learning complexity of subsets of ${\Bbb F}^2_q$ of size $> Cq^{\frac{15}{8}}$ is the same as that of the whole vector space ${\Bbb F}^2_q$.
Remark 1.5. The higher dimensional case of this problem is somewhat easier from the point of view of the underlying Fourier analytic techniques, but is more complex in terms of geometry. We shall address this issue in a sequel ( [3]).
We can prove that the VC-dimension is at least 2 under a much weaker assumption.
Theorem 1.6. Let ${\mathcal H}^2_t(E)$ be defined as above with respect to $E \subset {\Bbb F}^2_q$, $t \not=0$. If $|E| \ge Cq^{\frac{3}{2}}$, then the VC-dimension of ${\mathcal H}^2_t(E)$ is at least 2.

Remark 1.7. The discrepancy between the size thresholds in Theorem 1.3 and Theorem 1.6 raises the question of whether the VC-dimension is, in general, $<3$, if $|E|$ is much smaller than $Cq^{\frac{15}{8}}$. We do not know the answer to this question and hope to resolve it in the sequel.
Learning theory perspective on Theorem 1.3
From the point of view of learning theory, it is interesting to ask what the "learning task" is in the situation at hand. It can be described as follows. We are asked to construct a function $f: E \to \{0,1\}$, $E \subset {\Bbb F}^2_q$, that is equal to 1 on a sphere of radius $t$ centered at some $y^* \in E$, but we do not know the value of $y^*$. The fundamental theorem of statistical learning tells us that if the VC-dimension of ${\mathcal H}^2_t(E)$ is finite, we can find an arbitrarily accurate hypothesis (element of ${\mathcal H}^2_t(E)$) with arbitrarily high probability if we consider a randomly chosen training sample of sufficiently large size.
We shall now make these concepts precise. Let us recall some more basic notions.
The relevant notion of accuracy is the error of a hypothesis $h$ with respect to the true labeling function $f$ and a probability distribution $D$ on $X$, namely $L_{D,f}(h)=P_{x \sim D}(h(x) \not= f(x))$, where $P_{x \sim D}$ means that $x$ is being sampled according to the probability distribution $D$.
The following theorem is a quantitative version of the fundamental theorem of machine learning, and provides the link between VC-dimension and learnability (see [2]).

Theorem 2.3. Let ${\mathcal H}$ be a collection of hypotheses on a set $X$. Then ${\mathcal H}$ has a finite VC-dimension if and only if ${\mathcal H}$ is PAC learnable. Moreover, if the VC-dimension of ${\mathcal H}$ is equal to $n$, then ${\mathcal H}$ is PAC learnable with sample complexity bounded above and below by constants $C_1, C_2$ times an explicit function of $n$, the accuracy parameter $\epsilon$, and the confidence parameter $\delta$.

Going back to the learning task associated with ${\mathcal H}^2_t(E)$, as in Theorem 1.3, suppose that $h_y$ is a "wrong" hypothesis, i.e. $y \not= y^*$, where $f=h_{y^*}$ is the true labeling function, and assume that its error is at most $\epsilon$. Since the size of a sphere of non-zero radius in ${\Bbb F}^2_q$ is $q$ plus lower order terms, and $D$ is the uniform probability distribution on ${\Bbb F}^d_q$, one must choose $\epsilon$ just slightly less than $\frac{1}{q}$ to make the results meaningful. It follows, by taking $\delta$ appropriately, that we need to consider random samples of size $\approx Cq\log(q)$ with sufficiently large $C$ to execute the desired algorithm. Moreover, since 3 points determine a circle, an accuracy $\epsilon$ just slightly less than $\frac{1}{q}$ effectively forces the learned hypothesis to coincide with the true one.

We warm up to Theorem 1.6 by first showing the VC-dimension of ${\mathcal H}^2_t(E)$ is at least 1.
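The learning task described above can be mimicked with a toy simulation. The sketch below assumes small parameters ($q=101$, $t=5$, a made-up center $y^*$) and a naive consistent-hypothesis learner that scans all candidate centers; it is meant only to illustrate why roughly $Cq\log q$ uniform samples suffice, not to reflect any algorithm from the paper.

import random

def sphere_label(x, y, t, q):
    """h_y(x) = 1 iff ||x - y|| = t over F_q^2, with ||v|| = v1^2 + v2^2 (mod q)."""
    return int(((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) % q == t % q)

def learn_center(samples, labels, t, q):
    """Return any center consistent with the labelled samples (naive scan of F_q^2)."""
    for cy in ((a, b) for a in range(q) for b in range(q)):
        if all(sphere_label(x, cy, t, q) == lab for x, lab in zip(samples, labels)):
            return cy
    return None

q, t = 101, 5                      # small toy parameters
y_star = (17, 42)                  # unknown center the learner must recover
samples = [(random.randrange(q), random.randrange(q)) for _ in range(12 * q)]
labels = [sphere_label(x, y_star, t, q) for x in samples]
print(learn_center(samples, labels, t, q))   # typically recovers y_star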
The existence of a set of size 1 that is shattered by ${\mathcal H}^2_t(E)$ means that there exists $x \in E$ with the property that there exists $y \in E$ such that $||x-y||=t$, and $y' \in E$ such that $||x-y'|| \not= t$. To find $x$ and $y$, we require a result of the first listed author and Misha Rudnev [5], stated below for convenience.
By Theorem 3.1, since $|E| \ge 4q^{\frac{3}{2}}$, there exist $x, y \in E$ such that $||x-y||=t$. Since $|E|$ is much greater than $q$, there also exists $y'$ such that $||x-y'|| \not= t$. Hence, the VC-dimension of ${\mathcal H}^2_t(E)$ is at least 1.
To prove Theorem 1.6, we must show that there exists $\{x_1, x_2\} \subset E$ that is shattered by ${\mathcal H}^2_t(E)$. This means that there exist $y_1, y_2, y_{12}, y_0 \in E$ such that $y_{12}$ is at distance $t$ from both $x_1$ and $x_2$, each $y_i$ is at distance $t$ from $x_i$ but not from the other point, and $y_0$ is at distance $t$ from neither. Thus proving the existence of a set $\{x_1, x_2\}$ that is shattered by ${\mathcal H}^2_t(E)$ amounts to establishing the existence of a chain $z_1, z_2, z_3, z_4, z_5 \in E$, such that $||z_{j+1}-z_j||=t$, $j=1,2,3,4$, $||z_1-z_4|| \not= t$, $||z_2-z_5|| \not= t$. Here, $x_1=z_2$, $x_2=z_4$, $y_{12}=z_3$, $y_1=z_1$, and $y_2=z_5$ (see figure 1). Since $|E| \gg q$, we may select $y_0$ from $E$ outside the union of the circles of radius $t$ centered at $x_1$ and $x_2$.
We shall need the following result due to Bennett, Chapman, Covert, Hart, the first listed author, and Pakianathan ( [1], Theorem 1.1).
The existence of a chain of length 4 (4 edges and 5 vertices) with gap $t \not= 0$ follows from this immediately, provided that $|E| \ge Cq^{\frac{3}{2}}$, but we need to work a bit to make sure that we can find such a chain with $||z_{j+1}-z_j||=t$, $j=1,2,3,4$, $||z_1-z_4|| \not= t$, $||z_2-z_5|| \not= t$. To this end, we are going to show that the number of degenerate chains is of lower order; this is estimate (3.2). This suffices by Theorem 3.2 and the assumption $|E| \ge Cq^{\frac{3}{2}}$. To prove (3.2), observe that the left hand side of (3.2) can be bounded using the fact that, since $x \not= y$, two circles of radius $t$ intersect at at most two points.

To show the VC-dimension is at most 3, we claim no subset $\{x_1, x_2, x_3, x_4\}$ of size 4 can be shattered by ${\mathcal H}^2_t({\Bbb F}^2_q)$, let alone ${\mathcal H}^2_t(E)$. If there were, all four points would be forced to live on the same circle centered at $y_{1234}$, say, and at the same time, there must exist $y_{123}$ such that $x_1, x_2, x_3$ live on a circle of radius $t$ centered at $y_{123}$, while $x_4$ does not. This is impossible since three points determine a circle.
Proofs of Theorem 1.3
We know already the VC-dimension of ${\mathcal H}^2_t(E)$ is at most 3 from the argument in the previous paragraph. Now we must show that there exists $C$ of size 3 that is shattered by ${\mathcal H}^2_t(E)$. This leads to the following question. Do there exist $x_1, x_2, x_3, y_{123}, y_{12}, y_{13}, y_{23}, y_1, y_2, y_3, y_0 \in E$, such that $||x_i-y_{123}||=||x_i-y_{ij}||=||x_i-y_i||=t$, $i,j=1,2,3$, and all the remaining pair-wise distances between $x$'s and $y$'s do not equal $t$?
There are several results in the literature that prove the existence of general point configurations in ${\Bbb F}^d_q$ inside sufficiently large sets. Let $G$ be a graph and let $t \not= 0$ be given. We say that $G$ can be embedded in $E \subset {\Bbb F}^d_q$, if there exist $x_1, \dots, x_{k+1} \in E$ such that $||x_i-x_j||=t$ for $(i,j)$ corresponding to the pairs of vertices connected by edges in $G$. The second listed author and Hans Parshall proved in [4] that if the maximum vertex degree of $G$ is bounded and $|E|$ exceeds a corresponding power of $q$, then $G$ can be embedded in $E$. In the case of the configuration above, the maximum vertex degree is 4, so the threshold exponent in [4] is $\frac{1}{2}+4>2$, so very different methods are required in this situation.
Proof. We first claim that less than half the pairs in $\{(x,y) \in E \times E: ||x-y||=t\}$ satisfy $x-y=\pm v$. This follows from a chain of inequalities in which the second inequality follows from $|E|>8q$, and the third follows from $|E| \ge 4q^{\frac{3}{2}}$ and Theorem 3.1. By pigeonholing on the remaining directions, there exists $u$ with $||u||=t$ and $u \not= \pm v$ for which the corresponding set of pairs is large. Let $E'$ denote the collection of $x$'s from the set above. The hypothesis, together with Theorem 3.1, guarantees there are at least $\frac{|E'|^2}{2q^2}$ pairs $(x,w) \in E' \times E'$ with $||x-w||=t$. Next, we must ensure $x-w \not= \pm v$ nor $\pm u$. By proceeding as above, we find a similar chain of inequalities, where the second inequality follows since $|E'|>8q$, and the third follows again from Theorem 3.1. Hence, there exists some pair $(x,w)$ in the right-hand set but not the left-hand set.
To summarize, we have found $(x,y,w,z) \in E^4$ for which the required distance and direction conditions hold. We shall also need the following pigeon-holing observation.
Proof. Using Theorem 3.1 once again, we see that if $|E| \ge Cq^{\frac{15}{8}}$, then the relevant count is sufficiently large. Since a circle of non-zero radius has at most $q+1$ points, the conclusion follows.
It follows from the assumptions of Theorem 1.3 and Lemma 4.2 that there exists $v \in {\Bbb F}^d_q$, $||v||=t \not= 0$, such that $E \cap (E-v)$ is large; this is (4.1). Using Lemma 4.2 and Lemma 4.1, we see that there exist distinct $x,y,z,w \in E \cap (E-v)$, with $v$ from (4.1), such that the required distances equal $t$ and $\pm v \not= x-y, y-z, z-w, x-w$. We are now ready to move into the final phase of the proof of Theorem 1.3.
Let $y_{123}=y$, $x_1=x$, $x_2=y+v$, $x_3=z$, $y_{12}=x+v$, $y_{23}=z+v$, $y_{13}=w$. Note that $||y_{123}-x_i||=t$, $i=1,2,3$; see Figure 2 for reference. To see that $||y_{ij}-x_k|| \not= t$ when $k \not= i,j$, note that otherwise the circles of radius $t$ centered at $y_{ijk}$ and $y_{ij}$ would intersect at $x_1$, $x_2$, and $x_3$, implying that $y_{ij}=y_{ijk}$, contradicting the construction. Since $|E| \gg q$, we may also select $y_0$ for which $||y_0-x_i|| \not= t$ for each $i=1,2,3$.
We are almost there, but we still need to come up with $y_1, y_2, y_3$ such that $||x_i-y_i||=t$, and we need to make sure that $||x_i-y_j|| \not= t$, $i \not= j$. It is to this that we now turn our attention.
Recall that whenever $|E| \ge Cq^{\frac{15}{8}}$, we have constructed a configuration $\{x_1, x_2, x_3, y_{123}, y_{12}, y_{13}, y_{23}, y_0\}$ with the desired edges in the distance graph (see Figure 2). In particular, provided the constant $C$ is large enough, we can construct such a configuration in $E':=\{x \in E: E*S_t(x) \ge 100\}$, a subset of $E$ in which every vertex has degree at least 100 in the distance graph on $E$. In particular $x_1, x_2, x_3$ each have degree at least 100, so they each have at least one neighbor in addition to the ones listed, i.e. there exist distinct $y_1, y_2, y_3$ with $y_1, y_2, y_3 \notin \{x_1, x_2, x_3, y_{12}, y_{13}, y_{23}, y_{123}\}$, with $||y_i-x_i||=t$ for $i=1,2,3$, and $||y_i-x_j|| \not= t$ for $i \not= j$.
The Role of the Use of Information Technology in Orderly Financial Administration and Tax Compliance in Digital Business
This study aims to examine the role of information technology in orderly financial administration and tax compliance. We collected data through a survey of 136 owners/financial managers of digital businesses or startups in Malang. The results show that the use of information technology has a positive effect on orderly financial administration; orderly financial administration has a positive effect on tax compliance; and the use of information technology has a positive effect on tax compliance mediated by orderly financial administration. This research contributes to knowledge by determining the relationship between the use of information technology, orderly financial administration, and tax compliance in order to increase tax revenue in Indonesia. Digital businesses or startups use technology to maintain orderly financial administration and hence make it easier to calculate taxable amounts. This research implies that the ease of paying taxes through online tax administration is expected to improve startups' tax compliance, building on orderly financial administration in fulfilling tax obligations.
INTRODUCTION
Taxes are currently the main component of Indonesia's largest source of income. However, tax revenue only reached 84.4% of its target in 2019. The government hopes to obtain potential tax revenue from digital business, which is currently developing rapidly in Indonesia. Ironically, compliance is still one of the main sources of failure to achieve tax targets. Tax compliance among digital startups in Indonesia is still low (Supardianto, Ferdiana, & Sulistyo, 2019). According to the Directorate General of Taxes, there are two things that cause the low participation rate, namely that digital business turnover in startups is very high and that the tax-related financial administration process is still very weak. Lack of participation can also be caused by the taxpayer's lack of knowledge regarding tax obligations.
Startup is a term used for people who are starting a digital business. Pateli & Giaglis (2005) stated that the accelerated growth of Information and Communication Technology (ICT) is able to drive trends that change traditional business models and encourage the establishment of new businesses (startups) that tend to take advantage of technological opportunities and are followed by innovative technological trends (Sheung, 2014).
Financial administration, which is the basis for management decision making, must be carried out as well as possible to support the fulfillment of tax administration obligations, because the tax payable must be in accordance with the company's financial reports. A large number of digital businesses have not carried out financial administration well because they are still focused on achieving business turnover targets. Very rapid global developments mean that digital businesses experience changes very quickly, so digital businesses must adapt to avoid bankruptcy (Bermen, Knight, & Case, 2008). Unfortunately, digital businesses do not have adequate human resources to take care of financial administration, so tax administration is not carried out. This condition results in the inability of businesses to report tax obligations comprehensively.
On the other hand, there are several digital businesses that have an existing financial administration system using information technology. However, this system does not guarantee that tax administration is carried out well. Digital business holds potential for tax payment capability because businesses that help solve social, economic, or health problems will attract investors to invest in the business. According to research by Jaya, Ferdiana, & Fauziati (2017), digital businesses in the initial opening phase require quality human resources for business continuity, while business models and administrative systems are still not really needed. However, digital businesses that are in the developing stage have started to implement good business models and administration systems to attract more investors. Even so, implementing orderly financial administration does not guarantee that a business has implemented tax administration (Fauziah, 2019). Research regarding the influence of the use of information technology on financial management and tax compliance is still very rare.
The relationship between the use of information technology, orderly financial administration, and tax compliance is still very rarely researched. The role of information technology is to help the financial governance process in business achieve efficiency in the use of resources (Kreher, Sellhorn, & Hess, 2017). Through resource efficiency, profits can also be maximized so that the company can survive and develop longer. A study conducted by Aldalayeen, Alkhatatneh, & Al-Sukkar (2013) shows that information technology can improve the performance of financial management in large companies. Thus, taxes are indirectly influenced by the use of information technology, which is related to public compliance in paying taxes, how taxes are paid, and how taxes are calculated for the country (Ramaswamy et al., 2011).
Based on this background, the aim of this research is to examine the role of the use of information technology in financial administration and tax compliance in digital business. This research uses a computer-delivered survey technique, namely collecting data via the internet, as a form of utilizing technology and minimizing face-to-face contact during the pandemic (Hartono, 2015). Hariyanto (2010) concluded that the use of information technology has a significant influence on management performance. The role of information technology is considered capable of increasing work productivity with a high level of accuracy, speed, and convenience. This shows that there is a relationship between information technology and financial management, which in turn has an impact on tax compliance. This research provides a theoretical contribution on the use of information technology for orderly financial administration and can help increase tax revenues in Indonesia so that development toward an independent Indonesia can be achieved.
LITERATURE REVIEW

Theory of Planned Behavior
The Theory of Planned Behavior (TPB) was developed by Ajzen (1991) to predict individual intentions and behavior in responding to something. The greater a person's intention to behave, the greater the possibility that the behavior will be carried out (Ajzen, 1991). TPB emphasizes the possible influence of perceived behavioral control in achieving goals for a behavior, so it can be used to predict individual behavior in a given situation.
Digital Business
Nugraha & Wahyuhastuti (2017) explained that the majority of businesses emerging today tend to utilize online media (e-commerce). E-commerce is part of digital business. Digital business is a business activity that uses electronic (digital) media to carry out its activities. In the world of digital business, the term startup is familiar. A startup is a non-permanent organization that can be developed further (Blank & Dorf, 2012). Because startups are closely associated with technology, startup business models are also different from those of conventional businesses. Ries (2011) believes that a startup is an organization that solves problems and is able to adapt in an agile environment.
Startup is a term for people who are starting a digital business. Sheung (2014) explains that startup businesses are followed by innovative technological trends. Pateli & Giaglis (2005) state that the accelerated growth of information technology can drive trends that change traditional business models or encourage the establishment of new businesses (startups) that tend to take advantage of technological opportunities. The existence of startups can open up new opportunities for the younger generation, especially those who are willing to adapt by changing the traditional market model to a virtual market. The old business model is starting to change to an online business model (startup), in which an inventory of physical goods is being replaced by information and digital products.
Use of Information Technology
According to Sutabri (2014), information technology is technology used to process data, including processing, obtaining, compiling, storing, and manipulating data in various ways to produce quality information, namely information that is relevant, accurate, and timely. It is used for personal, business, and government purposes and provides strategic information for decision making. The application of information technology is adjusted to the plans of the company concerned, for example by using Information and Communication Technology (ICT) applications to run a business.
Financial Administration Order
Saidah (2020) explains that financial administration generally has two meanings. In a narrow sense, financial administration comprises all records of incoming and outgoing funds used to finance an organization's activities. In a broad sense, financial administration is the policy governing the procurement and use of funds to realize organizational activities, in the form of planning activities, accountability arrangements, and financial supervision. Financial administration is the entire process that covers company finances, from planning to use, for both expenditure and income, to achieve company goals.
Tax Compliance
Compliance is a state of understanding and knowing about taxation (Jotopurnomo & Mangoting, 2013). The better the understanding of taxpayers, the better the tax revenue for the government (Suyatmin, 2004). Apart from that, compliance also includes awareness of respecting and paying taxes. As taxes are one of the country's largest sources of income, the public is obliged to pay taxes.
Since Indonesia adheres to a self-assessment system, tax compliance is an important factor in accepting and implementing tax obligations. In the self-assessment system, taxpayers calculate, deposit, and report tax obligations independently. Tax compliance can be defined as the willingness of taxpayers to comply with tax regulations in a country (Andreoni, Erard, & Feinstein, 1998). Taxpayers who always comply will have a good impact on a country. Taxpayers who are always obedient in paying their obligations will increase tax revenues. Increased tax revenues enable national development to continue. The results of this national development also aim to ensure the welfare of society.
Use of Information Technology Influences Orderly Financial Administration
Financial administration is the totality of company activities related to efforts to obtain the necessary funds at minimal cost and on the most favorable terms, as well as efforts to use these funds as efficiently as possible (Mamesah, 1995). Technology also plays a role in financial management from revenue to profit. Hariyanto (2010) concluded that the use of information technology has a significant influence on the orderliness of financial administration. The role of information technology, including computers, is considered capable of increasing work productivity with a high level of accuracy, speed, and convenience (Aldalayeen, Alkhatatneh, & Al-Sukkar, 2013). This shows that information technology and orderly financial administration have a close relationship. Companies can use ICT, such as computer applications, to improve employee performance. The use of information technology will be very helpful in the orderly process of financial administration. Thus, the first hypothesis is formulated as follows: H1: The use of information technology influences orderly financial administration.
Financial Administration Order Affects Tax Compliance
Administration, receipts, and financing in a business process must be recorded correctly, completely, and regularly. This will make it easier for companies to calculate the amount of tax owed in each period. Orderly administrative behavior shows that individuals have the intention to behave in compliance with tax regulations. Orderly financial administration has an influence on taxpayer compliance (Rusli, Hardi, & Pakpahan, 2015). This taxpayer compliance is the result of the taxpayer's understanding of a good and orderly financial administration process. Regularity in financial administration results in correct and precise tax calculations, thus increasing taxpayers' interest in reporting obligations and complying with paying taxes. Therefore, the second hypothesis is formulated as follows: H2: Financial administration order influences tax compliance.
Use of Information Technology Affects Tax Compliance through the Mediation of Orderly Financial Administration
Research conducted by Ramaswamy et al. (2011) shows that ICT is used for the automation of all economic processes, from purchasing to products reaching consumers. Taxes are one component of a country's economy. Taxes should also be influenced by the use of ICT, from people's compliance in paying taxes to how taxes are paid and calculated for the country (Ramaswamy et al., 2011). In Indonesia, startups are classified as Small and Medium Enterprises (UKM). However, in general, startups are oriented as SMEs operating in the technology sector. Because these businesses take the form of SMEs, startups in the field of products and services should be subject to tax. This is in accordance with Government Regulation Number 23 of 2018, under which all businesses with income are required to pay tax.
The government has provided many facilities for tax administration matters, such as e-filing and e-billing. However, SPT (tax return) reporting via e-filing is not optimal because taxpayers face several obstacles due to a lack of knowledge regarding the automation of the existing tax administration system (Marliana, Suherman, & Almunawwaroh, 2015). This is further exacerbated by SMEs that rarely plan tax payments correctly. If awareness of paying taxes is high, SMEs will be able to help increase state income (Nalendro, 2014). Thus, the hypothesis formulated is: H3: The use of information technology influences tax compliance through the mediation of orderly financial administration.
RESEARCH METHODS
The population in this study consisted of 150 financial managers and directors or owners of digital business or startup companies in Malang, based on data from stasion.org. Stasion.org is a startup community in Malang that serves as a place for startup initiators to develop their businesses. The registered startups come from several business sectors, namely applications, games, studios, creative communities, and co-working spaces. To determine the sample, this research used a saturated sampling technique, or census. The saturated sampling method determines the sample by using the entire population as the sample (Sugiyono, 2005). Therefore, the number of samples used was 150 respondents. The total number of questionnaires that could be processed was 136.
This research uses a computer-delivered survey technique, namely collecting data through the internet (Hartono, 2015). The survey was measured using a Likert scale, where respondents were asked to indicate their agreement on a scale of 1 (strongly disagree) to 5 (strongly agree). The analytical tool used is the Partial Least Squares (PLS) statistical test with the help of SmartPLS software. The questionnaire indicators for the variables use of information technology, orderly financial administration, and tax compliance were developed from the concepts of Kreher, Sellhorn, & Hess (2017), Aldalayeen, Alkhatatneh, & Al-Sukkar (2013), and Ramaswamy et al. (2011).
RESULTS AND DISCUSSION
The number of questionnaires distributed by the researchers was 150. Of the questionnaires distributed, 14 were not filled out, so the number of questionnaires that could be processed was 136. The majority of respondents were men (90.44%) aged 20-30 years (41.91%). Most of the survey participants were directors/owners (76.47%) who had worked at their current company for 1-2 years (58.09%). The majority held a bachelor's degree (S1) as their final education (67.65%), and the field of education most frequently pursued was management (50.74%).
All indicators in this research instrument are valid and reliable, so hypothesis testing can be carried out. This can be seen from the factor loading values of all constructs, which are more than 0.7, as well as the Average Variance Extracted (AVE) and communality values of more than 0.5. In addition, the square root of the AVE is greater than the correlations between latent variables. The Cronbach's Alpha value is more than 0.6 and the Composite Reliability value is above 0.7.
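As an illustration of the reliability threshold quoted above, Cronbach's alpha can be computed directly from Likert responses. The snippet below uses made-up answers; the study itself relied on SmartPLS rather than a hand calculation.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of Likert scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Made-up answers from 5 respondents to 4 questionnaire items (1-5 Likert scale)
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4]]
print(round(cronbach_alpha(scores), 2))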
The R² value for the financial administration construct in this research is 0.52, which means that 52.43% of the variation in the construct can be explained by the model. The R² value for the tax compliance variable is 0.53, meaning that 52.77% of the variation in the construct can be explained, while the rest is explained by other constructs outside this research model.
The model for testing the variables use of information technology (IT), orderly financial administration (AK), and tax compliance (KP) is shown in Figure 1. There are nine indicators for the IT construct, four indicators for the AK construct, and five indicators for the KP construct, each of which had its factor loading tested. All indicators are retained because they have values above 0.6, which indicates that the data have a high level of validity (meeting convergent validity).
The results of two-way hypothesis testing using SmartPLS can be seen in Table 1 and Table 2. These results show that the t-statistic values are greater than the t-table value. Thus, the Variance Accounted For (VAF) is calculated to determine whether the mediation in the relationship is full or partial. The VAF value is calculated as follows:

VAF = (indirect influence) / (direct influence + indirect influence)   (1)
VAF = (0.7241 x 0.6640) / (0.6769 + (0.7241 x 0.6640))
VAF = 0.4808 / 1.1577
VAF = 0.4153

The VAF value of 0.4153 indicates that the financial administration variable partially mediates the relationship between the use of information technology and tax compliance.
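The VAF computation above is simple enough to reproduce directly; the following sketch restates it as a small Python function using the path coefficients reported in the text.

def variance_accounted_for(a, b, c_direct):
    """VAF = indirect effect / (direct + indirect effect).

    a        : path coefficient, use of IT -> orderly financial administration
    b        : path coefficient, orderly financial administration -> tax compliance
    c_direct : direct path coefficient, use of IT -> tax compliance
    """
    indirect = a * b
    return indirect / (c_direct + indirect)

print(round(variance_accounted_for(0.7241, 0.6640, 0.6769), 4))   # 0.4153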
The use of ICT has a positive effect on orderly financial administration. This is supported by the questionnaire data, which show that software for financial management and administration is used in startups. There are several factors that make ICT influence orderly financial administration. The first factor is related to the most dominant age group of respondents, which is 20 to 30 years, usually called the Millennial Generation (Gen Y and Z). Olson et al.'s research (2011) found that young people (18-28 years) were better able to adapt to technology compared to older adults. Therefore, startups, which are dominated by young people, are more aware of and understand how to use technology, including the use of software in carrying out orderly financial administration.
The second factor is related to startup conditions, which require always prioritizing speed in work. Bhargava & Herman (2020) say that speed in getting products into the hands of customers is the main thing in startups. This need for speed is what makes performance procedures at startups closely related to the use of software, which is very helpful in increasing work productivity. Apart from that, Supardianto, Ferdiana, & Sulistyo (2019) also emphasized that the use of technology in all elements of a startup is an obligation that reflects the character of startups in general. Therefore, software can help speed up startup performance, including orderly financial administration.
Orderly financial administration has a positive effect on tax compliance. This indicates that the more orderly the financial administration, the greater the awareness of paying taxes. ICT is used to automate all economic processes, including taxes (Ramaswamy et al. 2011). As explained, the government has made it easy to pay taxes digitally through e-filing and e-billing. A good understanding of technology in financial administration, coupled with ease in paying taxes, is expected to optimize orderly financial administration, which is directly proportional to compliance with paying taxes.
The use of ICT has a positive effect on tax compliance mediated by orderly financial administration. This shows that the better the use of financial administration technology, the higher the awareness of tax compliance. Several factors explain the positive influence of technology on tax compliance through orderly financial administration in startups:

a. Efficiency. In a study conducted by Supardianto, Ferdiana, & Sulistyo (2019), startups are closely related to the use of technology in daily performance, including in orderly financial administration. This shows that startups have adaptability to and efficiency with technology. Apart from that, the government also makes it easier to pay taxes through web-based applications and apps, so that tax payments are greatly influenced by the orderliness of financial administration. Therefore, the continued use of technology for efficient financial administration, coupled with the ease of paying taxes, makes it easier for startups to pay taxes.

b. Funding. In this research, the average age of a startup is 1-2 years, when a startup has entered the phase of seeking large funding from investors. The provision of capital by investors must also be based on good and correct legal standing. One way to show that a startup is legally viable is by paying taxes. Therefore, the use of ICT and the existence of orderly financial administration will influence tax compliance.
The results of this study are in line with research by Sudrajat & Ompusunggu (2015), who found that the use of ICT has a positive and significant influence on tax compliance. This research is also in accordance with Aryati & Putritanti (2016), which shows a positive effect of orderly financial administration on tax compliance. Thus, the use of ICT and orderly financial administration will increase tax compliance. Apart from that, the results of this research are also in line with Napitupulu & Kadir (2014), who found that taxpayers feel the convenience and benefits of using the SPT reporting administration system with e-filing. This is demonstrated by an increase in the SPT submission compliance ratio after using a technology-based administration system.
CONCLUSIONS
The research results show that the influence of ICT use on orderly financial administration is positive, which in turn has a positive impact on tax compliance. This can be explained by various factors, such as the young average age of startup actors, the need for legality to obtain funding, and the ease of the tax payment system. It is hoped that the results of this research can encourage the optimal use of ICT, not only for orderly financial administration but also to increase tax compliance. Awareness of tax compliance is expected to increase along with taxpayers' awareness of utilizing ICT for orderly financial administration. The theoretical and practical implication of this research is that it can provide more benefits for orderly financial administration. This research has been attempted and carried out in accordance with scientific procedures, but it still has limitations, namely: 1) this research only looks at the impact of using ICT on financial administration and tax compliance, while there are still many other factors that can influence them; 2) the main object of this research is digital business, but the application of technology in daily activities has also reached other business fields; 3) there were obstacles in administering the questionnaire, as some respondents were not serious in answering the questions.
Table 1. Hypothesis Testing Results (Direct Influence)
Comparison of empirical use of low dose aspirin and enoxaparin in the treatment of unexplained recurrent pregnancy loss
Recurrent pregnancy losses have commonly been defined as three or more consecutive spontaneous pregnancy losses. Chromosomal abnormalities have been seen in 10-15% of miscarriages. About 1-2% of women suffer from recurrent miscarriages. The cause is multifactorial, including uterine anomalies, endocrine disorders, immunological causes, infections, chromosomal anomalies, and maternal autoimmune diseases. In 50-60% of cases of recurrent pregnancy loss, the cause remains unclear. In large meta-analyses, different thrombophilia polymorphisms have been identified as being associated with recurrent fetal loss. Therefore, interventions with thromboprophylaxis for the prevention of recurrent miscarriages have been proposed. According to some authors, thrombophilia markers are not the only criteria for the initiation of treatment, whereas other investigators suggest not treating unexplained miscarriage without evidenced antiphospholipid syndrome or inherited thrombophilia. It is a well known fact that thrombosis is common at the placental level whether antiphospholipid antibodies are present or not, suggesting that other pathological mechanisms are involved leading to the same outcome, i.e. fetal loss. LMWH (low molecular weight heparin) and aspirin in low doses have been used empirically to prevent recurrent pregnancy losses, though there is no consensus regarding the empirical use of antithrombotic therapy in unexplained pregnancy losses.
INTRODUCTION
Recurrent pregnancy losses have commonly been defined as three or more consecutive spontaneous pregnancy losses. 1 Chromosomal abnormalities have been seen in 10-15% of miscarriages. About 1-2% of women suffer from recurrent miscarriages. 2,3 The cause is multifactorial, including uterine anomalies, endocrine disorders, immunological causes, infections, chromosomal anomalies, and maternal autoimmune diseases. In 50-60% of cases of recurrent pregnancy loss, the cause remains unclear. 4,5 In large meta-analyses, different thrombophilia polymorphisms have been identified as being associated with recurrent fetal loss. Therefore, interventions with thromboprophylaxis for the prevention of recurrent miscarriages have been proposed. 6 According to some authors, thrombophilia markers are not the only criteria for the initiation of treatment. 7 Other investigators, however, suggest not treating unexplained miscarriage without evidenced antiphospholipid syndrome or inherited thrombophilia. It is a well known fact that thrombosis is common at the placental level whether antiphospholipid antibodies are present or not, suggesting that other pathological mechanisms are involved leading to the same outcome, i.e. fetal loss. 8 LMWH (low molecular weight heparin) and aspirin in low doses have been used empirically to prevent recurrent pregnancy losses, though there is no consensus regarding the empirical use of antithrombotic therapy in unexplained pregnancy losses. 9 LMWH is widely used as prophylaxis in recurrent miscarriages in general obstetric practice.
LMWH is the most commonly used agent in the existing trials. In our trial, we aimed to investigate whether the use of LMWH improves the live birth rate when compared to aspirin, to determine which thromboprophylactic treatment is best to prescribe to women with recurrent miscarriage without a known thrombophilia.
The objective of this study was to compare the maternal and fetal outcomes in patients with unexplained recurrent pregnancy loss treated with LMWH (enoxaparin) versus aspirin.
METHODS
Women with 3 or more pregnancy losses, aged between 18-40 years, booked for antenatal care and delivery in our hospital between January 2012 and December 2016 were followed till 6 months after delivery.
Inclusion criteria
• All women had normal results for parental karyotyping, FBS, RFT, serum TSH, serum prolactin and homocysteine levels and all included patients had been screened negative for thrombophilia.
Exclusion criteria
• Women with cardiovascular disease, bleeding diathesis, previous thromboembolic phenomena, DM, vaginal bleeding, multiple pregnancies, smoking, morbid obesity, presence of contraindication for anticoagulants were also excluded.
Patients were divided into two groups: Group A received aspirin and Group E received LMWH (enoxaparin). Group A (62 patients) and Group E (84 patients), all with unexplained RPL, were included in the study. In total, 146 patients were followed throughout their pregnancies and delivered in our hospital. As soon as pregnancy was confirmed, enoxaparin was given at 1 mg/kg/day as a subcutaneous injection in all patients who had taken aspirin in a previous pregnancy with a poor outcome, and aspirin was given at 75 mg once a day orally in those patients who had not taken aspirin in a previous pregnancy. All pregnant women underwent prenatal screening. Adherence to treatment was confirmed during follow-up. Patients were reviewed every 2-3 weeks until 28 weeks, then every 2 weeks between 28-34 weeks, then weekly until delivery to assess fetal growth, fetal well-being, and drug side effects. LMWH (enoxaparin) was stopped 12 hours before delivery. In preterm patients, aspirin was stopped with the onset of labour; it was continued in patients who reached 36 weeks. Outcomes were recorded as live birth rate, abortion rate, number of women with pre-eclampsia, IUGR, and placental abruption, and drug side effects such as thrombocytopenia, thromboembolic episodes, injection-site hematoma, subcutaneous bruising, and allergic skin reaction.
All infants were examined by a paediatrician after delivery. Perinatal outcome was evaluated in terms of birth weight, gestational week, neonatal mortality, and congenital anomalies.
Data were described in terms of ranges, frequencies (number of cases), and relative frequencies (percentages) as appropriate. For comparing categorical data, the Chi-square (χ²) test was performed, and an exact test was used when the expected frequency was less than 5. A probability value (p value) less than 0.05 was considered statistically significant. All statistical calculations were done using SPSS (Statistical Package for the Social Sciences) version 17 for Microsoft Windows.
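For readers who want to reproduce this kind of comparison outside SPSS, the sketch below shows an equivalent chi-square test in Python, with an exact-test fallback for sparse tables. The counts in the example are hypothetical and are not the study's data.

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def compare_outcome(group_a_counts, group_e_counts):
    """Chi-square comparison of a categorical outcome between two groups,
    falling back to Fisher's exact test when expected cell counts are below 5."""
    table = np.array([group_a_counts, group_e_counts])
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    if (expected < 5).any() and table.shape == (2, 2):
        _, p = fisher_exact(table)
    return chi2, p

# Hypothetical counts of [live births, pregnancy losses] in Groups A and E
chi2, p = compare_outcome([50, 12], [70, 14])
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")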
RESULTS
A total number of 146 women were assessed for eligibility. We had 62 women in Group A (aspirin group) and 84 women in Group E (enoxaparin group). Chi-square value = 0.636, p value = 0.727.
All the patients enrolled in the study had more than three consecutive abortions. Fifteen patients in Group A and 24 patients in Group E had 4-6 abortions. Four patients in Group A and seven patients in Group E had seven or more than seven abortions (Table 2). Chi-square value = 0.0908, p-value = 0.993.
The majority of patients in both groups delivered between 32-36 weeks of gestation (Table 3), with the maximum number of live births (Table 4).
The maximum number of IUGR babies were born between 32-36 weeks of gestation in both groups. There were four neonatal deaths, three in Group A and one in Group E (Table 5). More patients delivered vaginally in Group A compared to Group E, though the difference was not statistically significant (Table 6). The incidence of placenta previa, placental abruption, PIH (pregnancy induced hypertension), and PPH (postpartum hemorrhage) was comparable in both groups (Table 7). Liveborn: chi-square value = 0.0407, p value = 0.938; stillborn: chi-square value = 0.938, p value = 0.816; IUGR: chi-square value = 0.395, p value = 0.821; NND: chi-square value = 0.444, p value = 0.505.
DISCUSSION
Recurrent miscarriage (RM) is defined as three or more consecutive miscarriages occurring before 20 weeks postmenstruation. 10 Around 1% of fertile couples will experience recurrent early pregnancy losses. 11 The risk of recurrence increases with the maternal age and number of successive losses. 12 Recurrent miscarriage has been directly associated with parental chromosomal abnormalities, maternal thrombophilic disorders and structural uterine anomalies. 13,14 Maternal immune dysfunction and endocrine abnormalities have also been postulated in recurrent pregnancy losses. 14 The majority of cases of recurrent pregnancy losses following investigation are classified as idiopathic, that is, no identifiable cause is identified in either partner. It is generally accepted that within the idiopathic group there is considerable heterogeneity and it is unlikely that one single pathological mechanism can be attributed to their recurrent miscarriage history. 10 Current research is directed at theories on defects in nature's quality control related to implantation, trophoblast invasion and placentation, as well as factors, which may be embryopathic. 15 High-quality data on the management of RPL (recurrent pregnancy loss) are limited and reported studies on the aetiology, evaluation and management of RPL are mostly observational. For these reasons, therapeutic recommendations are largely based on clinical experience and data from these observational studies. 16
Acetylsalicylic acid
Low-dose aspirin is an antiplatelet agent which irreversibly inhibits platelet cyclo-oxygenase and thereby decreases the production of thromboxane A2 (TXA2), a potent vasoconstrictor and platelet activator. The hypothesis of an impaired placental circulation due to microthrombosis as a cause of RPL is the rationale for treatment with ASA. 17 LMWHs have been found to be effective in improving the live birth rate even in the absence of demonstrated etiologic factors. Many properties of heparin have been invoked for this purpose. Besides its anticoagulant activity, heparin has an anti-inflammatory effect; deciduas from women with recurrent miscarriages show common pathology, namely necrosis, acute and chronic inflammation, and vascular thrombosis, compared with those of women with normal pregnancies. 18 Heparin also has an anticomplement effect, which is required to prevent pregnancy loss and thrombosis. 19,20
CONCLUSION
Live birth rates did not show a significant difference between the two study groups. However, the empirical use of enoxaparin in patients with no live birth who had taken aspirin (Ecosprin) in a previous pregnancy showed improved results, so enoxaparin should be used empirically as a first-line agent in patients with recurrent pregnancy loss.
Psychological symptoms in anophthalmic patients wearing ocular prosthesis and related factors
Abstract Anophthalmia not only causes obvious functional deficits and facial deformities but also leads to poor psychological outcomes, although prosthesis wearing can offer improvements in psychological well-being to some extent. The study aimed to comprehensively evaluate the psychological symptoms and analyze related factors in anophthalmic patients wearing an ocular prosthesis. A total of 150 anophthalmic patients and 120 control subjects were included in this cross-sectional study. A baseline characteristics survey and the symptom checklist-90 scale were completed by all participants to assess the psychological symptoms and analyze their related factors by multivariate analysis. The anophthalmic patients exhibited increased levels of somatization, depression, anxiety, and hostility compared with control subjects. The most prominent symptom was hostility, with a median score of 1.20. Female patients presented with higher somatization, depression, anxiety, and hostility. Single marital status was positively associated with depression, anxiety, and hostility symptoms. Lower education and cause of enucleation were related to higher levels of hostility. Anophthalmic patients wearing an ocular prosthesis presented with more prominent hostility and somatization in addition to higher depression and anxiety symptoms. The findings suggest that for female, single anophthalmic patients with low education, especially when enucleation was caused by trauma, timely psychological assessment and intervention should be provided to avoid undesirable consequences.
Introduction
Eye enucleation is the surgical removal of the eyeball, involving the separation of all connections between the globe and orbit. [1] It is considered to be the primary treatment for patients with end-stage ocular diseases, including severe ocular trauma, intraocular malignancy, painful blind eye, phthisis bulbi, etc. The causes of eye enucleation and its associated demographic and clinical factors have been widely investigated. Trauma or tumor is reported to be the most prevalent cause of enucleation, while the most prevalent indication for eye enucleation in children is retinoblastoma. [2][3][4][5][6] Orbital implant insertion and prosthesis wear after enucleation are usually performed to maintain cosmetic symmetry with the fellow eye, with high levels of patient satisfaction. [7] It is well known that eyes play an important role in both physiological function and the maintenance of appearance. Eye enucleation may not only cause obvious functional deficits and facial deformities, but also lead to poor psychological outcomes, especially in cases of losing eyes due to unexpected trauma or malignancy. [1] Previous studies have emphasized the importance of psychological outcomes after eye enucleation. It was reported that some anophthalmic patients experienced visual hallucinations after enucleation, often in the immediate postoperative period. [8] Quality of life was found to be obviously affected in anophthalmic patients, which was related to high levels of anxiety and depression. [9] Wang et al also reported that levels of anxiety, depression, and quality of life were significantly poorer than population norms, but orbital implant insertion and prosthesis wearing offered significant improvements in psychological and physical functioning for patients with anophthalmia. [10] Nevertheless, the proportion of anophthalmic patients wearing an ocular prosthesis who have clinical anxiety and depression has remained high so far, and more anxiety and depression were associated with poorer vision-related quality of life and greater levels of appearance concerns. [11] Interestingly, other investigators found that levels of anxiety and depression in patients wearing ocular prostheses were within the normal range. [12] The different findings may be partly explained by differences in sociodemographic characteristics and different cultural backgrounds. However, it is worth mentioning that only 2 psychological symptoms, anxiety and depression, have been frequently and widely investigated in previous studies. Other less evaluated psychological symptoms may be even more important for assessing the psychological status of anophthalmic patients wearing ocular prostheses, which could be another possible reason for disparities in the symptoms reported among different studies. It is therefore necessary to further comprehensively evaluate the psychological symptomatology of these patients.
The symptom checklist-90 (SCL-90) questionnaire covers a broad range of concomitant clinical psychological symptoms and is widely used in various fields of medicine. [13][14][15] The aim of this study was to comprehensively evaluate psychological status with the SCL-90 and to analyze the related factors in 150 anophthalmic patients and 120 healthy control subjects. These evaluations should contribute to a better understanding of the psychological health status of patients wearing ocular prostheses and point to potential therapeutic opportunities through psychological interventions.
Subjects
This cross-sectional study was approved by the Ethics Board of Beijing Tongren Hospital and performed in accordance with the Declaration of Helsinki. All patients provided written informed consent prior to enrollment. Patients aged over 20 years who were living with an ocular prosthesis after eye enucleation at Beijing Tongren Hospital, Capital Medical University, from January to December 2019 were recruited. A total of 165 participants were enrolled 1 month after they began wearing an ocular prosthesis, and completed questionnaires were received from 150 (90.9%) of them. The healthy control group was recruited from the Medical Examination Center of Beijing Tongren Hospital and matched for age, sex, and education. Before study participation, the procedures, potential benefits, and risks of the study were explained to all participants to reduce potential sources of bias as much as possible.
Survey instruments
Each participant completed 2 questionnaires and could ask professional staff for help if they had difficulty understanding any of the questions. In the first questionnaire, participants answered general questions about gender, age, marital status, education, residence, enucleated eye, and cause of enucleation. The second questionnaire was the symptom checklist-90 (SCL-90), [16] a 90-item self-report clinical rating scale that evaluates a broad range of psychological problems experienced during the last week. It consists of nine primary symptom scales: somatization, obsession-compulsion, interpersonal sensitivity, depression, anxiety, hostility, phobic anxiety, paranoid ideation, and psychoticism. Items are scored on a five-point Likert scale ranging from 0 (none) to 4 (extreme). Lower scores indicate better mental health. [17]
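To make the scoring procedure concrete, the following minimal Python sketch computes subscale scores as the mean of the items belonging to each subscale, using 0-4 Likert responses. The item-to-subscale assignment and the example answers shown here are placeholders for illustration, not the actual SCL-90 key or study data.

# Minimal sketch of SCL-90-style subscale scoring (illustrative only).
# The item-to-subscale mapping below is a placeholder, NOT the real SCL-90 key.
from statistics import mean

SUBSCALE_ITEMS = {
    "somatization": [1, 4, 12],      # hypothetical item numbers
    "depression":   [5, 14, 20],
    "anxiety":      [2, 17, 23],
    "hostility":    [11, 24, 63],
}

def subscale_scores(responses):
    """responses maps item number -> Likert rating (0 = none ... 4 = extreme)."""
    scores = {}
    for scale, items in SUBSCALE_ITEMS.items():
        ratings = [responses[i] for i in items if i in responses]
        scores[scale] = round(mean(ratings), 2) if ratings else float("nan")
    return scores

# Example: one participant's made-up answers
answers = {1: 2, 4: 1, 12: 1, 5: 0, 14: 1, 20: 2, 2: 1, 17: 2, 23: 0, 11: 3, 24: 1, 63: 2}
print(subscale_scores(answers))  # lower values indicate better mental health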
Statistical analysis
Statistical analysis was performed using SPSS 25 (IBM, Chicago, IL, USA). Normality of quantitative variables was assessed with the Kolmogorov-Smirnov test; as the variables were non-normally distributed, they are reported as medians with quartiles. The sample size was calculated by a power analysis using G*Power 3.1.9.2, which suggested a sample size of 90 with α = 0.05 and statistical power (1 − β) = 0.9. [18] Continuous variables were compared with the Mann-Whitney test, while categorical variables were analyzed using the χ² test. General linear mixed models were used to estimate the effect of independent variables (gender, age, marital status, education, residence, enucleated eye, and cause of enucleation) on the main psychological symptoms. A two-sided P value <0.05 was considered statistically significant for all analyses. Bonferroni correction was applied for multiple comparisons by dividing the significance level by the number of comparisons.
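As an illustration of the group comparisons described above, the sketch below (requiring NumPy and SciPy) runs a Mann-Whitney U test on two fabricated score samples and a χ² test on a 2×2 contingency table, then derives a Bonferroni-adjusted significance threshold. All numbers are placeholders, not study data.

# Illustrative sketch of the reported tests; all numbers are fabricated placeholders.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
patients = rng.normal(1.2, 0.5, 150)   # e.g., hostility scores in patients (fabricated)
controls = rng.normal(0.8, 0.5, 120)   # e.g., hostility scores in controls (fabricated)

u_stat, p_continuous = mannwhitneyu(patients, controls, alternative="two-sided")

# e.g., gender distribution (rows: group, columns: male/female; fabricated counts)
table = np.array([[80, 70], [65, 55]])
chi2, p_categorical, dof, _ = chi2_contingency(table)

n_comparisons = 9                       # nine SCL-90 subscales
alpha_bonferroni = 0.05 / n_comparisons
print(f"Mann-Whitney p={p_continuous:.4f}, chi2 p={p_categorical:.4f}, "
      f"Bonferroni threshold={alpha_bonferroni:.4f}")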
Baseline characteristics
A total of 150 patients and 120 controls were included in this study. Table 1 shows the baseline characteristics of the 2 groups. The median (quartiles) age of the patients was 34.0 (30, 39) years. The causes of enucleation were as follows: 71 (47.3%) trauma; 36 (24.0%) tumor; 27 (18.0%) phthisis bulbi; 16 (10.7%) infection. Trauma was the main cause of enucleation. There were no significant differences between anophthalmic patients and control subjects in age, gender, marital status, or level of education.
Psychological symptoms
Comparison of the 9 SCL-90 scales between anophthalmic patients and control subjects is shown in Table 2. The scores for somatization, depression, anxiety, and hostility were significantly higher in anophthalmic patients than in control subjects. The most prominent symptom in anophthalmic patients was hostility, with a median score of 1.20 (0.7, 1.7). There were no statistical differences between the 2 groups in terms of interpersonal sensitivity, phobic anxiety, paranoid ideation, or psychoticism.
Correlations between main psychological symptoms and baseline characteristics in anophthalmic patients
We next investigated the correlations between the main psychological symptoms and baseline characteristics in anophthalmic patients by multivariate analysis using the linear mixed model (Table 3). Somatization scores were significantly associated with gender, but not with the other variables, in the multivariate model adjusted for age, marital status, level of education, enucleated eye, residence, and cause of enucleation. There was a statistically significant correlation between depression and gender, age, and marital status, adjusted for education, enucleated eye, residence, and cause of enucleation. Depression scores were significantly higher in female and single patients than in their male and married counterparts, respectively (P < .05). Patients between 40 and 59 years of age had a significantly lower depression level than the other 2 age groups (P < .05/2). There was a statistically significant correlation between anxiety and gender, age, marital status, and cause of enucleation, adjusted for education, enucleated eye, and residence. Single marital status and female gender were positively associated with anxiety (P < .05). Patients between 40 and 59 years of age had a significantly lower anxiety level than the other 2 age groups (P < .05/2). Trauma as the cause of enucleation was significantly associated with higher anxiety (P < .05/3). Hostility scores were significantly associated with gender, marital status, level of education, and cause of enucleation, with adjustment for age, enucleated eye, and residence. Single marital status and female gender were positively associated with hostility (P < .05). There was a statistically significant correlation between hostility and a low level of education (P < .05/2), as well as between hostility and the cause of enucleation, especially trauma and tumor (P < .05/3).
Discussion
In this study, we assessed the psychological symptoms of anophthalmic patients wearing ocular prostheses and then analyzed the related factors. The anophthalmic patients exhibited increased levels of somatization, depression, anxiety, and hostility compared with the control subjects. Of these, the most prominent symptom was hostility, with a median score of 1.20. The severity of psychological symptoms was associated with gender, age, marital status, level of education, and cause of enucleation. Supportive interventions should be provided to relieve the main psychological symptoms in order to improve patients' mental health. The eyes are known to be crucial for interpersonal communication and physical appearance, and the psychological effect of eye enucleation can be profound and far-reaching over a patient's lifetime. Ocular prosthesis wearing can reduce anxious and depressive symptoms and lessen appearance-related social avoidance. [19] However, vision-related quality of life and appearance concerns remain associated with anxiety and depression after eye enucleation, and anxiety and depression remain prevalent in anophthalmic patients wearing ocular prostheses. [11,20] Psychological assessment in previous studies was mainly based on questionnaires evaluating anxiety and depression; other psychological symptoms should also be considered. In this study, we found a higher global level of psychological distress in the patients than in the control subjects. It is of interest to note, however, that the difference was not generalized across all subscales of the questionnaire, but was restricted to specific domains: anxiety, depression, somatization, and hostility. While the elevation of the first 2 subscales (depression and anxiety) is comparable with the findings of previous studies, [9][10][11] the last 2 subscales (hostility and somatization) were more prominent in patients than in controls and contributed more to the psychological distress experienced.
Eye enucleation followed by prosthesis wearing is a complex condition: patients must not only make treatment decisions but also bear heavy physical and mental burdens. In this study, symptoms of depression, anxiety, hostility, and somatization were statistically associated with the anophthalmic state, which probably reflects the objectively and subjectively challenging monocular-vision environment. Among these symptoms, hostility was the most prominent in anophthalmic patients according to the SCL-90 scale. Hostility is an understandable psychological reaction to frustrating stresses, stemming from concerns about the loss of the eyeball and of vision, physical appearance, financial and emotional burdens, and work-related changes, particularly for anophthalmic patients whose eye loss was caused by unexpected trauma. The somatization score on the SCL-90 scale was also notably higher in patients, suggesting that physical symptoms such as headache, anorexia, or sleeplessness might be a consequence of abnormal psychological status. This might be due to the fact that Chinese patients tend to express certain psychological problems through somatic symptoms. [21] Therefore, hostility and somatization scores may be of clinical importance in assessing mental health in patients wearing ocular prostheses.
In this study, the correlations between the main psychological symptoms and baseline characteristics in anophthalmic patients were assessed with the linear mixed model. We found that women with ocular prostheses differed significantly from men in all of the elevated symptom dimensions. Specifically, female patients presented with higher somatization, depression, anxiety, and hostility. Other studies have also reported that women with ocular prostheses had higher stress sensitivity and a higher risk of depression and anxiety than men. [9,22,23] Moreover, gender was the only factor significantly associated with somatization scores in this study, suggesting that female patients tended to somatize more. The reason may be that female patients are less likely to accept their current condition and respond more negatively to their overall situation, leading to somatic complaints. We also found that single marital status was positively associated with depression, anxiety, and hostility symptoms, which highlights the importance of family support. However, one study from Korea showed that marriage was associated with a lower quality of life, a result the authors attributed to the extra responsibilities and labor imposed as a consequence of the patient's disability. [9] Similar to previous studies, [11] lower levels of education were found to be related to higher levels of hostility. More importantly, there was a statistically significant correlation between hostility and the cause of enucleation, especially trauma. Severe trauma is often followed by re-experiencing of spontaneous memories of the traumatic event, recurrent trauma-related dreams, flashbacks, or other prolonged psychological distress. [24] These changes are reflected in negative alterations in mood and cognition. It has been reported that high cognitive function is linearly associated with low hostility levels. [25] Therefore, strengthening cognitive function by encouraging healthier lifestyles and higher educational or professional achievement is of great importance for improving the mental health of anophthalmic patients.
Some limitations of this study are worth noting. First, the study is exploratory and cross-sectional, which precludes an examination of how patients change over time; a longitudinal design is preferred for future investigations. Second, given that the present sample consisted of patients 1 month after prosthesis fitting, they may still have been in the process of adapting. Studies using more structured interviews and more disease-specific instruments will contribute to a better understanding of these patients. Third, although some baseline factors were significantly associated with the main psychological symptoms, other clinical factors, including visual function in the remaining eye, appearance concerns, and comfort of the prosthesis, might also influence them. For future studies, it would be interesting to explore the role of combined related factors on psychological health status.
In conclusion, we found that anophthalmic patients wearing ocular prostheses exhibited high levels of somatization, depression, anxiety, and hostility. Of these, hostility was the most prominent psychological symptom and was related to gender, marital status, level of education, and cause of enucleation. The findings suggest that for female, single anophthalmic patients with low education, especially those whose eye loss was caused by trauma, timely psychological assessment and intervention should be provided to avoid undesirable consequences.
Data lake concept and systems: a survey
Although big data has been discussed for some years, it still poses many research challenges, especially regarding the variety of data. This variety makes it hugely difficult to efficiently integrate, access, and query the large volume of diverse data held in information silos with traditional ‘schema-on-write’ approaches such as data warehouses. Data lakes have been proposed as a solution to this problem. They are repositories storing raw data in its original formats and providing a common access interface. This survey reviews the development, definition, and architectures of data lakes. We provide a comprehensive overview of research questions for designing and building data lakes. We classify the existing data lake systems based on their provided functions, which makes this survey a useful technical reference for designing, implementing and applying data lakes. We hope that the thorough comparison of existing solutions and the discussion of open research challenges in this survey will motivate the future development of data lake research and practice.
Introduction
Big data has undoubtedly become one of the most important challenges in database research. Unprecedented volume, large variety, and high velocity of data need to be collected, stored, and processed with high data quality (veracity) to provide value.
Especially the variety of data still poses a daunting challenge with many open issues [1]. Web-based business transactions, sensor networks, real-time streaming, social media, and scientific research generate a large amount of semi-structured and unstructured data, often stored in separate information silos. Only the combination and integrated analysis of the information across these silos achieve the required valuable insight.
Traditional schema-on-write approaches such as the extract, transform, load (ETL) process are too inefficient for such data management requirements. This has drawn the interest of many developers and researchers to NoSQL data management systems. NoSQL systems provide data management features for large amounts of schema-less data, which enables a schema-on-read manner of data handling, i.e., the structure of data is not required for storing but only when further analyzing and processing the data. Open-source platforms such as Hadoop with higher-level languages (e.g., Pig 1 and Hive 2 ), as well as NoSQL databases (e.g., MongoDB 3 and Neo4j 4 ), are gaining popularity and deployment scenarios. Although the current market share is still dominated by relational database systems, a one-size-fits-all big data system is unlikely to solve all the challenges of data management today.
To address this gap, data lakes (DLs) have been proposed as big data repositories, which store raw data and provide a rich list of functionalities with the help of metadata descriptions [125,119,124,107]. Various solutions and systems have been proposed to address many of the data lake challenges. However, as 'data lake' is a current buzzword, there is much hype about it without an exact understanding of what it is. Moreover, most recent data lake proposals target only a specific research problem or certain types of source data. As the concept is relatively new, there has been only one survey [112] discussing certain aspects of data lakes (i.e., architecture and metadata management). A coherent, complete picture of DL problems and solutions is still missing.
Contributions. Our survey provides a thorough explanation of the DL concept and categorizes the existing data lake solutions. The survey also aims at helping to build or customize a data lake, and discover open questions and future research directions about data lakes. We summarize the contributions as follows.
- We review the almost ten-year development of the data lake concept and implementations. We provide the definition of data lakes and compare it with other similar concepts.
- We categorize the necessary functions in a data lake and propose a fine-grained architecture, which clarifies the workflow and function components for building a data lake system.
- We provide a general classification of existing data lake studies according to their provided functions. We analyze each class of research problems in depth and compare the data lake solutions.
- We illuminate a few new directions, applications, and potential technologies of data lakes.
Notably, different from a recent tutorial [95] that discusses challenges and potentially useful technologies for data lakes, this survey focuses on systems that claim to be a data lake, or provide partial functions of a data lake. Moreover, the comparison in this survey will mainly cover implemented systems that resolve certain research problems of data lakes, rather than high-level DL system proposals or commercial DL products. Refer to [110,65] for a more industrial point of view.
Survey outline. The rest of the survey is divided into two parts. In the first part, we first discuss the origin (Sec. 2), definition (Sec. 3), and existing architectures (Sec. 4.1) of data lakes. We propose a data lake architecture and categorization criteria for the survey in Sec. 4.2-4.3. In the second part of the survey, we mainly compare existing data lake solutions with regard to data storage (Sec. 5), ingestion (Sec. 6), maintenance (Sec. 7), and exploration (Sec. 8). In Sec. 9 we review formal schema mapping and provenance aspects of metadata management. We discuss some new directions and applications of data lakes in Sec. 10 and conclude the survey in Sec. 11.
A brief history of data lakes
As of this writing, the concept of data lakes is about a decade old and has significantly evolved in this period. We summarize this evolution in three stages.
2010-2013: Beginnings
The concept of data lake was first coined in the industry. In 2010, it was first proposed by Pentaho CTO James Dixon, as a solution that handles raw data from one source and supports diverse user requirements [34]. This was seen in sharp contrast to data warehouses or data marts for which the structure and usage of the data must be predefined and fixed, and rigorous data extraction, transformation, and cleaning are necessary before entering data. By storing raw data in the original format, data lakes could avoid or delay this expensive standard preprocessing.
In 2013, Pivotal proposed an architecture for a business data lake [21], which ingests multiple data sources in three abstract tiers: (1) an ingestion tier takes in data in real time, micro-batches, or batches; (2) an insight tier analyzes data in real time or interactive time and derives insights, and an action tier links these insights with existing applications; (3) additional tiers monitor and manage the data. The company also suggested using Hadoop as the storage system of the data lake and applying their existing products to realize the aforementioned tiers. However, with respect to the actual implementation of a data lake, not many details were given.
2014-2015: Criticisms and further development
In 2014, Gartner raised several criticisms of data lakes [49]. When disparate data is ingested into a data lake without metadata management or data governance, the lake may easily turn into an unusable "data swamp". In particular, after ingestion, the semantics and data quality of the raw data are unknown, and the origin (provenance) of individual datasets and their possible connections to other datasets are missing. It would be difficult for users to retrieve useful information from, or find any possible application of, such a data swamp. In addition, Gartner pointed out that the existing data lake solutions did not provide a good answer to how oversight of data security and privacy should be conducted. These crucial criticisms had a significant influence on many data lake studies in the following years, which we elaborate on in Sec. 6-7. In response, Dixon revisited the concept [35] and emphasized that a data lake should also be equipped with metadata and governance, such that even with data in its raw form instead of a transformed/aggregated form, a data lake could enable ad-hoc data analytics.
Meanwhile, more DL proposals started to emerge. They brought more requirements, solutions, and challenges. PwC has defined a data lake as a repository of structured, semi-structured, unstructured data in heterogeneous formats [119], originating from the business transactions, sensors, or mobile/cloud-based applications. With Hadoop in the center, a data lake should provide a low-cost data storage that is easy to access, yet in a schema-on-read manner, i.e., the data and metadata (e.g., semantics) can grow over time. They postulate that a data lake actively extracts metadata from the raw data and stores it; then it discovers patterns about the raw data. Moreover, users provide additional descriptive information of datasets (e.g., semantic annotations, domain-specific knowledge, and attribute linkages). The dynamic interaction between the data lake and users should thus continuously improve the quality and value of data.
Meanwhile, IBM proposed more sophisticated information governance and management [26], and defined an enhanced DL solution, i.e., data reservoir, and refined it by the notion of data wrangling [124] for data governance issues such as license checking or visualization. A data reservoir groups the diverse input datasets by their types of usage. Each group of datasets is referred to as a zone. It maintains the catalog with descriptive details about data. Moreover, the IBM proposal allowed multiple manners of interaction between users and the stored raw data, e.g., discover datasets with the help of catalogs, and access the data with API-based service calls or user-defined decision models.
Other proposals addressed new possibilities such as Artificial Intelligence (AI) and Crowdsourcing to facilitate data integration, access, and quality improvement in data lakes [99]. For example, AI helps extract features of data, generate tags with descriptive metadata, find related datasets, discover possible structures from schema-less data, and reduce data redundancy. Crowdsourcing can help with collectively tagging semantic knowledge about the data, and linking the possible relationships among datasets.
With a special focus on security information and event management, how to store and access the data properly is discussed in [88]. The importance of meta-data management is emphasized in [125] with an architecture to parse, store, and query heterogeneously structured personal data. There is also a proposal [40] emphasizing the importance of Human-in-the-loop, e.g., data scientists govern the data in data lakes.
2016-present: Prosperity and diversity
Since 2016, the realization of data lakes in both industry and the academic community has been booming. There are proposals about data lake architectures [115,66,83,116], as well as concepts, components, and challenges [92,89,106].
Many IT companies offer commercial tools for building data lakes, e.g., GOODS [63] from Google, Azure Data Lake [109] from Microsoft, AWS Lake 5 from Amazon, Vora from SAP [114], IBM and Cloudera 6, Oracle 7, and Snowflake 8. In particular, Delta Lake 9 from Databricks is open-source and offers the storage layer compatible with Apache Spark 10 APIs.
Data lake definition
In this section, we first give a general definition of data lakes, then compare data lakes with similar concepts, i.e., data warehouses and dataspace systems. We also illustrate the application of data lakes with an example from a manufacturing process.
Definition
Based on the discussion in Sec. 2, we extract the most essential aspects of a general-purpose data lake, which are also agreed upon in most of the existing literature, and define the concept of data lakes as below.
Definition 1 (Data Lake) A data lake is a flexible, scalable data storage and management system, which ingests and stores raw data from heterogeneous sources in their original format, and provides query processing and data analytics in an on-the-fly manner.
Regarding the definition, several remarks are in order.
Data lakes store raw data. A data lake ingests raw data in its original format from heterogeneous data sources, fulfills its role as a storage repository, and allows users to query and explore the data. The ingestion of raw data may lead to the absence of schema information, constraints, and mappings, which are not defined explicitly or required initially for a data lake. However, metadata management is crucial for data reasoning, query processing, and data quality management. Without any metadata, the data lake is hardly usable as the structure and semantics of the data are not known, which turns a data lake quickly into a 'data swamp'. Therefore, it is important to extract as much metadata as possible from the data sources during the ingestion phase. If sufficient metadata cannot be extracted from the sources automatically, a human expert has to provide additional information about the data source.
A data lake is not only a storage system. The primary role of a data lake is a data repository, which can be a centralized repository or a set of distributed repositories. However, a data lake is not only a storage system, e.g., a storage repository on top of a Hadoop file system. It needs to provide a set of functions to manage and govern the data such that it can be used later. Notably, in the existing literature the term "data lake" is used in both cases: solely the storage layer (i.e., Hadoop or certain databases), or a system providing storage and further services (e.g., metadata management). In this survey, we apply the latter meaning of a data lake (system).
Data lakes support on-demand data processing and querying. One important feature of data lakes is on-the-fly, or on-demand, processing, which means that schema definition, data integration, or indexing is done only if necessary at the time of data access. This might lead to more cost or effort for accessing the data, but it also increases the flexibility for posing dynamic, ad-hoc queries or applications of data. In data lakes, the ingestion of data sources can be lightweight, as it is not necessary to enforce schema definitions and mappings beforehand. Moreover, the metadata in data lakes matures incrementally. For instance, data marts can be defined when a user queries the data lake. Such information is stored to help with possible future queries, e.g., joined attributes.
Data lakes and data warehouses
A data warehouse (DW) manages, integrates, and aggregates data from multiple sources, and provides data analytics for decision making via multi-dimensional data cubes in data marts [67,2]. The focus is on high data quality, usage of compact historical data, and interactive user support using pre-defined multi-dimensional user views or data marts [69]. Based on [31,55] and the data lake definition given in Sec. 3.1, we summarize key differences between DW and data lakes in Table 1.
In data warehouses, heterogeneous source datasets need to first go through the Extract/transform/load (ETL) process. While enabling compact continuous embedding of new experiences with historical information, ETL is time-consuming and requires careful design, thus precluding real-time analytics. DW data is stored and handled as structured data in relational databases or cube structures. In contrast, in data lakes the transformation step is delayed and data is loaded in its original structure (i.e., load-as-is) to reduce upfront cleaning and integration effort and to make full source data available for later data analysis (i.e., pay-as-you-go). Data lakes aim to process more heterogeneous sources including semi-structured and unstructured sources, and to manage these different data models efficiently using multiple dedicated kinds of storage (cf. Sec. 5). To access the data, a DW usually applies (multi-dimensional extensions of) SQL queries, while a data lake may need to support different query languages (e.g., SQL, Cypher 11 , JSONiq 12 ), and programming languages (e.g., Python, Java, R). Further differences in system design and implementation between data lakes and data warehouses can be derived from these key differences.
Some industrial proposals also suggest using data lakes simply to store massive raw data, or data whose usage is not yet clear; later, a subset of cleaned, transformed, nonvolatile data is stored in data warehouses, or, in the so-called Lambda architecture, clean integrated warehouse data is mixed with current raw data streams in data lakes to combine real-time capability with data warehouse quality.
Data lakes and dataspaces
Before the emergence of data lakes, a similar concept was proposed by Microsoft Research, initially for better personal data management: dataspaces [45,61]. These early dataspace prototypes tackled the problems of how to organize, integrate, discover, and query data from several loosely interrelated data sources. They already mentioned a pay-as-you-go style of data integration to reduce overhead, and suggested multiple services such as a metadata catalog, storage, indexing, event detection, monitoring, support for complex workflows, effective reuse of human input, and handling of data uncertainty and inconsistency. Similar to data lakes, dataspace proposals of the 2000s faced several levels of variety problems. The data sources may have structured (e.g., relational databases), semi-structured (e.g., XML), or unstructured (e.g., text) formats. An early proposal was to have multiple data models with a hierarchy of expressive power: starting at the top with only basic metadata, a data source is a named Resource Description Framework (RDF)-like resource with basic information indicating its type and size; next, a bag-of-words data model supports keyword queries; third, a labeled-graph data model supports simple path or containment queries, or complex queries based on the semi-structured data model; and finally, there are individual local data models such as relational, XML, RDF, Web Ontology Language (OWL), etc. Query processing solutions in such a dataspace should be able to describe a collection of various relationships among participating sources and provide multiple ways of data exploration such as keyword search, structured queries, result ranking, and source/participant discovery. We will see in Sec. 7 and 8 that similar research questions are also addressed in data lakes.
Stimulated by growing concerns about losing data sovereignty in the presence of dominating Internet platforms, the industrial data space initiative extends these issues with a strong emphasis on sovereign data exchange among local data spaces/data lakes of small and medium IT users or vendor organizations [70]. Additional requirements collected from a broad range of industries and public organizations [101] include, e.g., trusted connectors controlling the import and export of data among dataspace partners, as well as usage control policies and services, in addition to what is needed for data lakes. Starting in 2015, the International Data Space Association (https://www.internationaldataspaces.org/) produced a reference architecture for the concept and contributed many of its components to GAIA-X [53], a central component of the Data Strategy of the European Union. In summary, the dataspace concept can be considered a complement to the data lake approach, supporting sovereign inter-organizational cooperation.
Use cases of data lakes
Data lakes have been proposed to store and manage data in many real-life use cases, e.g., the Internet of Things (IoT) and smart cities [90], smart grids [93], air quality control [126], flight data [87], disease control, labor markets, and products [13]. For a better understanding of the DL definition and applications, we showcase real-life use cases of applying data lakes in manufacturing, as illustrated in Fig. 1. The scenario is abstracted and simplified from the Industry 4.0 project Internet of Production 14 .
Example 3.1 (Multiple heterogeneous raw data inputs). Alice is a scientist who studies the milling process, and she has two problems with the data.
First, vast and heterogeneous data is generated, which she needs to store and organize. Such production data includes binary image files from cameras, CSV and JSON files from different sensors, and ontologies enriched with her own annotation for the milling process. She hopes to store the heterogeneous data in raw formats, as transformation would be time-consuming, and might lose certain information and jeopardize her future research.
Second, instead of browsing through massive, diverse data files, she hopes to have a data management system, which provides her easy access to the raw data. A data lake would be a good solution to solve Alice's problems.
Example 3.2 (Data integration and transformation). Bob is an industrial consultant, who has similar production data but in different formats, structures, and terminologies from Alice. Charlie is also a milling scientist but with a different research goal. He has another set of milling machines, sensors, and engineering models. For data analysis, he uses machine learning (ML) and runs Python scripts instead of merely queries.
Both Bob and Charlie want to integrate and compare their data and results with Alice's. The three users hope to have a data lake which can combine these independent data sources, help them easily find the datasets relevant to their own use cases, transform the data flexibly, and provide query answering and data analytics.

Example 3.3 (On-demand data processing and querying). As a researcher, Alice has research questions and solutions that evolve over time. She may introduce a new sensor, which produces machine-generated data in formats different from the existing datasets in the data lake. She may have new queries or want to use the raw data in a different manner. In addition, the usage of some raw datasets was initially unclear, so she simply stored them without using them. Now she has created an engineering model for the unused data. Similar dynamic analytical requirements also apply to the use cases of Bob and Charlie. Therefore, they would like a data lake which supports data processing, integration, and querying in an on-the-fly manner rather than being fixed from the beginning.
Data lake architecture
The architecture of a data lake describes the structure and components of the system, indicating how to store, organize and use the data. Rather than repeating a recent survey on data lake architecture [112], we briefly review two high-level data lake philosophies, but then only present an integrative function-oriented architecture used to structure this survey. Additionally, we propose classification criteria for data lake studies, and discuss the different kinds of data lake users.
Pond and zone architectures
The pond architecture [66] partitions ingested data by their status and usage. Ingested data is first stored in the raw data pond, then transformed and moved to the analog data pond, application data pond, or textual data pond if possible. Associated processes prepare the data for future analytical processing. Later on, valuable data is secured long-term in an archival data pond. For instance, analog data generated by an automated device is moved to the analog data pond followed by data reduction to a feasible data volume. In contrast, the zone architecture [26,134,115,102], separates the life cycle of each dataset into different stages. For instance, there could be individual zones for loading data and checking data quality, storing raw data, storing cleaned and validated data, discovering and exploring the data, or using the data for business/research analysis.
Our proposed architecture
High-level architectural philosophies, such as the pond or zone architecture, often lack technical details about functions, which hampers modular and repeatable implementations. Therefore, in Fig. 2 we propose an architecture based on [56]. Besides the data storage systems, it divides the tasks of the whole workflow into three functional tiers, aligned with the point in the workflow at which each task is performed. Certain tasks are conducted during or right after data is ingested (ingestion). Other tasks are triggered by queries, as shown on the right side of Fig. 2 (exploration). Tasks in the middle (maintenance) conduct the general management and organization of the ingested datasets, but can also be considered preparation for querying. This architecture can also be seen as an abstraction of earlier layer-based data lake proposals [120,115,70,38], but here we define the relevant components and specify their functions. Our main goals in proposing such an architecture are to discuss the necessary functions in the whole workflow of a data lake and to compare existing technologies for each function in Sec. 6-8. This provides a much more comprehensive and complete view than earlier DL architecture proposals and permits a fine-grained categorization of existing DL research and practice.
Classification criteria
Many DL research works focus on one specific task in data lakes. We classify them by their functions in the architecture of Fig. 2. Of course, some proposals provide more than one function. We give a detailed explanation of each functional criterion in Sec. 6-8 and group existing data lake solutions as listed in Table 2. In Sec. 5 we introduce the data storage strategies applied in existing data lake solutions, because the specific choice of storage strategy often shapes the required functions. The ingestion layer is responsible for importing data from heterogeneous sources into the data lake system; the main challenges concern extracting and modeling metadata (Sec. 6). To prepare the ingested raw data for querying or analytics, a data lake needs a set of operations in the maintenance layer to organize, discover, integrate, or transform the datasets (Sec. 7). The functions in the exploration tier mainly serve to allow users to access the data lake (Sec. 8).
Depending on the purpose of the data lakes, there could be an external application layer on top of these three tiers e.g., for visualization [90]. The design choices of the application layer are myriad in practice. Thus, we leave them out of this survey.
Data lake users
As indicated in Fig. 2, human users interact with a data lake in different roles. According to [26], a business data lake scenario typically includes: (1) data scientists and business analysts who build and apply analytics models over the data lake; (2) information curators who define new data sources, organize and maintain the metadata in the catalog of existing data sources; (3) the governance, risk, and compliance team who ensures that the organizational regulations and business policies are followed (e.g., an auditor); (4) operations team who maintains the data lake (e.g., data quality analysts, integration developers). Such users can help improve the semantics of data lakes over time, by adding metadata tags and linkage based on conceptual models or standard vocabularies resp. ontologies, such as schema.org [119,70]. In the sequel, we do not emphasize the different kinds and generally talk about "data lake users".
It is not an easy task to design a data lake that has effective control of data security over heterogeneous data stores. Only a few data lake systems have targeted this problem, often based on their data stores. A few tools are mentioned in [29] for system authentication, authorization, and data encryption based on the Hadoop platform, e.g., Apache Ranger 15 . Among the full DL prototypes, CoreDB [10,11] creates different users or roles for access control, and enables authentication and data encryption.
Storage
The data lake storage problem asks which data storage systems should be used to preserve the ingested datasets. Some approaches rely on the common relational or NoSQL databases while others have developed new storage systems, or combinations (Polystore); the upper part of Fig. 2 depicts diverse choices which could be operated on-premise or in clouds. We classify solutions by how the ingested data is stored in the data lake: as files, in a single database format, or using polystores. We also briefly mention industrial solutions that build data lakes on cloud platforms.
File-based storage systems
The Hadoop Distributed File System (HDFS) is one of the most frequently mentioned data storage systems for data lakes [21,119,13]. HDFS supports a wide range of files [54]. Besides text (e.g., CSV, XML, JSON) and binary files (e.g., images), it supports certain formats for data compression, e.g., Snappy 16 and Gzip 17 . It also allows the columnar storage format Parquet 18 and the row-based storage format Avro 19 , which enable easy schema management. As discussed in Sec. 3, Hadoop alone usually does not fulfill the goals of a data lake. Microsoft's Azure data lake store [109] offers a hierarchical, multi-tier file-based storage system. 20 It applies Azure Blob storage, which is a cloud storage solution optimized for large unstructured object data. It also supports HDFS and the Hadoop ecosystem (e.g., Spark, YARN 21 ). CLAMS [41] uses RDF as its data model. It can directly ingest RDF data; for other formats, the ingestion must be extended by information extraction tools (e.g., Open Calais 23 ) for extracting RDF triples, which could be linked to existing ontologies. CLAMS stores the ingested datasets in HDFS and allows users to register the datasets for constraint discovery and data cleaning.
(Excerpt of Table 2, exploration functions: query-driven data discovery: Juneau [129,68,130]; querying heterogeneous data: Constance [56,60], CoreDB [10,11], Ontario [38], Squerall [84].)
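To illustrate how raw files can be kept alongside analysis-friendly columnar copies in a file-based lake, the sketch below uses the pyarrow library to read a CSV file and write it out as Snappy-compressed Parquet. The file names are placeholders, and on a real lake the local paths would typically be replaced by HDFS or object-store URIs.

# Minimal sketch: keep the raw CSV and add a columnar Parquet copy (placeholder paths).
import pyarrow.csv as pacsv
import pyarrow.parquet as pq

raw_path = "landing/sensor_readings.csv"        # hypothetical raw file in the lake
table = pacsv.read_csv(raw_path)                # schema inferred on read (schema-on-read)
pq.write_table(table, "curated/sensor_readings.parquet", compression="snappy")

# The raw file stays untouched; the Parquet copy serves columnar, compressed access.
print(table.schema)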
Single data store
Some DL systems aim at specific types of data and employ a single database system as the storage system.
As one example, the Personal data lake [125] applies a graph-based data model (e.g., property graphs) and stores data in Neo4j. The proposed data lake has a special application focus on user data, which is usually relatively small compared to business scenarios but has higher requirements regarding data privacy. The inputs of the personal data lake are heterogeneous personal data fragments generated from user-web interaction (structured, semi-structured, and unstructured). They are serialized into specifically defined JSON objects, which are flattened into Neo4j graph structures; extensible metadata management in the data lake distinguishes four kinds of data: raw data, metadata, additional semantics, and data fragment identifiers.
Going further, multi-model databases support multiple data models and formats in a single database (for a survey, cf. [80]). However, before choosing a multi-model database for a data lake, one should also check the underlying storage strategy, i.e., whether there is native storage support for different data models or merely different interfaces to the same storage strategy. If native support is lacking, polystores should be considered, as discussed next.
Polystore systems
By definition, a data lake supports heterogeneous data in raw format, e.g., for zone architectures [134]. Polystore (or multistore) systems implement polyglot persistence, i.e., integrated access to a configuration of multiple data stores for heterogeneous data. For example, Constance [56,60] ingests raw data, depending on its original format, into relational (e.g., MySQL), document-based (e.g., MongoDB), or graph databases (e.g., Neo4j). For instance, a JSON file will be stored in MongoDB. If an input dataset cannot be directly stored in a relational or NoSQL store, or considering, e.g., scalability for distributed computing, data can also be stored in HDFS. An example would be a stream data source of large binary image files, which requires parallel data compression. If these defaults seem inadequate, users can specify the data store via the UI. Google's data lake, Google Dataset Search (GOODS) [62,63], supports the heterogeneous data storage used at Google, including the key-value store Bigtable [23], file systems, and Spanner [28]; see also Sec. 5.4. Another data lake allowing raw data to be stored in both relational and NoSQL systems is CoreDB [10,11]. To store diverse data from web applications, besides relational databases (e.g., MySQL, PostgreSQL, Oracle) it supports multiple NoSQL systems, i.e., MongoDB, HBase (https://hbase.apache.org/), and Hive. JSON is used as a unified format to represent entities.
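A simple way to picture this kind of polystore routing is a rule that maps a dataset's format to a default store, roughly mirroring the defaults described above (JSON to a document store, tabular data to a relational store, graph data to a graph store, everything else to a distributed file system). The sketch below is a plain-Python illustration of such a dispatch rule, not code from any of the cited systems; the extension map and override parameter are assumptions for illustration.

# Illustrative format-based routing rule for a polystore data lake (not from any cited system).
from pathlib import Path

DEFAULT_ROUTES = {
    ".json": "document store (e.g., MongoDB)",
    ".xml":  "document store (e.g., MongoDB)",
    ".csv":  "relational store (e.g., MySQL)",
    ".graphml": "graph store (e.g., Neo4j)",
}

def route_dataset(path, user_override=None):
    """Pick a target store by file extension; fall back to a distributed file system."""
    if user_override:                       # users may override the default, e.g., via a UI
        return user_override
    return DEFAULT_ROUTES.get(Path(path).suffix.lower(), "distributed file system (e.g., HDFS)")

print(route_dataset("orders.json"))
print(route_dataset("camera_frame.bin"))
print(route_dataset("orders.json", user_override="HDFS"))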
As a data lake designed for supporting data science tasks, Juneau [130] mainly focuses on tabular data or nested data that can be easily unnested into relations. Besides data processed by the notebook kernel, Juneau also handles (Jupyter, JupyterLab, Apache Zeppelin, or RStudio) notebooks, workflows in a notebook, and the cells that constitute a workflow. Moreover, it needs to generate and store the relationships of these data objects as graphs. Thus, it applies both a relational database (PostgreSQL) and a graph database (Neo4j).
Potential techniques. Since 2015, the study of polystores has been booming with regard to systems (e.g., Polybase [64], BigDAWG [37]) and open-source tools (e.g., Drill 25 , Spark); see [121] for a survey. Although they are not claimed as data lakes, we discuss in Sec. 8 their role as part of a data lake architecture in terms of how to store and query diverse source data.
Data lakes on clouds
Most of the aforementioned data lake systems are on-premises, except for a few real-life applications. For instance, the data lake in [93] uses Google's Infrastructure-as-a-Service (IaaS) cloud computing platform.
For commercial data lakes, it is more common practice to build data lakes in the cloud due to the large scale of data. Several major cloud database vendors are promoting serverless data analytics and native cloud platforms for building data lakes, most prominently Amazon Web Services (AWS) 26 , Azure Data Lake Store 27 , Google Cloud Platform (GCP) 28 , Alibaba Cloud 29 , and the Data Cloud from Snowflake 30 .
We take GCP as an example. 31 The data lake serves as a raw data repository for heterogeneous data. GCP provides multiple data stores for different types of data and usages: Cloud Storage 32 for unstructured data, Cloud SQL 33 for structured data and transactional workload, Cloud Spanner 34 for horizontal scalability, etc. Then the user can build a pipeline (e.g., an ETL pipeline) to prepare the data, before loading it into a data warehouse (e.g., BigQuery 35 ).
Cloud platforms have several prominent advantages for building data lakes. In such a cloud data lake, one can scale the storage space and computation power dynamically, and in many cases the prices of resources are more economical than on-premises. Moreover, the major cloud vendors provide many additional analytics tools in their product portfolios, e.g., AI services and data visualization tools, which makes it convenient to develop different practical functions on top of data lakes. Nevertheless, relying on a cloud platform also implies risks and challenges in aspects such as data security, data provenance, and fault tolerance. In conclusion, it will be interesting to see more data lake solutions researched and developed on cloud platforms.
25 https://drill.apache.org/
26 https://aws.amazon.com/big-data/datalakes-and-analytics/what-is-a-data-lake/
27 https://azure.microsoft.com/en-us/solutions/data-lake/
28 https://cloud.google.com/solutions/build-a-data-lake-on-gcp
29 https://www.alibabacloud.com/product/data-lake-analytics
30 https://www.snowflake.com/workloads/data-lake/
31 Considering the rapid evolution of industrial solutions, we encourage readers to check the latest information for the content of this subsection.
32 https://cloud.google.com/storage
33 https://cloud.google.com/sql-server
34 https://cloud.google.com/spanner
35 https://cloud.google.com/bigquery
Ingestion
Ingestion components load data into the lake, and store data into databases or file systems. The required throughput and synchronization of multiple incoming data streams may require sophisticated scheduling approaches that are typically associated with the storage systems. In this section, however, we focus on the question of metadata modeling and extraction during or shortly after ingestion, which is essential to prevent turning data lakes into incomprehensible data swamps.
Metadata extraction
The process of metadata extraction discovers metadata information that is essential for accessing a dataset, e.g., its name, extension, structure. In Sec. 7.4 we further address detecting more hidden metadata such as functional dependencies.
The Generic and Extensible Metadata Management System (GEMMS) for data lakes [108] is a framework that extracts metadata from heterogeneous sources and stores the metadata in an extensible metamodel. Since the data sources and schemas may change over time and are in some cases unknown in advance, it is important that the data lake has a flexible and extensible manner of metadata extraction. For each input file, GEMMS first detects its format, then initiates a corresponding parser to obtain the structural metadata (e.g., trees, tables, and graphs) and metadata properties (e.g., header information implying the content of the file). A tree-structure inference algorithm is implemented for structural metadata extraction, which traverses semi-structured data in a breadth-first manner and detects the tree structure. It then transforms and stores the metadata in an extensible metamodel, such that new types of sources can be easily integrated.
It also provides basic querying support. The following work Constance [56] can also extract structural metadata, i.e., schemas from semi-structured files such as XML and JSON.
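To give an impression of what such breadth-first structural metadata extraction can look like, the following sketch walks a parsed JSON document level by level and records the paths and value types it encounters. It is a simplified illustration of the idea, not the GEMMS or Constance implementation, and the sample document is invented.

# Simplified breadth-first structure inference over a parsed JSON document
# (an illustration of the idea, not the GEMMS/Constance algorithm).
import json
from collections import deque

def infer_structure(doc):
    """Return {path: set of value type names} discovered breadth-first."""
    structure = {}
    queue = deque([("$", doc)])
    while queue:
        path, node = queue.popleft()
        structure.setdefault(path, set()).add(type(node).__name__)
        if isinstance(node, dict):
            for key, value in node.items():
                queue.append((f"{path}.{key}", value))
        elif isinstance(node, list):
            for value in node:
                queue.append((f"{path}[*]", value))
    return structure

sample = json.loads('{"sensor": {"id": 7, "readings": [21.5, 22.1]}, "unit": "C"}')
for path, types in infer_structure(sample).items():
    print(path, sorted(types))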
DATAMARAN [48] provides an approach to extract the structures from semi-structured log files whose records span multiple lines, with record types and boundaries. DATAMARAN follows a three-step algorithmic approach. It first generates candidate structure templates, which use regular expressions [117] to express the record structure while allowing minor variations; the structure templates are stored in hash-tables, and only the ones satisfying a coverage threshold assumption (Assumption 1, [48]) are kept. Next, redundant structure templates are pruned based on a specially designed score function, and finally further optimized using two refinement techniques over the pruned structure templates. The DATAMARAN process does not require human supervision and provides a high extraction accuracy compared to existing works. To mimic a real data lake, the authors crawled 100 datasets with large log files from GitHub.
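The core intuition of template-based extraction can be conveyed with a much-simplified sketch: each log line is generalized into a candidate template (here simply by abstracting digit runs into a placeholder token rather than full regular expressions), candidates are counted in a hash table, and only those covering enough lines are kept. This omits DATAMARAN's multi-line records, scoring function, and refinement steps; the log lines and threshold are invented.

# Much-simplified illustration of candidate-template generation for log lines.
# Real DATAMARAN handles multi-line records, regex templates, scoring, and refinement.
import re
from collections import Counter

def to_template(line):
    """Generalize a line into a candidate template by abstracting digit runs."""
    return re.sub(r"\d+", "<NUM>", line.strip())

def extract_templates(lines, coverage_threshold=0.5):
    """Keep candidate templates that cover at least a fraction of all lines."""
    counts = Counter(to_template(l) for l in lines if l.strip())
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items() if c / total >= coverage_threshold}

log = [
    "2020-01-03 12:00:01 INFO worker 17 started",
    "2020-01-03 12:00:05 INFO worker 18 started",
    "2020-01-03 12:00:09 WARN queue length 532 exceeds limit 500",
    "2020-01-03 12:00:11 INFO worker 19 started",
]
print(extract_templates(log))   # only the frequent "worker started" template survives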
For scientific data files, Skluma [118] extracts JSON metadata describing the content and context of the files. It first finds the name, path, size, and extension of each file; then it infers file types and adds specific extractors accordingly to process tabular data, free text, null values, etc. With the growing importance of research data management for replicability in scientific studies, this kind of approach can be expected to grow significantly over the next years.
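A minimal version of this first pass over files (name, path, size, extension, and a coarse type guess emitted as JSON) might look like the sketch below; the extension-to-type mapping and the landing directory are simplifications invented for illustration.

# Minimal first-pass file metadata extraction (illustrative; the type map is simplified).
import json
from pathlib import Path

TYPE_BY_EXTENSION = {".csv": "tabular", ".tsv": "tabular", ".txt": "free text",
                     ".json": "semi-structured", ".xml": "semi-structured",
                     ".png": "image", ".jpg": "image"}

def basic_metadata(path):
    p = Path(path)
    return {
        "name": p.name,
        "path": str(p.resolve()),
        "size_bytes": p.stat().st_size,
        "extension": p.suffix.lower(),
        "inferred_type": TYPE_BY_EXTENSION.get(p.suffix.lower(), "unknown"),
    }

landing = Path("landing")                      # hypothetical landing directory
if landing.is_dir():
    for f in landing.rglob("*"):
        if f.is_file():
            print(json.dumps(basic_metadata(str(f))))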
Metadata modeling
To make DL content findable, accessible, interoperable, and reusable (the FAIR principles [127]), a natural question is how to structure and organize the metadata in a formal way, i.e., metadata modeling. The majority of such models are either logic-based or graph-structured, with more or less formal semantics.
Generic metadata model
The logic-based metadata model of GEMMS [108] has different model elements and allows the separation of metadata about content, semantics, and structure. It captures general metadata properties in the form of key-value pairs, as well as structural metadata as trees and matrices to assist querying. Moreover, domain-specific ontology terms referenced by Uniform Resource Identifiers (URIs) can be attached to metadata elements as semantic metadata. The relationships between the model elements are "has-a" relationships or aggregations. In [59], this metadata model is extended to represent individual schemas for relational tables, JSON, and labeled property graphs of Neo4j.
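One way to picture such a metamodel in code is a small set of classes separating property, structural, and semantic metadata, roughly following the description above. This is an illustrative approximation, not the actual GEMMS metamodel; all class, field, and example names are invented.

# Illustrative approximation of a GEMMS-style metamodel: properties (key-value pairs),
# structural metadata, and semantic annotations (ontology terms referenced by URI).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SemanticAnnotation:
    ontology_term_uri: str                 # URI of a domain-specific ontology term

@dataclass
class StructuralMetadata:
    kind: str                              # e.g., "tree" or "matrix"
    description: dict = field(default_factory=dict)

@dataclass
class DatasetMetadata:
    properties: dict = field(default_factory=dict)     # key-value metadata properties
    structure: Optional[StructuralMetadata] = None     # structural metadata ("has-a")
    semantics: list = field(default_factory=list)      # attached SemanticAnnotation objects

meta = DatasetMetadata(
    properties={"file": "milling_run_42.json", "owner": "alice"},
    structure=StructuralMetadata(kind="tree", description={"root": "$", "depth": 3}),
    semantics=[SemanticAnnotation("http://example.org/onto/MillingProcess")],  # placeholder URI
)
print(meta)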
Data vault
For structured or semi-structured data in typical business scenarios, a promising conceptual modeling environment is the data vault [98,52]. It has three main element types: hubs representing business concepts, links indicating the many-to-many relationships among hubs, and satellites with descriptive properties of hubs and links [78,79]. Nogueira et al. [98] show how their data-vault-based conceptual model can be transformed into relational and document-oriented logical models, and further into physical models (PostgreSQL and MongoDB, respectively). Giebler et al. [52] report experience with applying data vault for data lakes in the domains of manufacturing, finance, and customer service, pointing out practical obstacles such as inconsistencies among data sources.
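A tiny relational sketch of the hub/link/satellite pattern, loosely inspired by the manufacturing use case described earlier and executed against an in-memory SQLite database, is shown below; all table and column names are invented for illustration and do not come from the cited works.

# Illustrative hub/link/satellite layout for a data vault (invented names, SQLite in memory).
import sqlite3

ddl = """
CREATE TABLE hub_machine (
    machine_hk TEXT PRIMARY KEY,      -- hash key of the business key
    machine_bk TEXT NOT NULL,         -- business key, e.g., machine serial number
    load_ts TEXT, record_source TEXT);

CREATE TABLE hub_sensor (
    sensor_hk TEXT PRIMARY KEY,
    sensor_bk TEXT NOT NULL,
    load_ts TEXT, record_source TEXT);

CREATE TABLE link_machine_sensor (    -- many-to-many relationship between hubs
    link_hk TEXT PRIMARY KEY,
    machine_hk TEXT REFERENCES hub_machine(machine_hk),
    sensor_hk TEXT REFERENCES hub_sensor(sensor_hk),
    load_ts TEXT, record_source TEXT);

CREATE TABLE sat_sensor_details (     -- descriptive, historized attributes of a hub
    sensor_hk TEXT REFERENCES hub_sensor(sensor_hk),
    load_ts TEXT,
    unit TEXT, sampling_rate_hz REAL,
    PRIMARY KEY (sensor_hk, load_ts));
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print([r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")])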
Graph-based metadata model
Adapting ideas about knowledge graphs as pursued in the linked data and semantic web community, several network- or hypergraph-based metamodels have been proposed for data lakes.
Again in the business context, a proposed network metadata model [32,33] focuses on business names, data field descriptions, and rules, in addition to data formats and schemas. It creates a graph-based representation with XML/JSON nodes and labeled arcs indicating their relationships. Nodes can be merged based on lexical and string similarities, and linked to semantic knowledge (e.g., from DBpedia). Moreover, the authors suggest extracting thematic views of interest to the business, similar to data marts in a DW.
To efficiently discover relevant datasets from massive data sources, Stonebraker and colleagues proposed Aurum [43] with an enterprise knowledge graph (EKG) to capture and query the relationships among datasets. An EKG is a hypergraph with three elements: nodes, weighted edges, and hyperedges. Nodes represent dataset attributes; an edge between two attribute nodes represents their relationship; and hyperedges represent different granularities among arbitrary numbers of nodes, e.g., connecting attributes and tables. Aurum builds the corresponding EKG, maintains it upon data changes and allows users to query EKG with a graph query language based on discovery primitives.
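The three EKG elements can be pictured with a very small in-memory structure: attribute nodes, weighted edges between attribute pairs, and hyperedges grouping arbitrary node sets (e.g., all attributes of one table). The sketch below is only a conceptual illustration with invented example data, not Aurum's implementation or query language.

# Conceptual illustration of an enterprise-knowledge-graph-like structure
# (nodes, weighted edges, hyperedges); not Aurum's actual implementation.
nodes = {"orders.customer_id", "orders.total", "customers.id", "customers.name"}

# weighted edges: relationships between attribute pairs (e.g., content similarity)
edges = {
    frozenset({"orders.customer_id", "customers.id"}): 0.92,   # likely join candidates
    frozenset({"orders.total", "customers.id"}): 0.05,
}

# hyperedges: arbitrary groupings of nodes, here one per source table
hyperedges = {
    "table:orders": {"orders.customer_id", "orders.total"},
    "table:customers": {"customers.id", "customers.name"},
}

def neighbors(node, min_weight=0.5):
    """Discovery-primitive-style lookup: attributes strongly related to `node`."""
    return [next(iter(pair - {node})) for pair, w in edges.items()
            if node in pair and w >= min_weight]

print(len(nodes), "attribute nodes")
print(neighbors("customers.id"))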
In [113], six evolution-oriented features of metadata management in data lakes are emphasized: semantic enrichment, data indexing, link generation and conservation (discover hidden similarities or integrate existing links among datasets), data polymorphism (preserve multiple transformed forms of the same dataset), data versioning, and usage tracking. Taking these features into consideration, their metadata model comprises notions of hypergraph, nested graph, and attributed graph. In terms of content, it can describe attributes, objects, datasets, historical versions, (similarity or parent-child) relationships, logs, and indexes.
Maintenance
After ingesting heterogeneous raw data sources, a data lake is a vast collection of unrelated datasets with initially little metadata. To make the data usable, the data lake needs to further process and maintain the raw data, e.g., find more metadata, discover hidden relationships, and perform data integration, transformation, or cleaning if necessary. As shown in Fig. 2, we categorize the maintenance-related functions into six groups and discuss corresponding data lake solutions in what follows.
Dataset preparation and organization
The dataset organization problem studies how to structure and navigate the massive heterogeneous datasets in data lakes. Often, it requires preparing and profiling the datasets with their metadata.
Dataset preparation
KAYAK [81,82] focuses on just-in-time data preparation in data lakes. It automatically collects and stores in catalogs intra-dataset metadata describing individual datasets, and inter-dataset metadata for relationships among the attributes of different datasets. Inter-dataset metadata includes integrity constraints, semantic connections [111] (calculated based on external domain knowledge such as ontologies), and data instance overlap. Overall, the metadata form a graph where nodes are the datasets, and weighted edges indicate joinability by similarity values based on inter-dataset metadata.
For dataset preparation KAYAK first defines basic tasks such as basic profiling and computing joinability. A sequence of such atomic tasks further builds up a specific operation for data preparation, referred to as a primitive, e.g., inserting a dataset. To represent data preparation pipelines it uses a directed acyclic graph (DAG) with primitives as nodes and their dependencies (based on execution order) as edges. For instance, an edge a → b indicates that the steps of primitive a (e.g., insert a dataset) need to be executed before primitive b (e.g., use the dataset to train an ML model). In Table 3 we compare different usages and definitions of DAGs in KAYAK and other systems to be discussed.
To manage dependencies among tasks and execute the atomic tasks of a primitive in parallel, KAYAK has a second type of DAG in Table 3. Here each node represents an atomic task, and the directed edges indicate the execution order of two tasks. Such a graph shows which tasks can be parallelized during execution.
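A minimal sketch of this second kind of DAG, assuming a standard topological layering rather than KAYAK's actual scheduler: tasks whose predecessors have all finished form one layer and can run in parallel.

```python
from collections import defaultdict, deque
from typing import Dict, List, Set, Tuple

def parallel_layers(edges: List[Tuple[str, str]]) -> List[Set[str]]:
    """Group the tasks of a DAG into layers; tasks within one layer can run in parallel."""
    successors: Dict[str, List[str]] = defaultdict(list)
    indegree: Dict[str, int] = defaultdict(int)
    nodes: Set[str] = set()
    for a, b in edges:                      # edge a -> b: task a must finish before task b
        successors[a].append(b)
        indegree[b] += 1
        nodes.update((a, b))

    ready = deque(n for n in nodes if indegree[n] == 0)
    layers: List[Set[str]] = []
    while ready:
        layer = set(ready)                  # everything currently ready is mutually independent
        layers.append(layer)
        ready = deque()
        for n in layer:
            for m in successors[n]:
                indegree[m] -= 1
                if indegree[m] == 0:
                    ready.append(m)
    return layers

# Hypothetical atomic tasks of an "insert dataset" primitive
print(parallel_layers([("profile", "joinability"), ("profile", "stats"), ("stats", "register")]))
# -> [{'profile'}, {'joinability', 'stats'}, {'register'}] (order inside a layer may vary)
```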
Data lake organization
The GOODS data lake [62,63] allows datasets to be created, stored, and modified in the data lake first, and only then conducts metadata collection. For each dataset, it collects various metadata and adds it as one entry in the GOODS catalog, which is stored in Bigtable. To organize, profile, and search datasets (e.g., cluster different versions of the same dataset), the metadata is classified into six categories: basic, content-based, provenance, user-supplied, team and project, and temporal metadata. This categorization is closely tied to Google's specific information retrieval requirements, but it is also a generally instructive example of metadata management in data lakes. Data lake developers should customize the metadata catalog and algorithms to their own usage scenarios.
DS-Prox [6] and its later version DS-kNN [4] treat the dataset organization problem as a classification problem. That is, in an incremental manner, DS-kNN adds each dataset to a new category or to an existing category consisting of already classified datasets. It applies the k-nearest-neighbour (k-NN) algorithm for grouping the datasets. Before the classification step, DS-kNN first performs data preparation by feature extraction. The features are extracted through a metadata management process [3]. For each attribute, depending on whether its values are continuous or discrete, DS-kNN extracts statistical or distribution-based features respectively, e.g., the average numeric mean, or the average number of values. Such data-based features are added to each dataset, together with other features based on the metadata, e.g., the number of attributes and the type of each attribute. Given the features of two datasets, DS-kNN then computes their similarity with a string comparison metric, the Levenshtein distance [85]. Next, given a new dataset, the proposed classification-based algorithm returns its top-k neighbors (classified datasets), from which DS-kNN chooses the most frequently occurring category and assigns the current dataset to it; if no sufficiently similar datasets are found, the new dataset is assigned to a new category. Finally, the datasets in the lake can be visualized as a graph: each node is a dataset, and the edge between two nodes is labeled with the similarity of the two datasets. A later work [5] uses supervised ensemble models to obtain the similarity values between dataset pairs. If the similarity value of two datasets reaches a threshold value, they are proposed as candidates for schema matching.
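The following toy sketch illustrates the incremental k-NN idea with made-up numeric features and plain Euclidean distance; DS-kNN's actual features and its Levenshtein-based similarity differ.

```python
from collections import Counter
from math import dist
from typing import List, Tuple

# (features, category) of already-classified datasets; the feature values are invented
classified: List[Tuple[List[float], str]] = [
    ([12.0, 0.30], "sales"), ([11.0, 0.35], "sales"), ([3.0, 0.90], "sensor"),
]

def assign_category(features: List[float], k: int = 3, max_dist: float = 5.0) -> str:
    """Majority category among the k nearest datasets, or a fresh category if none is close enough."""
    neighbours = sorted(classified, key=lambda c: dist(features, c[0]))[:k]
    neighbours = [(f, cat) for f, cat in neighbours if dist(features, f) <= max_dist]
    if not neighbours:                       # nothing similar enough: open a new category
        return f"category-{len(classified)}"
    return Counter(cat for _, cat in neighbours).most_common(1)[0][0]

new_features = [10.5, 0.32]                  # e.g., number of attributes, share of numeric columns
category = assign_category(new_features)     # -> "sales"
classified.append((new_features, category))  # incremental: the new dataset joins the pool
```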
Nargesian et al. [94] defined the data lake organization problem as discovering the optimal structure for users to effectively find the desired dataset in a data lake. Such a structure for navigating data lakes is referred to as an organization:
- As listed in Table 3, a DAG-based organization has attributes of tables as leaf nodes; each non-leaf node is assigned a topic label that summarizes the set of attributes or topics represented by its child nodes. The edges represent containment relationships between the sets of attributes represented by the nodes. For instance, a parent non-leaf node food represents a set of food-related attributes, and has three child non-leaf nodes meat, vegetables, and fruit, which are subsets of the attributes of food.
- To measure semantic similarities among attributes, the attribute values are represented as word embeddings [96], and cosine similarity is applied. The process of navigation is formalized as a Markov model, where the states are the nodes (i.e., sets of attributes) and the transitions are the edges, i.e., future states depend only on the current state, not on all the historical states. Thus, given a query asking about a topic (e.g., the search keyword food), the transition probability depends only on the current node in the DAG and the similarities between its child nodes and the given topic (see the sketch after this list). The proposed algorithms in [94] try to find the organization structure that maximizes the probability that all the attributes of tables can be found.
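The sketch below illustrates, with invented embeddings, how transition probabilities from a node to its children could be derived from cosine similarities; the exact probability model of [94] may be parameterized differently.

```python
from typing import Dict
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def transition_probabilities(query_vec: np.ndarray,
                             child_vecs: Dict[str, np.ndarray]) -> Dict[str, float]:
    """Normalize query-child similarities so they sum to 1 (one row of the Markov transition matrix)."""
    sims = {name: max(cosine(query_vec, v), 0.0) for name, v in child_vecs.items()}
    total = sum(sims.values()) or 1.0
    return {name: s / total for name, s in sims.items()}

# Hypothetical embeddings for the children of the "food" node
rng = np.random.default_rng(0)
children = {"meat": rng.normal(size=50), "vegetables": rng.normal(size=50), "fruit": rng.normal(size=50)}
query = rng.normal(size=50)   # embedding of the search keyword
print(transition_probabilities(query, children))
```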
In Sec. 5.3 we mentioned that Juneau [130] is a data lake handling computational notebooks, workflows, and cells, from which Juneau builds graphs for data management. From the workflows, it builds workflow graphs. A workflow graph is a directed bipartite graph with two types of nodes: data object nodes, which represent input/output files or formatted text cells, and computational module nodes, representing code cells in a Jupyter notebook. If the data object is the input or output of the computational module, there is a directed edge connecting their nodes. If there are program variables used for the data object and computational module, the variables are added as the edge label. Moreover, another type of graph in Juneau is for managing the relationships of variables in notebooks, referred to as variable dependency graphs, which is defined as a DAG-based model as listed in Table 3. In a variable dependency graph, nodes represent the variables, and the labeled, directed edges indicate that one variable is computed from another variable through a function. For instance, the edge data --train--> var indicates that the variable data is the input of the function train, whose output variable is var. Then, by examining subgraph isomorphism, Juneau is able to discover tables sharing similar notebook workflows (similar sequences/patterns of variables and functions).
Discover related datasets
When a data lake hosts a large number of datasets, it is neither realistic nor necessary to query or integrate all of them. It is often more beneficial for the data lake to first discover datasets that are useful for the current purpose. Moreover, the relatedness discovered among datasets is also a valuable type of metadata for exploring a data lake and preventing a data swamp, e.g., for entity resolution or resolving inconsistencies across datasets. In data lakes, the process of related dataset discovery, also referred to as data discovery, tries to find a subset of relevant datasets that are similar or complementary to a given dataset in a certain way, e.g., with similar attribute names or overlapping instance values.
As shown in Table 4, we now present the solutions that address the related dataset discovery problem in data lakes. The systems in this group mainly handle tabular data, or hierarchical data that can be transformed into tabular data (not necessarily relational data, i.e., some may even violate the first normal form).
Aurum
Aurum [43] employs hypergraphs (i.e., the EKG) to find similarity-based relationships and primary-foreign key candidates among tabular datasets. To construct the hypergraph introduced in Sec. 6.2.3, it first profiles each table column by adding signatures, i.e., information extracted from the column values such as cardinality, data distribution, and a compact representation of the data values (e.g., MinHash). It indexes these signatures with locality-sensitive hashing (LSH), using Jaccard similarity on the MinHash signature or cosine similarity on the TF-IDF signature. If two columns have their signatures indexed into the same bucket after hashing, an edge is created between the two column nodes, and the similarity value is stored as the weight of the edge. Aurum also detects primary-foreign key relationships between columns by first inferring approximate key attributes.
A highlight of Aurum is its efficiency in computing set similarities. Given the total number n of attributes of all datasets, instead of conducting an all-pair comparison whose complexity is O(n^2), it profiles columns with signatures, indexes them in LSH, and searches the approximate nearest neighbors, which only needs O(n). By using data parallelism, the process of profile construction can scale to a considerable number of data sources in the data lake. When source data changes, Aurum does not re-read all the data, but first computes a magnitude of the Jaccard similarity. Only if the difference compared to the original value is above a threshold does it update the signature and hypergraph.
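A self-contained sketch of the MinHash/LSH mechanism that Aurum builds on (not Aurum's implementation): each column gets a MinHash signature, signatures are split into bands, and two columns become candidate neighbours when any band collides.

```python
import hashlib
from collections import defaultdict
from typing import Dict, List, Set

NUM_HASHES, BANDS = 32, 16          # 16 bands of 2 rows each

def minhash(values: Set[str]) -> List[int]:
    """MinHash signature of a column: the minimum hash of its values for each seeded hash function."""
    return [min(int(hashlib.md5(f"{seed}:{v}".encode()).hexdigest(), 16) for v in values)
            for seed in range(NUM_HASHES)]

def lsh_candidates(columns: Dict[str, Set[str]]) -> Set[tuple]:
    """Column pairs whose signatures collide in at least one band (edge candidates for the EKG)."""
    rows = NUM_HASHES // BANDS
    buckets = defaultdict(list)
    for name, values in columns.items():
        sig = minhash(values)
        for b in range(BANDS):
            buckets[(b, tuple(sig[b * rows:(b + 1) * rows]))].append(name)
    pairs = set()
    for names in buckets.values():
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                pairs.add(tuple(sorted((names[i], names[j]))))
    return pairs

cols = {"orders.customer_id": {"c1", "c2", "c3", "c4"},
        "customers.id":       {"c1", "c2", "c3", "c5"},
        "products.sku":       {"p1", "p2", "p3", "p4"}}
print(lsh_candidates(cols))   # the two id columns, which share most values, are very likely reported
```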
Human-in-the-loop data discovery
The high-level data lake proposal in [16] shares a similar idea of using multiple criteria to measure dataset similarities, but includes humans in the loop to make similarity decisions if the algorithms alone cannot provide reliable suggestions. To find joinable datasets, it measures the similarity of files (e.g., HTML tables), and considers approximate matches in terms of data values, schemas, and descriptive metadata (the source of the data, information added by users, etc.). For measuring the similarity of the files and grouping them, it computes the Jaccard similarity between file paths using MinHash and LSH.
JOSIE
Given a data lake with tabular data (e.g., a corpus of web tables), JOSIE [131] addresses two challenges of applying existing overlap set similarity search solutions in data lakes: a data lake may contain a large number of tables, so the number of columns and the number of distinct values per column can also be large; and it can be difficult for a human user to directly give an appropriate threshold value θ for the intersection size. Given a table T in the data lake and one specific column C from T, JOSIE returns tables in the data lake that can be joined with T on C. The task is formalized as the problem of overlap set similarity, which treats the table columns as sets and shared tuple values as the set intersection. Each table in the output contains a column whose overlap with C is larger than a given threshold θ. JOSIE formulates the problem of joinable table discovery as exact top-k overlap set similarity search. The measurement used in JOSIE is the intersection size of the sets, also referred to as "overlap similarity". For returning the top-k sets, JOSIE applies inverted indexes, which map between the sets and their distinct values. JOSIE returns top-k results and scales to a large number of tables. It employs a cost model to eliminate unqualified candidates effectively, which makes its performance robust to different data distributions.
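The core idea of overlap search with an inverted index can be sketched in a few lines (leaving out JOSIE's cost model and its optimizations for large value sets):

```python
from collections import Counter, defaultdict
from typing import Dict, List, Set, Tuple

def build_inverted_index(columns: Dict[str, Set[str]]) -> Dict[str, List[str]]:
    """Map each distinct value to the columns that contain it."""
    index = defaultdict(list)
    for col, values in columns.items():
        for v in values:
            index[v].append(col)
    return index

def topk_overlap(query: Set[str], index: Dict[str, List[str]], k: int = 2) -> List[Tuple[str, int]]:
    """Return up to k columns with the largest intersection with the query column."""
    overlap = Counter()
    for v in query:
        for col in index.get(v, []):
            overlap[col] += 1
    return overlap.most_common(k)

lake = {"t1.city": {"berlin", "paris", "rome"},
        "t2.location": {"berlin", "paris", "oslo"},
        "t3.name": {"alice", "bob"}}
idx = build_inverted_index(lake)
print(topk_overlap({"berlin", "paris", "madrid"}, idx))   # e.g. [('t1.city', 2), ('t2.location', 2)]
```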
D3L

D3L [14] broadened the criteria of "relatedness" that make a dataset relevant. It computes five features capturing multiple similarities: attribute names, instance value overlaps, representation patterns of instance values, semantics of textual attributes, and data distributions of numerical attributes. Given the attributes of tables, D3L first transforms schemas and data instances into intermediate representations based on q-grams, TF/IDF tokens, regular expressions, word embeddings, and the Kolmogorov-Smirnov statistic [27]. Similar to previous data discovery systems, D3L first computes the pair-wise similarities between attributes based on Jaccard similarity (MinHash [17]) and cosine similarity (random projections [24]). From the similarity values of the five features, it transforms the problem of finding the relatedness between tables into the calculation of a weighted Euclidean distance in the 5-dimensional feature space. In this distance calculation, the weight of each feature (i.e., the feature coefficient) indicates its significance for the combined distance.

To optimize the weights of the five features, D3L trains a binary classifier, i.e., logistic regression, over a training dataset with relatedness ground truth, and applies the coefficients of the trained model as the feature weights for the distance calculation. D3L also builds LSH indexes for the features and maps them to the distance space. It considers two items similar if they are hashed into the same buckets. One interesting finding in this work is that using LSH to discover joining paths leads to finding more related tables (and attributes), with higher accuracy.
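An illustrative sketch of this weighting scheme, with made-up training data and scikit-learn's logistic regression standing in for the trained relatedness classifier; the exact feature construction in D3L differs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: per-feature distances between an attribute pair
# (name, value overlap, format pattern, word embedding, distribution); label: 1 = related, 0 = not
X_train = np.array([[0.1, 0.2, 0.1, 0.2, 0.3],
                    [0.8, 0.9, 0.7, 0.8, 0.9],
                    [0.2, 0.1, 0.3, 0.1, 0.2],
                    [0.9, 0.8, 0.9, 0.9, 0.7]])
y_train = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X_train, y_train)
weights = np.abs(clf.coef_[0])              # use coefficient magnitudes as feature weights
weights /= weights.sum()

def combined_distance(feature_distances: np.ndarray) -> float:
    """Weighted Euclidean distance in the 5-dimensional feature-distance space."""
    return float(np.sqrt(np.sum(weights * feature_distances ** 2)))

print(combined_distance(np.array([0.15, 0.2, 0.1, 0.25, 0.2])))   # small value -> likely related
```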
Juneau
Juneau [129,68,130] approaches the search for related tables from a different perspective. It extends computational notebooks (e.g., Jupyter) and supports common data science tasks, such as finding additional data for training or validation, and feature engineering. When a user specifies a desired target table via the user interface, the system automatically returns a ranked list of tables that might be relevant to the given table. Specifically, as shown in Table 4, Juneau extends the notion of "relatedness" with the following signals.
1. In addition to value overlap and similar attribute names, it considers pairwise matched attributes that share similar attribute value domains, and matched candidate key pairs. The proposed similarity metrics are based on Jaccard similarity, with sketches and LSH-based approximation [44,132]. For a specific data science task, Juneau picks a subset of the aforementioned relatedness features, measures and combines their similarity values. For instance, when searching tables for a data cleaning task, it considers the instance value overlap, schema overlap, provenance similarity, and null value differences.
Discussion
In this subsection, we have reviewed one of the hottest topics in data lakes, i.e., data discovery. We can observe a standard procedure: the first step is to define and extract relatedness signals from tables w.r.t. data (e.g., value overlaps, data distribution patterns), schemas (e.g., attribute names, key constraints), semantics, and descriptive metadata. The next step is to compute multi-dimensional similarities between attributes (e.g., based on Jaccard similarity or cosine similarity), and aggregate them into an overall similarity between tabular datasets. The LSH index and its extensions (e.g., LSH Forest [9]) are often used to index and map feature values to boost performance or increase the accuracy of relatedness. Another important part of data discovery is querying the data lake, which is discussed in Section 8.1.
However, even though all data lakes introduced here tackle the data discovery problem, they have slightly different focuses. Aurum is fast and robust against data value changes and offers a graph-based structure; JOSIE shows high performance; D3L improves the accuracy of discovered related tables by combining several dimensions of similarity; Juneau emphasizes workflows for multiple data science tasks. Therefore, their individual similarity measures, applied data structures, and proposed algorithms vary significantly. We can expect more novel perspectives and techniques to be brought to the landscape of data discovery in data lakes.
Data integration
Data integration (DI) studies the problem of combining multiple heterogeneous data sources and providing unified data access for users [36]. Given the large number of sources in a data lake, users might need to first discover a subset of relevant datasets (the related dataset discovery process in Sec. 7.2) before resolving the heterogeneities of the sources with regard to data models and schemas.
The fundamental data integration techniques include schema matching, schema mapping, query reformulation, entity linkage, etc. Few data lake proposals provide an end-to-end data integration pipeline. Constance [56] has the extracted schemas of relational data, JSON documents, and graph data modeled in its generic metadata model (see Sec. 6.2.1). For data integration, Constance first performs schema matching, which finds similar attributes representing the same real-world entity. Instead of creating a global schema covering the information of all the datasets in the data lake, users can select a subset of data sources for which the system generates an integrated schema for partial integration, or users can choose a set of relevant schema elements via the user interface to form the desired integrated schema. Next, Constance generates schema mappings, which preserve the relationships between the source schemas and the integrated schema [58]. With the schema mappings, Constance performs query rewriting and data transformation in a polystore-based setting [60]. It rewrites the input user query (against the integrated schema) into subqueries (against the source schemas), executes the generated subqueries in the query languages of each data store (e.g., MySQL, MongoDB, Neo4j), and retrieves the subquery results. For the final integrated results, it further resolves data type and value conflicts while merging the subquery results. It also pushes down selection predicates to the data sources to optimize query execution and reduce the amount of data to be loaded.
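As a toy illustration of the schema-matching step only (not Constance's matcher), attribute correspondences between two source schemas can be proposed with a simple name-similarity threshold:

```python
from difflib import SequenceMatcher
from typing import List, Tuple

def match_schemas(source_a: List[str], source_b: List[str],
                  threshold: float = 0.7) -> List[Tuple[str, str, float]]:
    """Propose attribute correspondences whose name similarity exceeds the threshold."""
    matches = []
    for a in source_a:
        for b in source_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                matches.append((a, b, round(score, 2)))
    return sorted(matches, key=lambda m: -m[2])

relational = ["customer_id", "customer_name", "order_date"]   # hypothetical source schemas
json_doc = ["custId", "customerName", "orderedAt"]
print(match_schemas(relational, json_doc))
# e.g. [('customer_name', 'customerName', 0.96), ...]
```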
Metadata enrichment
As discussed in Section 6.1, in order to access and use the data, it is necessary to extract certain metadata from the raw ingested data in a data lake, e.g., schemas. To further understand and explore a dataset, sometimes it is necessary to perform additional steps to discover more metadata of a dataset, such as semantic knowledge, its relationships to other datasets, and constraints that could also be helpful for data cleaning. We introduce the following data lake systems providing such enrichment of metadata.
CoreDB [10,11] is developed on the premise that more insights should be extracted and enriched from the raw data. It first extracts essential information representative of the original raw data, referred to as features, e.g., keywords and named entities. Then it provides services that add synonyms and stems to such features; it connects the features to open knowledge bases such as the Google Knowledge Graph and Wikidata; and it further annotates and groups the data sources in the data lake.
In order to obtain metadata of a dataset that precisely describes its origin, ownership, and possible usage, it is often beneficial to keep human experts in the loop. Google's data lake GOODS [62,63] stores metadata of its datasets in the catalog, and it applies crowdsourcing for metadata enrichment. For instance, it allows adding descriptive metadata of datasets, marking datasets worth additional security attention, such that people from different teams of the organization (e.g., data owners, auditors, users) can exchange and communicate about the information of the datasets.
Constance [59] can also enrich the metadata in the data lake by discovering dependencies, more specifically, relaxed functional dependencies (RFD) [22]. The relaxed functional dependencies are relaxed in the sense that they do not apply to all tuples of a relation, or that similar attribute values are also considered to be matched. Such dependencies provide insights that certain attributes functionally depend on some other attributes in a loose manner, which apply to the ingested datasets even though they have a certain percentage of inconsistent tuples.
Data quality improvement
A data lake should provide means for data quality management, which helps ensure the quality and efficiency of user query results. The ingested raw data sometimes has inadequate quality. Thus, there are proposals on how to obtain dependencies from the data in the lake and then use them to improve data quality.
CLAMS [41] uses conditional denial constraints to detect potentially erroneous data. Given RDF triples, a conditional denial constraint specifies a set of negation conditions over the tuples. The proposed approach automatically detects such constraints by discovering possible schemas from the RDF data, together with the corresponding constraints. It examines the triples violating the obtained constraints and uses them to build a hypergraph, which indicates the number of constraints violated by each triple. It then ranks the RDF triples accordingly and asks the user to validate whether such a candidate dirty triple should be removed.
Constance [59] also uses discovered dependencies for data cleaning, in its case the relaxed functional dependencies. These dependencies are especially useful in cases where the source data has lower quality with inconsistencies and incorrect values. By using relaxed functional dependencies, Constance identifies the data objects violating the detected dependencies, which could be potentially erroneous data.
Schema evolution
Schema evolution requires handling the changes of schemas and integrity constraints [30]. In data lakes, the possible challenge of schema evolution could be the heterogeneity of the schemas and the frequency of the changes. While data warehouses have a relational schema that is usually not updated very often, data lakes are more agile systems in which data and metadata can be updated very frequently.
The work in [71] addresses the problem of how to reconstruct the complete evolution history of schemas for data stored in NoSQL databases, e.g., JSON stored in MongoDB. Instead of table schemas as in relational databases, it considers the structure of objects persisted in the NoSQL database, referred to as entity types. The proposed approach first extracts each entity type from the loaded datasets, with assigned timestamps that indicate its residing time interval. Then, from the different structure versions of the entity types, it detects the possible operations between two consecutive versions. In the case of multiple alternative operations, users make the final validation. In addition, to detect certain schema changes, it is often useful to detect integrity constraints, e.g., inclusion dependencies. The assumption in [71] is that in NoSQL databases schemas are often "less" normalized, which leads to inclusion dependencies involving multiple attributes rather than a single attribute as in relational databases. An algorithm is proposed in [71] to detect such k-ary inclusion dependencies.
Exploration
It is important that useful information can be retrieved from data lakes. However, this is often challenging due to the large number of ingested sources and the heterogeneity of the data. A user may have knowledge of one or a few data sources, but rarely of all the datasets. Thus, existing solutions mainly address the querying problem in data lakes in two directions: discovering data in the lake based on the relatedness of datasets (Sec. 8.1), or providing a unified query interface for heterogeneous data sources (Sec. 8.2).
Query-driven data discovery
Query-driven data discovery [91] refers to searching a data lake based on the measured relatedness (e.g., joinable, unionable) among datasets (e.g. web tables) as introduced in Sec. 7.2. With input queries specifying a given dataset (usually tabular data), the system returns the top-k most related datasets.
Exploration methods
There are roughly three ways of exploration. We denote the set of datasets in the data lake as S.
1. Given a user-specified table T and a column c of T, the system returns the top-k tables that are most related to T, e.g., JOSIE [131].
2. Given a user-specified table T, the system returns the top-k tables (referred to as S_k) that contain relevant attributes for populating T, e.g., D3L [14]. In addition, if a table S_i is not in the top-k result set (i.e., S_i ∈ (S − S_k)), yet it can be joined with some table(s) in S_k and improves the attribute coverage of T, D3L also includes S_i in the result.
3. Given a user-specified table T and a search type τ for external applications (e.g., a data science task), the system returns the top-k tables that are most relevant to T based on the relatedness measurements associated with τ, e.g., Juneau [130].
Notably, in this group of studies the challenge of exploring datasets is a "search" problem rather than the query reformulation problem in data integration. It can also be seen as a step prior to data integration or data science tasks [14,130].
Querying methods and indexing
Given an input query table, the data lakes in Table 4 often rely on similarity estimation using indexes (e.g., the aforementioned LSH indexes and inverted indexes). They rank candidate tables and include the top-k tables in the result set. In addition, Aurum [43] applies a graph index to accelerate expensive discovery path queries over its hypergraph. In its primitive-based query language, an Aurum user can compose queries that search schemas or data values with keywords to find specific columns, tables, or paths; users can specify criteria and obtain ranked query results in a flexible manner, i.e., they can obtain rankings for different criteria without re-running the query. In Juneau [130], a query is a cell output table picked by the user. The user also chooses the type of search, e.g., finding tables for data cleaning. Juneau then uses the corresponding relatedness measurements to perform the top-k search. It speeds up the search with strategies such as indexing columns profiled in the same domain and tables connected by workflow steps, and pruning tables below a threshold of schema-level overlap.
Remarks on future directions
DL exploration could benefit greatly from taking into account recent results on web table exploration [76,19,105], data wrangling [124,46], or external applications built upon the data lake. While existing works mainly focus on evaluating the accuracy of similarity computation or the performance (query processing time), deep analyses and further improvements of the accuracy and completeness of the top-k result set are still rare. Finally, the existing solutions in this group mostly study tabular data. In what follows, we discuss data lakes that explore datasets with diverse data models.
Query heterogeneous data
For accessing a data lake, the second group of studies focuses on providing a unified querying interface for heterogeneous data.
Constance [56,60] lets users explore the data lake incrementally. A user can first browse the existing data sources, including their descriptions, statistics, and schemas; then she can write a query (SQL or JSONiq) for a single dataset, or use the user interface to make a keyword search over the schema or the data. Alternatively, with certain knowledge of the datasets, which could be gained through previous exploration, she can choose to integrate and query a subset of datasets as introduced in Sec. 7.3. In addition, users can transform data in the data lake to their desired structure and format. The information retrieval requirements of external applications are supported via RESTful APIs [50].
CoreDB [10,11] provides users with a unified interface, i.e., a REST API for querying data or performing Create, Read, Update and Delete (CRUD) operations. It applies Elasticsearch for the underlying full-text search, SQL queries for relational database systems, and SPARQL queries for knowledge graphs.
Ontario [38] and Squerall [84] both enable federated query processing over a semantic data lake and apply SPARQL to query the heterogeneous data lake. Ontario supports heterogeneous data, e.g., RDF (stored in Virtuoso), local JSON files, TSV files (in HDFS), and XML files (in MySQL). It profiles each dataset with its metadata and additional information, e.g., the type of the source, or the web API for querying this type of source. For instance, for TSV files stored in HDFS, the data lake provides Spark-based services which translate the SPARQL queries to SQL. Given an input SPARQL query, Ontario first decomposes the query; then it uses the profiles to generate subqueries for each dataset with a set of proposed rules; using the metadata, it also tries to generate optimized query plans. Similar to Ontario, Squerall also supports querying diverse data sources including files (i.e., CSV, Parquet), relational databases (i.e., MySQL), and NoSQL databases (i.e., Cassandra, MongoDB). The schemata of the sources are mapped to a mediator, which consists of high-level ontologies. Given SPARQL queries against the mediator, the mapping is used to retrieve the relevant data entities from the data sources, which are then joined and transformed to form the final query results. Squerall features distributed query processing and is implemented in two versions with different data connectors: Spark and Presto.
Potentially useful systems. In Sec. 5.3 we have discussed that polystore systems enable multiple data models and diverse query languages. We divide their querying solutions into three groups.
1. A straightforward strategy is to transform the data in heterogeneous NoSQL stores into relational tables and use an existing relational database to process the data. For example, Argo [25] studies enabling key-value and document-oriented systems on top of relational systems. Though such solutions offer an easy adaptation to existing relational systems, they come with a high cost of data transformation, especially with large datasets, and might not be scalable for big data settings.
2. Another group of approaches focuses on multistore systems providing a SQL-like query language to query NoSQL systems. For instance, CloudMdsQL [74] supports relational, document-oriented, and graph-based databases. BigIntegrator [133] is a system that enables querying over both relational databases and cloud-based data stores (e.g., Google's Bigtable). MuSQLE [51] performs optimization of queries across multiple SQL engines. FORWARD [100] provides a powerful query language, SQL++, for both structured and semi-structured data. Although these approaches perform optimizations such as semi-join rewriting, they do not consider schema mappings during the rewriting steps.
3. The third category covers approaches that provide more efficient query processing techniques by applying a middleware to access the multiple NoSQL stores, such as Estocada [18], Polybase [64], BigDAWG [37], and MISO [75].
Composite metadata management in DLs
In Sec. 6-8 we have discussed individual functions provided by existing data lake systems. To prevent a data lake from turning into a "data swamp", metadata management is crucial. As surveyed in [112], there are different types of metadata: schemas that preserve the structure of the dataset, semantic metadata, constraints, and other descriptive information. Metadata management is therefore necessary throughout all aspects of data lakes. It starts with extracting metadata such as schemas, semantics, and provenance information from ingested datasets (Sec. 6.1); the metadata then needs to be modeled in a formal manner (Sec. 6.2); it is possible to enrich the metadata, e.g., with relatedness signals extracted from datasets and the computed relatedness between datasets (Sec. 7.2), schema mappings (Sec. 7.3), semantics, functional dependencies, or human-added annotations (Sec. 7.4); finally, the metadata is used to facilitate querying a data lake (Sec. 8) or improving data quality (Sec. 7.5).
In the sequel, we look at two important integrative aspects, i.e. schema mapping and provenance of data.
Schema mapping formalisms
Schema mappings are also metadata; thus, metadata management includes studying their algorithmic properties [72]. Schema mappings are often expressed as logical sentences, e.g., source-to-target tuple-generating dependencies (s-t tgds), which are also known as Global-Local-as-View (GLAV) assertions [77]. In what follows, we discuss the expressive power and properties of mapping languages that are relevant to the metadata-related challenges in data lakes. Expressive power. A data lake may have not only relational data but also hierarchically structured data such as XML and JSON. The ideal mapping language should be expressive enough to describe a wide variety of data integration or data exchange scenarios in data lakes. Regarding expressive power, Fig. 3 shows the strict inclusion hierarchy between the four most intensively studied schema mapping languages [7,73]. For hierarchically structured data, nested mappings [47] are proven to be more accurate for describing the relationship between source and target schemas, and also more efficient for data transformation than basic mappings expressed as tgds. Nested mappings can be expressed by nested tuple-generating dependencies (nested tgds) [47,122]. The second-order mapping formalisms, i.e., second-order tuple-generating dependencies (SO tgds) [39] and plain second-order tuple-generating dependencies (plain SO tgds), have stronger expressive power than tgds and nested tgds, which means that they can describe more mapping scenarios. In particular, SO tgds and plain SO tgds are rules with Skolem functions, which are used for the creation of object identifiers. Many existing data exchange and integration applications [128,47,86,15] generate mappings with Skolem functions as grouping keys for set elements, and their schema mappings can be expressed in plain SO tgds.
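For concreteness, a hypothetical GLAV assertion (s-t tgd) over invented schemas, and its plain SO tgd counterpart in which a Skolem function creates the object identifier:

```latex
% An s-t tgd: every source Person with an address gives rise to a target Resident and City
\forall p\, \forall a \,\bigl( \mathrm{Person}(p, a) \rightarrow \exists c\; \mathrm{Resident}(p, c) \wedge \mathrm{City}(c, a) \bigr)

% A plain SO tgd: a Skolem function f deterministically creates the city identifier from the address
\exists f \,\forall p\, \forall a \,\bigl( \mathrm{Person}(p, a) \rightarrow \mathrm{Resident}(p, f(a)) \wedge \mathrm{City}(f(a), a) \bigr)
```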
Structural properties and decidability of reasoning tasks. Table 5 summarizes the structural properties and the decidability of the implication and logical equivalence problems of these four mapping languages, based on the results from [123,7,73,42]. Here we discuss the three arguably most important structural properties of schema mapping languages: the existence of universal solutions, closure under target homomorphisms, and allowing conjunctive query rewriting [123]. The first two properties are crucial for data exchange, while the third plays a fundamental role in query reformulation for data integration. When a mapping allows conjunctive query rewriting, a union of conjunctive queries over the global/target schema can be rewritten into a union of conjunctive queries against the source schema, which corresponds to polynomial time in terms of data complexity. In Table 5 we can see that all three structural characteristics are satisfied, with the exception that SO tgds are not closed under target homomorphisms. To the best of our knowledge, the two first-order mapping languages, i.e., tgds and nested tgds, are decidable for the logical equivalence and implication problems, which is not the case for plain SO tgds and SO tgds [73,42].
Choosing the right mapping language for a large-scale, heterogeneous data lake is a complicated problem. In addition to the comparison in Fig. 3 and Table 5, in practice SO tgds and plain SO tgds are less desirable specifications than dependencies with first-order (FO) semantics. To begin with, FO and SO logics correspond to algorithmic behaviors of different complexities [73]: the data complexity of the model checking problem for plain SO tgds is NP-complete, while the same problem is merely in LOGSPACE for nested tgds [104,73]. FO tgds are also more user-friendly and can be easily translated to SQL [97,73]. Thus, there are studies [8,57] transforming schema mappings in plain SO tgds into first-order mappings (i.e., tgds and nested tgds). Furthermore, to embrace ontologies in data lakes, one possible path is to apply description logics [20], which can express ontology languages and support query answering and verifying knowledge base satisfiability.
Data provenance
Data provenance (also known as data lineage) refers to information about data records that indicates their origin, usage, status over the whole life cycle, etc. The provenance information tells how a dataset was obtained and helps to make proper use of the datasets. Such information can be extracted during data ingestion and later enriched during the maintenance or exploration phases, possibly with human input.
In [124], a governance tool from IBM is developed which can manage requests for ingesting new data sources or using already ingested datasets in a data lake. Suriarachchi et al. [120] have proposed an abstract architecture that provides integrated provenance (information about activities) across multiple data processing and analytics systems (e.g., Hadoop, Storm, and Spark), as these systems emit provenance events in different standards and store them in various ways. They have also studied a use case in which data from Twitter is collected and processed (e.g., counting hashtags, aggregating data by category) by Apache Flume, Hadoop jobs, and Spark jobs.
CoreDB [10,11] and Juneau [129,68,130] both preserve provenance information as graphs. CoreDB uses descriptive, administrative, and temporal metadata to build DAG-based provenance graphs [12]. In this provenance graph model, nodes represent entities and users/roles; edges represent CRUD (Create, Read, Update and Delete) or querying operations. For example, by querying the provenance graph one can answer the question: who queried this entity, and when? In Sec. 7.1.2 we mentioned that Juneau [130] generates graphs with variables as nodes and connects two variable nodes if they participate in the same function. Given a variable v in the notebook, one can find all other variables affecting v via some functions, and the relationships among these variables and v.
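A condensed sketch in the spirit of such a provenance graph (illustrative only; CoreDB's actual model is richer): timestamped operation edges between users and entities, and a traversal answering who queried an entity and when.

```python
from typing import Dict, List, Tuple

class ProvenanceGraph:
    def __init__(self) -> None:
        # adjacency: user -> list of (operation, entity, timestamp) edges
        self.edges: Dict[str, List[Tuple[str, str, str]]] = {}

    def record(self, user: str, operation: str, entity: str, timestamp: str) -> None:
        self.edges.setdefault(user, []).append((operation, entity, timestamp))

    def who_queried(self, entity: str) -> List[Tuple[str, str]]:
        """Return (user, timestamp) pairs of read/query operations on the given entity."""
        return [(user, ts) for user, ops in self.edges.items()
                for op, ent, ts in ops if ent == entity and op in ("read", "query")]

g = ProvenanceGraph()
g.record("alice", "create", "orders_2021", "2021-03-01T10:00")
g.record("bob", "query", "orders_2021", "2021-03-02T09:30")
print(g.who_queried("orders_2021"))   # [('bob', '2021-03-02T09:30')]
```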
New directions of data lakes
In this section, we list a few directions that are not wellstudied in existing literature, yet indicate promising future for applications and development of data lakes.
Machine learning in data lakes
In recent years, machine learning has opened many new possibilities in information systems. With its large amounts of data, a data lake can be a suitable setting for applying certain machine learning or deep learning models.
ML models have been applied in data lakes to handle the problems of dataset organization and discovery. The related dataset discovery problem requires finding "similar" attributes and datasets among a large number of candidates, typically based on a set of features extracted from the data, metadata, etc. Thus, the problem can be framed as a clustering problem or a classification problem (with category labels). Supervised ML models, i.e., a k-NN classifier and a random forest ensemble, have been applied to assign datasets to categories [6,4]. To optimize feature coefficients, D3L trains a logistic regression model over a training dataset with relatedness ground truth. It will be interesting to see whether more sophisticated solutions w.r.t. ML models, distance metric calculation, and feature selection can further improve the accuracy of related dataset discovery.
Data lakes for data science
One important application of data lakes is data science [91]. If a data lake is used for data science tasks, we call it a scientific data lake. A straightforward way to build a scientific data lake is to apply open-source tools that support data preparation pipelines and ML-related tasks (e.g., model debugging). For instance, the data lake proposal in [90] uses Spark to build a logistic regression model for sentiment analysis and Python libraries (e.g., Matplotlib) for visualization.
A more sophisticated approach is to integrate specific data science tasks into the design of the data lake functions. First, when profiling the datasets, a scientific data lake might need to extract information relevant to the data science tasks [130,16], e.g., notebook variables or the scientific topics of the experimental data. KAYAK [82] divides the process of data preparation into atomic operations and then allows data scientists to optimize the whole data preparation pipeline. Juneau [130] supports training/validation data augmentation, linking data, feature extraction, and data cleaning. For each data science task, the corresponding similarity measures can be assembled from the basic similarity measures discussed in Sec. 7.2. For example, for data augmentation it also tries to find tables that bring in new data instances to improve the information gain; in this way, the accuracy of classification models can be improved.
Furthermore, given a scientific data lake with massive datasets, it is important to efficiently discover relevant data for data analysis [91]. The related dataset discovery systems introduced in Sec. 7.2 and Sec. 8.1 are contributing to this promising direction.
Stream data lakes
With the recent advances of Industry 4.0, smart cities, and e-commerce websites, large volumes of high-velocity stream data need to be selectively stored and analyzed in near real time. Data lakes can store raw data and support heterogeneous data structures and formats, which makes them a prominent technology in these scenarios, even though the concept of data stream management is based precisely on the assumption that there is not enough storage to keep all the data. Therefore, a core challenge in bringing both ideas together in stream data lakes will be to find an optimal storage mix of storing raw data in limited time windows, reducing the data by analytics on the fly, and selectively storing aggregated time series.
In [103], it is proposed to apply data lakes to store, integrate, and collectively analyze vast amounts of heterogeneous data streams generated by sensors and machines in a production environment. Built upon Apache Kafka and Flink, the system supports ingesting stream data into Hadoop and other DBMSs with the support of Kubernetes. In a smart city use case, [90] proposes to apply Apache Flume and Spark for ingesting and processing stream data.
To build a stream data lake, besides applying existing open-source data stream libraries (e.g., Kafka, Flume, Storm, and Spark), another challenge is how to apply existing streaming techniques (e.g., the Lambda architecture) in a data lake setting. For instance, the architecture proposed in [93] replaces the maintenance layer in Fig. 2 with the three parts of a Lambda architecture: its batch layer serves all the data stored in Hadoop in batches and precomputes batch views; the speed layer handles streams ingested by Flume in real time and computes real-time views; and the serving layer integrates the batch views and real-time views such that results can be passed to the exploration layer of the system for data querying and analytics.
Summary and outlook
In the first decade of their existence, data lakes have been receiving increasing interest from both academia and industry. However, as a comprehensive concept, the term "data lake" still carries considerable ambiguity. Therefore, in this survey, we started by introducing the definition of data lakes, and then categorized and discussed existing data lake systems. With the existing data lake technologies grouped by the functions they provide, it becomes easier to analyze the corresponding research questions in depth. Note that even when designing a data lake focused on a specific function, it might still be necessary to complete the workflow with components from other tiers (e.g., data discovery also needs metadata extraction and querying) and to set up the data storage systems.
The concept of "data lakes" is complex and evolving, and not limited to the problems addressed in this survey. Some well-studied problems (e.g., data integration, schema evolution, metadata modeling) need new perspectives and methods in data lakes, while many blank spots (e.g., stream data lakes, integrating data lakes with machine learning and data science) also call for novel solutions.
Multistate model for pharmacometric analyses of overall survival in HER2‐negative breast cancer patients treated with docetaxel
Abstract The aim of this study was to develop a multistate model for overall survival (OS) analysis, based on parametric hazard functions and combined with an investigation of predictors derived from a longitudinal tumor size model on the transition hazards. Different states – stable disease, tumor response, progression, second‐line treatment, and death following docetaxel treatment initiation (stable state) – in patients with HER2‐negative breast cancer (n = 183) were used in model building. Past changes in tumor size prospectively predict the probability of state changes. The hazard of death after progression was lower for subjects who had a longer treatment response (i.e., longer time‐to‐progression). Young age increased the probability of receiving second‐line treatment. The developed multistate model adequately described the transitions between the different states and, jointly, the overall event and survival data. The multistate model allows simultaneous estimation of the transition rates along with their tumor-model-derived metrics. The metrics were evaluated in a prospective manner so as not to cause immortal time bias. Investigation of predictors and characterization of the time to develop response, the duration of response, the progression‐free survival, and the OS can be performed in a single multistate modeling exercise. This modeling approach can be applied to other cancer types and therapies to provide a better understanding of drug efficacy and to characterize the different states, thereby facilitating early clinical interventions to improve anticancer therapy.
INTRODUCTION
Modeling of longitudinal tumor size (TS) data to establish exposure-response-outcome relationships has been increasingly applied to facilitate trial design and go/no-go decision making in oncology clinical trials. 1,2 Time-to-event (TTE) models allow investigation of the association between various covariates and long-term clinical end points, such as progression-free survival (PFS) and overall survival (OS). The developed TS-TTE models have the potential to predict PFS/OS of a similar population (i.e., same indication and end points), for example, utilizing results from phase II and simulating event distributions in phase III trials. In a randomized clinical trial, the efficacy of a new molecule is characterized into different response categories by the response evaluation criteria in solid tumors (RECIST), 3,4 which are based on the change in the sum of longest diameters (SLDs). The TS is typically only recorded until disease progression because, after that, patients receive a different treatment or a sequential line of cancer therapy. The survival data comprise, however, the full duration from enrollment into the clinical trial to the event of death or censoring (end of trial or loss of follow-up). Therefore, the existing TTE modeling approach, in which a single survival function is estimated from the OS data, has problems. The model-predicted tumor size (or biomarker) is typically extrapolated until the OS time during survival analysis, [5][6][7] so that the effect of the sequential therapy is not accounted for. Immortal time bias, originating from a failure to adequately account for time-dependent covariates in the TTE model, can be a major issue. For example, using "depth of tumor response" as a covariate on survival may introduce bias, as a substantial decrease takes considerable time to achieve. 8 Thus, only individuals surviving for a considerable time will have a large decrease in TS. Multistate models could be a way of addressing these issues and describing the hazard over time correctly.
In traditional survival analysis, a single hazard function is applied to the survival data in the presence of competing events, such as death due to non-cancer causes and censoring. This can lead to a biased estimation of the hazard. Moreover, the intermediate events prior to the OS time might carry accompanying information on disease status and the hazard of death; for example, the RECIST assessment of progressive disease following stable disease/partial response may indicate an increase in the risk of death. Multistate models have been recommended and are increasingly used for such data. [9][10][11] For the analysis of survival data, Beyer et al. 12 developed a multistate model in which the transition hazards of intermediate events were modeled using semiparametric models with the treatment arm as a binary covariate.
WHAT QUESTION DID THIS STUDY ADDRESS?
The intermediate events prior to the OS time might carry accompanying information on disease status and the hazard of death. A multistate model could be a way of characterizing the intermediate events, evaluating predictors that are specific to each transition, and jointly describing the survival data. Different states – stable disease, tumor response, progression, second-line treatment, and death following docetaxel treatment – in patients with HER2-negative breast cancer were used in model building.
WHAT DOES THIS STUDY ADD TO OUR KNOWLEDGE?
The developed multistate model, operated by parametric hazard functions, estimates the transition hazards of intermediate events and allows investigation of predictors derived from a longitudinal tumor size model. Past changes in tumor size prospectively predict the probability of state changes; the hazard of death after progression was lower for subjects who had a longer time-to-progression. The developed multistate model adequately described the transitions between the different states and jointly described the survival data.
HOW MIGHT THIS CHANGE DRUG DISCOVERY, DEVELOPMENT, AND/OR THERAPEUTICS?
Investigation of predictors and characterization of the time to develop response, the duration of response, the progression-free survival, and the OS can be performed in a single multistate modeling exercise. This modeling approach can be applied to other cancer types and therapies to provide a better understanding of efficacy of drug and characterizing different states, thereby facilitating early clinical interventions to improve anticancer therapy. of multistate models in a nonlinear mixed effect (NLME) modeling framework would allow NLME-derived covariate evaluation. 13 NLME implementation could also allow for random effects 14 or mixture models to be incorporated into the description of the data. This investigation gives an example of the latter. In an NLME framework, the tumor model derived predictors can be evaluated to be transition-dependent, and could more reliably predict different states (for example, time to response and duration of response) and survival. The aim of this study was to develop a multistate model operated by parametric hazard functions using data from docetaxel treated patients with HER2-negative breast cancer, while allowing investigation of predictors derived from the longitudinal TS model on the transition hazards.
Data
The tumor data (SLD) were available from the docetaxel (control) arm of the phase III AVADO trial, in which the efficacy and safety of combining bevacizumab with docetaxel were investigated in patients with HER2-negative metastatic breast cancer (ClinicalTrials.gov Identifier: NCT00333775). 15 In the docetaxel arm (n = 241), three patients did not receive therapy, 21 patients either received one dose of bevacizumab (n = 7) or started bevacizumab before disease progression (n = 14), and 34 patients did not have a measurable target lesion at baseline. 15 Therefore, these 58 patients were not included in the multistate modeling; hence, the study data consisted of 183 patients with HER2-negative metastatic breast cancer. The subjects were women with a median age of 55 years (range 29-83 years). Patients received docetaxel 100 mg/m2 infused over 1 h on day 1 of each 3-week cycle. The SLD was evaluated from computed tomography scans every 9 weeks during the first 36 weeks and thereafter every 12 weeks; the median follow-up was 32 weeks (range 6-160 weeks). The TS response was evaluated according to RECIST version 1.0 3 (i.e., up to 10 lesions/patient were followed during the trial). Because individual lesion and metastatic organ data were available, the tumor SLD was re-created as per the RECIST version 1.1 criteria, 4 which consider measurements of up to five lesions/individual but not more than two lesions/organ. The AVADO trial was conducted according to the Declaration of Helsinki, the Good Clinical Practice guidelines of the International Conference on Harmonization, and the laws and regulations of the countries involved. The protocol was approved by local ethics committees and written informed consent was obtained from all patients before screening.
Tumor model
A tumor growth inhibition (TGI) model was applied to describe the change in SLD over time. 16 In this model, the tumor was best described as growing exponentially with a first-order rate constant (k GROW ). The tumor size shrinkage during treatment was explained by drug exposure, the drug-specific cell kill rate constant (k SHR ), and the emergence of resistance to the treatment (LAMBDA) (Equation 1). As docetaxel concentrations were not available, a population K-pharmacodynamic (K-PD) modeling 17 approach (Equation 2), where the K-PD parameter represents the elimination rate constant of the K-PD model, was used along with the TGI model to describe the docetaxel exposure over time (docetaxel(t)). Interindividual variability was tested on all parameters. In Equations 1 and 2, IBASE is the model-estimated baseline SLD for individual i; TS(t) the tumor time course; DOSE the docetaxel dose; k GROW the tumor growth rate constant; k SHR the cell kill rate constant; LAMBDA the resistance parameter; docetaxel(t) the docetaxel exposure over time; and K-PD the elimination rate constant.
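Equations 1 and 2 themselves are not reproduced above. A plausible reconstruction, based on the cited Claret-type TGI model 16 and the K-PD approach, 17 and therefore to be read as an assumed form rather than the authors' exact parametrization, is:

```latex
% Assumed form of the TGI model (Equation 1), with resistance entering as an
% exponential decline of the drug-induced shrinkage over time
\frac{d\,TS(t)}{dt} \;=\; k_{GROW}\cdot TS(t)\;-\;k_{SHR}\cdot e^{-\mathrm{LAMBDA}\cdot t}\cdot \mathrm{docetaxel}(t)\cdot TS(t),
\qquad TS(0)=\mathrm{IBASE}

% Assumed K-PD model (Equation 2): each DOSE enters a virtual compartment A(t) that empties
% with first-order rate constant K-PD, and the exposure driving Equation 1 is the elimination flux
\frac{d\,A(t)}{dt} \;=\; -\,K\mbox{-}PD\cdot A(t),
\qquad \mathrm{docetaxel}(t) \;=\; K\mbox{-}PD\cdot A(t)
```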
Multistate model
Depending on the patient-level tumor response and OS event data, subjects could move among five different states. The states considered were stable disease (S1, the state at time = 0), tumor response (S2, >= 30% decrease in SLD from baseline), progressive disease (S3, >= 20% increase in SLD from tumor nadir, or appearance of new lesions, or progression of nontarget lesions), initiation of second-line treatment (S4), and death (S5). At baseline, all individuals were assigned to the stable disease state, and during the study and the follow-up period after progression they could transfer to the other states, as shown in Figure 1. In contrast to the RECIST response evaluation, in the multistate model, once a tumor response (>= 30% decrease in SLD from baseline, stable → response) was observed, the subject could not move back to the stable state (response → stable), even if it was later observed that the % decrease in SLD from baseline was less than 30%, or the % increase in SLD from baseline was less than 20%. A multistate model, 13 in which the transition rates (λij) between each state were estimated, was developed to describe the observed events (Equations 3-7). The transition intensities (λ) were evaluated with different hazard distributions (exponential and Weibull), selected based on the likelihood ratio test.
where Si and Sj denote the different states, λij the transition intensities between state i (Si) and state j (Sj), and Si,0 and Sj,0 the initial conditions for the states.
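Equations 3-7 are likewise not reproduced here. As a generic sketch only (the exact state equations of the model may differ), the occupancy probability of each state evolves with the transition intensities according to:

```latex
% Generic state-occupancy equations of a multistate model (sketch of Equations 3-7):
% probability flows into state j from its predecessor states i and out of j to its successor states k
\frac{d\,P_{S_j}(t)}{dt} \;=\; \sum_{i \neq j} \lambda_{ij}(t)\, P_{S_i}(t) \;-\; \sum_{k \neq j} \lambda_{jk}(t)\, P_{S_j}(t),
\qquad P_{S_1}(0)=1,\quad P_{S_j}(0)=0 \;\; (j=2,\dots,5)
```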
The hazard of death from second-line treatment (λ45) was set equal to the hazard of death from progression (λ45 = λ35) if it was not statistically different from λ35. A mixture model with two subpopulations was evaluated on λ34, where the first subpopulation received second-line treatment after disease progression and the second subpopulation did not receive second-line therapy (i.e., λ34 = 0). The investigated predictors on the transition rates included baseline variables: age, Eastern Cooperative Oncology Group (ECOG) score at enrollment, TS, total number of lesions, and number of metastatic sites involved, as well as post-baseline model-based tumor dynamic estimates: relative change in SLD from baseline to the present SLD, relative change in SLD between the two previous measurements (dSLD), relative change in SLD from tumor nadir (defined as the lowest SLD up until the present time point) to the present SLD, tumor growth rate k GROW , and rate of appearance of resistance (LAMBDA). Both past-observed and model-predicted tumor dynamic metrics at the time of transition were evaluated as predictors of the transition rates. Additionally, the reason(s) for disease progression (≥20% increase in SLD from tumor nadir, appearance of new lesions, or progression of nontarget lesions), the number of new lesions, and the time to progression were investigated on λ35. The predictors were investigated using a proportional hazards model with a baseline transition rate λij; for example, the effect of a predictor X on the transition rate from stable to response (λ12) for individual i would be λ12,i = λ12 · exp(βX · (Xi − Xmedian)), where Xi is the value of X for individual i; Xmedian, the population median value of X; βX, the coefficient of the effect of X on λ12; and exp(βX), the hazard ratio associated with covariate X.
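A small numeric sketch of this proportional-hazards covariate scaling follows. The parameterization (centering on the population median) is the one described above; the baseline rate and the coefficient value, chosen to be roughly consistent with the ~90% increase in λ12 per 10% reduction in SLD from baseline reported in the results, are assumptions for illustration.

```python
import numpy as np

def transition_rate(lambda_base, beta_x, x_i, x_median):
    """Proportional-hazards covariate effect: scale the baseline rate by exp(beta_x * (x_i - x_median))."""
    return lambda_base * np.exp(beta_x * (x_i - x_median))

# Hypothetical numbers: relSLD of -0.30 vs a median of -0.20 (a 10% larger reduction from baseline)
# with beta_x = -6.4 gives roughly a 1.9-fold (≈ +90%) increase over the assumed baseline rate of 0.03/week.
print(transition_rate(lambda_base=0.03, beta_x=-6.4, x_i=-0.30, x_median=-0.20))
```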
Figure 1. The multistate model describing different states in patients with HER2-negative breast cancer treated with docetaxel. λij represents the transition intensity between each state, and the n shown along with λij is the number of observed transitions from state i to state j. The n shown along with the different states is the number of clinical outcomes at the end of the study. The metric in the dotted box indicates the associated predictor of the transition intensity in the final multistate model. relSLD, relative change from baseline; dSLD, change in SLD between the previous two measurements; TTP, time to progression; Age, age in years.

In traditional survival analysis, the predictors have most often been evaluated sequentially, but in some cases as a joint model. 18 The metrics derived from a tumor model, for example, tumor growth rate (k GROW ) or time-to-regrowth (TTG), have been computed based on all collected tumor data. In a sequential or joint model, the computed k GROW or TTG information is treated as a baseline covariate (i.e., as if it were available at time 0). This does not recognize that the data relevant for its estimation are largely obtained after the start of therapy. Whereas joint models with estimated time-varying predictors (e.g., the tumor time course [SLDt]) to some extent account for immortal time bias, they are typically estimated based on all the tumor data, including future tumor data. Thus, tumor-OS joint modeling does not completely eliminate such bias. 19 In the current analysis, the post-baseline time-varying predictors were investigated in such a way that future tumor observations would not influence the present predictions of the transition rates. Thus, in contrast to the standard use of population pharmacokinetic-pharmacodynamic (PKPD) models, where all data contribute to defining the individual parameters, only observations up to time t were used to make predictions beyond time t for each individual parameter. As a consequence, the tumor dynamic model parameters for an individual change over time as more observations become available. The derivation of the model-predicted metrics on the fly using a joint tumor-multistate model, or using the PPP&D approach, 18 could not be applied here, as those estimates are based on all available data. The proseval tool from PsN 20 was used for deriving the tumor dynamic model parameters (k GROW , LAMBDA, and model-predicted tumor change) with a successively increasing number of tumor measurements, and these metrics were made available to the multistate model through the input dataset.
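The sketch below mimics, in a simplified way, this proseval-style rolling derivation: at each visit, tumor metrics are recomputed from the observations collected so far, so future measurements never inform the current value. The crude slope-based k GROW proxy is an assumption for illustration and stands in for the NONMEM re-estimation actually performed.

```python
import numpy as np

def rolling_metrics(times, slds):
    """For each visit, derive tumor metrics from data observed up to that visit only
    (a stand-in for refitting the tumor model with an increasing number of measurements)."""
    out = []
    for n in range(1, len(times) + 1):
        t, y = np.asarray(times[:n], float), np.asarray(slds[:n], float)
        rel_from_base = (y[-1] - y[0]) / y[0]                              # relative change from baseline
        growth = np.polyfit(t, np.log(y), 1)[0] if n > 1 else np.nan      # naive exponential-rate proxy
        out.append({"time": t[-1], "relSLD": round(rel_from_base, 3),
                    "kgrow_proxy": round(float(growth), 4) if n > 1 else None})
    return out

# Hypothetical visit schedule (weeks) and SLD values (mm)
print(rolling_metrics([0, 9, 18, 27], [56, 40, 38, 45]))
```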
Model development and evaluation
Population models were developed using the nonlinear mixed-effects modeling software NONMEM (version 7.4.4). 21 Model development was assisted by Pirana (version 2.9.9) for run management, the Perl-speaks-NONMEM (PsN) toolkit for handling NONMEM run commands, R (version 3.6), and Xpose (version 4.1) for model diagnostics and graphical analysis. 22 The objective function value (OFV; −2 log-likelihood) and graphical diagnostics were used in the evaluation of model performance. A randomization test (randtest tool in PsN 20 ) was performed to determine actual significance levels, and an OFV decrease of 5.17 (p < 0.05) was considered significant for the addition of one parameter (1 degree of freedom) when testing predictors in the multistate model. An increase in OFV of 18.9 (p < 0.001) was used when testing λ45 = λ35 (a decrease of 1 degree of freedom). Parameter uncertainties were derived using the sampling importance resampling (SIR) 23 tool in PsN (tumor model) or the R matrix (multistate model). Visual predictive checks (VPCs) for the tumor model and Kaplan-Meier VPCs for the multistate model were used to evaluate the predictive performance of the models. In the tumor model VPC, simulated tumor data with a greater than 20% increase in SLD from tumor nadir together with at least a 5 mm absolute increase in SLD were censored (RECIST progressive disease based on target lesions 4 ). The final multistate model was evaluated using the case deletion diagnostics (CDD) tool in PsN 20 to identify any potentially influential individuals for the estimated parameters/covariate effects.
To investigate whether the final multistate model can be applied prospectively to predict the hazard of death over time for each individual, the "individual dynamic prediction" methods suggested by Desmée et al. 24 were used. The individual dynamic prediction metrics include the time-dependent Brier score (BS; Equation 9) and the time-dependent area under the receiver operating characteristic (ROC) curve (AUC; Equation 10). 24,25 The methods proposed by Desmée et al. 24 for assessing dynamic predictions and calculating BS and AUC were here applied to the tumor-multistate model implemented in NONMEM. The individuals' data until the landmark time (s) were used to derive the a posteriori distribution, from which 200 samples were drawn to compute the predicted hazard of death for each individual in the prediction window (t). In NONMEM, the SAEM estimation method along with the ETASAMPLES argument was used to obtain the 200 samples from the conditional distribution. 25 The landmark times (s) considered were 0, 3, 6, 9, 12, and 18 months, with prediction windows (t) until 36 months.
where, for subject i, the model-predicted probability of death in the interval s to s + t given survival to time s is used; BS(s, t) is the Brier score based on the final multistate model; and BS no link (s, t), the Brier score based on the base model without any covariates.
The time-dependent AUC was calculated using the timeROC R package and the BS function (R script) by Blanche et al. 26 To account for censoring bias, the inverse probability of censoring weighting approach [27][28][29] was applied in both the BS and AUC calculations. Because the number of events and the number of subjects alive at each landmark time differ, the scaled BS (sBS; Equation 11) was used to compare different landmark times. 30 The sBS quantifies the relative improvement of the final multistate model over the base model in predicting individuals' hazard of death over time, whereas the AUC shows how well the final model distinguishes patients at low and high risk of death.
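For orientation, a minimal sketch of the BS and sBS calculations is given below; it assumes the conventional definitions (mean squared error of the predicted death probability, and one minus the ratio of the model BS to the base-model BS) and omits the inverse-probability-of-censoring weights applied in the actual analysis.

```python
import numpy as np

def brier(pred_death, observed_death):
    """Time-dependent Brier score at (s, t): mean squared difference between the predicted
    probability of death in (s, s+t] and the observed outcome (censoring weights omitted)."""
    pred = np.asarray(pred_death, float)
    obs = np.asarray(observed_death, float)
    return float(np.mean((obs - pred) ** 2))

def scaled_brier(bs_model, bs_base):
    """Scaled Brier score: relative improvement of the covariate model over the base model."""
    return 1.0 - bs_model / bs_base

# Hypothetical predictions for three subjects still at risk at the landmark time
bs_full = brier([0.2, 0.7, 0.1], [0, 1, 0])
bs_base = brier([0.4, 0.4, 0.4], [0, 1, 0])
print(round(bs_full, 3), round(scaled_brier(bs_full, bs_base), 3))
```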
Tumor model
The tumor data consisted of 903 observations, and the median SLD at the time of enrollment (SLD 0 ) was 56 mm (range, 10-221). OS data were collected for a median of 108 weeks (range, 12-160 weeks) after the start of docetaxel treatment. Ninety-two patients (51%) had a death event, and the median time to death was 50 weeks. The main characteristics of the study population are summarized in Table 1. The TGI model described the longitudinal SLD adequately, and the parameter estimates of the final tumor model are provided in Table 2. The typical k GROW was estimated as 0.00576 week −1 (i.e., a tumor doubling time of ~2.3 years [doubling time = ln(2)/k GROW ]). The interindividual variability (IIV) was significant on all parameters, and k GROW was associated with a large IIV (126% coefficient of variation [CV]). The predictive performance was adequate based on the VPC diagnostics ( Figure S1), and the parameter uncertainties were less than 48% relative standard error ( Table 2). The case deletion diagnostics did not identify any individuals with undue influence on the parameter estimates ( Figure S2a).
Multistate model
The multistate data consisted of 961 observations, comprising 720 post-baseline tumor measurements, 58 second-line treatment events, and 92 death/91 censoring events. There were 432 transitions between states (Figure 1). A multistate model governed by parametric hazard functions successfully characterized the different events in patients with HER2-negative breast cancer treated with docetaxel. In the final model, the transition hazard λ12 (stable to response state) decreased with time, indicating that the probability of observing the response state diminished over time. The past observed relative change from baseline SLD was predictive of λ12; every 10% reduction in SLD from baseline increased the transition rate by 90%. The longer a patient stayed in the stable state, the higher the hazard of progression (λ13) became (i.e., λ13 increased with time). No covariates were significant in predicting λ13. A constant hazard function described the transition from tumor response to progressive disease (λ23), and every 10% increase in the past observed SLD between the two previous measurements increased the hazard by 15%. A mixture model with two subpopulations best described the transition from progressive disease to second-line treatment (λ34), where 45% of disease progressors (pop-1) received second-line treatment within 6 weeks after disease progression and the remaining 55% (pop-2) did not receive second-line treatment. Age was a significant predictor of the probability of receiving second-line treatment (i.e., younger patients had a higher chance of receiving second-line therapy after disease progression than older patients).
Table 1. Main characteristics of the study population (median and range).
The hazard of death from the second-line treatment state was similar to the hazard of death from the progression state, so the parameter could be shared between the two transitions (λ45 = λ35) and was best described using a constant hazard function. The hazard of death was lower for subjects who had a longer duration of treatment response (i.e., a longer time to progression). The baseline covariates and tumor model parameters (k GROW or LAMBDA) were not predictive of any transition rate. The model-predicted tumor dynamics did not retain significance once the observed SLD changes at the previous tumor measurement were included in the model. The hazards of death from stable disease (λ15) and tumor response (λ25) were estimated to be close to 0, and they were therefore fixed to a Gompertz-Makeham distribution 31 to allow for a hazard no lower than the expected age-specific hazard.
The final parameters and their uncertainties are given in Table 3. The VPCs showed good predictive performance of the final model (proportions in different states, Figure 2, and Kaplan-Meier VPC, Figure S3). The case deletion diagnostics demonstrated no influential individuals driving the parameter estimates or the estimated covariate effects ( Figure S2b).
The sBS (Equation 11) showed that the final multistate model improved the accuracy of the individual predictions of the hazard of death by 5-26% (Table S1). For early predictions, a landmark time of s = 3 months was useful, and the BS was 0.19 (sBS = 0.29, ~30% improvement compared with the base model), but with a smaller discrimination value (AUC = 0.19). The landmark time of s = 6 months provided the best overall score across all prediction windows when both the BS (sBS ~0.2) and the AUC metric (~0.25) were considered. The sBS for the landmark time s = 0 was 0, indicating that with only baseline information the base model and the full model perform similarly. The AUC values for s = 9 months were slightly better than the AUC for s = 6 months; however, the sBS was less than 0.15. The AUC improved (0.27 to 0.68) with longer landmark times, whereas the sBS was only marginally affected by landmark times greater than 6 months.
DISCUSSION
Herein, a multistate model was developed to characterize the different intermediate events according to RECIST response status while jointly describing the survival data. The developed multistate model allows for simultaneous estimation of the transition rates along with their tumor-model-derived predictors and the effects of those predictors on the transition rates. Changes in tumor SLD were predictive of treatment response and progression, whereas the duration of response, or time to progression, was predictive of the hazard of death after progression.
Various metrics derived from longitudinal models (e.g., tumor or biomarker models) have been identified as predictors of OS. 2,[5][6][7]16,30,[32][33][34] In contrast to traditional tumor-OS analyses, the multistate model framework is a flexible way of describing the hazard of death. In the current analysis, a large decrease in the relative change from baseline and an increase in tumor size relative to the previous two measurements were associated with a high transition rate for λ12 and λ23, respectively. The metrics were evaluated in a prospective manner so that there would be no immortal time bias.
The hazard of death after progression was higher for patients who had early disease progression (shorter time to progression). Time to progression is similar to a frequently identified predictor of OS, time to regrowth (TTG). 2,16,[35][36][37][38] Time to regrowth is the time to reach the tumor nadir and is calculated from the model parameters. Time to progression is defined as a ≥20% increase in SLD from the tumor nadir, the identification of new lesions, or an increase in nontarget lesions, and can therefore differ from the model-derived TTG. The hazard of death from second-line treatment and from progression was not statistically different. However, depending on the cancer type and treatments, the hazard could differ between patients who received second-line treatment and patients who did not. A multistate model could be used to investigate the hazard of death associated with second-line treatment and to simulate the OS associated with the primary therapies (clinical trial regimens) without the confounding effect of second-line treatment, providing a fair comparison of control versus treatment groups. The multistate model can provide more detailed information than a traditional TS-OS analysis. This single framework allows the investigation of predictors and the characterization of time to tumor response, duration of response, PFS, and OS ( Figure S3). From the multistate model developed herein for docetaxel therapy in patients with HER2-negative metastatic breast cancer, the median time to docetaxel response was 12 weeks (Figure S3a; i.e., after 4 cycles of docetaxel therapy). The median duration of response (time in the response state) was 26 weeks, which can be interpreted as the median time to develop resistance to therapy and tumor regrowth ( Figure S3b).
Traditionally, tumor shrinkage (response) or growth is evaluated as an early marker of the efficacy of an anticancer therapy. In clinical trials involving newer therapies, such as immunotherapies and regimens combining multiple molecules, identifying the proportion of responders at an early stage can be more challenging than in a clinical trial of cytotoxic therapy. 39 Moreover, when patients are allowed to switch to secondary therapies, regardless of whether the secondary therapies consist of drugs from the same class 40 or a chemotherapy cocktail, traditional survival analysis becomes less able to identify any treatment benefit of the new treatment. Beyer et al. 12 demonstrated the application of multistate models in oncology trials. Furthermore, the multistate model can be used to predict the probability of intermediate events (states) for a future clinical trial population. In drug development, forecasts of a study population's treatment response trajectory and predictions of the time to disease progression/OS are very valuable information that can help in the optimization of the trial.
Multistate models can also be used to optimize treatment at an individual level. The BSs showed that data collected up to 3 months are enough to obtain good individual predictions up to 9 months (s = 3 months, t = 6 months), and if data up to 6 months (s = 6 months) are used in the multistate model, individuals' hazards of death were predicted accurately for the first 2 years. The forecasts from the multistate model using varying amounts of follow-up data for a representative individual from the study population are given in Figure 4. The multistate model forecasts that at the first scheduled post-baseline measurement (12 weeks), there is an ~40% probability of observing the response state (i.e., ≥30% decrease in tumor size from baseline; Figure 4a). The patient had an initial treatment effect (Figure 4b); however, at the second visit (at 24 weeks), the decrease in tumor size was not as large as that observed at the first visit (at 12 weeks), and the multistate model predicts around a 30% risk of disease progression at the next scheduled visit, at week 36 ( Figure 4c). After the assessment of progressive disease and the initiation of second-line therapy, the increase in the hazard of death with time was forecasted reasonably by the model framework (Figure 4d-f). The prediction of the duration of treatment benefit and the early identification of patients at higher risk of disease progression would guide early clinical interventions to enhance benefits for patients.
Of the 58 excluded subjects, 34 patients had no measurable target lesion at baseline but received docetaxel until disease progression, whereas the remaining 24 patients either did not receive therapy or received bevacizumab before disease progression. These 24 patients were not considered for inclusion in the analysis because of treatment crossover. When the 34 patients who received docetaxel until disease progression were included in the analysis, they had a longer "stable state" before disease progression compared with the other patients, and the uncertainty of λ12 and λ13 increased. Moreover, these individuals could not be included in the tumor modeling and would have to be excluded when investigating tumor-model-derived metrics as predictors of the transition intensities; thus, they were not included in the final analysis. There were five patients who had stable disease after an observed response state (<30% decrease from baseline and <20% increase from nadir). In the multistate model, these patients were allowed to continue in the response state under the assumption that they have the same risk of death and progression as patients transitioning from response to progression/death. Moreover, this assumption does not influence the PFS and OS derived from the multistate model.
CONCLUSION
The developed multistate model adequately described the transitions between the different possible states in patients with HER2-negative metastatic breast cancer treated with docetaxel. The model jointly characterized the overall outcome events in the data, including both PFS and OS. The multistate model allows for simultaneous estimation of the transition rates along with their tumor-model-derived predictors. The investigation of predictors and the characterization of time to response, duration of response, PFS, and OS can be performed in a single multistate modeling exercise. This modeling approach can be applied to other cancer types and therapies to provide a better understanding of drug efficacy and to characterize the different states, thereby facilitating early clinical interventions to improve anticancer therapy.
ACKNOWLEDGMENTS
This work was supported by the Swedish Cancer Society and Genentech Inc., San Francisco, CA.
Personal experience and awareness of opioid overdose occurrence among peers and willingness to administer naloxone in South Africa: findings from a three-city pilot survey of homeless people who use drugs
Background Drug overdoses occur when the amount of drug or combination of drugs consumed is toxic and negatively affects physiological functioning. Opioid overdoses are responsible for the majority of overdose deaths worldwide. Naloxone is a safe, fast-acting opioid antagonist that can reverse an opioid overdose, and as such, it should be a critical component of community-based responses to opioid overdose. However, the burden of drug overdose deaths remains unquantified in South Africa, and both knowledge about and access to naloxone is generally poor. The objective of this study was to describe the experiences of overdose, knowledge of responses to overdose events, and willingness to call emergency medical services in response to overdose among people who use drugs in Cape Town, Durban, and Pretoria (South Africa). Methods We used convenience sampling to select people who use drugs accessing harm reduction services for this cross-sectional survey from March to July 2019. Participants completed an interviewer-administered survey, assessing selected socio-demographic characteristics, experiences of overdose among respondents and their peers, knowledge about naloxone and comfort in different overdose responses. Data, collected on paper-based tools, were analysed using descriptive statistics and categorised by city. Results Sixty-six participants participated in the study. The median age was 31, and most (77%) of the respondents were male. Forty-one per cent of the respondents were homeless. Heroin was the most commonly used drug (79%), and 82% of participants used drugs daily. Overall, 38% (25/66) reported overdosing in the past year. Most (76%, 50/66) knew at least one person who had ever experienced an overdose, and a total of 106 overdose events in peers were reported. Most participants (64%, 42/66) had not heard of naloxone, but once described to them, 73% (48/66) felt comfortable to carry it. More than two-thirds (68%, 45/66) felt they would phone for medical assistance if they witnessed an overdose. Conclusion Drug overdose was common among participants in these cities. Without interventions, high overdose-related morbidity and mortality is likely to occur in these contexts. Increased awareness of actions to undertake in response to an overdose (calling for medical assistance, using naloxone) and access to naloxone are urgently required in these cities. Additional data are needed to better understand the nature of overdose in South Africa to inform policy and responses.
Background
Globally, there are an estimated 15 million people who inject drugs (PWID) [1], with 20.5% having experienced a non-fatal drug overdose event in the last 12 months. Drug overdose is responsible for substantial mortality among people who use drugs, with an estimated 109,500 people dying from opioid overdose in 2017 [2]. However, weak vital registration systems and limited surveillance systems limit the understanding of the prevalence and consequences of opioid overdose [3].
Naloxone is a semisynthetic competitive opioid antagonist with a high affinity for the μ opioid receptor and can reverse an opioid overdose. Further, naloxone acts within seconds and has a short duration of effect; it has an elimination half-life of 60-90 min [4].
The World Health Organisation (WHO) recommends naloxone for the emergency treatment of known or suspected opioid overdose with respiratory or central nervous system depression [5] and recommends naloxone distribution programmes in community settings as they reduce deaths related to opioid overdose and save costs [5]. Overdose education and training of people at risk of opioid overdose and among those likely to come into contact with people experiencing an overdose is the foundation for naloxone distribution programmes [5]. The administration of naloxone by people who are likely to be present at an overdose (e.g. peers, police, outreach workers) who could respond before medical professionals arrive increases the chances of survival [6].
South African context
The use of drugs (other than cannabis) is illegal in South Africa [7]. There are limited data on the prevalence of heroin use and overdose. Most of the country's estimated 75,000 PWID [8] use heroin (known locally as nyaope, whoonga, unga, sugars and thai) [9], and many of them are homeless [10]. Further, evidence shows that the use of heroin and the prevalence of injecting drug use are on the rise [11], increasing the risk of opioid-related overdoses.
The accuracy of South Africa's vital registration system is negatively affected by data limitations, including missing data on cause of death [12]. Opioid and other drug-related deaths are categorised under accidental poisoning by and exposure to a noxious substance [12]. The latest published data (2017) reflect 720 deaths due to accidental poisoning, which equates to 1.4% of deaths due to external causes of accidental injury [12]. Overdose is not routinely monitored in the drug-related surveillance data collected from drug treatment and harm reduction services via the South African Community Epidemiology Network on Drug Use [11].
Naloxone is registered and available as a hydrochloride solution in an ampule, typically 0.4 mg per 1 ml injection. South Africa's Essential Medicines List includes naloxone for the management of opioid poisoning across all levels of care [13]. Harm reduction services operate in several cities, with opioid substitution therapy programmes operational in four cities, and needle and syringe services available in an additional five cities [14]. To date, no community-based naloxone distribution programmes have taken place [15].
We performed a pilot study among people who access harm reduction services in three South African cities to provide initial insights into: (1) occurrence of opioid overdose among study participants and their peers, (2) levels of comfort in calling for emergency medical assistance in the case of an overdose, (3) knowledge of how to respond to an opioid overdose and (4) comfort in carrying naloxone. The study intended to provide preliminary data on overdose and to inform future research.
Methods
We conducted a cross-sectional survey in Durban, Cape Town, and Pretoria between March and July 2019.
Study settings
The three cities were selected based on the availability of harm reduction services provided as well as the need identified by service providers for insight into frequency of drug overdose. At the time of the survey, outreach programmes took services to homeless people who use drugs. The teams distributed sterile injecting equipment (except in Durban where a municipal ban on the service was in place between May 2018 and June 2020), provided advice on harm reduction and safer injecting, and made referrals for psychosocial services. Opioid substitution therapy was available to a group of ± 30 people who injected opioids in Cape Town and ± 1000 people who use opioids in Pretoria.
Eligibility criteria and sampling
We invited individuals, aged 18 years and above, who used drugs and accessed the harm reduction service in the respective city to participate. Participants were conveniently sampled. Participants were approached by outreach team members during routine outreach visits and asked to participate.
Measures
The survey tool gathered data on the following areas: (1) socio-demographics (age, gender, race, housing), (2) drug use history (current, past, duration, and frequency of use), (3) personal history of drug-related overdose in the past year (frequency, location, drugs used, presence of others and assistance received), (4) awareness of opioid overdose among peers, (5) awareness of and likelihood of using naloxone, and (6) likely action taken in response to an overdose situation.
An opioid overdose was described to participants, including the most likely signs and symptoms (slowed breathing, slurred speech, changes in heart rate, unresponsiveness, and blue lips and fingertips). Participants were instructed to report overdoses in which opioids were known or thought to be the cause. Since an opioid overdose can be the result of multiple substances, participants were able to list non-opioid drugs that were thought to be involved in the reported overdose.
Regarding "likelihood of using naloxone", participants were given a brief summary explaining the role of naloxone in reversing an opioid overdose. It was explained to participants that in South Africa, naloxone is primarily used by emergency medical personnel to reverse an opioid overdose. Participants were asked whether they would be interested in carrying and administering naloxone in the event of an opioid overdose should future policies permit peers access to naloxone. Participants could answer "Yes", "No", or "Not sure".
Regarding "likely action taken", participants were asked what they would do to respond to a suspected opioid overdose where they were present. This question was open-ended, allowing respondents to share their response with the surveyor.
Data collection
The study was integrated into routine outreach activities and trained outreach team members from TB HIV CARE and COSUP administered the tool in the field. The survey was administered in English and took 15-20 min to complete which included providing information about the study and obtaining verbal consent to participate.
Data analysis
Data from the paper-based survey tool were entered manually into a Microsoft Excel database and analysed using descriptive statistics (frequencies and proportions). Data were stratified by city.
Results
Overdose experiences are presented in Table 2. Over one third of participants (38%, 25/66) had experienced an overdose in the past year, with a median of one overdose in the past year (IQR 1-2). The largest proportion of participants reporting an overdose in the past year was from Pretoria (63%, 10/16), and the smallest proportion was from Cape Town (14%, 3/21). Almost half (48%, 12/25) of those who had experienced an overdose were "outside" when it occurred. Most (83%, 15/18) of those reporting the type of drug related to the overdose reported using heroin, and two others reported using a heroin mixture. Almost a third (28%, 7/25) of participants did not report the kind of drug used before the overdose event. Nearly two-thirds of participants who experienced an overdose were with someone at the time of their overdose; however, 72% (18/25) did not receive help during their overdose. Seventy-six per cent (50/66) of participants knew at least one person who had experienced an overdose in the past year. These participants reported a total of 106 overdose experiences among their peers. The distribution of these reported overdoses was similar across the three survey locations, with 46 (43%), 31 (29%), and 29 (27%) in Durban, Cape Town, and Pretoria, respectively. Table 3 provides knowledge around and responses to overdose. The majority of participants (64%, 42/66) had not heard of naloxone, but 73% (48/66) indicated that they would carry naloxone after being informed about what it is. Pretoria had the highest proportion of respondents having heard of naloxone previously (56%), whereas Durban had the lowest proportion (14%). Forty per cent (10/25) of participants who overdosed in the past year had heard of naloxone.
Regarding responses to overdose events, more than two-thirds (68%, 45/66) would phone for medical assistance and a fifth (13/66) would administer help to someone experiencing an overdose. The remainder (12%, 8/66) either did not respond to the question or did not know what they would do in the event of an overdose. Similarly, the majority of participants (74%, 49/66) were comfortable calling for help in the event of an overdose. Over half (56%, 37/66) would call for professional medical assistance in the event of an overdose.
Discussion
We present, to our knowledge, the first published data on drug overdose among people who use drugs accessing harm reduction services in selected cities in South Africa. The findings from this small pilot study highlight the likely burden of overdose. The study points to probable significant (and largely undocumented) overdose-related morbidity and mortality among marginalised people who use drugs in South Africa. This initial study highlights the critical need to better quantify and understand overdoses to inform policy and programming. The characteristics of this sample are similar to those of people who access harm reduction services at organisations that are part of the South African Community Epidemiology Network on Drug Use [14].
This study found that opioid overdoses are occurring within the population of people who use drugs in the selected cities. Over a third of the participants experienced an overdose in the last year. Over two-thirds were aware of overdoses occurring among their peers, presenting an important opportunity to train peers in harm reduction principles, including recognising and responding to an opioid overdose. Our pilot study has also demonstrated the willingness of people who use drugs who access harm reduction services to report personal experiences of drug overdose. The ability of this approach to obtain information on overdose experiences among peers suggests that the community of people who use drugs share their experiences among themselves. The sense of community is important because peers are often the first ones that can phone for medical assistance in the case of an overdose. In many settings globally, peers play a critical role in saving lives by reversing potentially lethal overdoses [16]. Several known risk factors associated with fatal drug overdose were reported. These risk factors include homelessness, daily injection of heroin and polysubstance use [17]. This information indicates a significant opportunity to direct increased efforts to reduce the risk of morbidity and mortality among this group of individuals. Further risk reduction can also be achieved through increased access to opioid substitution maintenance therapy [5]. Further, an overdose can be mitigated through community awareness and training, and the wide distribution of naloxone [5].
While the sample is small and comprised mostly of homeless people, there was heterogeneity in the responses. Caution should be exercised in generalising from these participants. However, despite the small sample, the majority (64%) had never heard of naloxone, yet most respondents were willing to carry it on their person in case of an overdose, should it become available and accessible. Given that most participants who had overdosed in the past year reported that they were not alone at the time of the overdose, participants' willingness to carry naloxone presents an opportunity to equip the people closest to those at risk of overdose with a tool proven to prevent death. The findings align with the WHO's guidelines for community management of opioid overdose [5]. The WHO guidelines recommend that people who are likely to witness an opioid overdose, very often friends and family members of people who use opioids, should be given access to naloxone and training on how to recognise and respond to an overdose [5]. As a result, at least 15 countries globally have implemented programmes which include access to naloxone and overdose training, demonstrating an increase in knowledge and competencies around responding to an opioid overdose [18,19]. The rapid scale-up of overdose prevention programmes (including take home naloxone) is possible in low-and middle-income settings, provided there is political will [20]. Programmes that improve peer and family responses to overdose situations decrease rates of overdose deaths when compared to programmes that do not include family and peers [21]. Brief education and training of opioid users in recognising and responding to an overdose of a peer improved overdose response [22]. The existing legislation in South Africa does not include Good Samaritan laws [23]. The provision of naloxone is limited to self-use only, requiring a prescription by a medical doctor. The risk of prosecution makes it challenging to ensure peers, who are often opioid users themselves are equipped to respond to a potentially lethal overdose. Neither the draft national standards for emergency medical services (2021) [24] nor the Health Professions Council of South Africa's clinical practice guidelines for emergency service providers (2018) [25] include a requirement for emergency medical service providers to report drug overdoses to the police.
Finally, the study found that most people were comfortable calling for help in the event of an overdose. However, a notable proportion (26%) were either uncomfortable calling for help or did not know whom to call. Globally, there is a reluctance, particularly among people who use drugs, to calling for help [26]. One of the most significant predictors of calling for help is the drug policy environment and the related charges one might face if arrested at the site of an overdose [27]. There is very little data on the facilitators and barriers to calling for help to respond to an overdose in South Africa. A report by the South African Network of People Who Use Drugs revealed that people who use drugs experienced discrimination and delayed response times when calling ambulances to respond to medical emergencies [28]. In the USA, people most often cite the fear of police involvement as the reason for low rates of calling for help in the case of a drug overdose [26,29]. Good Samaritan Laws provide limited indemnity from prosecution if someone responds to an overdose and calls 911 to seek medical help. In the USA, Washington was the second state to implement Good Samaritan laws. In a survey of 355 opiate users in Washington, when informed of the laws, 88% reported they would phone for medical assistance in the case of an opioid overdose [30].
In South Africa, the hostile engagement between people who use drugs and law enforcement continues, as noted by harassment and often confiscation and destruction of injecting equipment (SAMRC, 2020). Encouragingly, efforts are ongoing to enable collaboration between health and security actors towards public health and safety [31].
Over the past decade, as global harm reduction efforts have focused on making naloxone more accessible and available in overdose situations, first responders have been equipped with naloxone [32]. In many settings globally, law enforcement officers are frequently the first to arrive on the scene of an emergency. When officers carry naloxone and are prepared to use it, they can administer it before other responders arrive, increasing the likelihood of effective overdose reversal [32].
In the case of an opioid overdose, death does not usually occur immediately, often allowing time for a lifesaving intervention. Overall, 76% of individuals surveyed in this study were aware of opioid overdoses occurring among their peers; however, very few people who experienced an opioid overdose received medical attention; hence, it is critical to ensure that those who are most likely to be at the scene of a drug overdose have the capacity to recognise and respond to an opioid overdose. Self-report data from PWID are critical to understanding personal and witnessed overdose events. Many cities around the world have begun training bystanders and peers on overdose prevention and response, including self-reporting.
Peers and bystanders represent a critical part of any comprehensive overdose response plan. For example, the WHO and the United Nations Office on Drugs and Crime's Stop Overdose Safely project in Kazakhstan, Kyrgyzstan, Tajikistan and Ukraine demonstrated the effectiveness of community naloxone distribution [20]. The project involved the training of 14,263 potential opioid overdose witnesses over eight months. Trainees were provided with take home naloxone. Thirty-five per cent (478/1388) of participants engaged in the project evaluation cohort witnessed an overdose within six months following the training, among whom 89% used naloxone with 98% of the victims surviving [20]. There is an important need to maximise the potential role of bystanders and communities of people who use opioids in South Africa and other African contexts to respond to opioid overdoses.
Our South African pilot study has several limitations. First, the small sample and convenience sampling of participants who access harm reduction services limit the generalisability of findings. The available resources, accessibility of participants, and feasibility of integrating the survey into service delivery influenced the sample size. However, the pilot study has established preliminary data that points towards a high prevalence of overdose among people who use drugs. The value of assessing the feasibility and acceptability of integrating overdose assessment into harm reduction service in a more robust manner has been demonstrated. The small size of this study limits the degree to which it can be used to directly inform policy.
Second, the survey tool was not validated, and the amount of missing data limited the analysis that was possible. Missing data fields were caused primarily by the use of paper-based tools with free text and the option of participants to decline answering questions. Notably, many participants declined to provide demographic information or information about their experiences with overdose.
Third, the study did not explicitly ask participants about access to or use of health-related or harm reduction services. However, given that recruitment was performed by harm reduction service providers, it is possible that the study cohort likely has increased exposure to information about responding to an overdose. As a result, it is likely that they would have felt more comfortable calling for help or administering aid when responding to an overdose. It cannot be assumed that the same level of knowledge and willingness to phone for medical assistance would be reported among groups with less exposure to harm reduction organisations.
Conclusion
Our pilot study demonstrates that there is likely to be a high prevalence of overdose among people who use drugs in South Africa, particularly among people who inject opioids. Opioid overdose is likely contributing to notable morbidity and mortality among people who inject opioids in South Africa. The study also highlights that people who use drugs are willing to respond to overdoses with naloxone and to call for help. The findings suggest that there is a need to strengthen efforts to raise awareness about overdose responses, including around naloxone. Notably, the study highlights the need to move towards naloxone being available for use by people likely to witness an overdose. Further data are needed to quantify and understand the context and substances involved in overdoses. Findings from future research and surveillance can inform advocacy efforts towards changing policy and programming. Policy changes include the rescheduling of naloxone to enable purchase without a prescription and administration by trained people likely to come across people experiencing an opioid overdose. Future programmatic changes could include community distribution of naloxone through harm reduction and social service providers and the police. All policy and programme changes should protect the responders and overdose victims from drug-related arrest and criminal sanctions. Until more robust data are gathered, the findings from our pilot demonstrate the need within harm reduction service providers to raise awareness of the risks associated with a drug overdose, how to respond in the event of a drug overdose, and to advocate for increased community access to naloxone for peers and first responders.
Estimation of Large-Scale Fading Channels for Transmit Orthogonal Pilot Reuse Sequences in Massive MIMO Systems
Massive multiple-input multiple-output (MIMO) is a critical technology for future fifth-generation (5G) systems. Reducing pilot contamination (PC) enhances system performance, reduces inter-cell interference, and improves channel estimation. However, massive MIMO systems are still constrained because the pilot sequences transmitted by users in a single cell are not orthogonal to those of neighboring cells. We propose channel evaluation using orthogonal pilot reuse sequences (PRS) and zero-forcing (ZF) precoding to eliminate PC for edge users with poor channel quality, based on channel evaluation, large-scale fading evaluation, and analysis of the maximum transmission efficiency. We derived lower bounds on the achievable downlink data rate (DR) and signal-to-interference-plus-noise ratio (SINR) based on PRS assignment to a group of users, where the interference is mitigated as the number of antenna elements grows to infinity. Owing to the limited channel coherence interval, orthogonal PRS cannot be allocated to all UEs in each cell; even with short coherence intervals, the proposed scheme is able to reduce PC and improve channel quality. The modelling results showed that a higher DR can be achieved due to better channel evaluation and lower loss.
I. INTRODUCTION
Massive multiple-input multiple-output (MIMO) is a critical technology for reducing interference between adjacent cells and multi-user interference, and for increasing the capacity of 5G networks. Massive MIMO has recently received much attention as a promising technology to significantly increase spectral efficiency. In multi-cell massive MIMO systems, pilot contamination (PC) is a fundamental problem that affects the data rate (DR). Training the channel and reducing interference depend on mitigating the pilot reuse sequences (PRS), where the PRS increase the gain and avoid interference between neighboring cells [1][2][3][4][5]. Pilot scheduling considers how the system allocates the pilot sequences to users in order to minimize, or ideally eliminate, the PC problem. The channel can be trained by transmitting PRS in time division duplex (TDD) operation, which increases the benefit of PRS between neighboring cells and reduces inter-cell interference [1][2][3][4][5]. With a very large number of antennas at the transmitter, linear signal processing can significantly reduce the effect of PC and of fast fading. Large-scale fading cannot be ignored; it is crucial for estimating the system performance and for evaluating the feasible data rate based on the effective signal-to-interference-plus-noise ratio (SINR). With the received training signal and a varying number of antennas at the BS, the linear precoding problem could not be properly formulated. Under low power, maximum ratio transmission (MRT) precoding maximizes the SNR of each user for the desired signal. The number of antenna arrays transmitting in the downlink (DL) determines the achievable DR. By taking into consideration comprehensive knowledge of the large-scale fading and mitigated PC between adjacent cells, the data signals transmitted from the BS build precoding paths to the users. The authors of [6][7][8][9][10][11] studied the impact of the pilots between neighboring cells transmitting signals simultaneously, which provided better channel estimation and improved precoding performance. The authors of [12][13][14][15][16][17] investigated PC reduction using orthogonal pilots between adjacent cells and derived the best PRS for the full user set. At the same time, the authors of [4], [18][19][20][21][22] studied the effects of PC with and without a pilot, which helped achieve a high DR in a practical MIMO array. In this research, we propose to employ PRS to reduce the PC of edge users by evaluating the channel with full knowledge of the large-scale fading. We therefore evaluated the large-scale fading based on MRT and zero-forcing (ZF) precoding when the number of antenna elements and the number of users (UEs) grow large, and obtained a better performance analysis [23][24][25]. Massive MIMO systems have two major issues: pilot contamination and increased power consumption. With the proposed channel evaluation using full knowledge of the large-scale fading, the PC reduction makes it possible to reach high data rates under pilot reuse. To generate a high-quality channel estimate, the BS correlates the training signal with the predefined PRS of each UE using this comprehensive knowledge.
Furthermore, the orthogonal PRS were used to eliminate PC for edge users whose channel quality had begun to deteriorate, while the MRT and ZF precoding approaches achieved large-scale performance and efficiency [26][27][28][29][30]. Increasing the data rate of 5G mobile networks depends on increasing the network capacity through channel evaluation. Because of the multicellular array's large number of degrees of freedom and the enhanced pilot sequence for each neighboring cell, the BS is exploited to evaluate the channels. The number of users able to share the bandwidth has increased because of the reduced PC. The pilot sequences transmitted by users of other cells contaminate the limited coherent channel. To meet the ever-increasing demands of explosive wireless data services and to prevent interference due to the PC that affects channel estimation across all BSs, the number of PRS cannot be arbitrarily increased. This coherence constraint leads to a critical limitation in channel estimation and reduces the achievable data rates. In essence, PC results from assigning the same pilot sequences to users in nearby cells [31][32][33][34][35]. Based on the performance degradation of users who have not been assigned a pilot sequence, the pilot assignment approach seeks to optimize the feasible sum rate for the target cell. Due to the restricted coherence time of the channel and the constrained bandwidth available in a multi-cell system, the allocation of orthogonal pilot sequences to all users cannot be guaranteed. To improve the system's performance, a relative channel evaluation was used with full knowledge of the large-scale fading. Based on an evaluation of the large-scale fading and an efficacy study of the MRT and ZF precoding approaches, the PRS were used to cancel interference. These are precoders with fewer spatial dimensions that can block neighboring-cell interference for the most vulnerable UEs.
II. SYSTEM MODEL
The base station (BS) delivers signals to the UEs in the considered multicellular massive MIMO system, where each cell has a BS with M antennas serving K UEs (M ≫ K). The channel model is a correlated Rayleigh fading channel whose small-scale components are independent and identically distributed (i.i.d.), and the minimum mean square error (MMSE) properties of the channel h ∈ ℂ M×1 allow a different antenna correlation to be allocated to each link between a user served by BS l and the M antennas of the serving BS. The channel reciprocities are assumed to be the same in the uplink and downlink (DL). Here Θ ∈ ℂ M×1 is the small-scale fading channel and Ω is the related channel correlation matrix accounting for large-scale fading, with √Ω a diagonal matrix whose diagonal elements can be written as [√Ω 1 , . . . ]. The channel between the BS and the kth user in cell j is given by

h = √Ω Θ (1)

The BS operates with imperfect channel state information (CSI). The received signal of the kth user in the jth cell can be written as the superposition, over the cells, of the Hermitian transpose of the channel applied to each BS's precoded transmit vector, plus noise; the channel matrix, estimated for the UEs in each cell using an orthogonal pilot, is H = [h 1 . . . h K ] ∈ ℂ M×K , the DL channel between the BS and the users in its cell. The transmit signal vector of BS l is γ l = W l x l ∈ ℂ M , where W l ∈ ℂ M×K is the linear precoding matrix, x l ~ (0, I) is the data transmitted from cell l to its users, ρ represents the DL transmit power, and n ~ (0 M×1 , I) is the conventional noise vector.
A. Channel Estimation
In the DL, we use channel estimation based on channel reciprocity in TDD. The main goal of this research is to improve the maximal DR, with full knowledge of the large-scale fading, in order to remove PC. The improvement of the DR relies on relative channel estimation with reduced PRS to obtain good channel quality, using the received training signal to evaluate each user's pilot. Antenna selection then chooses the optimal number of radio frequency (RF) chains and the channels of best quality to exploit a high DR. We assume channel reciprocity, so the downlink channel of the UEs in a cell is the Hermitian transpose of the uplink channel. The channel can be estimated from the training signal when the same PRS is used in the neighboring cells, as in Equation (3), in terms of the pilot transmit power (which is proportional to the current pilot SNR) and the length of the training sequence. The resulting estimate in Equation (4) is the large-scale fading matrix Ω multiplied by the superposition of the desired channel, the channels of the pilot-sharing users in the other cells (l ≠ j), and scaled noise.
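As a numerical illustration of why pilot reuse contaminates the channel estimate, the sketch below forms an MMSE-style estimate of the desired channel from a received pilot that superimposes the channels of pilot-sharing users in neighboring cells. The dimensions, power levels, and scalar large-scale fading coefficients are illustrative assumptions rather than the system parameters used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, rho_p, tau = 64, 3, 1.0, 8           # antennas, cells sharing the pilot, pilot power, pilot length

# Scalar large-scale fading per link (cell 0 is the desired user) and i.i.d. small-scale channels
omega = np.array([1.0, 0.3, 0.2])
h = [np.sqrt(omega[l]) * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
     for l in range(L)]

# Received pilot after despreading: all pilot-sharing channels add up, plus unit-variance noise
noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y_p = np.sqrt(rho_p * tau) * sum(h) + noise

# MMSE scaling towards the desired user's statistics (valid for this scalar-fading toy model)
c = np.sqrt(rho_p * tau) * omega[0] / (rho_p * tau * omega.sum() + 1.0)
h_hat = c * y_p

err = np.linalg.norm(h_hat - h[0]) ** 2 / np.linalg.norm(h[0]) ** 2
print(f"relative estimation error with pilot reuse: {err:.2f}")
```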
B. Downlink Transmission
In this section, a large-scale fading, PC was suppressed by increasing the number of transmit pilot reuses. The achievable high data rate derived by using linear combining by propose relative channel estimation in full knowledge of large-scale fading. Then, the downlink derived the achievable DR and SINR based on linear precoding. Consequently, mitigated pilot contamination allowed an increasing number of users to share bandwidth and improved the system performance to get better channel estimation and reducing inter-cell interference. The relative channel estimation is very important to evaluate the conventional pilot and suppresses interference between neighboring cells. Based on only utilized large-scale fading, the achievable data rate was enhanced, assuming that the BS had imperfect CSI. In addition, the interference between users was suppressed without using more time-frequency resource. The BS transmitted the pilot sequences to every UEs in the cell. These UEs estimated their own channels. Furthermore, symbols × were required for the orthogonal PRS in the multi-cell. As a result, extensive training channel were utilized, with the variance channel expressed as A pilot sequence of = = ∞ was created by increasing the orthogonal PRS, where is the symbol of the DL pilot sequence that performed for the desired channel; the received signal is provided by For the estimation channel, the properties of MMSE ~= ̂− the channel estimation between distributed more users in same cells was discovered, where ~(0, ) is the uncorrelated estimation selfregulating of ~~( 0, ( − Ѻ ) ) , and Ѻ = Ω Ω , which is also self-governing of antenna for large-scale fading [6][7][8][9][10], [22][23][24][25][26][27][28]. Enhancing quality of channel for ℎ UEs in terms of MMSE can be written as The good quality of channel can be obtained by applied MMSE and also determined by by employing matrix inversion with Ѳ Ѳ = 1 and the same vector Ѳ is proportionate to the characteristics of MMSE. The estimation channel is proportional based on the training received signal Ѳ is Reducing PC in (8) Obtain the good quality of channel and apply MRT and ZF to achievable high data rate
A user grouping alleviates PC as the number of antennas M increases to infinity. The training signal at the BS leads to a linear precoding problem that cannot be expressed directly [29][30][31][32][33]. For MRT, the precoder maximises the SNR of each UE for the desired signal under the allocated power, with W_j = [w_{j1}, ..., w_{jK}] = [ĥ_{j1}, ..., ĥ_{jK}] = Ĥ_j. ZF precoding is able to deliver data and improves processing against the background noise based on the channel conditions, with W_j = Ĥ_j (Ĥ_j^H Ĥ_j)^{-1}. The lower bound of the DR is then obtained (equation (10)), where (1 - τ/T) represents the pre-log loss due to pilot signalling and 0 < τ/T < 1 determines the achievable DR. A correlated channel matrix was used to predict the channel response based on mitigated PC. We decompose the received signal as

y_{jk} = h_{jjk}^H w_{jk} x_{jk} + Σ_{l=1, l≠j} h_{ljk}^H W_l x_l + n_{jk}    (11)

From (11), the uncorrelated noise channel is treated as the worst case, and the achievable data rate for the signal transmitted from the BS can then be evaluated. The PRS links the channel h_{ljk} with the precoding W_l of the neighbouring cells. The SINR is calculated at the k-th UE [32][33][34], and the desired SINR signal can be written with the interference terms collected in the denominator. With ZF precoding, the large-scale fading channel is obtained by substituting (18) and (19) into (17).
D. Our Proposed Approach
According to the random distribution of UEs in each cell, the SINR depends on the channel fading quality Ω in equation (10). Because the training signal at the BS cannot be constructed directly, MRT and ZF linear precoding must be utilized to mitigate multi-cell interference in MIMO systems without adding time resources. The performance of the MRT and ZF precoding techniques was evaluated as the number of antenna elements M and the number of UEs K grew large. We consider perfectly orthogonal PRS, focusing on the impact of pilot contamination and the analysis of MRT and ZF precoders with PC in DL multi-cell massive MIMO systems. The mitigated PC allows an increasing number of users to share bandwidth, improving system performance through better channel estimation and reduced inter-cell interference. The relative channel estimation is very important for evaluating the conventional pilot and suppressing interference between neighbouring cells, because channel estimation is more efficient with full knowledge of large-scale fading. In addition, with the best possible orthogonality of the PRS, the number of supported UEs can be increased by reducing cross-tier interference across several antennas. When transmission power is limited in mm-wave, perfectly orthogonal PRS is crucial for increasing capacity and coverage at cell edges, lowering the high path loss, and suppressing cross-tier interference. The desired trade-off between improving channel estimation accuracy and retaining more resources for data transmission is achieved through the improved PRS. PC is suppressed to achieve a high DR under pilot reuse by considering the proposed channel estimation with comprehensive knowledge of large-scale fading: the BS correlates the training signal with the established PRS of every UE to obtain a high-quality channel estimate. In addition, orthogonal PRS are applied to remove PC at the edge UEs, whose channel quality is reduced according to the large-scale fading approximation, and the MRT and ZF precoding techniques are then evaluated.
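For illustration, a minimal NumPy sketch of MRT and ZF precoding built from an estimated channel matrix is given below; the per-user column normalization and the unit noise power used in the toy SINR computation are assumptions for the example and are not taken from the paper.

```python
import numpy as np

def mrt_precoder(H_hat):
    """Maximum ratio transmission: precoder columns proportional to the channel estimates."""
    W = H_hat.copy()
    return W / np.linalg.norm(W, axis=0, keepdims=True)   # unit norm per user

def zf_precoder(H_hat):
    """Zero forcing: pseudo-inverse of the estimated channel, unit norm per user."""
    W = H_hat @ np.linalg.inv(H_hat.conj().T @ H_hat)
    return W / np.linalg.norm(W, axis=0, keepdims=True)

# toy usage: M antennas, K users, i.i.d. channel estimates
M, K = 64, 8
rng = np.random.default_rng(1)
H_hat = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
for name, W in (("MRT", mrt_precoder(H_hat)), ("ZF", zf_precoder(H_hat))):
    G = np.abs(H_hat.conj().T @ W) ** 2                        # effective channel gains
    sinr = np.diag(G) / (G.sum(axis=1) - np.diag(G) + 1.0)     # assumed unit noise power
    print(name, "mean SINR (dB):", 10 * np.log10(sinr.mean()))
```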
As a result, the best channel quality Ω of the UEs in the j-th cell depends on sorting the UEs when l ≠ j. The channel quality of the UEs is increased by transmitting orthogonal PRS to the edge-UE group in equations (17) and (20); based on the channel quality Ω, the UEs are separated into two groups, where λ is the grouping threshold and Ω_k represents the channel quality of UE k. The DR is enhanced by selecting the channel quality of the edge UEs. As a result, acceptable channel quality can be achieved by allocating orthogonal PRS to the edge users while reusing the same PRS for the center users [12], [19], [28]; the user grouping can be expressed as K_e = {k : Ω_k ≤ λ}, where K_e represents the set of edge UEs. With perfectly orthogonal PRS, and focusing on the impact of PC and the analysis of MRT and ZF precoders with PC, the mitigated PC lets a growing number of users share the bandwidth, improving system performance and eliminating inter-cell interference. The perfectly orthogonal PRS can reduce the cross-tier interference between multiple antennas to support more UEs [35][36][37][38][39][40]; a high DR is thus achieved with MRT and ZF by mitigating PC. Large-scale fading causes some DR loss in equation (23), which can be prevented by increasing the number of antennas M and confirming the UE locations.
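A minimal sketch of the user grouping described above, assuming that edge UEs are those whose large-scale fading quality falls at or below a threshold (the threshold value in the toy usage is illustrative only):

```python
import numpy as np

def group_users(omega, threshold):
    """Split UEs into center/edge groups by their large-scale fading quality omega.

    Edge UEs (omega <= threshold) are assigned dedicated orthogonal pilots;
    center UEs reuse the same pilot set across cells.
    """
    omega = np.asarray(omega)
    edge = np.where(omega <= threshold)[0]
    center = np.where(omega > threshold)[0]
    return center, edge

# toy usage: 10 UEs with random large-scale fading gains
rng = np.random.default_rng(2)
omega = rng.uniform(0.05, 1.0, size=10)
center, edge = group_users(omega, threshold=0.2)
print("center UEs:", center, "edge UEs:", edge)
```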
III. SIMULATION RESULTS

Figure 2 shows how, for a constant number of UEs inside each cell, the channel quality benefits from complete knowledge of large-scale fading at greater spatial resolutions. When the configuration was increased to M = 256 and K = 10, the target cell achieved a high DR. Figure 2 also demonstrates that increasing the PRS from 1 to 3 enhanced the achievable DR. Because ZF is designed to operate at high SINR, it gave a greater DR than MRT for large pilot reuse. Furthermore, the achievable DR can be increased by adjusting the number of edge users or the grouping parameter. Finally, increasing the multi-cell antenna count rendered these channels nearly orthogonal to the others, reducing interference between adjacent cells; compared with MRT, ZF precoding mitigated interference better, resulting in a high attainable DR. The average DR for all curves was nonlinear in the number of UEs, as shown in Fig. 2, so the optimal number of users K must be selected in order to achieve a high DR. A high DR was also attained depending on the effects of bandwidth and the correlation between the transmit pilots; the user capacity is limited, which limits the transmit orthogonal PRS through bandwidth effects. With more transmitted PRS under large-scale fading, the channel estimation evaluated at the received signal depends on the suppressed PC. Using only large-scale fading, the achievable data rate was enhanced even with imperfect CSI at the BS, inter-user interference was suppressed without additional time-frequency resources, and each UE estimated its own channel from the pilot sequences transmitted by the BS. The achievable DR was further improved by adding antennas to the existing cells, and using different PRS in the DL reduced the performance loss and provided better channel estimation quality, maximizing the DR. Based on (23), a high achievable DR may be attained through UE grouping to reduce PC for the edge UEs. Furthermore, a pilot reuse of 7 gave a significant DR improvement in system performance compared with a reuse of 1. Figure 3 demonstrates that the number of PRS cannot grow arbitrarily to reduce interference, because the PC affecting channel estimation at every BS imposes a major restriction on coherent channel estimation and reduces the DR capacity. The average DR in Fig. 3 first rises and then progressively falls, indicating that the gain in DR of the edge UEs initially exceeds the loss in DR of the center UEs. As seen in Fig. 4, increasing the number of cells reduces the high DR due to interference. Interference occurs when the number of PRS is smaller than the number of cells, since users in different cells then use the same pilot sequence, contaminating the channel estimation. To limit the interference caused by pilot contamination, which degrades channel estimation at all BSs, the number of pilot reuse sequences cannot be increased arbitrarily; this results in a major constraint on coherence-interval channel estimation and lowers the data rate capacity. Because of the channel coherence interval limitation, orthogonal PRS cannot be allocated to all UEs in each cell; short coherence intervals can still reduce PC and improve the channel quality. Furthermore, as the number of cells increased, the average data rate declined as the number of pilot reuses increased.
The average data rates are R = (55.5, 50) bits/s/Hz when the number of cells is L = 2 and the pilot reuse is 7. Meanwhile, due to the essential constraint on coherence-interval channel estimation, the average DR falls to R = (28.6, 26.2) bits/s/Hz for the larger number of cells L = 18 with the same PRS.
IV. CONCLUSION
The evaluation of MRT and ZF precoders under PC in a DL multi-cell massive MIMO system was reported in this paper. Good channel coverage is very important, since PC is tied to the quality of the received signal at the UEs through channel estimation. For the edge users in surrounding cells with large-scale fading, orthogonal PRS was used to evaluate channel estimation accuracy and eliminate PC. The numerical findings revealed that using orthogonal PRS in the DL decreased losses, improved channel estimation quality, and increased data throughput. In future work, we will study efficient pilot and channel scheduling for small-scale fading.
|
2022-12-08T06:42:05.596Z
|
2022-10-20T00:00:00.000
|
{
"year": 2022,
"sha1": "0b80609992f64c19d9e38a671c13665d826996d4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0b80609992f64c19d9e38a671c13665d826996d4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
}
|
7933361
|
pes2o/s2orc
|
v3-fos-license
|
Genetic characterization and molecular identification of the bloodmeal sources of the potential bluetongue vector Culicoides obsoletus in the Canary Islands, Spain
Background Culicoides (Diptera: Ceratopogonidae) biting midges are vectors for a diversity of pathogens including bluetongue virus (BTV) that generate important economic losses. BTV has expanded its range in recent decades, probably due to the expansion of its main vector and the presence of other autochthonous competent vectors. Although the Canary Islands are still free of bluetongue disease (BTD), Spain and Europe have had to face up to a spread of bluetongue with disastrous consequences. Therefore, it is essential to identify the distribution of biting midges and understand their feeding patterns in areas susceptible to BTD. To that end, we captured biting midges on two farms in the Canary Islands (i) to identify the midge species in question and characterize their COI barcoding region and (ii) to ascertain the source of their bloodmeals using molecular tools. Methods Biting midges were captured using CDC traps baited with a 4-W blacklight (UV) bulb on Gran Canaria and on Tenerife. Biting midges were quantified and identified according to their wing patterns. A 688 bp segment of the mitochondrial COI gene of 20 biting midges (11 from Gran Canaria and 9 from Tenerife) was PCR amplified using the primers LCO1490 and HCO2198. Moreover, after selecting all available females showing any trace of blood in their abdomen, a nested-PCR approach was used to amplify a fragment of the COI gene from vertebrate DNA contained in bloodmeals. The origin of bloodmeals was identified by comparison with the nucleotide-nucleotide basic alignment search tool (BLAST). Results The morphological identification of 491 female biting midges revealed the presence of a single morphospecies belonging to the Obsoletus group. When sequencing the barcoding region of the 20 females used to check genetic variability, we identified two haplotypes differing in a single base. Comparison analysis using the nucleotide-nucleotide basic alignment search tool (BLAST) showed that both haplotypes belong to Culicoides obsoletus, a potential BTV vector. As well, using molecular tools we identified the feeding sources of 136 biting midges and were able to confirm that C. obsoletus females feed on goats and sheep on both islands. Conclusions These results confirm that the feeding pattern of C. obsoletus is a potentially important factor in BTV transmission to susceptible hosts in case of introduction into the archipelago. Consequently, in the Canary Islands it is essential to maintain vigilance of Culicoides-transmitted viruses such as BTV and the novel Schmallenberg virus.
Background
Biting midges of the genus Culicoides Latreille (Diptera: Ceratopogonidae) comprise a highly diverse biological group with more than 1,400 species worldwide [1]. Many biting midge species have ecological [2], economic [3] and sanitary [4] relevance as haematophagous insects and as vectors of pathogens in humans, livestock, poultry and wildlife.
Bluetongue virus (BTV) is one of the most important disease agents transmitted by biting midge Culicoides [4,5]. Bluetongue disease (BTD) largely affects ruminants [5], above all in America, Australia, Asia and Africa, and in recent years has expanded into Europe. This spread is probably closely linked to a northward expansion of Culicoides imicola Kieffer, the main Afro-tropical vector species, and the ability of autochthonous Palaearctic Culicoides species to transmit BTV [1]. Among native Culicoides species, C. obsoletus Meigen has been identified as a potential vector of BTV by using different procedures including viral isolation and RT-PCR detection [6,7], as well as experimental infection assays [8]. In addition, this species is thought to have been involved in the outbreaks produced by different BTV serotypes that have occurred in recent years in Europe [9].
The morphological identification of female biting midges to species level is a complex task that usually requires great taxonomic expertise [10]. Their minute size and the huge number of existing biting midge species are two of the main reasons why field studies on Culicoides are laborious and have lagged behind those conducted on other insect vectors [4]. Wing patterns and the morphology of several parts of the body, which usually requires the dissection and separation of the wings, head, abdomen and genitalia, are the main characters employed in species identification [11]. Recently, the morphological identification of biting midges Culicoides has been complemented by the use of molecular tools and as a result more accurate species identification is now possible. In particular, the amplification of a fragment of the mitochondrial cytochrome oxidase subunit I (COI) gene is emerging as a useful and effective tool that can facilitate the identification of Culicoides species [12][13][14][15] and animals in general [16]. This procedure is especially useful when identifying females of sibling species within species complexes, which are difficult to identify using available morphological information; this is the case for a number of species in the Culicoides obsoletus species complex, which contains both C. obsoletus and C. scoticus Downes & Kettle.
Located around 100 km off the African coast, the Canary Islands archipelago (Spain) has one of the highest densities of small ruminants per hectare of agricultural land in the whole of Europe. Although the relationship between ruminant density and the efficient spread of bluetongue is yet to be tested, animal density is thought to play an important role in the spread of this disease [17]. Despite being considered free of BTD [18], it is still important to improve our knowledge of the occurrence of biting midges on these islands. The arrival of the disease in hitherto BTD-free areas could be provoked by the movement of infected livestock or by the passive movement of infected Culicoides on winds, especially over the sea [19,20]. Winds transporting sand from the Sahara desert, known locally as calimas (Figure 1a), are habitual in the Canary Islands and potentially transport biting midges from localities in which they are present (see ref. [21]). In addition, C. obsoletus and C. analis Santos Abreu have both been previously reported in the Canary Islands [22]: C. obsoletus has been captured in Tenerife and La Palma [23] and, according to recent information provided by the Spanish National Surveillance Programme, C. obsoletus is present in Fuerteventura and Gran Canaria [18]. The aims of our study were: (i) to identify species and characterize the COI barcoding region of biting midges captured on these islands and (ii) to ascertain the source of bloodmeals from biting midge females using molecular tools that permit the identification of the feeding sources of the bluetongue vector C. obsoletus in the studied areas. We thus captured biting midges from two farms on the Canary Islands, one on Gran Canaria and the other on Tenerife.
Methods
Biting midges were captured using two CDC-type downdraft miniature suction traps baited with a 4-W blacklight (UV) bulb (model 1212; J.W.Hock, Gainesville, FL) at a distance of 21 m. The light traps were operated from sunset to sunrise for 19 nights from 15/09/2010 to 15/12/2010 on the farm of the Universidad de Las Palmas de Gran Canaria (Figure 1b; 28°8′21 N, 15°30′ 24 W, Arucas, Gran Canaria). During this period, on two nights one trap only was operating at this site. Traps were hung 2.2 m above ground level and were operated with a UV light. UV light traps are regarded as useful surveillance tools for measuring the relative abundance of midges, although, it is possible that this procedure may not be a reliable indicator of species composition [24]. As well, for four nights from 05/12/2010 to 11/12/ 2010, a single trap was placed in the El Pico farm of the Instituto Canario de Investigaciones Agrarias (28°31′ 24 N, 16°22′19 W, Tegueste, Tenerife). All biting midges were preserved in ethanol and maintained at −80°C until further analysis.
All the biting midges collected were quantified and sorted according to their wing patterns [11] using an Olympus SZH stereomicroscope (10 × -64× magnification). Bloodfed females were individually stored for blood-source identification (see below).
Molecular characterization of biting midges
Genomic DNA from 20 morphologically identified biting midges (11 from Gran Canaria and nine from Tenerife) was extracted as indicated in ref [2].
A 688 bp segment of the mitochondrial COI gene was PCR amplified from individual biting midges using the primers LCO1490 and HCO2198 ([25]; see Table 1). The reactions were cycled using a Verity thermal cycler (Applied Biosystems) according to the following parameters: 94°C for 10 min (polymerase activation), 40 cycles at 95°C for 30 s (denaturing), 46°C for 30 s (annealing temperature), 72°C for 1 min (extension), and a final extension at 72°C for 10 min. Amplicons obtained after PCR assays were recovered from agarose gels, purified using the MoBio kit UltraClean GelSpin and subjected to direct sequencing. DNA fragments obtained were sequenced using an ABI 3130 (Applied Biosystems) automated sequencer.
Bloodmeals identification
In all, 290 females (74 females from Gran Canaria and 216 from Tenerife) were identified as blood-fed. Using a conservative approach, we included all available females showing any trace of blood in their abdomen to detect the maximum number of potential hosts, thus we included fully engorged and partially engorged females and blood-fed biting midges that, due to the shape and colour of the abdomen, probably contained a partially digested bloodmeal. After excluding some blood-fed females that showed external remains of blood from other damaged blood-fed arthropods, 267 biting midges were processed to identify the origin of their bloodmeals. The abdomen of individual blood-fed biting midges was cut off using sterile tips and introduced into 50 μl of lysis solution (25 mM NaOH, 0.2 mM EDTA), crushed and incubated at 95°C for 30 min. At least two negative DNA extraction controls (i.e. absence of tissue) were performed during the PCR experiments. After incubation, the solution was cooled for five minutes, after which time 50 μl of neutralization solution (40 mM Tris-HCl) was added. Abdomens were simultaneously processed using 96-thermowell plates and stored at −20°C until PCR amplification. Bloodmeal sources were identified using the protocol described and tested in ref. [26] to amplify a fragment of the vertebrate COI gene (Table 1). This method is used to amplify DNA from blood of a diversity of vertebrate species in the abdomen of arthropods including mosquitoes and biting midges [26]. Briefly, this procedure used a nested-PCR approach, using the primary pair of primers M13BCV-FW and BCV-RV1 and the nested primer pair M13 and BCV-RV2. Sequences were edited using the software Sequencher v4.9 (Gene Codes, © 1991 -2009, Ann Arbor, MI) and identified by comparison with the nucleotide-nucleotide basic alignment search tool (BLAST) (GenBank DNA sequence database, National Center for Biotechnology Information) to assign unknown COI sequences to particular vertebrate species. Host species assignment was considered complete when we found a match of 98% or more between our sequences and those in GenBank.
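As an illustration of the host-assignment rule (a match of 98% or more), a minimal Python sketch is shown below; the data structure for the BLAST hits is hypothetical and does not correspond to any particular parser used in the study.

```python
def assign_host(blast_hits, min_identity=98.0):
    """Return the vertebrate species of the best BLAST hit, if identity >= 98%.

    blast_hits: list of (species_name, percent_identity) tuples parsed from
    a BLAST report; returns None when no hit reaches the threshold.
    """
    best = max(blast_hits, key=lambda hit: hit[1], default=None)
    if best is not None and best[1] >= min_identity:
        return best[0]
    return None

# toy usage
hits = [("Capra hircus", 99.4), ("Ovis aries", 91.2)]
print(assign_host(hits))   # -> "Capra hircus"
```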
Results
A total of 491 biting midge females (201 in Gran Canaria and 290 in Tenerife) and eight males (seven in Gran Canaria and one in Tenerife) were captured during the study. Based on wing patterns, all females were morphologically identified as belonging to the C. obsoletus complex. Amongst the 11 biting midge females from Gran Canaria and nine biting midge females from Tenerife we identified two haplotypes (haplotypes C1 and C2) that differed only in a single base. With the exception of a single biting midge from Gran Canaria, which was identified as C2, the rest of the biting midges analysed corresponded to the haplotype C1. These two haplotypes were unequivocally identified as C. obsoletus by comparison with the GenBank DNA sequence database. Culicoides body parts from three specimens (two C1 and one C2 haplotypes) not used in molecular analyses have been deposited in the collection of the Museo Nacional de Ciencias Naturales (MNCN-CSIC), Madrid, Spain (accession numbers: MNCN/ADN 17017, 17018 and 17019). As well, the sequences obtained from these biting midges have been deposited in GenBank [GenBank: JQ740594, GenBank: JQ740595, GenBank: JQ740596].
In all, we obtained positive PCR products from 136 blood-fed females. Biting midges were found to have fed on both goats and sheep in the Canary Islands: specifically, on Gran Canaria, 40 Culicoides females had fed on goats (Capra hircus) and 14 on sheep (Ovis aries), while on Tenerife, 81 females had fed on goats and one on sheep.
Discussion
Bluetongue has become an important animal health problem in Europe and as a result European legislation now obliges countries susceptible to the presence of
BTV vectors (those located approximately between parallels 55°N and 35°S) to undertake appropriate surveillance programmes that include (i) serological surveillance, (ii) clinical inspection of livestock and (iii) entomological surveillance [18]. Here, we have confirmed (i) the presence of the potential bluetongue vector C. obsoletus on two of the islands in the Canary Islands archipelago (Spain) and identified, to our knowledge for the first time, its barcoding region, and have demonstrated (ii) the susceptibility of ruminants (goats and sheep) to the attacks of this biting midge species. DNA barcoding advocates the adoption of a standard that consists of the identification of a fragment of around 650 bp of the 5´end of the mitochondrial COI gene of each species [27]. Our results have enabled us to characterise this gene fragment for C. obsoletus, thereby increasing the segment previously amplified and sequenced in several studies of this particular species [12,13,28]. Using this approach, we identified two genetic haplotypes of C. obsoletus. Haplotype C1 was present on both islands, Gran Canaria and Tenerife, while haplotype C2 was isolated from a single biting midge captured on Gran Canaria. Our results clearly support the conclusions of previous studies and confirm the utility of molecular tools in the identification of C. obsoletus biting midges to species level [12].
Traditionally, studies of the feeding sources of Culicoides have been conducted using serological techniques [29][30][31][32][33] and only a few have ever investigated this feature by amplification and the sequencing of host DNA [34][35][36]. The cost of PCR amplification and sequencing is high; nevertheless, these tools represent an accurate way of identifying hosts to species level in studies on host-feeding patterns [37]. However, the efficacy of identification of bloodmeal sources may decrease as the stage of digestion of the host's DNA in the abdomen of the insect increases [38]. In this respect, although we included only females with blood in their abdomen, a proportion probably had a partially digested bloodmeal whose source could not be identified. Moreover, some females may also contain a very low volume of blood, thereby reducing the success of host DNA amplification.
Both birds (mallards and common wood pigeons) and mammals (common rabbits, horses, cattle, sheep and humans) are considered potential hosts of C. obsoletus [34][35][36]. In Europe, the feeding pattern of this biting midge species was previously identified by the amplification of host DNA using mainly species-specific primers [39,40]. Our results clearly highlight the importance of ruminants, both goats and sheep, as hosts of C. obsoletus in the Canary Islands, a finding that could be of great importance in the transmission of diseases such as BTD. This may be the case in sheep since clinical symptoms most often manifest themselves in this ruminant [5], even though all ruminants are susceptible to infection with BTV. Further studies on the feeding patterns of this biting midge species in the Canary Islands are necessary in order to identify the susceptibility of other ruminants present in this region to attack by C. obsoletus females. Furthermore, the C. obsoletus group has also been found to be involved in the transmission of the newly emergent Schmallenberg virus [41], which affects cattle, goat and sheep production [42]. As well, the direct cost of the insect attacks that cause dermatitis in sheep has a certain veterinary importance [43].
Conclusion
In sum, our study suggests that all the essential elements needed for an outbreak of BTD are present in the Canary Islands given that C. obsoletus has been shown to be one of the main vectors of this viral disease in continental Europe and given the high density of ruminants that are fed on by this midge species on these islands. Thus, the setting up of an active BTD surveillance programme on the Canary Islands is essential and sanitary laws regarding air and sea transport of potentially contaminated insects and livestock must be enforced. Nevertheless, these measures alone will not guarantee that the archipelago will remain free of the disease, since natural wind dispersal of Culicoides midges to the islands could occur [19,20] and as a result some infected midges could arrive from Africa.
|
2016-05-12T22:15:10.714Z
|
2012-07-24T00:00:00.000
|
{
"year": 2012,
"sha1": "365fb2c0443e0f0df5018368bf91bb6bfe69cbba",
"oa_license": "CCBY",
"oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/1756-3305-5-147",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "365fb2c0443e0f0df5018368bf91bb6bfe69cbba",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
226461824
|
pes2o/s2orc
|
v3-fos-license
|
Analysis and Modeling of Power Consumption Modes of Tunnelling Complexes in Coal Mines
The technological processes of mining enterprises are largely energy-intensive; therefore, studying the modes of power consumption in different sites of a mining enterprise is a relevant task, as it makes it possible to reduce current costs. The article presents the results of an experimental study of the dynamics of changes in the specific power consumption depending on per-shift indicators of extracted rock volume and the rate of mine workings penetration. The analysis of geological, technological and organizational factors affecting the efficiency of mining operations in coal mines is performed. Energy technological profiles and recommendations for assessing sustainable levels of power consumption by mechanized tunnelling complexes are given. Dynamic and predictive models of power consumption have been developed taking into account the main time trends and additive components within the limits that ensure a steady level of power consumption. Special recommendations on improving the efficiency of power consumption during mining operations are given.
Introduction
The efficiency of mining enterprises is determined by the technical level of mechanization and automation of mining technological processes. In the conditions of developing market economy, the main requirements for mining equipment are increasing of operational efficiency and safety, reducing the metal consumption of machines and the energy intensity of rock destruction, reducing the load of mining operations on the environment, etc. [1,2,3].
The intensification of mining in promising mines due to the technical re-equipment of heading and tunnelling complexes requires significant changes in the conduct of preparatory and mining operations. First, this concerns the progressive combine method, because the level of combine tunnelling in leading coal mines ranges from 72 to 98%.
Mining workings tunnelling (MWT) in the conditions of coal mines are currently being developed intensively both in terms of increasing the volume of work performed and the rate of the workings' drivage, which is reflected in the increase in the energy intensity of technological processes and the amount of power consumed.
In accordance with the Long-Term Program for the Development of the Russian Coal Industry for the Period Until 2030 (LP-2030), widespread introduction of energy and resource-saving technologies in the field of coal mining is envisaged on the basis of deep modernization of the industry in the field of digitalization, automation, power and resources saving, etc. [4,5,6].
One of the solutions to the problem of increasing the energy efficiency of MWT in coal mines is to ensure the stable operation of tunnelling complexes in accordance with the technological scheme.
The analysis showed that in the process of MWT conducting, there were violations of the operating modes of tunnelling complexes for both technological, technical and organizational reasons [7,8]. The management of mining processes can be synchronized through the organization of work based on the implementation of appropriate control algorithms.
Material and Methods
The aim of this work was to analyze and simulate the dynamics of power consumption in mining complexes and to evaluate the energy efficiency of mining operations in coal mines.
To achieve this goal, the task was set to develop dynamic models of electrical profiles in coal mines, allowing to take into account technological solutions, mining and geological conditions and organizational factors of production.
The analysis of the technical characteristics of the tunnelling machines and complexes that are commercially available on the world market and their operating modes has shown that the main directions associated with improving the energy efficiency of mining operations are focused on increasing the power supply and productivity of mining machines. This progress is carried out due to the transition to combines of medium (35-50 tons, 100-160 kW) and heavy types (up to 100-150 tons, up to 600-900 kW), providing drivage of mine workings with a cross section of up to 40-45 m 2 , destruction of rocks with ultimate uniaxial compression strength up to 140-150 MPa, and having technical productivity up to 26-30 m 3 /hr [7,9].
The optimal way to analyze the efficiency of power consumption is to present the dependences of the technological and specific power consumption in the form of multifactor (three-dimensional) models of the following type (1):

W = f(Q1, Q2) and ω = f(Q1, Q2)    (1)

where: W, ω - respectively, the technological and specific power consumption for tunnelling; Q1, Q2 - respectively, the productivity in terms of the volume of extracted rock mass (tons) and in terms of the running meters of the mine working (meters).
If one of the parameters is assumed to be discrete (i.e., constant at given intervals), then we can construct a two-dimensional model that represents the projection of the function onto the plane (see Fig. 1). In this case, for this projection, the so-called normalized characteristics are obtained that reflect the nature of the change in the desired function, if one of the arguments is a fixed constant and the second argument is a variable. Such a model is an energy-and-technology profile or cross-section [7].
Based on the results of the analysis of energy-and-technological profiles for the tunnel sections, the dependences of the changes in the technological and specific power consumption on the productivity of these sections in terms of the amount of extracted rock mass, running meters of workings of rock and coal were determined. This allows determining the ranges of changes in power consumption indicators depending on the shift capacity relative to its stable level and establish boundary values in accordance with the criteria of energy efficiency [10,11,12].
The analysis of the dynamics of power consumption for each production site is carried out with the aim of establishing a general trend of the actual and specific power consumption and its subsequent comparison with planned (normalized) indicators. At the same time, the analysis of the causes of deviations from the general trend, including additive interference having a random nature, is performed.
Based on the determination of the stable operation levels of tunnelling mechanized complexes (TMC), as well as the study of the dynamics of power consumption, a predictive power consumption model is proposed. Its distinguishing feature is that it takes into account random processes of deviation from a stable level of power consumption (additive interference) associated with a violation of the technological process. The proposed algorithm for calculating the energy characteristics and energy technology profiles allows justifying and developing the energy efficiency management system taking into account their probabilistic nature [13,14].
To implement the predictive model of energy flows taking into account the additive component, it is proposed to implement a time trend, the distinguishing feature of which is its limitation within the standard deviation.
In this work, a time series forecasting model is proposed, consisting of the trend T(t) and the additive component A(t) for the daily values of the ω indicator:

ω(t) = T(t) + A(t)    (2)

where a linear function is used as the trend: T(t) = a0 ± a1·t. The additive component (interference) is random; to model it, a probabilistic approach is used. In the case when A(t) (the deviation from the trend) obeys the normal distribution law and has a standard deviation S_A(t), the random noise obeys the "three sigma rule" and is determined by formula (3):

S_A(t) = sqrt( (1/n) Σ ( A(t) - Ā(t) )² ),   |A(t)| ≤ 3·S_A(t)    (3)

where: n - the length of the range; Ā(t) - the average level of the interference; A(t) - the deviation from the trend. At the preliminary stage, the average shift productivity indicators of the sites were determined by the volume of extracted rock mass Q1 (t), running meters of mining workings Q2 (m), as well as by the shift power consumption W (kWh) and its specific consumption ω1 (kWh/t) and ω2 (kWh/m).
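A minimal Python sketch of the forecasting model in (2)-(3) is shown below: a linear trend is fitted to the daily values of ω and the additive component is bounded by the three-sigma rule. The synthetic data and the forecasting horizon are illustrative only.

```python
import numpy as np

def fit_trend_forecast(omega, horizon):
    """Fit a linear trend to daily specific power consumption and forecast it,
    bounding the additive component by the three-sigma rule."""
    t = np.arange(len(omega))
    a1, a0 = np.polyfit(t, omega, 1)           # linear trend T(t) = a0 + a1*t
    trend = a0 + a1 * t
    residuals = omega - trend                   # additive component A(t)
    sigma = residuals.std(ddof=1)               # standard deviation S_A(t)
    t_future = np.arange(len(omega), len(omega) + horizon)
    forecast = a0 + a1 * t_future
    return forecast, forecast - 3 * sigma, forecast + 3 * sigma

# toy usage: one quarter (90 shifts) of noisy daily values of omega
rng = np.random.default_rng(3)
omega = 61.7 - 0.046 * np.arange(90) + rng.normal(0, 2.0, 90)
forecast, lo, hi = fit_trend_forecast(omega, horizon=10)
print(forecast.round(1))
```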
Results and Discussion
As a result of the analysis of indicators of annual productivity and technological power consumption in MWT, the following was established.
The range of changes in the indicators for the specific power consumption relative to the rock mass ω1 and relative to the drivage ω2 has a wide range of values, because direct extraction of the rock mass as part of the technological process is determined by its volume, rock density, degree of loosening and fullness of loading of the bucket or loading machine's feeding table.
The analysis of energy-and-technological characteristics for various shifts showed that the main parameters largely depend on the level of qualification, the degree of organization and compliance with the planned performance indicators. This allows us to make operational adjustments to the maintenance of MWT. Moreover, the zone of optimal power consumption for stable modes for TMC is determined by the correlation ellipse for the technological power consumption [7].
To assess the actual specific power consumption for MWT, it is proposed to use the energy technology profiles presented in Fig. 1 and 2.
Since the initial parameter is the planned indicator of power consumption, then in accordance with it the optimal levels of power consumption for the TMC are determined. Exceeding these indicators shows an increase in power consumption and a drop in productivity. A decrease in these indicators shows an increase in wear of the cutting mechanism and the need to stop MWT for unscheduled repairs and maintenance.
The analysis of the dynamics showed that there are stable levels of specific power consumption. Deviations from them in a range exceeding σ (the standard deviation) lead to an excessive consumption of electricity or to an increase in the likelihood of stopping the MWT for unscheduled maintenance or repair. The graph of changes in the daily specific power consumption of the tunnelling site is shown in Fig. 3 as an example. The overall trend for the quarter is described by the linear dependence ω = 61.7 - 0.046t. The analysis shows that the range of change of ω(t) indicates significant deviations from the general trend and exceeds the recommended limits corresponding to stable operation of the MWT. A predictive model of power consumption, obtained as a result of exponential smoothing and consisting of a time trend taking into account the additive interference, which is a deviation within ± Δε, is shown in Fig. 4. The forecast model, based on the quarterly schedule, is designed for 10 days with a lag of 12 for fixed indicators.
Conclusion
As a result of the research, the following statements were made.
1. The energy flow indicators have a fairly wide range of variation; therefore, the zone of optimal stable operation of the TMC should be determined taking into account permissible deviations from the normal operating mode.
2. The reasons for the absolute indicators of power consumption falling outside the acceptable range are: excess of the speed of workings' drivage by individual teams during the shift, violation of the overall cross-section of the mine, a sharp change in the geological characteristics of soils, loss of time to eliminate accidents on technological equipment, and organizational reasons associated with violation of the technological process of tunnelling.
3. As a result of the regression analysis, energy-and-technological profiles of energy characteristics were obtained to assess the effectiveness of tunnelling operations.
4. Dynamic models taking into account general trends and additive components for determining energy efficiency indicators for MWT were developed. The obtained dynamic models are the basis for predicting the level of power consumption, substantiating regulatory indicators, as well as implementing power management algorithms.
5. Practical recommendations for improving operational energy efficiency, taking into account the technical conditions for tunnelling at coal mines, include the following.
-for rock loading machines -ensuring the quality of face preparation in accordance with the categoricity of the mine, mining and geological conditions, the process scheme of drilling and blasting works and the blasting pattern in order to prevent the appearance of bulk rock pieces or their hanging, as well as exceeding the cross-section of working in the rough; -for tunnelling combines of selective action -selection of parameters of the mode of destruction taking into account the degree of abrasiveness and permissible limits of rock strength for uniaxial compression and tension, in order to reduce wear of the rock-breaking tool, change its geometry and dynamic forces acting on the combine;
|
2020-06-25T09:07:33.798Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "6dc57bc90a3cf02910c9f7018a9040b16f2cf014",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/34/e3sconf_iims2020_01006.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "031136f5741a41178d195518cafed83fd4cca3ea",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
239016496
|
pes2o/s2orc
|
v3-fos-license
|
Rheumatoid Arthritis: Automated Scoring of Radiographic Joint Damage
Rheumatoid arthritis is an autoimmune disease that causes joint damage due to inflammation in the soft tissue lining the joints known as the synovium. It is vital to identify joint damage as soon as possible to provide necessary treatment early and prevent further damage to the bone structures. Radiographs are often used to assess the extent of the joint damage. Currently, the scoring of joint damage from the radiograph takes expertise, effort, and time. Joint damage associated with rheumatoid arthritis is also not quantitated in clinical practice and subjective descriptors are used. In this work, we describe a pipeline of deep learning models to automatically identify and score rheumatoid arthritic joint damage from a radiographic image. Our automatic tool was shown to produce scores with extremely high balanced accuracy within a couple of minutes and utilizing this would remove the subjectivity of the scores between human reviewers.
Introduction
Rheumatoid arthritis (RA) is an autoimmune disease where the immune system mistakenly attacks the body's own tissues. This causes inflammation in the synovium, which eventually leads to joint damage. External symptoms can include red and swollen joints accompanied by pain. About 0.5 -1% of the global population are affected by RA [1]. Inflammation of the joint will slowly cause cartilage, the layer of tissue that covers the ends of the bones, to erode. As the amount of cartilage decreases, the joint space also narrows. Long-term inflammation can also cause an increase in osteoclasts, cells that break down the tissue in bones, resulting in bone erosion. The degrees of narrowing and erosion observed in radiographs for RA are used in the Sharp/van der Heijde (SvH) method [2] to measure joint damage. This method looks at specific joints in the hands and feet, usually linked to inflammation caused by RA. Radiographs can provide a fair representation of joint damage but are presently not used to their full potential because there is no fast way to measure the damage quantitatively [1]. Currently, the scoring of the degree of joint damage in RA patients is done by manually reviewing their radiographs. This is generally expensive as it takes effort and time. Additionally, joint damage associated with RA is not quantitated in clinical practice, but instead, subjective descriptors such as "mild, moderate, or severe" are used in official reports [1]. Thus, it is desirable to have a method that can quickly and objectively classify joints to allow for more consistent and accurate scoring in the clinical and research settings without the need for much medical expertise.
For image classification, Deep Learning (DL) models have been outperforming classical Machine Learning (ML) models since the 2012 ImageNet challenge [3]. The DL model AlexNet, which contained 5 convolutional neural network (CNN) layers, achieved a 15.2% Top-5 classification error [4]. Thereafter, subsequent years of ImageNet challenges have been dominated by CNN DL models. CNN DL models can achieve such remarkable performance due to their abilities to extract essential features during training [5]. Raw images can be used directly as inputs without the need for prior feature extraction. The accuracies of these CNN DL models have been constantly increasing in plenty of diverse applications in computer vision medical tasks such as disease classification, brain cancer classification, organ segmentation, haemorrhage detection, and tumour detection [6]. Much work has also been done on applying CNN models to classifying X-ray images [7] [8], and how to enhance the image contrast [9]. Existing work done on the automatic scoring of erosion due to RA produced a model based on VGG16, a CNN architecture [10], that is as accurate as human scorers [11]. Instead of SvH, the Ratingen erosion scoring was used [12]. The segmentation of the joints was not part of their work as they used pre-extracted joint images.
Our work built upon their approach and managed to achieve an even higher accuracy.
As a deviation from their work, we treated the problem as a classification of ordinal classes by utilizing ordinal class encoding. To deal with the imbalanced data, an under-sampling approach was attempted. We additionally included the automatic segmentation and extraction of joints. State-of-the-art DL models for computer vision were applied to remove image noise, and accurately segment the joints, which ensured quality of the training samples. A lightweight U-Net architecture which was constructed for bones segmentation [13] was used for the purpose of removing background noise. As for robust joint detection in X-ray images, the YOLOv3 model [14] was implemented. This paper presents the results of our CNN model trained on these joint images for the automatic measurement of joint damage according to the SvH scoring method.
Dataset
The X-ray datasets used for the analyses described in this paper were contributed by the University of Alabama at Birmingham, and were compiled from two sources -CLEAR (Consortium for the Longitudinal Evaluation of African Americans with Rheumatoid Arthritis [15]), and TETRAD (Treatment Efficacy and Toxicity in RA Database and Repository: A study of RA patients starting biologic drugs [16]).
A total of 367 sets of 4 radiographs each per patient were provided as JPG files. Each patient had a radiograph of their left hand (LH), right hand (RH), left foot (LF), and right foot (RF). Corresponding SvH scores were given in CSV format. It includes each patient's overall total damage scores, total erosion scores, total narrowing scores, and the narrowing and erosion scores of each joint. In addition, it was noticed that the size of the X-ray images varies considerably. Thus, this called for the need to resize them before the images could be used for training inputs. In addition, the dataset consists of a large number and percentage of score = 0 for both narrowing and erosion (Fig. 1).
Image re-scaling and padding
All the images were resized to 1500 x 1200 pixels. The original aspect ratio was retained via padding with black pixels at the borders where necessary to ensure the aspect ratio of the original is maintained.
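A minimal OpenCV sketch of this resizing-with-padding step is given below; centring the image within the padded canvas is an assumption, since the text only states that black borders were added.

```python
import cv2
import numpy as np

def resize_with_padding(img, target_h=1500, target_w=1200):
    """Resize an image to target_h x target_w while preserving aspect ratio,
    padding the borders with black pixels."""
    h, w = img.shape[:2]
    scale = min(target_h / h, target_w / w)
    resized = cv2.resize(img, (int(w * scale), int(h * scale)))
    pad_h = target_h - resized.shape[0]
    pad_w = target_w - resized.shape[1]
    return cv2.copyMakeBorder(resized,
                              pad_h // 2, pad_h - pad_h // 2,
                              pad_w // 2, pad_w - pad_w // 2,
                              cv2.BORDER_CONSTANT, value=0)

# toy usage with a random grayscale "radiograph"
img = np.random.randint(0, 255, (2000, 1800), dtype=np.uint8)
print(resize_with_padding(img).shape)   # -> (1500, 1200)
```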
Cropping
The raw images were first cropped to remove unnecessary parts of the limb that do not include the joints. For the hands, the bottom of the image was removed. This mostly removed the beginnings of the ulna and radius bones and part of the wrist. For the feet, the bottom ¼ of the image was removed. This did not remove any of the joints involved in scoring as the toes are found nearer the top of the feet.
Noise removal and increasing image contrast
The contrast of the images was then increased using CLAHE with a clip limit of 2 and the default grid size of (8,8). This was done to address the issue that some images were almost too dark or too light to be seen clearly. Fig. 2 shows the importance of improving contrast.
Fig. 2.
Randomly selected before and after CLAHE processing images for side by side comparison.
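The CLAHE step can be reproduced with OpenCV as in the following sketch, using the clip limit of 2 and the (8, 8) grid size stated above; the synthetic test image is illustrative only.

```python
import cv2
import numpy as np

def enhance_contrast(gray_img, clip_limit=2.0, grid_size=(8, 8)):
    """Apply CLAHE with clip limit 2 and an 8x8 tile grid, as in the text."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=grid_size)
    return clahe.apply(gray_img)

# toy usage on a synthetic low-contrast image
img = (np.random.rand(256, 256) * 60 + 100).astype(np.uint8)
enhanced = enhance_contrast(img)
print(img.std(), enhanced.std())   # pixel-value spread increases after CLAHE
```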
Joint Segmentation and Identification
Apart from the joints themselves, the rest of the image is unnecessary information.
As such, first extracting only the joint images and using them to train a model to predict their scores would save computation and improve accuracy. Hence, a two-step method was used: U-Net background masking, and YOLOv3.
Mask Extraction Algorithm
Before the U-Net could be trained and used for background removal, an algorithm was generated and applied on the images first. This included 3 main steps: 1) Getting entropy of the image, 2) applying Otsu thresholding, and, 3) further removal of mask noise.
Entropy is a measure of the amount of randomness in an image [17]. As there is variation in pixel values between the background and the skin and bones of the limb, this textural feature can be obtained as the entropy of the areas across the image. Entropy can be defined as follows:

H = - Σ_{i=1}^{N} p(i) log p(i)

where N is the number of discrete levels of pixel values within a region of 37 x 37 pixels, and p(i) is the probability of a pixel belonging to the discrete level i, which is just the proportion of pixels that are in that level.
After obtaining the entropy across the image, Otsu thresholding [18] is applied to differentiate between background and limb. An intensity threshold level is selected by minimizing the intensity variance within each class. This is defined as minimizing:

σ_w²(t) = q₁(t) σ₁²(t) + q₂(t) σ₂²(t)

where q₁(t) and q₂(t) are the probabilities of a pixel being in the two classes when separated by the threshold t, and σ₁²(t) and σ₂²(t) are the intensity variances of the two classes.
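A minimal sketch of the entropy-plus-Otsu masking step using scikit-image is shown below; the 37 x 37 window follows the text, while the specific scikit-image functions (whose names vary slightly between library versions) are an assumed implementation rather than the authors' code.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.filters import threshold_otsu
from skimage.morphology import square

def limb_mask(gray_img):
    """Rough limb/background mask: local entropy over ~37x37 windows,
    then Otsu thresholding of the entropy map."""
    ent = entropy(gray_img, square(37))        # textural feature per pixel
    thresh = threshold_otsu(ent)               # split background vs. limb
    return (ent > thresh).astype(np.uint8)

# toy usage: flat noisy background with a brighter, textured "limb" region
rng = np.random.default_rng(4)
img = rng.integers(0, 20, (200, 200)).astype(np.uint8)
img[50:150, 60:140] = rng.integers(80, 200, (100, 80)).astype(np.uint8)
print(limb_mask(img).sum())
```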
The mask obtained from applying the threshold mostly has some amount of noise due to the varying and random X-ray photons detected in the background of the raw image. This method is adapted from a paper which only used flood filling [19], but that alone was found to be unsatisfactory. Hence, additional steps, such as contour identification and filling, and flood filling from multiple origins, were taken to further remove this noise.
Contours, the borders of a region with the same intensity, were first identified and filled with white pixels. Regions with sizes smaller than a threshold of 1% of the total white area were then removed by applying a layer of black pixels along their boundaries. This mostly cleared any small regions of noise in the background.
Flood filling was then used to remove any noise found within the limb. The mask image was flood filled from the four corners in case any corner might have some noise remaining which could prevent the flood filling from working. These steps resulted in a mostly full and cleaned mask of the limb. The mask obtained was then applied onto the cropped image to remove all the noise from the background.
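The mask clean-up can be sketched in OpenCV as follows. This is an assumed, simplified variant: small regions are removed outright rather than by drawing black pixels along their boundaries, and hole filling is done by inverting a background flood fill seeded at the four corners.

```python
import cv2
import numpy as np

def clean_mask(mask):
    """Fill contours, drop small noise regions, and flood-fill the background
    from all four corners so holes inside the limb are removed."""
    mask = (mask > 0).astype(np.uint8) * 255
    # 1) fill every detected contour with white
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
    # 2) remove regions smaller than 1% of the total white area
    total_white = max(int((mask > 0).sum()), 1)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 0.01 * total_white:
            mask[labels == i] = 0
    # 3) flood fill the background from the four corners, then invert to find holes
    h, w = mask.shape
    ff = mask.copy()
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)
    for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
        cv2.floodFill(ff, ff_mask, seed, 255)
    holes = cv2.bitwise_not(ff)                # pixels not reachable from outside
    return cv2.bitwise_or(mask, holes)
```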
U-Net Masking
To achieve robustness against the background noise, the effectiveness of the nonuniform background noise removal step had to be improved. It is almost impossible to develop a robust unsupervised algorithm to remove noises which can come in various forms. As such, a deep learning model was used to learn how to generate masks of the limbs to remove noise in the background. The U-Net architecture was selected for this model. The traditional U-Net has been commonly used for semantic image segmentation especially for biomedical applications. It has been modified from a conventional Fully Convolutional Network (FCN) [20] so that it is able to perform well on medical images [21].
It is implemented like an encoder-decoder network but contains skip connections [21].
These skip connections create a link between layers and others that are deeper in the network. Unlike the classical encoder-decoder network, the output space mapping depends on both the latent space and the input space instead of only the latent space.
The specific architecture chosen was a lightweight U-Net architecture which was constructed for bones segmentation [13], but in this project, it was used for the purpose of background masking. Here, the number of down and up-sampling operations were adjusted to achieve higher performance in radiographic image segmentation [13].
Additionally, a multi-scale block (MSB) structure was used to do feature extraction. The MSB utilizes filters of different kernel sizes to deal with features at various scales [13].
Masks that were used to train the U-Net were first obtained using the Mask Extraction algorithm. From there, good mask outputs which match the limb well and do not contain noise from the background were selected by eye. A total of 296 hand masks and 238 feet masks were selected for training. These were then used to train 2 separate U-Net models, one for each type of limb. Fig. 3 shows examples of good masks that were chosen from the Mask Extraction algorithm. The optimiser used was the Adam optimizer with a learning rate set as 0.0001.
The batch size was set at 16, and training ran for 200 epochs. Early stopping was used to prevent overfitting of the model. The loss function selected was binary cross entropy since a mask pixel can only be either 0 or 1. This is defined mathematically by:

BCE = -(1/N) Σ_{i=1}^{N} [ y_i log(ŷ_i) + (1 - y_i) log(1 - ŷ_i) ]

where ŷ_i is the value of the i-th pixel from the predicted output, y_i is its corresponding target value, and N is the number of pixels in total.
YOLOv3 and Joint Identification
YOLOv3 is used for fast object detection where multiple bounding boxes would be predicted at the same time together with their class probabilities from one full image directly by a single network [14]. It outperforms Faster R-CNN (FRCNN) in terms of fewer background errors [14]. YOLOv3 is also generalizable as it was able to learn broader representations of images. Hence, it can be applied onto medical images such as radiographs in this case.
All selected images were annotated via the use of an open-source graphical image annotation tool called labelImg [22]. Since all the joints are of varying sizes, all boundary boxes had to be manually drawn individually.
The hands and feet images were trained separately. Since both the hands and feet contain the same number of classes (PIP (Proximal Interphalangeal) and MCP (Metacarpophalangeal) for hands; PIP and MTP (Metatarsal-Phalangeal) for feet), the parameters for training these 2 sets of data were the same. The configurations that were changed from the default settings are as follows:
1. max_batches = 4000
2. steps = 3200, 3600
3. classes = 2
4. filters = 21 at the end of each convolution block (layers 82, 94, and 106)
However, the YOLOv3 models were not trained from scratch. Pre-trained weights on the Common Objects in Context (COCO) dataset, which can predict 80 classes, were used [14]. The trained YOLOv3 models could then detect the ROIs and identify the type of joint (PIP or MCP for hand images, PIP or MTP for foot images) by keeping the confidence threshold at 0.5, even though most of the joints were classified with around 95-100% confidence. If more than 10 or 6 joints were identified for the hands and feet respectively, the top 10 or 6 joints based on their confidence levels were picked (Fig. 4).
However, the predictions do not provide information on which PIP or MCP/MTP the joints belong to. To determine which finger or toe (index, middle, etc.) each joint belongs to, a simple algorithm was developed to help in the identification. The algorithm is based on the detected joints' positions along the horizontal axis, as sketched below. A backup algorithm was also created to determine joint type and location if the YOLOv3 model predicted the number of each joint type wrongly.
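A minimal sketch of the detection filtering and left-to-right ordering is given below; the dictionary format of the detections and the handedness flag are hypothetical, since the paper does not describe its data structures or orientation convention.

```python
def select_and_order_joints(detections, expected, img_is_right_limb=True):
    """Keep the `expected` most confident detections (10 for hands, 6 for feet)
    and order them along the horizontal axis so each box can be tagged to a digit.

    detections: list of dicts {'cls': 'PIP'|'MCP'|'MTP', 'conf': float,
                               'x_center': float} above the 0.5 threshold.
    """
    kept = sorted(detections, key=lambda d: d["conf"], reverse=True)[:expected]
    # position along the horizontal axis identifies the digit (index, middle, ...)
    ordered = sorted(kept, key=lambda d: d["x_center"],
                     reverse=not img_is_right_limb)
    return ordered

# toy usage: three fake detections for a foot image
dets = [{"cls": "MTP", "conf": 0.98, "x_center": 120},
        {"cls": "PIP", "conf": 0.95, "x_center": 340},
        {"cls": "MTP", "conf": 0.60, "x_center": 500}]
print([d["cls"] for d in select_and_order_joints(dets, expected=3)])
```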
In the case where the model detects fewer than the required number of joints (10 for hands, and 6 for feet), the joint types identified by the model would not be utilized and their locations cannot be determined accurately. As such, this information would be unavailable during prediction of the scores for these joints. Consequently, the scores cannot be tagged to the exact joint.
Metric for evaluating the model performance
The joint scores for both erosion and narrowing take discrete integer values. As such, they can be deemed as separate classes. This means that joint scoring becomes a classification problem as compared to initially using RMSE for regression. However, it should be noted that these classes should be considered ordinal in nature.
The metric used to measure the performance of the models tested thus had to change. A modified accuracy score, the ±1 balanced accuracy, was used. To understand this, balanced accuracy is first defined as:

Balanced Accuracy = (1/C) Σ_{c=1}^{C} (correctly predicted samples of class c) / (total samples of class c)

where C is the total number of classes.
The ±1 balanced accuracy is similar to balanced accuracy just that samples which were predicted as a neighbouring class to its actual class (off by one class) would be considered as correctly predicted. This helps to account for the ordinality of the classes as well and an off-by-one classification is still medically acceptable [11]. This is also a suitable metric for a test set with an imbalanced distribution of the classes. In this case, a prediction of all samples as class-0 could achieve a high percentage accuracy. Hence, a better measure of performance would be how accurate the model is at predicting samples amongst each class. Using a balanced accuracy achieves this.
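The ±1 balanced accuracy can be computed as in the following sketch, which averages, over the classes present, the fraction of samples predicted within one class of their true score:

```python
import numpy as np

def pm1_balanced_accuracy(y_true, y_pred):
    """±1 balanced accuracy: per-class recall where a prediction off by one
    ordinal class still counts as correct, averaged over the classes present."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = []
    for c in np.unique(y_true):
        in_class = y_true == c
        recalls.append(np.mean(np.abs(y_pred[in_class] - c) <= 1))
    return float(np.mean(recalls))

# toy usage: the class-4 sample predicted as 2 still counts as wrong
print(pm1_balanced_accuracy([0, 0, 2, 4], [0, 1, 2, 2]))   # -> 0.666...
```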
VGG16 with Transfer Learning (TL)
VGG16 is a vision model with a CNN architecture that performs well in image classification [22]. Transfer learning refers to applying knowledge previously learnt from other tasks to new but related tasks. Unfortunately, the publicly available pre-trained weights for VGG16 were not trained on any radiographic images; as such, the model had to be adapted to the radiographs using transfer learning.
The original VGG16 architecture was used, but the convolution layers were frozen, leaving only the weights of the fully connected dense layers trainable. This preserves the pre-trained convolutional weights while the dense layers are tuned to suit the radiographic images.
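A minimal Keras sketch of this setup is given below; the input size, the width of the dense layer, and the number of output labels are illustrative assumptions rather than the exact configuration used in the study, and the sigmoid output is explained in the following subsections.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

n_labels = 6  # assumed number of ordinal indicator labels for one score type

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze all convolution layers; only the head is trained

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(n_labels, activation="sigmoid"),  # ordinal multi-label output
])
```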
Ordinal Class Encoding
Ordinal class encoding is a special encoding similar to multi-label classification encoding: a higher-numbered class carries the labels of all classes below it plus one additional label, so the label set of each class contains the label sets of all lower classes. For example, with three classes (0-2), class 0 would be encoded as [1, 0, 0], class 1 as [1, 1, 0], and class 2 as [1, 1, 1].
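A small sketch of this encoding, and of the corresponding decoding of predictions back to a class index, is shown below, following the description above.

```python
import numpy as np

def ordinal_encode(label, n_classes):
    """Cumulative multi-label encoding: class k receives k+1 leading ones,
    e.g. for three classes, 0 -> [1,0,0], 1 -> [1,1,0], 2 -> [1,1,1]."""
    vec = np.zeros(n_classes, dtype=np.float32)
    vec[: label + 1] = 1.0
    return vec

def ordinal_decode(probs, threshold=0.5):
    """Predicted class = number of consecutive labels above the threshold, minus one."""
    k = 0
    for p in probs:
        if p < threshold:
            break
        k += 1
    return max(k - 1, 0)
```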
Loss and activation Functions
When using ordinal class encoding to train the joint models, the loss function needs to be binary cross entropy (BCE) with a sigmoid final activation. The sigmoid activation lets each label be predicted independently, as the multi-label encoding requires, and BCE is used because the classification is performed as multiple binary decisions: the first decision separates class-0 samples from all higher classes, the next separates classes 0-1 from all higher classes, and so on for the remaining classes. In this way, the multi-labels and BCE take the class ordering into account during training.
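The effect of binary cross-entropy on these cumulative labels can be illustrated with a small numerical example (a sketch with made-up probabilities):

```python
import numpy as np

def bce(y_true, y_prob, eps=1e-7):
    """Binary cross-entropy averaged over the ordinal indicator labels."""
    y_prob = np.clip(y_prob, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))))

y_true = np.array([1.0, 1.0, 0.0])              # a score-1 joint under the encoding above
print(bce(y_true, np.array([0.9, 0.8, 0.2])))   # small loss: ordering respected
print(bce(y_true, np.array([0.9, 0.2, 0.8])))   # larger loss: ordering violated
```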
Under-sampling
Under-sampling refers to using fewer samples of a class for training. In this case, the number of joints with 0s was reduced to an amount close to the class that had the next highest sample size.
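A simple way to implement this step is sketched below (random under-sampling of the score-0 joints; the random seed and the fallback to the second-largest class size are illustrative choices).

```python
import numpy as np

def undersample_majority(X, y, majority_class=0, target_size=None, seed=42):
    """Randomly drop majority-class samples so their count roughly matches the
    next-largest class."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    counts = np.bincount(y)
    if target_size is None:
        target_size = int(np.sort(counts)[-2])  # size of the second-largest class
    majority_idx = np.where(y == majority_class)[0]
    keep_majority = rng.choice(majority_idx,
                               size=min(target_size, len(majority_idx)),
                               replace=False)
    keep = np.concatenate([keep_majority, np.where(y != majority_class)[0]])
    return X[keep], y[keep]
```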
Training
A total of four models were trained: two separate models for erosion and narrowing of the hand joints, and two separate models for erosion and narrowing of the foot joints. All models were trained for 250 epochs with a learning rate of 0.0001. As the convolution layers were frozen, the number of trainable parameters was reduced.
Results
Since the joint detection and identification steps, using the lightweight U-Net and YOLOv3 respectively, were shown to have an accuracy of 99.991%, the accuracy of the full pipeline with these models integrated can be approximated by the accuracies shown in Table 1, which reports the joint-wise ±1 balanced accuracy for each prediction model. See Fig. 6 for the models' confusion matrices. For more extensive use, the model needs to be reliable enough for clinical settings where patients' wellbeing is involved, so the standard required of the model is extremely high. The approach proposed here, and the performance it achieved, shows great promise of reaching this standard. Once achieved, the solution would have a great impact on RA patient care through better disease management, by saving the time and effort of manual scoring and by providing an objective measure of the degree of damage for clinical diagnoses. It would also aid research that requires the use of objective joint damage scores.
Future Work/ Limitations
To achieve a standard high enough for deployment, several further improvements could be implemented to overcome the limitations identified.
Firstly, improvements on the approach could be made. Bone segmentation could be used to remove the skin tissue in the radiograph. This could help improve the joint detection rate and increase prediction accuracy as there is less unnecessary information from the skin. Another improvement could be to do binary classification of the 0 class and non-0 classes before differentiating between the non-0 classes. This could help with the imbalance in the distribution of classes where there are a lot of 0 class joints. Computer vision could also be used to get the physical characteristics of the joint spaces such as distance measures for narrowing scores or the contours of the joint.
Apart from adjusting the method, accuracy of the prediction model can also be improved by obtaining more data samples for training. This could be done by getting data from hospitals and laboratories. Hopefully, with these improvements, the model will be able to achieve a high level of accuracy and hence, reliability, so that it can be deployed, and have its benefits realized.
Conclusion
Using a joint segmentation approach and training joint score prediction models with ordinal class encoding, under-sampling and transfer learning, joint-wise ±1 balanced accuracies ranging from 91.51% to 97.30% were achieved.
Procalcitonin as a Diagnostic and Prognostic Factor for Tuberculosis Meningitis
Background and Purpose We investigated the potential role of serum procalcitonin in differentiating tuberculosis meningitis from bacterial and viral meningitis, and in predicting the prognosis of tuberculosis meningitis. Methods This was a retrospective study of 26 patients with tuberculosis meningitis. In addition, 70 patients with bacterial meningitis and 49 patients with viral meningitis were included as the disease control groups for comparison. The serum procalcitonin level was measured in all patients at admission. Differences in demographic and laboratory data, including the procalcitonin level, were analyzed among the three groups. In addition, we analyzed the predictive factors for a prognosis of tuberculosis meningitis using the Glasgow Coma Scale (GCS) at discharge, and the correlation between the level of procalcitonin and the GCS score at discharge. Results Multiple logistic regression analysis showed that a low level of procalcitonin (≤1.27 ng/mL) independently distinguished tuberculosis meningitis from bacterial meningitis. The sensitivity and specificity for distinguishing tuberculosis meningitis from bacterial meningitis were 96.2% and 62.9%, respectively. However, the level of procalcitonin in patients with tuberculosis meningitis did not differ significantly from that in patients with viral meningitis. In patients with tuberculosis meningitis, a high level of procalcitonin (>0.4 ng/mL) was a predictor of a poor prognosis, and the level of procalcitonin was negatively correlated with the GCS score at discharge (r=-0.437, p=0.026). Conclusions We found that serum procalcitonin is a useful marker for differentiating tuberculosis meningitis from bacterial meningitis and is also valuable for predicting the prognosis of tuberculosis meningitis.
INTRODUCTION
Tuberculosis is caused by the bacterium Mycobacterium tuberculosis and is a global health concern, especially in developing countries. 1 Despite the rapid economic development of the Republic of Korea, the prevalence of tuberculosis is high, with an incidence of 108 cases per 100,000 people. 1,2 Tuberculosis meningitis is known to be the most severe form of tuberculosis and is detected in less than 1% of all tuberculosis cases. 3 It primarily affects the meninges of the brain and spinal cord along with the adjacent brain parenchyma. 1 Tuberculosis meningitis is a devastating disease because half of the affected patients either die or suffer from permanent sequelae. 3 Early diagnosis and treatment is important for better prognoses. However, the diagnosis is often problematic despite significant advances in diagnostic techniques, resulting in tuberculosis meningitis often only being diagnosed after a substantial delay. 4 There are several reasons for a delayed diagnosis of tuberculosis meningitis. First, the initial presentation of tuberculosis meningitis may be similar to other types of meningitis, and it is especially difficult to differentiate from partially treated bacterial meningitis. Second, smear tests for acid-fast bacillus are positive in only 5-30% of patients with tuberculosis meningitis. In addition, cultures of Mycobacterium tuberculosis from the cerebrospinal fluid (CSF) are positive in approximately 50% of cases, and a result takes several weeks to obtain. The detection of Mycobacterium tuberculosis DNA in the CSF using the polymerase chain reaction (PCR) is a widely used diagnostic method, but its sensitivity is only 56%. 1 Thus, simple and rapid predictors for diagnosing tuberculosis meningitis are needed.
Procalcitonin is a peptide consisting of 116 amino acids and is a calcitonin precursor. 5 It is usually synthesized in the thyroid gland. 6 However, there is evidence that cells of the monocyte-macrophage system are capable of synthesizing procalcitonin, as well as other nonthyroidal tissues, mostly parenchymal cells, when stimulated by bacterial products. 5 During a bacterial infection, the production of procalcitonin is induced by tumor necrosis factor alpha (TNF-α) and interleukin 2 (IL-2). 7 This makes serum procalcitonin a sensitive marker of severe bacterial infection, and many studies have found serum procalcitonin to be the best marker for differentiating bacterial meningitis from viral meningitis. 8,9 In addition, there is evidence that measuring the level of procalcitonin in the blood is useful for differentiating bacterial pneumonia from tuberculosis and pyogenic spondylodiscitis from tuberculosis spondylodiscitis. 2,7,[10][11][12] Moreover, the levels of procalcitonin in the blood and pleural fluid differ significantly between patients with tuberculosis and nontuberculosis pleurisy. 13 However, no study has investigated whether serum procalcitonin is a useful marker for discriminating between tuberculosis meningitis and bacterial meningitis. Additionally, no published studies have addressed the value of serum procalcitonin as a predictive factor for the prognosis of tuberculosis meningitis.
The aim of this study was to elucidate the potential role of serum procalcitonin in differentiating tuberculosis meningitis from bacterial and viral meningitis, and in predicting the prognosis of tuberculosis meningitis.
Participants
This study was approved by the Institutional Review Board at our institution. It was a retrospective study involving patients with a clinical suspicion of tuberculosis meningitis admitted to the neurology departments of two tertiary hospitals in Busan, Korea from January 2009 to July 2015. All of the patients had typical clinical histories and laboratory findings of tuberculosis meningitis. In addition, all patients who were admitted to these centers with a clinical suspicion of bacterial or viral meningitis during the same period were included as the disease control group for comparisons.
We enrolled patients with tuberculosis meningitis according to the current consensus of the diagnosis and a scoring system, and classified them into definite and probable tuberculosis meningitis. 14 The scoring system comprised four parts (clinical criteria, CSF criteria, cerebral imaging criteria, and evidence of tuberculosis elsewhere), and points were given for variables that were present. Patients were judged to have definite tuberculosis meningitis if Mycobacterium tuberculosis was identified in their CSF. Patients had probable tuberculosis meningitis if they scored at least 10 points with cerebral imaging results and 12 points without cerebral imaging results. 14 In addition, we defined bacterial meningitis based on a combination of the clinical history and laboratory findings. These included 1) clinical features, such as the acute onset of headache, fever, and signs of meningeal irritation; 2) positive CSF findings, including pleocytosis (≥5/mm³, mainly neutrophilic), an elevated protein concentration (≥45 mg/dL), and a reduced ratio of CSF glucose to serum glucose (≤0.60); 3) a negative CSF stain, culture, or PCR for viruses, mycobacteria, and fungi; and 4) a positive CSF culture, smear, or PCR for bacterial pathogens or a good specific response to antibacterial therapy. 15 Moreover, we included patients with viral meningitis. These patients had 1) clinical features, such as the acute onset of headache, fever, and signs of meningeal irritation; 2) no signs of cortical involvement such as altered consciousness, aphasia, or seizures; 3) positive CSF findings including pleocytosis (≥5/mm³, mainly lymphocytic); 4) a negative CSF stain, culture, or PCR for bacteria, mycobacteria, and fungi; and 5) a positive PCR for viral pathogens or full recovery without any specific treatments including antibacterial or antituberculosis therapy. 15 We excluded patients whose blood procalcitonin levels had not been measured.
A total of 26 patients with tuberculosis meningitis met the inclusion criteria for this study (Fig. 1). In addition, we included 70 patients with bacterial meningitis and 49 patients with viral meningitis as disease control groups. Only 12 of the 26 patients with tuberculosis meningitis had a positive CSF culture, smear, or PCR for Mycobacterium tuberculosis, and they were classified as definite tuberculosis meningitis. The remaining 14 patients were classified as probable tuberculosis meningitis. 14
Measurements
Differences in the demographic and laboratory data among patients with tuberculosis, bacterial, and viral meningitis were analyzed using the demographic profile including age, sex, and diabetes mellitus; blood profile including the white blood cell (WBC) count, platelet count, C-reactive protein (CRP), and procalcitonin; CSF profile including the WBC count, lymphocyte percentage of the total CSF WBC count (lymphocyte percentage), protein, adenosine deaminase (ADA), and the glucose ratio (CSF/blood); and clinical profile including systolic and diastolic blood pressures, heart rate, and score on the Glasgow Coma Scale (GCS) at the times of admission and discharge. We analyzed the blood pressure, heart rate, and blood and CSF profiles that were obtained on the day of admission. In addition, we analyzed the predictive factors for the prognosis of tuberculosis meningitis, which was based on the GCS score at discharge, which was used to divide the patients with tuberculosis meningitis into two groups: good prognosis (GCS score >8) and poor prognosis (GCS score ≤8). Moreover, we analyzed the correlations between the GCS score at discharge and demographic profile including age; blood profile including the WBC count, platelet count, CRP, and procalcitonin; CSF profile including the WBC count, lymphocyte percentage, protein, ADA, and glucose ratio (CSF/blood); and clinical profile including systolic and diastolic blood pressures, heart rate, and GCS score at admission.
Procalcitonin concentrations were measured using an electrical chemiluminescence assay (cobas e 411, Roche Diagnostics, Indianapolis, IN, USA), for which the measurement range was 0.05-200 ng/mL.
Statistical analysis
Comparisons were analyzed using the chi-square test for categorical variables and Student's t-test, Mann-Whitney U-test, analysis of variance, or Kruskal-Wallis test for numerical variables. Categorical variables are presented as frequency and percentage values. Numerical variables conforming to a normal distribution are presented as mean±standard deviation values, while other numerical variables are presented as median and interquartile-range values. In addition, separate bivariate logistic regression models for each tool were used to determine the odds ratios for differentiating tuberculosis meningitis from bacterial meningitis and tuberculosis meningitis with a poor prognosis from that with a good prognosis. To perform multivariate analyses and evaluate the sensitivity and specificity for distinguishing tuberculosis meningitis from bacterial meningitis, we determined the cutoff values using the receiver operating characteristic (ROC) curve. The data obtained using ROC curves are presented as 95% confidence interval (CI) and standard error (SE) values. Continuous variables were converted into categorical variables by dichotomizing them at these cutoff values in the logistic regression analysis. The correlation analysis was performed using Spearman's rank correlation test. Probability values of p<0.05 were considered indicative of statistical significance, and p<0.017 (0.05/3) was set as significant in the post-hoc comparisons of demographic and laboratory differences among the patients with tuberculosis, bacterial, and viral meningitis to correct for multiple comparisons. All statistical tests were performed using MedCalc® (version 13, MedCalc Software, Ostend, Belgium).
Fig. 1. Patient selection flow: 571 patients with a clinical suspicion of tuberculosis, bacterial, or viral meningitis were admitted to the neurology departments according to ICD-10 codes; 24 patients were excluded (20 with incomplete data and 4 finally diagnosed with other forms of meningitis).
Differences in measurements among the total participants
Table 1 compares the demographic and laboratory profiles among the patients with tuberculosis, bacterial, and viral meningitis. The demographic profile including age; the blood profile including the WBC count, platelet count, CRP, and procalcitonin; the CSF profile including the WBC count, lymphocyte percentage, protein, and ADA; and the clinical profile including GCS scores at admission and discharge differed significantly among the patients with tuberculosis, bacterial, and viral meningitis.
In the post-hoc analysis, the patients with tuberculosis meningitis had a higher platelet count in blood and GCS score at admission, and a lower WBC count, CRP, and procalcitonin in the blood, and a lower WBC count, lymphocyte percentage, and ADA in the CSF than those with bacterial meningitis. In addition, the patients with tuberculosis meningitis were older and had higher levels of protein and ADA in the CSF, and lower lymphocyte percentages in the CSF and lower GCS scores at admission and discharge than those with viral meningitis. However, the level of procalcitonin did not differ significantly between patients with tuberculosis meningitis and those with viral meningitis.
Differences in measurements between patients with tuberculosis meningitis and bacterial meningitis
The area under the ROC curve (AUC) was 0.784. Multiple logistic regression analysis showed that a low serum level of procalcitonin (≤1.27 ng/mL) and a low WBC count (<630/mm³) in the CSF were independently significant variables for predicting tuberculosis meningitis (Table 2). The risk of having tuberculosis meningitis with a low serum level of procalcitonin (≤1.27 ng/mL) was at least 22 times higher than the risk of having bacterial meningitis. These variables had very high sensitivity and negative predictive values, but low specificity and positive predictive values. Compared to the WBC count in the CSF, the level of procalcitonin in the blood had a higher sensitivity (96.2%).
Differences in measurements between tuberculosis meningitis patients with good and poor prognoses
Of the 26 patients with tuberculosis meningitis, 21 had a good prognosis (GCS score >8 at discharge) and 5 had a poor prognosis. The WBC count and procalcitonin level in the blood were higher in patients with a poor prognosis than in those with a good prognosis (Table 3). The AUC was 0.848 (95% CI=0.653-0.957, SE=0.124) for the WBC count in the blood and 0.876 (95% CI=0.688-0.972, SE=0.097) for the procalcitonin level in the blood. Multiple logistic regression analysis showed that only a high level of procalcitonin (>0.4 ng/mL) in the blood was an independent and significant predictor of a poor prognosis in patients with tuberculosis meningitis (Table 4). The risk of having a poor prognosis in patients with tuberculosis meningitis with a high serum level of procalcitonin (>0.4 ng/mL) was at least 20 times higher than for those with a good prognosis. The level of procalcitonin (r=-0.437, p=0.026), diastolic blood pressure (r=-0.493, p=0.010), and heart rate (r=-0.471, p=0.015) in patients with tuberculosis meningitis were negatively correlated with the GCS score at discharge (Fig. 2A). In addition, there was a positive correlation between the GCS scores at admission and discharge (r=0.439, p=0.025). However, age (r=-0.154), protein (r=0.309, p=0.142), and the glucose ratio (CSF/blood) (r=0.154, p=0.474) were not correlated with the GCS score at discharge. In addition, the levels of procalcitonin in patients with bacterial and viral meningitis were not correlated with the GCS score at discharge (r=-0.043, p=0.725 and r=-0.182, p=0.211, respectively) (Fig. 2B and C).
DISCUSSION
This study was the first to investigate the usefulness of the serum procalcitonin level as a diagnostic and prognostic factor for tuberculosis meningitis. We found that the level of procalcitonin in the blood at admission was lower in patients with tuberculosis meningitis than in those with bacterial meningitis. Moreover, we identified that a high level of procalcitonin in the blood at admission was an independent significant predictor of a low GCS score at discharge in patients with tuberculosis meningitis. In addition, the level of procalcitonin in the blood at admission in patients with tuberculosis meningitis was negatively correlated with the GCS score at discharge. These results suggest that serum procalcitonin is a useful marker for the diagnosis and prognosis prediction of tuberculosis meningitis at the initial diagnosis stage.
We found that the blood profile including the WBC count and CRP and the CSF profile including the WBC count, lymphocyte percentage, and ADA differed significantly between patients with tuberculosis and patients with bacterial meningitis. This finding is in agreement with those of previous studies. 16 In addition to these findings, we have demonstrated that the level of procalcitonin in the blood differs between patients with tuberculosis meningitis and patients with bacterial meningitis. The high sensitivity and negative predictive value for differentiating the diagnoses of tuberculosis meningitis and bacterial meningitis suggest a supplementary role for serum procalcitonin in the diagnostic exclusion of tuberculosis meningitis from bacterial meningitis in countries where tuberculosis is endemic, such as the Republic of Korea. This result is consistent with previous studies that demonstrated the utility of serum procalcitonin for differentiating pulmonary tuberculosis from bacterial pneumonia. 7,[10][11][12] The reason why the level of procalcitonin in the blood remains relatively low in tuberculosis meningitis is unclear. The cascade of inflammatory cytokines released during a systemic infection may determine the rate and intensity of procalcitonin synthesis and release, thus accounting for differences observed in the blood levels of procalcitonin. This cascade is probably greater in bacterial meningitis than tuberculosis meningitis. 12 The elevated levels of TNF-α and interferon-gamma (IFN-γ) produced during mycobacterial infection play key roles in the cellular host response in the immunopathogenesis of tuberculosis meningitis. 1 The production of procalcitonin is induced by TNF-α but inhibited by IFN-γ. 7,17,18 Therefore, serum procalcitonin seems to increase only moderately in patients with tuberculosis meningitis, in contrast to bacterial meningitis, during which the production of procalcitonin is induced by TNF-α and IL-2. 7,19 Interestingly, we found that serum procalcitonin was a better marker than CRP for differentiating tuberculosis meningitis from bacterial meningitis. CRP is also an inflammatory marker and an acute-phase protein released by the liver after the onset of inflammation or tissue damage. 20 However, CRP is neither highly specific nor sensitive for bacterial infections because it can remain present at low concentrations in bacterial infections and can be significantly increased in viral infections. 20 In addition, increases in procalcitonin occur more rapidly than increases in CRP-a previous study detected procalcitonin in the plasma at 2 hours after injecting endotoxins, with its concentration increasing to reach a plateau after approximately 12 hours, whereas CRP was de- Fig. 2. Results of the correlation analysis. A negative correlation between the serum procalcitonin level and Glasgow Coma Scale score at discharge is identified in patients with tuberculosis meningitis (A), whereas no significant correlation is identified between these parameters in patients with bacterial (B), or viral (C) meningitis.
JCN
tected in the plasma after 12 hours and reached a plateau after 20-72 hours. 21 Previous studies found that the serum procalcitonin level was higher in bacterial meningitis than in viral meningitis. 8,9 However, no previous study has investigated differences in the serum procalcitonin levels between patients with tuberculosis and viral meningitis. We found that the serum procalcitonin level in patients with tuberculosis meningitis did not differ significantly from that in patients with viral meningitis, although there was a strong tendency. These findings suggest that serum procalcitonin is not a useful marker for discriminating between tuberculosis meningitis and viral meningitis.
Several studies have investigated factors related to the prognosis of patients with tuberculosis meningitis. 1,3,[22][23][24] In most of these studies, the stage of disease emerged as the single most important factor associated with mortality. 1,23 It has also been demonstrated that low levels of glucose and high levels of protein in the CSF are associated with poor outcomes. 23 Recent studies found that the presence of altered consciousness, diabetes mellitus, immunosuppression, neurological deficit, hydrocephalus, and vasculitis predicted unfavorable outcomes in patients with tuberculosis meningitis. 3,22 In addition to these findings, our study demonstrated that serum procalcitonin is a good predictor for the prognosis of tuberculosis meningitis. Although the serum level of procalcitonin was lower in tuberculosis meningitis than in bacterial meningitis, we found that a relatively high level of procalcitonin in the blood suggests a poor prognosis in tuberculosis meningitis patients. This was consistent with previous findings that the level of procalcitonin in the blood was a good predictor of the severity of pneumonia and sepsis. 10 Another previous study found that the level of procalcitonin in the blood differed significantly between survivors and nonsurvivors in patients with sepsis and cardiac surgery with cardiopulmonary bypass. 25 Moreover, our results are in line with another study regarding pulmonary tuberculosis, which showed that the risks of mortality and the presence of disseminated tuberculosis increased with the level of procalcitonin in the blood. 6,26 Another study found that the level of procalcitonin in the blood was also related to the severity of bacterial meningitis. 27 However, the exact mechanisms underlying the synthesis of procalcitonin and its role in infectious diseases remain unknown. The cascade of inflammatory cytokines released during infection may determine the intensities of procalcitonin synthesis and release, and thus the increase in procalcitonin in infectious disease may be ascribed to the inflammatory response and cytokines, which might be related to poor prognoses. 6,12 Further research may be needed to fully elucidate these features.
This study was subject to several limitations. First, it had a retrospective design and involved a relatively small sample, which was due to the inclusion of only patients for whom blood levels of procalcitonin had been assessed at admission. Second, although radiologic characteristics such as hydrocephalus and tuberculoma may aid in the prognosis prediction of tuberculosis meningitis, we did not include these factors in the analysis performed for the study protocol. 1,28 Third, due to the small sample size we did not restrict the diagnostic criteria to a microbiological confirmation, instead also enrolling those with probable tuberculosis meningitis. Fourth, we did not confirm the long-term prognoses, only analyzing the prognoses at the time of discharge.
In conclusion, we found that serum procalcitonin is a useful marker for differentiating tuberculosis meningitis from bacterial meningitis, and that a higher level of procalcitonin is an independent and significant predictor of a poor prognosis in patients with tuberculosis meningitis. These results suggest that measuring the serum procalcitonin level is useful for the diagnosis and prognosis prediction of tuberculosis meningitis at the initial diagnosis stage.
Conflicts of Interest
The authors have no financial conflicts of interest.
A novel immune checkpoints-based signature to predict prognosis and response to immunotherapy in lung adenocarcinoma
Background Beyond the B7-CD28 family members, more novel immune checkpoints are being discovered. They are closely associated with the tumor immune microenvironment and regulate the function of many immune cells. Various cancer therapeutic studies targeting these novel immune checkpoints are currently in full swing. However, studies concerning the phenotypes and clinical significance of novel immune checkpoints in lung adenocarcinoma (LUAD) are still limited. Methods We enrolled 1883 LUAD cases from nine different cohorts. The samples from The Cancer Genome Atlas (TCGA) were used as a training set, whereas seven microarray data cohorts and an independent cohort with 102 qPCR data were used for validation. The immune profiles and potential mechanism of the system were also explored. Results After univariate Cox proportional hazards regression and stepwise multivariable Cox analysis, a novel immune checkpoints-based system (LTA, CD160, and CD40LG) was identified from the training set, which significantly stratified patients into high- and low-risk groups with different survivals. Furthermore, this system was well validated in different clinical subgroups and multiple validation cohorts. It also acted as an independent prognostic factor for patients with LUAD in different cohorts. Further exploration suggested that high-risk patients exhibited distinctive immune cell infiltration and an immunosuppressive state. Additionally, this system is closely linked to various classical immunotherapy biomarkers. Conclusion We constructed a novel immune checkpoints-based system for LUAD, which predicts prognosis and response to immunotherapy. We believe that these findings will not only aid in clinical management but will also shed some light on screening appropriate patients for immunotherapy. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-022-03520-6.
Introduction
Lung adenocarcinoma (LUAD) is the most common variant of non-small cell lung cancer (NSCLC) across the world. LUAD has become significantly more prevalent over the past two decades, with a similar decrease in squamous cell carcinoma during the same period [1]. There have been advances in the current strategy of combining various therapies with tyrosine kinase inhibitors (TKIs) to treat patients with LUAD. However, the 5-year overall survival (OS) rate for patients with LUAD remains poor [2,3]. At present, there is an urgent need to identify specific prognostic biomarkers in patients with LUAD, to enable the development of optimized, appropriate treatment protocols and clinical management for patients with different levels of risk. Several studies have attempted to predict and evaluate prognosis in patients with LUAD through diverse gene expression profiles and bioinformatics approaches [4][5][6]. However, these studies usually incorporate genes from the entire genome or transcriptome and do not consider the intrinsic biological functions. As a result, many of these signatures remain merely mathematical and fail to truly reflect the biological characteristics inherent in the tumor. In recent years, the treatment of LUAD has developed rapidly. In addition to traditional surgery, radiotherapy, chemotherapy, and molecular targeted therapy, novel strategies such as immunotherapy have attracted increasing attention [7,8]. The introduction of immune checkpoint-blocking antibodies that target the programmed death 1 (PD-1) receptor and its ligand, programmed death-ligand 1 (PD-L1), of the B7-CD28 family has significantly improved survival rates in patients with advanced lung cancer. These antibodies are available for clinical use [9][10][11]. However, beneficial immunotherapeutic effects were only observed in a subgroup of patients with response rates of 17-21% [12], suggesting that there may be other immune checkpoints in the LUAD tumor microenvironment (TME).
In addition to these star targets of the B7-CD28 family, several emerging and promising immune checkpoints have been identified [13]. Most of these novel immune checkpoints are from the TNF superfamily, including the TNF ligand superfamily (TNFSF) as well as the TNF receptor superfamily (TNFRSF), which are responsible for regulating pathways related to cell survival, differentiation, and non-immunogenic death. Most notably, these novel immune checkpoints belonging to the TNF superfamily also play a critical function in regulating the immune system, providing co-stimulatory or co-inhibitory signals vital for natural and adaptive immunity with an emphasis on T-cell responsiveness [14,15].
These novel immune checkpoints molecules can trigger inflammatory activities in several cells in the TME, including cells like T and B lymphocytes as well as tissueresident cells such as epithelial and fibroblasts cells [16]. Therefore, modulating and controlling the interaction of these novel immune checkpoints would be a novel therapeutic target with great potential for tumor treatment. Meanwhile, many cancer therapies targeting these novel immune checkpoints are in full swing, including preclinical and clinical trials. For example, therapies targeting OX40, CD40, and CD27 have achieved favorable progress in various tumor treatments, including for patients with lung cancer [14,17,18]. This observation inspired us to explore and establish a novel immune checkpoints-based prognosis system for LUAD, revealing immune features for patients with LUAD.
Herein, we first enrolled 1883 LUAD samples from nine independent cohorts, including eight public cohorts and an independent in-house cohort. Then, we systematically explored these novel immune checkpoints in LUAD and selected the genes with the most prognostic value. From this, we constructed a signature based on the combination of LTA, CD160, and CD40LG, which was also well validated in different cohorts. Finally, we further examined the clinical characteristics, immunotherapy responses, and immune cell infiltration of the prognostic signature. The novel combination of LTA, CD160, and CD40LG may not only help us understand the immune status of patients with LUAD but also help optimize available tumor immunotherapies.
Publicly available mRNA expression datasets
We collected 1731 publicly available samples from The Cancer Genome Atlas (TCGA), including level three RNA-seq expression data from 502 LUAD cases (Illumina HiSeq 2000). All samples had corresponding prognostic data and other clinical information. These data were downloaded from the Cancer Genomics Browser of the University of California Santa Cruz (UCSC) (https://genomecancer.ucsc.edu) and served as the training cohort. The microarray data and corresponding survival information for the remaining samples were gathered from seven different Gene Expression Omnibus (GEO) datasets (http://www.ncbi.nlm.nih.gov/geo), including 90 samples from GSE11969, 83 samples from GSE30219, 226 samples from GSE31210, 106 samples from GSE37745, 127 samples from GSE50081, 442 samples from GSE68465, and 105 samples from GSE81089, which were used as public validation cohorts. The mRNA expression values of these public validation sets were log2-transformed and quantile normalized, and for genes represented by several probes the mean expression across probes was used.
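The preprocessing of the microarray validation sets can be sketched as follows; the pandas/NumPy implementation below is an illustration of the described steps rather than the exact pipeline used, and the inputs are assumed to be a probes × samples expression matrix and a probe-to-gene mapping.

```python
import numpy as np
import pandas as pd

def quantile_normalize(expr):
    """Force every sample (column) to share the same distribution."""
    sorted_means = np.sort(expr.to_numpy(), axis=0).mean(axis=1)
    ranks = expr.rank(method="first").astype(int) - 1
    return pd.DataFrame(sorted_means[ranks.to_numpy()],
                        index=expr.index, columns=expr.columns)

def preprocess_microarray(expr, probe_to_gene):
    """log2 transform, quantile normalisation, then averaging of multiple
    probes that map to the same gene."""
    normalized = quantile_normalize(np.log2(expr + 1))
    return normalized.groupby(normalized.index.map(probe_to_gene)).mean()
```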
Sample analysis and quantitative real-time polymerase chain reaction (qRT-PCR)
Between May 2013 and September 2014, a total of 102 frozen, surgically resected tissue samples from patients with LUAD were collected from the First Affiliated Hospital of Zhengzhou University. The samples were snap-frozen promptly after surgical resection and stored in liquid nitrogen. We extracted the total RNA from the stored tissues using RNAiso Plus reagent (Takara, #9109) in accordance with the manufacturer's instructions. Next, we reverse-transcribed the total RNA into single-stranded cDNA using the PrimeScript™ RT reagent kit (Takara, #RR047A). The expression of the three selected genes in this prognostic signature was then detected via qRT-PCR. We used SYBR Premix Ex Taq II (Takara, #RR820A) reagent and Agilent Mx3005P software for data analysis. The expression values of the three genes were first standardized to GAPDH and then log2 transformed for subsequent analysis. All primer sequences used for the three genes of interest, as well as GAPDH, are shown in Additional file 6: Table S1. The Ethics Committee Board of the First Affiliated Hospital of Zhengzhou University approved this protocol and monitored the study's progress.
Functional enrichment analysis and prognostic meta-analysis
We analyzed the gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways to analyze the prognostic signature using DAVID 6.8 (http:// david. abcc. ncifc rf. gov/ home. jsp). Then, we used STATA software (version 12.0) to carry out a prognostic metaanalysis to better understand this signature's prognostic significance across different public cohorts. We used a random-effects model to calculate the pooled HR value.
Immune cell infiltration analysis
The novel CIBERSORT method estimates cell fractions within complex tissues using gene expression levels in solid tumors. The results of this process stand in agreement with ground truth evaluation [19,20]. The LM22 signature matrix included 547 genes that distinguished 22 immune cell phenotypes. These included distinct Tand B-cell subsets, as well as natural killer (NK) cells and other various myeloid subsets. We used CIBERSORT and the LM22 signature matrix to calculate the immune cell infiltration of patients with high-and low-risk disease statuses.
Tumor Immune Dysfunction and Exclusion (TIDE) analysis
TIDE is an accurate computational framework to predict immunotherapeutic responses, especially to therapeutic strategies based on checkpoint blockade [22]. We integrated the mechanisms of primary tumor immune evasion (T-cell exclusion and dysfunction) into this novel method for modeling tumor immune evasion. The TIDE score is superior to many known biomarkers in predicting immunotherapy responses for melanoma and lung cancer, especially for patients who are treated with anti-PD-1/PD-L1 or anti-CTLA4 [22]. We uploaded the transcriptome profiles of patients from the TCGA cohort to the TIDE website (http:// tide. dfci. harva rd. edu), and after analyzing through the website, downloaded the scores for T-cell dysfunction and exclusion as well as the TIDE scores for all patients.
Tumor Inflammation Signature analysis
Tumor Inflammation signature (TIS) is a novel clinical assay to predict immunotherapy responses in various tumors. We calculated the TIS scores for patients from the TCGA cohort according to the methods mentioned in the previous study [23].
Signature generation and statistical analysis
We used a univariate Cox proportional hazards regression model to screen for genes that were strongly associated with OS in patients with LAUD. Then, we chose to use stepwise Cox proportional hazards regression analysis to reduce the total number of covariates and filter out the most valuable prognostic genes for the prediction of OS. A risk formula was constructed for each patient based on the expression of selected genes and corresponding regression coefficients from the stepwise Cox proportional hazards regression analysis. This information was used to classify patients into two groups: low risk and high risk. Next, we used a Kaplan-Meier analysis to determine the OS and RFS among the two risk groups. We used the Mann-Whitney U-test to explore estimates of immune cell infiltration, TMB load, the number of neoantigens (including clonal and subclonal neoantigens), expression of PD-L1 protein, the TIDE score, the T-cell dysfunction score, the T-cell exclusion scores, and TIS scores in the high-and low-risk patient subsets. Then, a Cox proportional hazards regression analysis was used to determine if the combined gene signature was capable of predicting prognoses dependently. We used R software version 3.5.1 (https:// www.r-proje ct. org) to analyze all data and generate the corresponding figures. A P-value < 0.05 indicated statistical significance. All statistical tests were two-tailed.
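The study performed these analyses in R; the sketch below illustrates only the univariate screening step, in Python with the lifelines package, purely as an illustration, with assumed column names for survival time and event status.

```python
from lifelines import CoxPHFitter

def univariate_cox_screen(df, genes, time_col="OS_time", event_col="OS_event", alpha=0.05):
    """Fit one Cox model per candidate gene and keep genes with P < alpha."""
    selected = []
    for gene in genes:
        cph = CoxPHFitter()
        cph.fit(df[[gene, time_col, event_col]],
                duration_col=time_col, event_col=event_col)
        if cph.summary.loc[gene, "p"] < alpha:
            selected.append(gene)
    return selected
```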
Identification of prognostic novel immune checkpoint members in LUAD
After a careful literature review, 21 clearly defined novel immune checkpoint members, which all belong to the TNF superfamily, were included in this study [15,[24][25][26][27]. We used a set of 502 patients with LUAD from the TCGA database as the training cohort for our model. The demographics of this cohort are listed in Table 1. The expression correlations among these 21 immune checkpoint members are displayed in Fig. 1A, which exhibited that many members have a strong positive association. The expression of the 21 defined members was explored in the training cohort. Then, a univariate Cox proportional regression analysis found that seven immune checkpoint genes were significantly associated with OS (P < 0.05, Additional file 6: Table S2). Interestingly, all seven genes (BTLA, CD160, CD27, CD40LG, LTA, TNFRSF14, and TNFSF8) with low hazard ratios (hazard ratio < 1) were positively correlated with favorable survival (Additional file 6: Table S2).
Generation of the prognostic signature in the TCGA cohort
The seven immune checkpoint genes with prognostic significance in univariate comparisons were used for further analysis. Next, we used a stepwise Cox proportional hazards regression model to determine which set of variables provided the most efficient estimate of patient prognoses. This procedure identified three genes of interest: CD160, LTA, and CD40LG. We established the risk score formula based on the expression of these three genes and their corresponding coefficients: risk score = 0.1379 × LTA - 0.1525 × CD160 - 0.2595 × CD40LG. Each patient's risk score was calculated, and patients were categorized as high- or low-risk based on the optimal cutoff point (determined from the training data set, Fig. 1B). The high-risk group had a worse OS (P < 0.0001) and RFS (P = 0.0050) than the low-risk group (Fig. 1C, D). According to the multivariable Cox analysis, this novel risk score independently predicted OS in the TCGA cohort even after correcting for potential confounding from the other clinical variables (Table 2).
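Applying the published formula to new samples is straightforward; the sketch below assumes expression values normalised as in the training cohort and a cutoff value supplied from the training set.

```python
import numpy as np

def risk_score(lta, cd160, cd40lg):
    """Risk score from the reported coefficients."""
    return 0.1379 * lta - 0.1525 * cd160 - 0.2595 * cd40lg

def assign_risk_group(scores, cutoff):
    """Dichotomise patients at the optimal cutoff determined on the training set."""
    return np.where(np.asarray(scores) > cutoff, "high", "low")
```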
Validating the prognostic signature in independent cohorts
To test the prognostic signature's reproducibility in LAUD, we also applied this risk score formula in seven publicly available independent cohorts. Demographic data from these cohorts are presented in Table 1. All patients in these seven cohorts were divided into highand low-risk groups based on the optimal risk-score cutoff that was determined from the training data set.
Of interest, we found that patients in the high-risk group had a less favorable OS than those in the low-risk group in six of the seven test data sets, including GSE11969 (P = 0.0496, Fig. 2A). There was only a borderline difference in OS between the high- and low-risk groups in the GSE37745 cohort (P = 0.0630, Fig. 2D). Furthermore, to validate the prognostic value of the novel immune checkpoints-based signature, we also performed a prognostic meta-analysis in these seven GEO cohorts and the TCGA cohort.
The results of the analysis provide further evidence that this prognostic signature is a risk predictor in LAUD (P < 0.001) (Fig. 2H).
To assess our signature's applicability to clinical settings, we used qRT-PCR to validate this risk score in another independent cohort consisting of frozen samples from 102 patients with LUAD. This cohort's demographic data are presented in Table 3. We used the risk score formula to divide the patients into two groups (Fig. 3A). We found a significant difference in OS (P < 0.0001) and RFS (P = 0.00015) between the two risk groups (Fig. 3B, E). Since LUAD patients with different clinical stages have different treatment principles and prognoses [28], we also applied this risk score within early-stage (clinical stage I and II) and advanced-stage (clinical stage III and IV) subgroups. We observed that patients in these clinical-stage subgroups who were at high risk had shorter survival than patients at low risk (Fig. 3C, D). Finally, we confirmed that this risk score was independently related to OS in these 102 patients with LUAD using the qPCR data and multivariable Cox analysis (Table 3).
Validation of the prognostic signature in important clinical subsets.
We next explored the association between prognosis and risk score for early-and advanced-stage samples in the TCGA cohort since the clinical stage is known to affect prognosis. For the early-stage subgroup, highrisk patients suffered shorter OS (P = 0.00012) and RFS (P = 0.0058) than their low-risk counterparts (Additional file 1: Fig. S1A, C). Similarly, in the advanced-stage subgroup, high-risk samples had a worse OS relative to lowrisk ones (P = 0.17) (Additional file 1: Fig. S1B). However, among patients with advanced-stage disease, there was a borderline difference in RFS between the high-and lowrisk groups (P = 0.071) (Additional file 1: Fig. S1D).
Patients above the risk cutoff showed significantly inferior OS in other important clinical subgroups of the TCGA cohort, including older (age ≥ 60), younger (age < 60), male, female, smoker, and nonsmoker subgroups (Additional file 2: Fig. S2). Next, considering the critical significance of the EGFR and KRAS mutations in LUAD, we further investigated the prognostic signature's performance in patient subgroups defined by EGFR and KRAS mutation status. Our prognostic signature accurately stratified OS for the EGFR wild-type (WT) vs. mutation (MUT), KRAS WT/MUT, and EGFR/KRAS WT subtypes (Additional file 3: Fig. S3). Last, we evaluated the risk scores across the three expression subtypes of LUAD: magnoid, squamoid, and bronchioid [29]. The risk score was also effective in stratifying patients with different OS in the magnoid and bronchioid subgroups, while showing a borderline difference in the squamoid subgroup (P = 0.0580) (Additional file 4: Fig. S4).
Biological pathways and inflammatory responses analysis of the prognostic signature
To explore the biological pathways potentially related to this prognostic signature, we selected the genes strongly correlated with the signature (Pearson |R| > 0.45). About 436 genes were negatively and 12 genes positively associated with the risk score (Fig. 4A). Next, we performed GO and KEGG analysis for these genes, which showed that they are mainly enriched in immune response and leukocyte activation pathways (Fig. 4B). Meanwhile, we sought to further investigate the inflammatory responses relevant to the risk score. We analyzed the relationships between the seven clusters of metagenes (HCK, interferon, LCK, MHC-I, MHC-II, STAT1, IgG) identified in a previous study and the risk score [30]. These seven metagene clusters broadly represent pathways related to inflammatory responses and immune regulation. The profiles of these metagenes are shown in Fig. 4C. A Gene Set Variation Analysis (GSVA) was performed to validate the seven metagene clusters [31]. The findings from this analysis indicated that the risk score was positively related to the IgG, interferon, and STAT clusters, but negatively related to the LCK and MHC-II clusters (Fig. 4D).
Immune landscapes of the prognostic signature
Since the prognostic signature is linked to immune response pathways, we intend to delve into the immune cell infiltration in patients with LUAD. Next, we combined CIBERSORT with LM22, a novel method for estimating 22 immune cell fractions and subtypes, to evaluate immune cell infiltration within the training cohort. We found that patients in the high and low-risk groups had distinct profiles of immune cells (Fig. 5A). The high-risk patient groups exhibited higher infiltration of resting CD4 T-cells, macrophagocyte M1, but lower infiltration of memory B-cells and resting T-cells (Fig. 5B, C). Meanwhile, because some immune checkpoints can construct a communication system to modulate antitumor immune responses [32], we next sought to explore the correlation between patient risk scores and several essential immune checkpoints. The Pearson correlation analysis suggested that the risk score was positively correlated with CD276. CD276, a vital immune checkpoint of the B7-28 family, and the immunotherapies targeting CD276 have achieved positive results in various tumors (Fig. 5D-G) [33]. This may indicate that patients at higher risk may benefit from treatment with immunotherapies that target CD276.
Relationship between the prognostic signature and immunotherapy responses
Nowadays, immunotherapies targeting immune checkpoints are increasingly significant to cancer treatment and have become a first-line choice in various tumors, especially LUAD. Meanwhile, these novel immune checkpoint members are also potential and crucial candidate checkpoints for immunotherapies. We sought to find out the relationship between this signature and immunotherapy responses. We selected some classical and widespread biomarkers and analyzed their connection to the risk score [22,32]. First, we calculated and evaluated the tumor mutation burden (TMB), the number of neoantigens, and the number of clonal and subclonal neoantigens among the high- and low-risk patients. The high-risk group exhibited higher TMB and higher numbers of all three kinds of neoantigens (Fig. 6A-D). Second, we also compared the protein expression of PD-L1 in high- and low-risk patients and found a borderline difference between the two groups in average PD-L1 protein expression levels (Fig. 6H). Finally, the TIDE score, an accurate and reliable biomarker for immunotherapy, was also applied in our analysis. We systematically explored the TIDE and T-cell dysfunction and exclusion scores in patients from different groups. As expected, high-risk patients exhibited lower TIDE scores and higher T-cell dysfunction and exclusion scores (Fig. 6E-G). However, the TIS score was negatively associated with our risk scores, and the low-risk patients also demonstrated a higher TIS score than their high-risk counterparts, which contradicts the above results (Additional file 5: Fig. S5). Together, these results indicated that high-risk patients may be more likely to benefit from immunotherapies.
Discussion
High-throughput sequencing and genomics research has ushered in the identification of emerging biomarkers and therapeutic targets. These advancements have also improved our understanding of tumors. However, there is little information on which biomarkers may predict immune therapy responses and prognoses, revealing tumor immune phenotypes in LAUD. To improve our understanding of the co-stimulatory signals important in LAUD, we combined three crucial novel immune checkpoints (LTA, CD160, and CD40LG) to create a novel signature. We used the TCGA database as a training cohort and systematically explored the association between the gene expressions of some novel immune checkpoint members and prognostic outcomes in patients with LUAD. In the process, we identified a novel immune checkpoints-based risk signature, which is closely related to OS and RFS of patients with LAUD. This novel risk signature was validated in seven different publicly available cohorts as well as 102 samples of frozen tumor tissues using qPCR data. The validity of our signature was further confirmed using a meta-analysis. Our results indicated that the risk signature was well-validated and significantly related to OS in various important clinical and mutation subgroups. Our novel signature independently predicted prognosis in patients with LAUD. We also explored and analyzed relevant mechanisms, immune predictors for immunotherapy, and immune cells infiltration of this risk signature. Thus, this signature may contribute to a deep and comprehensive understanding of precision immunotherapy for LAUD. Currently, apart from B7-CD28 family members, there are several emerging and potential immune checkpoints for immunotherapy, such as some TNF superfamily members [16]. After analyzing the association between these novel immune checkpoints and prognoses in LUAD, we determined the most significant prognostic genes in these novel immune checkpoints, including protective (LTA) and risky (CD160 and CD40LG) genes.
Regarding the protective gene, LTA, a proinflammatory cytokine, belongs to the TNF superfamily, involved in the inflammatory and immune responses [34]. This molecule also can assist lymphocytes and stromal cells to induce cytotoxic effects on tumor cells [34,35]. It was reported that the polymorphisms of the LTA gene are closely related to cancer risk, including LAUD and other adenocarcinoma malignancies [36]. Meanwhile, researchers indicated that depleting LTA-expressing lymphocytes with LTA-specific monoclonal antibodies may be useful in treating autoimmune diseases [37]. Also, in terms of risky genes, CD160-also known as BY55, is an immunoglobulin-like, glycosylphosphatidylinositolanchored protein and expressed on natural killer cells, γδ T-cells, and a subset of CD4 + and CD8 + T-cells [38,39]. CD160 was proven to bind to herpesvirus entry mediator (HEVM) with high affinity, which induces robust natural cells effector activity and suppressed T-cell responses in vitro [40,41]. B and T lymphocyte attenuator (BTLA) is a novel checkpoint receptor for immunotherapy, while CD160 shares the same ligands with it as BTLA repaired receptor, suggesting that CD160 inhibitory also may be a promising target for immunotherapy [27]. CD40LG is the ligand of CD40, and after they get combined, it will stimulate proinflammatory gene expression, including interleukins (IL)-1, IL-6, IL-8, IL-12, TNF-α, IFN-γ, and monocytic chemoattractant protein (MCP)-1. The activation of the CD40/CD40LG system is a major contributor to carcinogenesis. When CD40LG antibodies are used to disrupt the CD40/CD4OLG system's function, it leads to the suppression of tumor cells [42]. However, some researchers demonstrated that the membrane-stable CD40LG mutant gene transfer into the CD40-positive LUAD cell line. This gene transfer reduces cell proliferation and enhances apoptosis [43]. Thus, the role of CD40LG in LUAD is still uncertain and requires further in-depth study and exploration. Similarly, the functions of LTA and CD160 in LUAD are unclear, and more relevant research are urgently needed. We also investigated the potential genetic mechanisms of this signature. Risk-score-related genes were mainly enriched in immune response and leukocyte activation processes and pathways after correlation analysis. We analyzed seven immune-related metagenes to better understand the relationship between risk signature and immune responses. We found that risk score was negatively associated with LCK and MHC_II clusters, suggesting that high-risk patients have decreased B-cell function and impaired antigen-presentation ability. We also found higher infiltration of resting type CD4 + T-cells in high-risk patients, suggestive of immunosuppression. Next, we found that the patient risk score was positively associated with the expression of CD276. CD276 is a crucial immune checkpoint member of the B7-28 family that plays a pivotal role in inhibiting T-cell function, and immunotherapies that target CD276 have achieved positive results in various tumors [33]. This showed that patients at elevated risk may benefit from immunotherapies that target CD276. We also found that patients at elevated risk exhibited higher average TMB levels, more T-cell dysfunction, greater exclusion scores, lower TIDE scores. Of interest, despite this trend, PD-L1 protein expression differed across risk groups. PD-L1 and TMB are currently the most reliable biomarkers for predicting responses to PD-1/PD-L1 immunotherapy [44,45]. 
These findings suggest that high-risk patients are mostly immunosuppressed and are therefore better candidates for immunotherapy. Although this signature was successfully validated across multiple cohorts and appeared to act as an independent prognostic factor for LUAD, this study had several limitations. Firstly, it was a retrospective study that should be validated in prospective, large-scale cohorts. Secondly, we only examined some novel immune checkpoint genes, which may limit our signature's predictive capacity. However, this new classifier also provides more information on the state of the TME. Finally, none of the patients in this study underwent immunotherapy; thus, our prediction of immunotherapy responsiveness is merely theoretical. Future studies should address these limitations.
In conclusion, we found that analyzing tumor expression of LTA, CD160, and CD40LG yields a novel immune signature that may be useful for predicting prognosis and response to immunotherapy in patients with LUAD. Further validation of these findings may help identify the patients who are most likely to benefit from immunotherapy, enabling increasingly personalized, evidence-based care.
|
2022-07-25T13:26:43.722Z
|
2022-07-25T00:00:00.000
|
{
"year": 2022,
"sha1": "35eaa03503aa79f0f8b1d793d30ad24b8bb1506c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1a9db34dbb9a2f4107e0fbb75cf711d91b26c370",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
3692903
|
pes2o/s2orc
|
v3-fos-license
|
Distilling one, two and entangled pairs of photons from a quantum dot with cavity QED effects and spectral filtering
A quantum dot can be used as a source of one- and two-photon states and of polarisation entangled photon pairs. The emission of such states is investigated from the point of view of frequency-resolved two-photon correlations. These follow from a spectral filtering of the dot emission, which can be achieved either by using a cavity or by placing a number of interference filters before the detectors. The combination of these various options is used to iteratively refine the emission in a "distillation" process and arrive at highly correlated states with a high purity. So-called "leapfrog processes", where the system undergoes a direct transition from the biexciton state to the ground state by direct emission of two photons, are shown to be central to the quantum features of such sources. Optimum configurations are singled out in a global theoretical picture that unifies the various regimes of operation.
There is an alternative way to control, engineer and purify the emission of a quantum emitter which relies on extrinsic components at the macroscopic level, in contrast with the intrinsic approach at the microscopic level that supplements the quantum dot with a built-in microcavity. Namely, one can use spectral filtering. This approach is 'extrinsic' in the sense that the filters are placed between the system that emits the light and the observer who detects it. As such, it belongs more properly to the detection part. The filter can in fact be modelling the finite resolution of a detector that is sensitive only within a given frequency window. In this paper, to keep the discussion as simple as possible, we will assume perfect detectors and describe the detection process through spectral filters (this means that the detector has a better resolution than that imparted by the filter in front of it). Each filter is theoretically fully specified by its central frequency of detection and linewidth. We will assume Lorentzian spectral shapes, which correspond to the case of most interference filters. Commonly used spectral filters of this type are the thin-film filters and Fabry-Perot interferometers (in the figures, we will sketch such filters as dichroic bandpass filters, with different colours to imply different frequencies).

Figure 1. Sketch of the various schemes investigated in this work to study two-photon correlations from the light emitted by a quantum dot (left). The various detection configurations are, from left to right: (i) the direct emission of the dot (g (2) [σ H ]), (ii) the enhanced and filtered emission of the dot by a cavity mode (g (2) [a]); (iii) the filtered emission of the dot (g (2) [σ H ](ω; ω)), (iv) the two-photon spectroscopy of the dot (g (2) [σ H ](ω 1 ; ω 2 )), (v) the filtered emission of the dot-in-a-cavity emission (g (2) [a](ω; ω)), (vi) the two-photon spectroscopy of the dot-in-a-cavity (g (2) [a](ω 1 ; ω 2 )) and (vii) the tomographic reconstruction of the density matrix for the polarization-entangled photon pairs, θ (ω 1 , ω 2 ; ω 3 , ω 4 ).
Since they rely on interference effects, they are basically cavities in weak-coupling. This reinforces the main theme of this paper, which is to investigate cavity effects on a quantum emitter. The cavity itself can be, again, intrinsically part of the heterostructure, all packaged on-chip, or extrinsically realized by the external filters. Combining these features, such as filtering the emission of a cavity-QED system, we arrive at the notion of 'distillation' where the emitter sees its output increasingly filtered by consecutive sequences to finally deliver a highly correlated quantum state of high purity.
While the idea is general and could be applied to a wealth of quantum emitters, we concentrate here on a single quantum dot, sketched as a little radiating pyramid in figure 1. Theoretically, it will be described as a combination of two two-level systems, representing two excitons of opposite spins. Such a system will be used for the generation of photons one by one or in pairs, with various types of quantum correlations. The four-level system formed by the two possible excitonic states (corresponding to orthogonal polarizations) and the doubly occupied state, the biexciton, is ideal to switch from one type of device to the other by simply selecting and enhancing the emission at different intrinsic resonances [31][32][33]. Figure 1 gives a summary of the various filtering and detection schemes that will be applied, with the 'naked' dot on the left. Its emission will be considered both from within or without a cavity, with various numbers of filters interceding. We will assume the microcavity both in the weak- and strong-coupling regimes. The latter system has been extensively studied [31][32][33][34][35] and will be revisited here in the light of its spectral filtering [36] and distillation.
The rest of the paper is organized as follows. In section 2, we present the system and its basic properties and we introduce the two-photon spectrum, which is the counterpart at the two-photon level of the photoluminescence spectrum at the single-photon one. In section 3, we provide the first application of two-photon distillation, achieved via a cavity mode weakly coupled to the dot transitions or through spectral filtering. In section 4, we compare the cavity filtering in the weak-coupling regime with the enhancement of emission in the strong-coupling regime. In section 4.1, we go one step further in the distillation of the two-photon emission and filter it from the cavity emission as well. In section 5, we consider one of the most popular applications of the biexciton structure, the generation of polarization entangled photon pairs. In section 6, we draw some conclusions.

Figure 2. (a) Level scheme of the quantum dot investigated, modelled as a system able to accommodate two excitons |V and |H in the linear polarization basis, with an energy splitting δ between them and which, when present jointly, form a biexciton |B with binding energy χ. The excitons decay radiatively at a rate γ. When placed in a cavity (with linear polarization H), two extra decay channels are opened for the H-polarization: through the one-photon cascade at rate κ 1P and the two-photon emission at rate κ 2P. In the sketch, the cavity is placed at the two-photon resonance ω a = ω B /2. (b) PL spectra from the quantum dot system in H-polarization (orange) consisting of two peaks at ω BH and ω H, and in V-polarization (green) consisting of two peaks at ω BV and ω V. (c) PL spectrum when the dot is placed inside a cavity, both for the case of strong (g = κ, solid line) and weak (g → 0, dotted line) coupling. A new peak appears at the centre from the two-photon emission. Parameters: P = γ, χ = 100γ, δ = 20γ and κ = 5γ. We consider ω X → 0 as the reference frequency.
Theoretical model and standard characterization
The system under analysis consists of a quantum dot that can host up to two excitons with opposite spins. The corresponding orthonormal basis of linear polarizations, horizontal (H) and vertical (V), reads {|G , |H , |V , |B }, where G stands for the ground state, H and V for the single exciton states and B for the biexciton or doubly occupied state. The four-level scheme that they form is depicted in figure 2(a). In the Hamiltonian of the system (with ħ = 1), we allow the excitonic states to be split by a small energy δ, as is typically the case experimentally, through the so-called fine structure splitting [24]. The biexciton at ω B = 2ω X − χ is far detuned from twice the exciton thanks to the binding energy, χ, typically the largest parameter in the system. We include the dot losses, at a rate γ, and an incoherent continuous excitation (off-resonant driving of the wetting layer), at a rate P, in both polarizations x = H, V, in a master equation where L c (ρ) = 2cρc † − c † cρ − ρc † c is in the Lindblad form. In this work, we are interested in the steady state of the system resulting from the interplay between the incoherent continuous pump and decay of the quantum dot levels.
We assume in what follows an experimentally relevant situation, χ = 100γ, δ = 20γ, and study the steady state under P = γ, in which case all levels are equally populated (the populations read ρ G = γ 2 /(P + γ ) 2 , ρ H = ρ V = Pγ /(P + γ ) 2 and ρ B = P 2 /(P + γ ) 2 ). The photoluminescence spectra of the system, S(ω), are shown in figure 2(b) for the H- and V-polarized emission with orange and green lines, respectively. The four peaks are well separated thanks to the binding energy and fine structure splitting, corresponding to the four transitions depicted in panel (a) with the same colour code, with ω X → 0 as the reference and full-width at half-maximum γ BH = γ BV = 3γ + P, γ H = γ V = 3P + γ. We concentrate on the H-mode emission because it has the largest peak separation and allows for the best filtering, but all results apply similarly to the V polarization. The emission structure at the single-photon level is very simple: two Lorentzian peaks are observed corresponding to the upper and lower transitions. The second-order coherence function of the H-emission in the steady state (set as the starting time t = 0) with a positive delay τ is given in equation (7), where the H-photon destruction operator is defined as in equation (8). In equations (7) and (8), we have specified the channel of emission in square brackets, since this will be an important attribute in the rest of the paper. Surprisingly, the quantum dot described with the spin degree of freedom always exhibits uncorrelated statistics in the linear polarization, g (2) [σ H ](τ ) = 1.
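The steady-state populations quoted above can be checked numerically from the master equation. Below is a minimal sketch using the QuTiP library with the level scheme and rates as described in the text; the library choice, basis ordering and variable names are our own assumptions, not the authors' implementation.

# Minimal QuTiP sketch of the incoherently pumped biexciton system.
# Basis ordering {|G>, |H>, |V>, |B>}; parameters follow the text (P = gamma, chi = 100*gamma, delta = 20*gamma).
import numpy as np
from qutip import basis, steadystate, expect

gamma, P, chi, delta = 1.0, 1.0, 100.0, 20.0
G, H, V, B = (basis(4, i) for i in range(4))

# Lowering operators in the linear-polarization basis: sigma_H = |G><H| + |V><B|, sigma_V = |G><V| + |H><B|.
sH = G * H.dag() + V * B.dag()
sV = G * V.dag() + H * B.dag()

# Free Hamiltonian (hbar = 1) with omega_X = 0 as reference: excitons split by delta, biexciton at -chi.
wH, wV, wB = +delta / 2, -delta / 2, -chi
H0 = wH * H * H.dag() + wV * V * V.dag() + wB * B * B.dag()

# Radiative decay (rate gamma) and incoherent continuous pumping (rate P) in both polarizations.
c_ops = [np.sqrt(gamma) * sH, np.sqrt(gamma) * sV,
         np.sqrt(P) * sH.dag(), np.sqrt(P) * sV.dag()]

rho_ss = steadystate(H0, c_ops)
pops = [expect(k * k.dag(), rho_ss) for k in (G, H, V, B)]
print("rho_G, rho_H, rho_V, rho_B =", np.round(pops, 3))   # 0.25 each at P = gamma, as in the text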
One recovers the expected antibunching of a two-level system [37] by turning to the intrinsic two-level systems composing the quantum dot, namely, the spin-up and spin-down excitons: g (2) [σ ↑ ](0) = g (2) [σ ↓ ](0) = 0. The Pauli exclusion principle that holds for the spin σ ↑↓ breaks in the linear polarization, i.e. while σ 2 ↑↓ = 0, one has σ 2 H = |G B| ≠ 0. We can find a simple explanation for this if we write the total correlations in terms of the four contributions (different from zero), given (for τ ≥ 0) in their normalized form g (2) [s i ; s j ](τ ). As shown in figures 3(c) and (d) with pale grey lines, all these functions are antibunched except for g (2) [s 1 ; s 2 ](τ ), which corresponds to the natural order in the H-cascaded emission of two photons and is, consequently, bunched. It compensates fully for the other three terms, leading to total correlations of 1 at all τ. Figure 2 provides a clear picture of the physical grounds on which such a system can be used as a quantum emitter, but it lacks even a qualitative picture of how quantum correlations are distributed. Figure 2(b) merely shows where the system emits light but nothing on how correlated this emission is. All these crucial features are revealed in the two-photon spectrum g (2) (ω 1 ; ω 2 , τ ) [38]. This is the extension of the Glauber second-order coherence function, which quantifies the correlations between photons in their arrival times, to frequency. By specifying both the energy (ω 1 and ω 2 ) and time of arrival (t = 0 and t = τ) of the photons, one provides an essentially complete description of the system. Γ denotes the linewidth of the frequency window of the filter over which this joint characterization is obtained. The corresponding time resolution is given by its inverse, 1/Γ, in accordance with the Heisenberg uncertainty principle. It is a necessary variable without which nonsensical or trivial results are obtained.
Two-photon spectrum
The two-photon spectrum unravels a large class of processes hidden in single-photon spectroscopy and can be expected to become a standard tool to characterize and engineer quantum sources. The computation of such a quantity 1 has remained a challenging task for theorists since the mid-eighties [39][40][41], until a recent workaround [36] has been found which allows an exact numerical computation. The method consists in coupling very weakly the system to two sensors or quantized modes (such that the dot dynamics remains unaltered), which model the filtering and detection processes. The intensity-intensity cross correlations of the emission from such sensors, with operators ς 1 and ς 2 , provide the two-photon spectrum in a simple form, g (2) (ω 1 ; ω 2 , τ ) = ⟨n 1 (0)n 2 (τ )⟩/(⟨n 1 ⟩⟨n 2 ⟩),
where n i = ς i † ς i for i = 1, 2. The sensors' natural frequencies, ω i , decay rates, Γ, and the delay in the detection, τ, can be varied to scan the relevant frequency and time ranges with the appropriate resolution. This method will be applied here for the first time to the case of biexciton emission and used to understand, characterize and enhance various processes useful for its quantum emission. We start by solving the coupling of two sensors with the H-mode only, obtaining g (2) [σ H ](ω 1 ; ω 2 , τ ). A detailed discussion on even more fundamental emitters is given in [38].

Figure 3 (caption, fragment): ...where two-photon emission is optimum. In (c) and (d), we also plot with pale grey lines the second-order correlations of the corresponding effective operators, equation (12). Scales are logarithmic. Parameters: P = γ, χ = 100γ and δ = 20γ.
Experimentally, the two-photon spectrum corresponds to the usual Hanbury Brown-Twiss setup to measure second-order correlations through photon counting, with filters or monochromators being placed in front of the detectors to select two, in general different, frequency windows. The technique has been amply used in the laboratory [15,16,22,[42][43][44][45][46][47][48][49], but lacking hitherto a general theoretical description, the global picture provided here has not yet been achieved experimentally. Note finally that when considering correlations between equal frequencies, ω = ω 1 = ω 2 , the result is equivalent to placing a single filter before measuring the correlations of the outcoming photon stream [36] or embedding the quantum dot in a cavity and measuring the correlations from the emission of a weakly coupled cavity mode with frequency ω. Figure 3(a) shows the rich landscape provided by the two-photon spectrum g (2) [σ H ](ω 1 ; ω 2 ), in contrast with the one-photon spectrum S[σ H ](ω) and the colour-blind second-order correlations g (2) [σ H ], equation (10). It is shown at zero delay (τ = 0), where results are symmetric under exchange of ω 1 and ω 2 . An analytical expression is found in this case but it is too lengthy to be reproduced here. The sensor linewidths are chosen to filter the full peaks (Γ = 5γ) in order to observe well-defined regions of enhancement and suppression of the correlations: sub-Poissonian values (<1) are shown in blue, Poissonian (=1) in white and super-Poissonian (>1) in red. This figure is the backbone of this paper. We now discuss in turn these different regions where the quantum dot operates as a quantum source with different properties.
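To make the sensor method concrete, the sketch below adds two weakly coupled sensor modes to the toy model of the previous snippet and evaluates the zero-delay, frequency-resolved g (2) from the steady state. The two-level truncation of each sensor, the value of the probe coupling eps, and the reuse of H0, sH, c_ops, chi and wH from the previous sketch are our own choices for illustration, not part of the original analysis.

# Sketch of the two-sensor method for the frequency-resolved g2 at zero delay.
import numpy as np
from qutip import qeye, destroy, tensor, steadystate, expect

def g2_filtered(w1, w2, Gamma, H0, sH, c_ops, eps=1e-2):
    """Zero-delay g2 of the H-polarized emission filtered at w1 and w2 with linewidth Gamma."""
    Id_dot, Id_s = qeye(4), qeye(2)
    a1 = tensor(Id_dot, destroy(2), Id_s)   # sensor 1 (two levels suffice for cross-correlations)
    a2 = tensor(Id_dot, Id_s, destroy(2))   # sensor 2
    s = tensor(sH, Id_s, Id_s)
    # eps must remain much smaller than every system rate so the dot dynamics is unaltered.
    H = (tensor(H0, Id_s, Id_s)
         + w1 * a1.dag() * a1 + w2 * a2.dag() * a2
         + eps * (a1.dag() * s + s.dag() * a1)
         + eps * (a2.dag() * s + s.dag() * a2))
    c_full = [tensor(c, Id_s, Id_s) for c in c_ops] + [np.sqrt(Gamma) * a1, np.sqrt(Gamma) * a2]
    rho = steadystate(H, c_full)
    n1, n2 = a1.dag() * a1, a2.dag() * a2
    return expect(n1 * n2, rho) / (expect(n1, rho) * expect(n2, rho))

# Leapfrog point (strong bunching expected) versus the exciton line (antibunching expected).
print(g2_filtered(-chi / 2, -chi / 2, 5.0, H0, sH, c_ops))
print(g2_filtered(wH, wH, 5.0, H0, sH, c_ops))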
Single-photon source
When the filters are tuned to the same frequency (diagonal black line in the (ω 1 ; ω 2 ) space in figure 3(a)), there is a systematic enhancement of the bunching as compared to the surrounding regions due to the two possibilities of detecting identical photons [36,38]. Despite this feature, which is independent of the system dynamics, when both frequencies coincide with one of the dot transitions, (ω H ; ω H ) or (ω BH ; ω BH ), there is a dip in the correlations. This is more clearly shown in the cut at equal frequencies, the black line in figure 3(b). The blue butterfly shape that is observed in the two-photon spectrum locally around each of the dot transitions is characteristic of an isolated two-level system [38]. This zero delay information is complemented by the antibunched τ dynamics, shown in figure 3(c). The two dot resonances, upper and lower, coincide in this case due to the symmetric conditions P = γ but they are typically different (the upper level being less antibunched at low pump). Filtering and detection make it impossible to have a perfect antibunching, getting closest to the ideal correlations g (2) [σ H ](ω H ; ω H , τ ) ≈ g (2) [s 1 ; s 1 ](τ ) from equations (12), at around Γ ≈ χ/2. At this point, the peaks are maximally filtered with still negligible overlapping of the filters. It is possible to derive a useful expression for the filtered correlations at τ = 0, with Γ ≲ χ/2, in the limit typically relevant in experiments.
All this shows that one can recover or optimize the quantum features of a single photon source (antibunching) in a system whose total emission is uncorrelated, by frequency filtering photons from individual transitions. Consequently, the system should exhibit two-photon blockade when probed by a resonant laser at frequency ω L in resonance with the lower transition, ω L = ω H . The antibunched emission of each of the four filtered peaks of the spectrum has been observed experimentally [15].
Cascaded two-photon emission
When the filtering frequencies match both the upper and lower quantum-dot transitions, i.e.
(ω BH ; ω H ), the correlations are close to one at zero delay, as for the total emission g (2) [σ H ] (which is exactly one). However, although the latter is uncorrelated, since it remains equal to one at all τ, the filtered cascade emission is not uncorrelated: it is close to unity only at zero delay, and precisely because of strong correlations that are, however, of an opposite nature at positive and negative delays, i.e. showing enhancement for τ > 0 when photons are detected in the natural order that they are emitted, and suppression for τ < 0 when the order is the opposite. This is depicted in figure 3(d) where the solid line corresponds to (ω BH ; ω H ) and the dotted line to exchanging the filters, (ω H ; ω BH ). As this also corresponds to detecting the photons in the opposite time order, the two curves are exact mirror images of each other.
The identification of the upper and lower transition photons with frequency-blind operators (g (2) [s 1 ; s 2 ](τ ) in equations (12)) provides crossed correlations different from our exact and general frequency resolved functions, especially at τ = 0, as shown in figure 3(d), where there is a discontinuity for the approximated functions. The frequency resolved functions have the typical smooth cascade shape that has been observed experimentally [15,45]. The dynamics at large τ converges to the approximated functions only for Γ ≈ χ/2.
Simultaneous two-photon emission
For simultaneous two-photon emission, the strongest feature lies on the antidiagonal (red line) in figure 3(a), which is also shown as the solid red line in figure 3(b). The strong bunching observed here, when both frequencies are far from the system resonances ω BH and ω H , corresponds to a two-photon deexcitation directly from the biexciton to the ground state without passing by an intermediate real state 2 . Such a two-photon emission, from a Hamiltonian (equation (1)) that does not have a term to describe it, is made possible via an effective virtual state, supported by the interaction with the output fields (or detectors) and revealed by the spectral filtering. As the intermediate virtual state has no fixed energy and only the total energy ω 1 + ω 2 = ω B needs to be conserved, the simultaneous two-photon emission is observed on the entire antidiagonal (except, again, when touching a resonance, in which case the cascade through real states takes over). We call such processes 'leapfrog' as they jump over the intermediate excitonic state [38]. The largest bunching is found at the central point, ω 1 = ω 2 = ω B /2 = −χ/2, and at the far ends of the antidiagonal, with ω 1 beyond ω BH or below ω H . Among them, the optimal point is the one where the intensity of the two-photon emission is also strong. The frequency resolved Mandel Q parameter takes into account both correlations and the strength of the filtered signal [50] (where, also for the single-photon spectra, the detection linewidth, Γ, is taken into account [51]). As expected, Q [σ H ](ω 1 ; ω 2 ) becomes negligible at very large frequencies, far from the resonances of the system, and reaches its maximum at the two-photon resonance, (ω B /2; ω B /2) (not shown). The latter configuration is therefore the best candidate for the simultaneous and, additionally, indistinguishable, emission of two photons. The bunching is shown in figure 3(e). The small and fast oscillations are due to the effect of one-photon dynamics with the real states but are unimportant for our discussion and would be difficult to resolve experimentally. While the bunching in such a configuration has not yet been observed experimentally, recently, Ota et al [24] successfully filtered the two-photon emission from the biexciton with a cavity mode, which corroborates the above discussion.
Filtering and enhancing photon-pair emission from the quantum dot via a cavity mode
Large two-photon correlations are the starting point to create a two-photon emission device. When they have been identified, the next step is to increase their efficiency by enhancing the emission at the right operational frequency. The typical way is to Purcell enhance the emission through a cavity mode with the desired polarization (for instance, H) and strongly coupled with the dot transitions (at resonance). Theoretically, this amounts to adding to the master equation (2) a Hamiltonian part that accounts for the free cavity mode (ω a ) and the coupling to the dot (with strength g), in the rotating wave approximation, along with a Lindblad term (κ/2) L a (ρ) that accounts for the cavity decay (at rate κ). The cavity emission is typically characterized by its second-order coherence function, g (2) [a] = ⟨a † a † aa⟩ / ⟨a † a⟩ 2 .
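The same toy model can be extended with an H-polarized cavity mode to evaluate g (2) [a] directly from the steady state, as a rough counterpart of the configuration just described. Again, this is a sketch under our own assumptions: it reuses H0, sH, c_ops and chi from the earlier snippets, and the photon-number truncation N_ph is a purely numerical choice.

# Sketch: dot coupled to an H-polarized cavity mode; g2 of the cavity emission.
import numpy as np
from qutip import qeye, destroy, tensor, steadystate, expect

def g2_cavity(g, kappa, w_a, H0, sH, c_ops, N_ph=4):
    a = tensor(qeye(4), destroy(N_ph))                       # cavity annihilation operator
    s = tensor(sH, qeye(N_ph))
    H = tensor(H0, qeye(N_ph)) + w_a * a.dag() * a + g * (a.dag() * s + s.dag() * a)
    c_full = [tensor(c, qeye(N_ph)) for c in c_ops] + [np.sqrt(kappa) * a]
    rho = steadystate(H, c_full)
    return expect(a.dag() * a.dag() * a * a, rho) / expect(a.dag() * a, rho) ** 2

# Cavity at the two-photon resonance with g = kappa = 5*gamma, as considered in the text.
print(g2_cavity(5.0, 5.0, -chi / 2, H0, sH, c_ops))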
By placing the cavity mode at the two-photon resonance, ω a = ω B /2, the virtual leapfrog process becomes real as it finds a real intermediate state in the form of a cavity photon. The deexcitation of the biexciton to ground state is thereby enhanced at a rate κ 2P ≈ (4g 2 /χ) 2 /κ, producing the emission of two simultaneous and indistinguishable cavity photons at this frequency [31][32][33]. There is as well some probability that the cavity mediated deexcitation occurs in two steps, through two different cavity photons at frequencies ω BH and ω H , at a rate κ 1P ≈ 4g 2 κ/χ 2 . The two alternative paths are schematically depicted with curly blue arrows in figure 2(a). The cavity being far from resonance with the dot transitions, the ratio of two- versus one-photon emission can be controlled by an appropriate choice of parameters [32]. We set g = κ = 5γ, to be in the strong-coupling regime and have κ 2P = 0.2γ > κ 1P = 0.05γ, but with a coupling weak enough for the system to emit cavity photons efficiently. The cavity parameters are such that κ > 2P and the pump does not disrupt the two-photon dynamics [33]. The two-photon emission indeed dominates over the one-photon emission as seen in figure 2(c), where the cavity spectrum, S[a](ω), is plotted with a solid line: the central peak, corresponding to the simultaneous two-photon emission, is more intense than the side peaks produced by single photons. A better cavity (smaller κ) does not emit the biexciton photons right away outside the system, but spoils the original (leapfrog) correlations and leads to smaller correlations in the cavity emission g (2) [a] → 2. This is shown in figure 4(a) with a blue solid line. Our previous choice, κ = 5γ, is close to that which maximizes bunching (vertical line in figure 4(a)). A weak coupling due to small coupling strength (g → 0), plotted with a blue dotted line, recovers the case of a filter, discussed in the previous section: lim g→0 g (2) [a] = g (2) κ [σ H ](ω a ; ω a ).

Figure 4 (caption, fragment): ...g (2) [a] and the frequency-resolved correlations at the two-photon resonance (in red) g (2) [a](ω B /2; ω B /2) are considered, with the former relating to the bottom axis (the cavity linewidth κ) and the latter to the upper axis (the detector linewidth Γ) at κ = 5γ (indicated by a circle). (b) Mandel Q parameter in the same configuration as in (a), supplementing the information of the previous panel with the intensity of the emission. (c) The same as (a) but now as a function of the cavity frequency ω a (bottom axis) for the colour-blind correlations (in blue) and of the detection frequency ω (upper axis) for the frequency-resolved correlations (in red). Parameters: (a) the strong coupling is for g = 5γ and the weak coupling is in the limit of vanishing coupling g → 0 where the cavity is completely equivalent to a filter. Parameters: P = γ, χ = 100γ and δ = 20γ.
The cavity spectrum in this case, plotted with a blue dotted line in figure 2(c), is no longer dominated by the two-photon emission. Regardless of the coupling strength g (within the validity limits of the rotating wave approximation), the system goes into weak coupling at large enough κ, so both the blue lines, solid and dotted, converge to the same curve at κ → ∞. The cavity then filters the whole dot emission and recovers the total dot correlations for the H-mode, g (2) [a] → g (2) [σ H ] = 1. Note that while the bunching in g (2) [a] is better in weak coupling or with a filter, this is at the price of decreasing the enhancement of the emission and, therefore, the efficiency of the quantum device, as the total Mandel Q[a] parameter shows in figure 4(b).
Distilling the two-photon emission from the cavity field
In view of the preceding results, we now consider the possibility to further enhance the two-photon emission by filtering the cavity photons from the central peak in figure 2(c). That is, we study g (2) [a](ω 1 ; ω 2 ) for an H-polarized cavity with κ = 5γ and ω a = ω B /2. In this way, firstly, the cavity acts as a filter, extracting the leapfrog emission where it is most correlated, but also enhances specifically the two-photon emission and, secondly, the filtering of the cavity emission selects only those photons that truly come in pairs. Such a chain is similar to a 'distillation' process where the quantum emission is successively refined.
The results are plotted with red lines in figure 4, for the case κ = 5γ pinpointed by circles on the blue lines. The filtered cavity emission is indeed generally more strongly correlated at the two-photon resonance than the unfiltered total cavity emission, plotted in blue: g (2) [a](ω B /2; ω B /2) > g (2) [a]. This is so for all filter linewidths Γ for the cavity in weak coupling, where the distillation always enhances the correlations. In strong coupling, the filter must strongly overlap with the peak (Γ ≲ κ). This is because the side peaks are prominent in weak coupling (κ 2P < κ 1P ), and the filtering efficiently suppresses their detrimental effect, whereas in strong coupling, two-photon correlations are already close to maximum, thanks to the dominant central peak as seen in figure 2(c), and it is therefore important for the filter to strongly overlap with it. In all cases, at large enough Γ, the full cavity correlations are recovered as expected: lim Γ→∞ g (2) [a](ω 1 ; ω 2 ) = g (2) [a]. In figure 4(a), this means that the red lines converge to the value projected by the circle on the blue lines at κ = 5γ. Here, again, filtering enhances correlations but reduces the number of counts, as shown in figure 4(b).
In figure 4(c), we do the same analysis as in figure 3(b) where there was no distillation. We address the same cases but now as a function of frequency, fixing the cavity decay rate κ = 5γ and the filtering linewidth at Γ = 10γ. In blue, we consider the cavity QED case in weak and strong coupling, without filtering. Since the weak-coupling limit is identical to the single filter case, note that the blue dotted line in figure 4(c) is identical to the black solid line in figure 3(b). Off-resonance, the cavity acts as a simple filter due to the reduction of the effective coupling. The stronger coupling to the cavity has an effect only when involving the real states, where it spoils the correlations, less bunched at the two-photon resonance ω a = ω B /2 and less antibunched at the one-photon resonances, ω a = ω BH or ω a = ω H . This shows again that useful quantum correlations are obtained in a system where quantum processes are Purcell enhanced and quickly transferred outside, rather than stored and Rabi-cycled over within the cavity. The same is true for the red line, further filtering the output. Finally, comparing the solid lines together, we see again that there is little if anything to be gained by filtering in strong coupling, whereas in weak coupling, the enhancement is considerable. As a summary, the filtering of the weakly coupled cavity (red dotted) provides the strongest correlations (at the cost of the available intensity), corresponding to distilling the photon pairs out of the original dot spectrum without any additional enhancement.
In figure 5, we show the full two-photon spectra for a quantum dot in a cavity, in both (a) strong and (b) weak coupling.

Figure 5. Two-photon spectra g (2) [a](ω 1 ; ω 2 ) for a quantum dot in a cavity in the (a) strong-, g = κ, and (b) weak-coupling regime, when the cavity is at the two-photon resonance, ω a = ω B /2. The bunching regions are strengthened by the cavity (the total, colour-blind correlations are g (2) [a] ≈ 21 for (a) and 130 for (b)). Horizontal and vertical structure also emerges at the two-photon resonance (white lines), betraying the presence of real states. These are stronger, the stronger the coupling. Parameters: κ = 5γ, P = γ, χ = 100γ, δ = 20γ and Γ = 10κ.
The antibunching regions on the diagonal and the bunching along the antidiagonal are qualitatively similar to the filtered dot emission, but antibunching is milder and less extended, in contrast to bunching, which greatly dominates the two-photon spectrum as a direct consequence of the cavity. At the central point, bunching is now, respectively, weaker (stronger) than the filtered dot emission, due to the saturated (efficient) distillation in strong (weak) coupling. Another striking feature added by the cavity is the appearance of an additional pattern of horizontal and vertical lines at ω a . While diagonal and antidiagonal features correspond to virtual processes, horizontal and vertical ones stem from real processes that pin the correlations at their own frequency. Therefore, the new features are a further illustration that the two-photon emission becomes a real resonance of the cavity-dot system, in contrast with figure 3(a) where it was virtual. The effect is more pronounced in the strong-coupling regime since this is the case where the new state is better defined. Another qualitative difference between figures 5 and 3(a) is in the regions surrounding the cascade configuration, (ω BH ; ω H ), which has changed shape to become two antibunching blue spots. This is due to the fact that the single-photon cascade is much less likely to happen through the cavity mode than the direct deexcitation of the dot, as κ 1P = 0.05γ ≪ γ. Even if the first photon from the biexciton emission decays through the cavity, the second will most likely not.
Distilling entangled photon pairs
One of the most sophisticated applications of the biexciton structure in a quantum dot is as a source of polarization entangled photon pairs [15,16,21,35,[52][53][54][55][56][57][58][59][60][61]. Without the fine-structure splitting, δ = 0, the two possible two-photon deexcitation paths are indistinguishable except for their polarization degree of freedom (H or V), producing equal frequency photon pairs (ω BX ; ω X ), with ω BX = ω B − ω X . This results in the polarization entangled state |ψ . The splitting δ provides 'which path' information [62] that spoils indistinguishability, producing a state entangled in both frequency and polarization [34]. Although these doubly entangled states are useful for some quantum applications [63,64], it is typically desirable to erase the frequency information and recover polarization-only entangled pairs. Among other solutions, such as cancelling the built-in splitting externally [16,65], filtering has been implemented with ω 1 = ω BX and ω 2 = ω X to make the pairs identical in frequencies again [15,34,54], at the cost of increasing the randomness of the source (making it less 'on-demand'). Recently, the cavity filtering of the polarization entangled photon pairs with ω a = ω B /2 was proposed by Schumacher et al [35], taking advantage of the additional two-photon enhancement [32].
Characterization of the filtered entangled photon-pair emission
The properties of the output photons can be obtained from the two-photon state density matrix, θ(τ ), reconstructed in the basis {|H1, H2 , |H1, V4 , |V3, H2 , |V3, V4 }, denoting by |xi , with x = H or V and i = 1, 2, 3, 4, the state |x(ω i ) . The frequencies ω i are, in general, different. The second photon is detected with a delay τ with respect to the first one (detected in the steady state). Let us express this matrix in terms of frequency resolved correlators, as is typically done in the literature [53,55]. However, in contrast to previous approaches, we do not identify the photons with the transition from which they may come (using the dot operators |G x|, |x B| with, again, x standing for either H or V) but with their measurable properties, that is, polarization, frequency and time of detection (for a given filter window). This is a more accurate description of the experimental situation where a given photon can come from any dot transition and any transition can produce photons at any frequency and time with some probability. We describe the experiments by considering four different filters, that is, including all degrees of freedom of the emitted photons in the description. Each detected filtered photon corresponds to the application of the filter operator ς j with j = H1, H2, V3, V4, corresponding to its coupling to the H or V dot transitions with ω i frequencies. Then, the two-photon matrix θ ′ (τ ) (the prime refers to the lack of normalization) corresponding to a tomographic measurement is theoretically modelled in terms of the two-time correlators of the filtered photon operators, with n i = ς i † ς i (we have dropped the frequency dependence in the notation, writing θ ′ (τ ) instead of θ ′ (ω 1 , ω 2 ; ω 3 , ω 4 , τ )). Since a weakly coupled cavity mode behaves as a filter, this tomographic procedure is equivalent to considering the four dot transitions coupled to four different cavity modes with the corresponding polarizations, central frequencies and decay rates [35,61]. Unlike in other works, where for various reasons and in particular cases some of the elements in θ ′ (τ ) are set to zero or considered equal, here we keep the full matrix with no a priori assumptions since, in general, it may not reduce to a simpler form due to the incoherent pumping, pure dephasing, frequency filtering and fine-structure splitting.
There are essentially two ways to quantify the degree of entanglement from the density matrix θ ′ (τ ). The most straightforward is to consider the τ-dependent matrix directly, which merely requires normalization at each time τ, yielding θ(τ ) = θ ′ (τ )/Tr[θ ′ (τ )]. The physical interpretation is that of photon pairs emitted with a delay τ, that is, within the time resolution 1/Γ of the filter or cavity [34,61]. In particular, the zero-delay matrix, θ(0), represents the emission of two simultaneous photons [59,60]. The second approach is closer to the experimental measurement, which averages over time. In this case, one considers the integrated quantity θ̄(τ max ) = (∫ 0 τmax θ ′ (τ ) dτ )/N, which averages over all possible emitted pairs from the system [15,53]. It is also normalized (by N), but after integration, so that the two approaches are not directly related to each other and present alternative aspects of the problem, discussed in detail in the following. Without the cutoff delay τ max , the integral diverges due to the continuous pumping.
The degree of entanglement of any bipartite system represented by a 4 × 4 density matrix θ can be quantified by the concurrence, C, which ranges from 0 (separable states) to 1 (maximally entangled states) [66]; its definition reads C ≡ max{0, λ 1 − λ 2 − λ 3 − λ 4 }, where the λ i are the square roots of the eigenvalues, in decreasing order, of θ(σ y ⊗ σ y )θ * (σ y ⊗ σ y ), and it is typically used in the literature to quantify entanglement. High values of the concurrence require high degrees of purity in the system [67], it being, for instance, impossible to extract any entanglement from a maximally mixed state (in which case all the four basis states occur with the same probability). The filtered density matrices, θ̄(τ max ) and θ(τ ), each provide their own concurrence, which we denote by C int (τ max ) and C(τ ), respectively.
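For reference, the concurrence of a normalized 4 × 4 polarization density matrix can be evaluated with the standard Wootters prescription, as in the short NumPy sketch below; the implementation and the test states are ours, and any reconstructed θ(τ ) or θ̄(τ max ) could be passed in the same way.

# Wootters concurrence of a two-qubit (polarization) density matrix.
import numpy as np

def concurrence(theta: np.ndarray) -> float:
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)
    theta_tilde = yy @ theta.conj() @ yy                     # "spin-flipped" matrix
    lam = np.sqrt(np.abs(np.linalg.eigvals(theta @ theta_tilde).real))
    lam = np.sort(lam)[::-1]                                 # decreasing order
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

# The maximally entangled state (|HH> + |VV>)/sqrt(2) gives C = 1; the maximally mixed state gives C = 0.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(concurrence(np.outer(psi, psi.conj())))   # ~1.0
print(concurrence(np.eye(4) / 4))               # 0.0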
Comparison between different filtering configurations
Figure 6 (caption, fragment): All these plots show that, as far as the degree of entanglement is concerned, the leapfrog emission is the best configuration and is optimum at the two-photon resonance. Parameters: P = γ, χ = 100γ and Γ = 2γ.

We begin by considering the standard cascade configuration by detecting photons at the dot resonances, i.e. ω 1 = ω BH , ω 2 = ω H , ω 3 = ω BV , ω 4 = ω V , as sketched in the inset of figure 6(a). The upper density plot shows C int (τ max ) and the density plot below shows the time-resolved concurrence C(τ ), as a function of τ max or τ and δ, for Γ = 2γ. The two concurrences, C int (τ max ) and C(τ ), are qualitatively different except at τ = τ max = 0 where they are equal by definition. They also have in common that the maximum concurrence is not achieved at zero delay (it is most visible as the darker red area around (γ τ max , δ/γ ) ≈ (0.4, 0)). This is because this filtering scheme relies on the real-states deexcitation of the dot levels, and thus exhibits the typical delay from the cascade-type dynamics of correlations (see figure 3(d)). The major departure between the two is that the decay of C int (τ max ) is strongly dependent on δ, while C(τ ) is not. With no splitting, at δ = 0, the ideal symmetrical four-level structure efficiently produces the entangled state |ψ and the concurrence is maximum both for the integral and the time-resolved forms. The decay time of C(τ ) is the simplest to understand as, when filtering full peaks, it is merely related to the reloading time of the biexciton, of the order of the inverse pumping rate, ∼2/P [61]. The asymmetry due to a non-zero splitting in the four-level system causes an unbalanced dynamics of deexcitation via the H and V polarizations. The entanglement in the form |ψ is downgraded to |ψ ′ . The fact that the concurrence C(τ )
is not affected shows that it accounts for the degree of entanglement in both polarization and frequency, which is oblivious to the 'which path' information that the last variable provides. On the other hand, C int (τ max ) is suppressed by the splitting as fast as τ max > 2π/δ (this boundary is superimposed on the density plot). That is, as soon as the integration interval is large enough to resolve it. The integrated C int (τ max ), therefore, accounts for the degree of entanglement in polarization only, and is destroyed by the 'which path' information provided by the different frequencies.
The exact mechanism at work to erase the entanglement is shown in the lower row of figure 6, which displays the upper right matrix element [θ (τ )] 1,4 of the density matrix, which is most responsible for purporting entanglement. The modulus is shown on the left (with blue meaning 0 and red meaning 0.5) and the phase on the right (in black and white). The time-resolved concurrence C (τ ) is very similar to the modulus of [θ (τ )] 1,4 which justifies the approximation often made in the literature [53]. Although each photon pair is entangled, it is so in the state |ψ with a phase, φ = −π + δτ , that accumulates with τ at a rate δ [34], but this does not matter as far as instantaneous entanglement is concerned, and this is why the splitting does not affect C (τ ).
On the other hand, when integrating over time, the varying phase, which completes a 2π-cycle at intervals 2π/δ, randomizes the quantum superposition and results in a classical mixture. This is why the splitting does completely destroy C int (τ max ) for τ max > 2π/δ. And this is how the system restores the 'which path' information: given enough time, if the splitting is large enough, the photons lose their quantum coherence due to the averaging out of the relative phase between them because of their distinguishable frequency. Another possible configuration proposed in the literature [15,34,54] is sketched in the inset of figure 6(b). Photons are detected at the frequency that lies between the two polarizations, ω 1 = ω 3 = ω BX , ω 2 = ω 4 = ω X . For small splittings, the entanglement production scheme still relies on the cascade and real deexcitation of the dot levels. Therefore, the behaviour of C int (τ max ) is similar to (a) (decaying with δ) but slightly improved by the fact that the contributions to the density matrix are more balanced by filtering in between the levels. For splittings large enough to allow the formation of leapfrog emission in both paths, δ ≫ Γ, there is a striking change of trend and C int (τ max ) remains finite at longer delays τ max when increasing δ. This is a clear sign that the entanglement relies on a different type of emission, namely simultaneous leapfrog photon pairs, rather than a cascade through real states. Accordingly, the intensity is reduced as compared to that obtained at smaller δ or to the non-degenerate cascade in (a), but the bunching is stronger, as evidenced by the two-photon spectrum, and results in a larger degree of entanglement than at δ = 0. A similar result is obtained qualitatively with the time-resolved concurrence, a strong resurgence of entanglement with δ, indicating again very high correlations in this configuration. Note finally that the phase becomes much more constant with increasing δ, resulting in the persistence of the correlation in the time-integrated concurrence. This is because the two-photon emission is through the leapfrog processes. Since the latter, by definition, involve intermediate virtual states that are degenerate in frequency, this is a built-in mechanism to suppress the splitting and not suffer from the 'which path' information as when passing through the real states.
Finally, a configuration proposed more recently in the literature [35] is the two-photon resonance, with four equal frequencies, ω 1 = ω 2 = ω 3 = ω 4 = ω B /2, as sketched in the inset of figure 6(c). This provides a two-photon source in both polarizations that can be enhanced via two cavity modes with orthogonal polarizations [32]. Remarkably, in this case, the splitting has almost no effect on the degree of entanglement, which is maximum. Here, the leapfrog mechanism plays at its full extent: the virtual states, on top of being degenerate and thus immune to the splitting, remain always far, and are therefore protected, from the real states. This results in the exactly constant phase (black panel on the lower right end of figure 6). As a result, both C int (τ max ) and C(τ ) remain large. The only drawback of this mechanism is, being mediated by virtual processes, a comparatively weaker intensity.

Figure 7. Concurrence computed with the instantaneous emission at τ = 0 (blue) or integrated up to τ max = 1/γ (red). (a) Plotted for a fixed filter linewidth Γ = 5γ as a function of the filter frequencies (ω 1 ; ω B − ω 1 ). The concurrence is high unless a real state is probed and a cascade through it overtakes the leapfrog emission. The optimum case is the two-photon resonance. (b), (c) Plotted as a function of the filter linewidth Γ for the two cases that maximize entanglement in (a), that is, at (b) the degenerate cascade configuration (ω BX ; ω X ) and (c) the biexciton two-photon resonance (ω B /2; ω B /2). The concurrence of the ideal case at δ = 0 is also plotted as a reference with softer solid lines. Parameters: P = γ, χ = 100γ and δ = 20γ. ω X = 0 is set as the reference frequency.
Filtering the leapfrog emission
We conclude with a more detailed analysis of what appears to be the most suitable scheme to create robust entanglement, the leapfrog photon-pair emission. The target state is always |ψ , equation (18), due to the degeneracy in the filtered paths. The concurrence is shown as a function of the first photon frequency, ω 1 , in figure 7 (the second photon has the energy ω 2 = ω B − ω 1 to conserve the total biexciton energy). The time-integrated (resp. instantaneous) concurrence C int (1/γ ) (resp. C(0)) is shown in red (resp. blue) lines, both for a large splitting δ = 20γ in strong tones and for δ = 0 in softer tones. In the first panel, (a), the frequency ω 1 of the filters is varied. This figure shows that concurrence is very high (with high state purity) when the filtering frequencies are far from the system resonances: ω BH/BV and ω H/V . These are shown as coloured grid lines to guide the eye. In this case, the real states are not involved and the leapfrog emission is efficient in both polarizations. The concurrence otherwise drops when ω 1 is resonant with any one-photon transition, meaning that photons are then emitted in a cascade in one of the polarizations, rather than simultaneously through a leapfrog process. Moreover, if at least one of the two deexcitation paths is dominated by the real state dynamics, this brings back the problem of 'which path' information, which spoils indistinguishability and entanglement. There is only one exception to this general rule, namely, the δ = 0 integrated case (soft red line) which has a local maximum at ω BX , i.e. when touching its resonance in the natural cascade order. This is because the paths are anyway identical and the integration includes the possibility of emitting the second photon with some delay (up to τ max ). For the case of δ ≠ 0, if this is large enough, it is still possible to recover identical paths while filtering the leapfrog in the middle points, ω 1 = ω BX and ω 2 = ω X , to produce entangled pairs.
Overall, the optimum configuration is, therefore, indeed at the middle point ω 1 = ω 2 = ω B /2 (two-photon resonance), where the photons are emitted simultaneously, with high purity and degree of entanglement, and are also identical in frequency by construction. Entanglement is also always larger in the simultaneous concurrence (blue lines) as this is the natural choice to detect the leapfrog emission, which is a fast process. This comes at the price, expectedly, of decreasing the total number of useful counts and increasing the randomness of the source. Note finally that the blue curves are symmetric around ω B /2, but the red ones are not, given that in the integrated case, the order of the photons with different frequencies is relevant. The left-hand side of the plot, ω 1 < ω B /2, corresponds to the natural order of the frequencies in the cascade, ω 1 < ω 2 . The opposite order, being counter-decay, is detrimental to entanglement.
Since, of all leapfrog configurations, those in figures 6(b) and (c) are optimum to obtain high degrees of entanglement, we provide their dependence on the filter linewidths, Γ, in figures 7(b) and (c). First, we observe that in the limit of small linewidths for the filters, i.e. in the region Γ < 1/τ max , the simultaneous and time-integrated concurrences should converge to each other, since the time resolution becomes larger than the integration time. Therefore, the integrated emission provides the maximum entanglement when the frequency window is small enough to provide the same results as the simultaneous emission. Decreasing Γ below P (which in this case is also P = γ) results in a time resolution in the filtering larger than the pumping timescale 1/P. Therefore, photons from different pumping cycles start to get mixed with each other. As a result, C drops for Γ < γ in all cases. In the limit Γ ≪ P, the emission is completely uncorrelated and C = 0. This is an important difference from the cases of pulsed excitation or the spontaneous decay of the system from the biexciton state. In the absence of a steady state, entanglement is maximized with the smallest window, Γ → 0 [15,54]. Two opposite types of behaviour are otherwise observed in these figures when increasing the filter linewidth and Γ > 1/τ max . The simultaneous emission gains in its degree of entanglement, whereas the time-integrated one loses with increasing linewidth. In the limit Γ → ∞, this disparity is easy to understand. We recover the colour blind result in all filtering configurations, (b) or (c), that is, the decay of entanglement: from 1, corresponding to |ψ at τ = 0, to 0, corresponding to a maximally mixed state at large τ. Therefore, in this limit the filtered concurrences recover the colour-blind values, C(0) = 1 and C int (τ max ) = 0 (for our particular choice of τ max and P).
The decrease in C int (τ max ) with increasing filtering window has been discussed in the literature for the case in figure 7(b) [15,54], and it has been attributed to a gain of 'which path' information due to the overlap of the filters with the real excitonic levels. In the light of our results, when Γ > δ the real-state deexcitation takes over the leapfrog and C int (τ max ) is suppressed indeed due to such a gain of 'which path' information, as discussed previously. However, we find another reason why C int (τ max ) decreases with Γ in all cases, based on the leapfrog emission: the region 1/P < Γ < δ for case (b) and the region 1/P < Γ < χ for case (c). The maximum delay in the emission of the second photon in a leapfrog process is related to 1/Γ, due to its virtual nature. Therefore, the initial enhancement of entanglement starts to drop at delays τ ≈ 1/Γ, after which the emission of a second photon is uncorrelated to the first one (not belonging to the same leapfrog pair). For a fixed cutoff τ max , this leads to a reduction of C int (τ max ) with Γ. Broader filters have a smaller impact on the case of real-state deexcitation (see the zero splitting case in figure 7(b), plotted in soft red): since the system dynamics is slower than the filtering (γ, P < Γ), the filter merely emits the photons faster and faster after receiving them from the system. This results in a mild reduction of C int (τ max ) with Γ until the detection becomes colour blind and it drops to reach its aforementioned limit of zero.
Conclusions
In summary, we have characterized the emission of a quantum dot modelled as a system able to accommodate two excitons of different polarization and bound as a biexciton. Beyond the usual single-photon spectrum (or photoluminescence spectrum), we have presented for the first time the two-photon spectrum of such a system, and discussed the physical processes unravelled by frequency-resolved correlations and how they shed light on various mechanisms useful for quantum information processing.
We relied on the recently developed formalism [36] that allows us to compute conveniently such correlations resolved both in time and frequency. This describes both the application of external filters before the detection and the effect of one or many cavity modes in weak coupling with the emitter. Filters and cavities have their respective advantages, and when combined, can perform a distillation of the emission, by successive filtering that enhances the correlations and purity of the states.
We addressed three different regimes of operation depending on the filtering scheme, namely as a source of single photons, a source of two-photon states (both through cascaded photon pairs and simultaneous photon pairs) and a source of polarization entangled photon pairs, for which a form of the density matrix that is close to the experimental tomographic procedure was proposed. In particular, the so-called leapfrog processes, where the system undergoes a direct transition from the biexciton to the ground state without passing by the intermediate real states but jumping over them through a virtual state, have been identified as key, both for two-photon emission and for entangled photon-pair generation. In the latter case, this allows us to cancel the notoriously detrimental splitting between the real exciton states that spoils entanglement through 'which path' information, since the intermediate virtual states have no energy constraints and are always perfectly degenerate. Entanglement is long-lived and much more robust against this splitting than when filtering at the system resonances. At the two-photon resonance, degrees of entanglement higher than 80% can be achieved and maintained for a wide range of parameters.
|
2012-11-19T16:58:29.000Z
|
2012-10-18T00:00:00.000
|
{
"year": 2012,
"sha1": "f8f55340ba8ccc27204f1f5657d0791e39b3e219",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.1088/1367-2630/15/2/025019",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a7e8808447abc39c1bb92839e7402c2abe8a168c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
237728038
|
pes2o/s2orc
|
v3-fos-license
|
Effectiveness of public service advertisements on the use of antibiotics in Pangkalpinang
Background: Previous studies have shown that the public needs drug-related information from reliable sources such as pharmaceutical personnel. One information medium that has never been used is the electronic public service advertisement. This study examines the effectiveness of public service advertisements on the use of antibiotics in Pangkalpinang City, in order to find a medium that can help reduce antibiotic resistance. Method: This research was conducted in Pangkalpinang from December 2018 to September 2019 using a quasi-experimental quantitative approach with a time-series design. The sample of this research consisted of 400 people determined by accidental sampling techniques and analysed univariately and bivariately using a dependent t-test. Results: There was a significant difference, with a value of p = 0.0001, between the respondents' use of antibiotics before (pre-test) and after the airing of the public service ads (post-test). Conclusion: Public service advertisements about antibiotics were effective in improving the use of antibiotics.
Introduction
Antibiotics are the most widely used drugs for infections caused by bacteria. Various studies have found that about 40-62% of antibiotics are misused, such as for diseases that do not require antibiotics. A survey about the quality of antibiotic use in various parts of the hospital found that 30% to 80% were not based on indications (Hadi et al., 2009). Antibiotic resistance harms many parties. Infectious diseases caused by bacterial resistance increase the time the patient suffers from the illness. Thus, if the patient is admitted to the hospital, the hospitalization costs will undoubtedly increase. According to the Centers for Disease Control and Prevention, every year in the United States, two million people get infected with bacteria that have become resistant to antibiotics, and at least 23,000 people die each year as a direct result of this resistance. The WHO data states that, in 2016, 480,000 new cases of multidrug-resistant tuberculosis (MDR-TB) have emerged in the world (WHO, 2016).
In Indonesia, public understanding of the benefits and impacts of using antibiotics is still weak, constituting a severe problem as the level of antibiotic use is quite alarming. People today freely buy and take medicines without a doctor's prescription. It is essential to impart knowledge about antibiotics to the community. Therefore, health workers, especially pharmaceutical workers, must work hard to increase public awareness regarding the proper and correct use of antibiotics. Research on community knowledge in the village of Penyamun, Bangka Regency, shows that knowledge about the use of antibiotics is still limited: 38 respondents (35.8%) had good knowledge, 25 (23.6%) had moderate knowledge, and 43 respondents (40.6%) had poor knowledge (Septiana, 2016). These results are also in line with those of research regarding the use of amoxicillin in Penagan Village, Bangka Regency, showing that 54.27% of the community still lack knowledge (Zulaika, 2018). Antimicrobial resistance (AMR) also increases the health expenditure of countries. Its impact is expected to be higher in low- and middle-income countries, where healthcare systems are suboptimal and ill-equipped to deal with the issue. As antibiotic misuse is the primary driver of AMR, there is an acute need to create awareness among the general public, calling for a comprehensive communication strategy that considers the various drivers of AMR and associated solutions. In the short term, the focus of communication strategies should be to raise awareness in specific interest groups by channelling limited resources to achieve definite objectives, thereby improving the chances of behaviour change. The general public can be targeted at a later stage or as a second phase with specific strategies and messages (Mathew, Sivaraman & Chandy, 2019).
Public service announcements about the use of antibiotics have never been aired on television because antibiotics are restricted, prescription-only drugs subject to specific regulations. Likewise, in the province of Bangka Belitung Islands, public service advertisements regarding the proper and correct use of drugs have never existed. Despite the expanding use of social media, little has been published about its appropriate role in health promotion, and even less has been written about its evaluation (Neiger et al., 2012). The five-year strategy of the Department of Health outlined seven key areas to address antibiotic resistance; Public Health England (PHE) is responsible for improving surveillance, optimising prescribing practices, and educating doctors and the public about the risks of antibiotic misuse (Cully, 2014).
The findings of Ashe et al. (2006) indicate that an educational poster had no effect on antibiotic use. Farmers, physicians, and patients need to recognize the value of antibiotics and protect this vulnerable resource. In the absence of enlightened self-interest, more effective policies are required because the current ones are insufficient. A global, multidisciplinary effort is needed to slow the development of antibiotic resistance; it will take more than one shepherd to prevent our commons from being overgrazed (Cully, 2014). Videotron is more effective and efficient than billboards and posters as it can reduce visual waste and does not take up much space (Aji, 2018). Therefore, it is necessary to study the effectiveness of public service advertisements on the use of antibiotics in Pangkalpinang City to find a medium that can help reduce antibiotic resistance.
Method
This research was conducted in Pangkalpinang City from December 2018 to September 2019. It used a quasi-experimental quantitative approach with a time-series design, applying pre-test and post-test methods. The population in this study consisted of the people of Pangkalpinang City. The sample size was calculated using the Slovin formula, resulting in a sample of 400 respondents. The instrument used in this study was a questionnaire. Respondents read an explanation and then provided written informed consent. Before primary data collection, the validity and reliability of the questionnaire were tested in Sungailiat. After that, four pre-test measurements of antibiotic use were carried out in each subdistrict of Pangkalpinang City at one-week intervals. For one month, public service advertisements were aired on Videotron screens in several strategic places in Pangkalpinang City. Subsequently, four post-test measurements were performed on the same respondents. The data were analysed univariately and bivariately using the dependent t-test. This study received ethical approval from the health research ethics committee of The Health Research Polytechnic of The Ministry of Health, Pangkalpinang.
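As a rough illustration of how the Slovin formula yields a sample of this size, the sketch below computes n = N / (1 + N·e²); the population figure and the 5% margin of error are illustrative assumptions, not values reported in this study.

```python
import math

def slovin_sample_size(population: int, margin_of_error: float = 0.05) -> int:
    """Slovin formula: n = N / (1 + N * e^2), rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# With an assumed city population of about 200,000 and e = 0.05,
# the formula gives roughly 400 respondents.
print(slovin_sample_size(200_000))  # -> 400
```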
Results
The demographic profiles of this study can be seen in Table I. The results show that all respondents had used antibiotics before, which confirms that the questionnaire answers reflect actual antibiotic use. The most widely used antibiotic was amoxicillin (398 respondents, 99.5%). Other antibiotics used were clindamycin, chloramphenicol, metronidazole, ciprofloxacin, ampicillin/cloxacillin, and tetracyclines (Table II). A t-test analysis of antibiotic use at pre-test and post-test in all respondents showed a significant difference, with p < 0.0001 (Table III), indicating that public service advertisements are effective in reducing antibiotic misuse. The proportion of antibiotic use categorised as "not good" was 8.8% at the pre-test measurements and decreased to 3% at the post-test measurements. Public service advertisements on the use of antibiotics therefore effectively improved antibiotic use in the community of Pangkalpinang.
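For readers who wish to reproduce this kind of analysis, a minimal paired (dependent) t-test sketch is shown below; the score arrays are hypothetical placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical pre- and post-advertisement antibiotic-use scores for the same respondents.
pre_test = [60, 55, 70, 65, 58, 62, 68, 59]
post_test = [72, 66, 78, 74, 70, 71, 80, 69]

# Dependent (paired) t-test, as used in this study's bivariate analysis.
t_statistic, p_value = stats.ttest_rel(pre_test, post_test)
print(f"t = {t_statistic:.3f}, p = {p_value:.4f}")
# A p-value below 0.05 indicates a statistically significant pre/post difference.
```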
Discussion
As noted in the results, all respondents had used antibiotics before, so the questionnaire answers reflect actual antibiotic use, and amoxicillin was by far the most widely used antibiotic (398 respondents, 99.5%). Our findings are in line with the results of another study showing that all patients (108 patients) had used antibiotics without a prescription and had a low level of awareness. The most purchased antibiotic without a prescription is amoxicillin (Fernandez, 2013). It is also the most widely used by community service participants, i.e. 11 people (45.83%) (Djuria & Sinulingga, 2019).
Analysis of the data on antibiotic use shows that at the pre-test measurement, most people who misused antibiotics were in Gabek district (nine people, 15.8%), while the fewest were in Rangkui district (one person, 1.8%). At the post-test measurement, most people who misused antibiotics were again in Gabek district (three people, 5.3%), while in Pangkalbalam and Taman Sari districts there were none (0%), indicating that all respondents there used antibiotics appropriately.
Most respondents were highly educated, namely 266 people (66.5%) (Table I).
Education is needed to get information, for example, about things that support health to improve the quality of life. In general, the higher a person's education, the easier it is to retrieve information (Nursalam, 2010).
All respondents declared having received information about antibiotic use. The sources of information were printed media (13 respondents, 3.25%) and electronic media (15 respondents, 3.75%), but mainly non-media sources (health workers, neighbours, family, and friends), with 391 respondents (97.75%) (Table I). People with higher education more easily accept information, especially information that concerns them (Nursalam, 2010).
Most respondents were adults aged 26-45 years (180 participants, 45%) (Table I). Among older people, memory considerably affects knowledge. The level of maturity and strength of a person depends on age; older people tend to be more mature in thinking and working. Respondents can therefore be expected to increase their knowledge over time and change their behaviour accordingly. Most respondents were employed, namely 206 people (51.5%) (Table I). As work is a means to support oneself and one's family, working respondents earned money and could buy antibiotics when they felt sick. This helps explain why all respondents had used antibiotics.
A study showed a low awareness regarding prescription medicine, antibiotic use, and AMR among the general population in the highland provinces of Vietnam (Ha, Nguyen & Nguyen, 2019). These findings indicate the need for further systemic and didactic educational interventions targeting females, ethnic minorities, those with low education, low income, and those working in the agriculture/fishery/forestry sector in this setting to improve awareness about antibiotic use and resistance.
The t-test analysis showed a significant difference in antibiotics use in pre-test and post-test among all respondents (Table II), indicating that public service advertisements are effective in reducing antibiotics misuse. These results are consistent with previous findings on the influence of advertising messages, advertising sources/models, and ad execution on attitudes, showing that the three variables simultaneously have a positive and significant effect on the attitudes of faculty students (Mustika & Zakaria, 2012). Moreover, four public service advertisements received high enough attention from the public, namely: Traffic Orderly Ads, Amnesty Tax, Report Hendy, and Saber Pungli (Mukaromah, Yanuarsari & Pratiwi, 2017). Simultaneously, the attractiveness of advertisements, the quality of advertising messages, and the frequency of ad serving moderated by the effectiveness of advertisements have a significant effect on the attitudes of other people (Libradika, 2015).
It seems essential that future antibiotic awareness campaigns base their messages more rigorously on scientific evidence, context specificities, and behavioural change theory. A new generation of messages that encourage first-choice use of narrow-spectrum antibiotics is needed, reflecting international efforts to preserve broad-spectrum antibiotic classes. Evaluation of the influence of antibiotic awareness campaigns remains suboptimal (Huttner et al., 2018).
In 2015, a study among the general population of the UK highlighted several issues to be considered when communicating the issue of AMR to the public (Trust, 2015). Other research shows that the framing of antibiotic resistance in a TV advertisement led to an increase in misunderstandings of what becomes resistant to antibiotics (Borgonha, 2019). The advertisement helped to highlight the vulnerability of antibiotics and create a new social norm about being a responsible antibiotic user. However, it was interpreted as childish by participants. It did not communicate the severity of antibiotic resistance or the specific risk of antibiotic overuse to the audience, nor did it accurately reflect the audience's existing knowledge of antibiotic resistance and current behaviours. As the severity of antibiotic resistance was not conveyed, the advertisement did not motivate a change in antibiotic-seeking behaviours or attitudes among most participants. The findings highlighted knowledge gaps among study participants; they were unaware of the importance of completing the antibiotic course, and they thought that humans develop resistance, not bacteria (Borgonha, 2019).
Effective communication plays a remarkable role in improving community awareness about important healthcare issues (Mathew, Sivaraman & Chandy, 2019). However, increasing awareness alone does not result in significant behaviour change unless the issues are addressed holistically. The messaging should be culturally relevant and adapted to the preferences of the target population. Even though a multi-stakeholder approach is preferred, specific leadership responsibilities should be assigned in the whole communication process. The role of champions and social influencers is essential in deciding the success of messaging, as their presence adds a layer of credibility to the whole exercise. In the case of AMR, it is pertinent that the messaging strategy should not be hijacked by commercial entities that have conflicting interests in the sector. Even though professional and industry groups can be allies in a potential communication campaign on AMR, care should be taken to ensure that the process is free of any conflicts of interest. More importantly, it is pertinent to accept that awareness is just one part of the entire behaviour change process, and the targets for the communication campaign should not be restricted to raising awareness.
Finally, the application of findings in surveys and associated factors related to antibiotic use and AMR should primarily generate public health interventions and target specific groups to make progress in solving AMR problems and maximise the use of surveys (Kosiyaporn et al., 2020).
Conclusion
Public service advertisements about antibiotics were effective in improving the appropriate use of antibiotics in Pangkalpinang City.
A Model for Estimating Passenger-Car Carbon Emissions that Accounts for Uphill, Downhill and Flat Roads
The geometric longitudinal slope profile of a given road significantly affects the carbon emissions of vehicles traversing it. This study was conducted to explore the carbon emission rules of passenger cars on various highway slopes. The law of conservation of mechanical energy, the first law of thermodynamics and vehicle longitudinal dynamics theory were utilized to determine the influence of slope design indicators on fuel consumption. The energy conversion, fuel consumption, and carbon emission models of passenger cars on flat straight, uphill, and downhill road sections were derived accordingly. Two types of passenger cars were selected for analysis. A field test was carried out to verify the proposed model, in which the vehicle maintained a cruise speed on a flat straight road, on uphill and downhill roads with equal gradient and mileage, and on a continuous longitudinal slope to gather fuel consumption data. The proposed model showed strong accuracy, with a maximum error of 9.97%. The main factor affecting the vehicle's carbon emissions on the continuous longitudinal slope was found to be the average gradient. For a round-trip longitudinal slope with a small gradient, the main factor affecting the vehicle's carbon emissions is speed: higher speed results in higher carbon emissions. The results of this study are likely to provide data support and a workable reference for low-carbon gradient design. Vehicle carbon emissions are independent of single longitudinal slope and vertical curve designs. This conclusion applies to cases where all downhill gradients in the continuous longitudinal slope are not greater than the balance gradient. On the uphill section, the main cause of carbon emissions is the elevation difference; its carbon emission rate is equal to that on the longitudinal slope with an average gradient corresponding to the elevation difference. If there are no continuous downhill road sections with a gradient greater than the balance gradient, the carbon emissions of the vehicle traveling downhill are equal to those on a downhill with the average gradient and equal mileage.
Introduction
Carbon emissions control is an environmental issue of widespread international concern. Longitudinal slope design indicators have a crucial effect on the carbon emitted by various vehicles. When driving conditions such as speed, vehicle type, vehicle load, road pavement conditions, and road environment remain unchanged, vehicles consume more fuel when moving uphill than on flat roads because they perform additional work to overcome height differences [1,2]. The relationship between longitudinal slope design indicators and vehicle carbon emissions has a great deal of practical significance and has been explored by many researchers. Researchers have found that targeting longitudinal slope roads may be an effective approach to reducing fuel consumption and carbon emissions [2,3]. Quantifying the carbon emissions of vehicles in the vertical profile and exploring the influence of design indicators on carbon emissions are important for the design of low-carbon vertical profiles. Governments and road operation management agencies may be able to directly control the total carbon emissions of two-way traffic flows on highways constituted by integral subgrades. The influence of design indexes on the carbon emissions of vehicles traversing flat, uphill, and downhill road sections must first be determined to support such control measures. Previous studies have, for example, modelled the driving force as a function of
Kanok and Barth [1] conducted a fuel consumption field test on uphill, downhill, and flat road sections with a maximum gradient of 6% in a passenger car maintaining a cruise speed of 96 km/h. The results showed that the vehicle's fuel consumption was greatest on the uphill section, followed by the flat road and finally the downhill section. The fuel economy on a plain route was also found to be better than on a mountainous route. The relationship between the fuel consumption measured and the road grade appeared to be linear within the grade range -2% to 2%, which implies that there would be no difference in fuel consumption among round-trip mountainous routes within such a range versus flat route. The fuel consumption of the vehicle on the flat road was 18% lower than the fuel consumption on the combination of uphill and downhill roads with a gradient of 6%. Due to lack of theoretical basis and insufficient experimental data for each road grade, this study did not reveal the extent to which the gradient affects vehicle fuel consumption. The gradient was not clearly defined when the fuel consumption was similar on the uphill-downhill combination and flat road. The extant research on vehicle fuel consumption and carbon emissions centers on driving behavior [11][12][13][14]17] and vehicle power loads [8][9][10]. The road gradient is known to more significantly influence vehicle carbon emission rate than the factors of rolling resistance, acceleration rate, frontal area, temperature [10]. Existing micro carbon emission models, such as CEME and MOVES, were established on large quantities of measured or simulated data [11][12][13][14]; arguably, the theoretical basis of these models is insufficient. The default parameters in these micro models are also inconsistent with China's specific conditions [15,16] and cannot be directly applied to the estimation of vehicle fuel consumption or carbon emissions in China.
Scholars have yet to clarify the impact of gradient on vehicle fuel consumption. It is also not yet empirically known whether vehicle fuel consumption is equivalent on flat roads versus uphill-downhill combination roads. Considering that traffic is usually two-way, it is necessary to explore this issue. It is impossible to secure the scientific mathematical models and theoretical guidance necessary for decision-making regarding innovative, low-carbon highway longitudinal slope designs without this information. The mathematical fuel consumption models established using vehicle dynamics theory are relatively authoritative, but they do not consider the mechanical resistance and fuel consumption rates of different fuel types [8,9]. It is worth considering that the research on vehicle fuel consumption and driving power under certain driving force conditions based on vehicle dynamics theory [8][9][10] does provide a theoretical basis for establishing a mathematical model of propulsive energy, fuel consumption, and carbon emissions for vehicles on longitudinal slopes in China. Passenger cars have good power performance and can generally travel uphill at the average speed of all vehicles on the road [19,20]. The fuel consumption of a motor vehicle is approximately minimized by operating at cruising speed [8][9][10]. In this study, the speed of a passenger car on the longitudinal slope road was assumed to be uniform in conformity with real-world vehicle operation conditions, and as is conducive to further research on the energy-saving and low-carbon highway longitudinal slope design.
The present study was conducted to establish a quantified model of carbon emissions of passenger cars on uphill, downhill, and flat roads. The carbon emissions rules of passenger cars on longitudinal slope sections were investigated to secure low-carbon and fuel-economy-related longitudinal slope design indicators. The energy conversion of a vehicle in overcoming height differences was determined based on vehicle longitudinal dynamics theory, the law of conservation of mechanical energy, and the first law of thermodynamics. Universally applicable fuel consumption and carbon emission models for passenger cars on highway longitudinal slopes were established to explore the corresponding carbon emission rules. A theoretical model was evaluated using field test data. A combination of theoretical and empirical research methods was deployed to ensure reliable conclusions.
The remainder of this paper is organized as follows. Section 2 discusses the law of conservation of mechanical energy and the first law of thermodynamics as utilized to assess the energy conversion of a vehicle on a longitudinal slope road. The conversion among energy, fuel consumption, and carbon emissions proposed by the Intergovernmental Panel on Climate Change (IPCC) is used to account for the carbon emissions of the passenger car as it traverses a longitudinal slope. The importance of the balance gradient is highlighted in this section as well. In Section 3, the accuracy of the passenger car's carbon emission model is evaluated by field tests conducted on flat, uphill, downhill, symmetrical slope combination, and continuous longitudinal slope road sections. The carbon emission rule of vehicles on the longitudinal highway slope is confirmed by the test results. Section 4 summarizes the key findings alongside a discussion of the limitations of this work.
Carbon Emission Model
In this section, the law of conservation of mechanical energy, the first law of thermodynamics, the vehicle longitudinal dynamics theory, and the accounting method proposed by IPCC for energy, fuel consumption, and carbon emissions are used to establish the carbon emission model of passenger car on uphill, downhill, and flat sections. Meanwhile, the proposed carbon emission model reveals the carbon emission rules of vehicles on the longitudinal section.
For the proposed energy conversion, fuel consumption, and carbon emission models, the following are assumed: a. The driver of the passenger car maintains a uniform speed. The American Association of State Highway and Transportation Officials (AASHTO) indicates that the passenger car is relatively unaffected by the road gradient when driving on a longitudinal slope due to its acceptable power performance. Almost all passenger cars can traverse 4% to 5% steep slopes at a uniform speed without deceleration [19]. The Japan Road Association states that the general gradient value ensures that passenger cars can travel uphill at a uniform speed; this speed is the average speed of all vehicles on the road [20]. The Chinese "Technical Standard of Highway Engineering" indicates that passenger vehicles generally run at a constant and relatively high speed on highways in a free flow pattern, do not interfere with each other, and do not significantly fluctuate in speed [21]. The assumption that the passenger vehicle maintains a uniform speed in traversing a longitudinal slope is consistent with real-world driving conditions in free flow patterns. Cruising can prevent the interference from speed fluctuations on carbon emissions, which is conducive to research on fuel economy and carbon reduction in highway longitudinal slope design; b. All frictional heat is absorbed by the brake drum when braking. The effects of other parts of the brake chamber during heat generation and heat dissipation are generally negligible [4,22], which is conducive to the establishment of an energy conversion model for vehicles on longitudinal slope sections. The positive work of gravity converts the gravitational potential energy into the kinetic energy of the vehicle, during grade descent. The kinetic energy is controlled by the brake operation, while the friction at the drum interface simultaneously generates heat [22].
Energy Conversion Model
As the vehicle runs, chemical energy in its fuel is converted into mechanical energy through a combustion process in the engine. Mechanical energy is converted into kinetic energy via the vehicle's transmission and tire systems. There is energy loss in every link, mainly in regards to engine combustion efficiency, transmission system efficiency, tire rolling resistance, aerodynamic resistance, and brake friction [4,5,23]. The vehicle dynamic conditions on a flat road, uphill road, and downhill road sections are different [3][4][5]. Vehicle dynamics theory was used in this study to analyze the force of vehicles on various road sections and to establish vehicle energy conversion models accordingly. The influence of the gradient on vehicle carbon emissions was observed by comparing the vehicle energy conversion model of the flat road and a combination of symmetrical slope road sections.
Flat Road
Aerodynamic resistance and rolling resistance act on the vehicle as the vehicle travels across a flat, straight road. The effective tractive effort of the vehicle is related to its engine power, travel speed, and transmission efficiency [4,5,24], as follows: where F i is the indicated tractive effort (N), which is equal to the total resistance of the vehicle [4,5]; F is the effective tractive effort. The transmission efficiency n tf is reflected in the mechanical losses of the various transmission system components (transmission, transmission shaft, differential, and drive shaft). As the vehicle travels at a uniform speed, the engine speed is in an optimal working state and the transmission efficiency remains constant. When predicting vehicle power performance, the transmission efficiency of the current common passenger car is usually estimated to be approximately 85% [4,24]. The vehicle is also subject to aerodynamic resistance during driving. The aerodynamic resistance is proportional to the dynamic pressure of the relative speed of the air [5,25], as follows: where A is the vehicle frontal area (m 2 ), i.e., the area of the car from the front to the rear [5]. ρ is the air density, generally 1.2258N·s 2 /m 4 . The aerodynamic resistance coefficient C D is related to the shape of the vehicle. The air resistance coefficients of various shapes of vehicles can be obtained from related documents on vehicle characteristics [5,25]. V is the relative speed of the vehicle and air considering the angle between the wind and the driving direction (m/s). On the highway longitudinal slope, the gradient is very small. It can be assumed that the length of the slope is approximately equal to the horizontal distance. The work done by aerodynamic resistance is as follows: Rolling resistance is related to tire type, road type, and speed [22,24]. It can be approximated as follows: where v is the vehicle speed (km/h). The rolling coefficient C r is related to the pavement type and condition, typical values are available in the literature of Rakha [24]. Rolling resistance constants C 2 and C 3 are related to tire type and can be assigned values as determined empirically. For the mixed tires commonly used in passenger cars, C 2 = 5.3 and C 3 = 0.044 [22]. The work done by rolling resistance is:
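Equations (1)-(5) are not reproduced in this text, so as a concrete stand-in the sketch below implements the standard aerodynamic and rolling resistance forms described above and sums their work over a distance. The arrangement of the rolling-resistance constants (C2 as the constant term, C3 as the speed coefficient) and the example vehicle mass are illustrative assumptions, not the paper's exact formulation.

```python
RHO = 1.2258    # air density (N*s^2/m^4), value given in the text
G = 9.8066      # gravitational acceleration (m/s^2)

def aerodynamic_resistance(c_d: float, frontal_area: float, v_ms: float) -> float:
    """Drag force (N), proportional to dynamic pressure: 0.5 * rho * Cd * A * V^2."""
    return 0.5 * RHO * c_d * frontal_area * v_ms ** 2

def rolling_resistance(mass_kg: float, c_r: float, v_kmh: float,
                       c2: float = 5.3, c3: float = 0.044) -> float:
    """Rolling resistance (N); the term arrangement is an assumed Rakha-style form."""
    return G * mass_kg * c_r * (c2 + c3 * v_kmh) / 1000.0

def flat_road_resistance_work(mass_kg, c_d, area, c_r, v_kmh, distance_m):
    """Wheel-side work (J) against drag and rolling resistance at cruise speed."""
    f_total = (aerodynamic_resistance(c_d, area, v_kmh / 3.6)
               + rolling_resistance(mass_kg, c_r, v_kmh))
    return f_total * distance_m

# Example: a Malibu-like car (A = 1.8 m^2, Cd = 0.35, assumed 1500 kg) cruising
# 1 km at 80 km/h on good asphalt (Cr = 1.25).
print(flat_road_resistance_work(1500, 0.35, 1.8, 1.25, 80, 1000))
```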
Uphill
The tractive effort of vehicles is affected by driving resistance on uphill roads. The driving resistance includes slope resistance, rolling resistance, and aerodynamic resistance. The slope resistance is the component of the vehicle's weight parallel to the road surface [4]. The vehicle's gravity performs negative work parallel to the road surface, as indicated in Equations (6) and (7).
where i is the road gradient (%).
Downhill
A portion of the gravity parallel to the road surface plays a positive role as the vehicle travels downhill. Rolling resistance, aerodynamic resistance, and braking force do negative work. It is worth noting that when the vehicle travels uphill along a straight trajectory, the torque direction of the engine is transmitted from the engine to the wheels. When the vehicle goes downhill, conversely, gravity increases the kinetic energy of the vehicle. If the accelerator pedal is not pressed and the gear is still in the gear position, the engine is in the reversed state and the torque direction is opposing. The reverse tractive force of the transmission system generates resistance to the vehicle [4,5]. A greater gradient produces greater gravitational potential energy. When the gradient is large, the vehicle will accelerate even if the accelerator is not pressed and the vehicle can slide downhill in the gear position. A portion of the gravitational potential energy offsets the energy lost by the transmission system [4]. Chang [9] analyzed a case in which the vehicle requires no braking to maintain a constant speed on the downgrade. The inherent forces (rolling resistance, wind resistance) and gradient force cancel one another in this case and no propulsive work is required. However, the transmission resistance was not considered.
There are three cases of driving behavior which emerge as the vehicle maintains a cruise speed across various gradients.
a. In Case I, when the accelerator is not pressed, the vehicle can slide downhill in the gear position at the cruise speed. The gravitational potential energy offsets the negative work done by rolling resistance, wind resistance, and reverse tractive force [4,5,9]. In this case, the vehicle meets the following energy relationship: where W t ' indicates the energy loss caused by reverse tractive force (J). When the vehicle does not slide downhill in the gear position, the reverse traction does no work and W t ' = 0 [4]. The balance gradient corresponding to the cruise speed can be calculated according to the specific driving road conditions of the vehicle.
b. In Case II, when the work of driving resistance and aerodynamic resistance is greater than the gravitational potential energy, the driver must step on the accelerator to provide additional energy and ensure that the vehicle maintains the cruise speed. The energy relationship is as follows: c. In Case III, when the work of driving resistance and aerodynamic resistance is less than the gravitational potential energy, the remaining gravitational potential energy is converted into kinetic energy, which would increase the vehicle's speed. In order to maintain the cruise speed, the driver must step on the brake to ensure that the kinetic energy does not increase. The braking force produces brake drum heat Q h [22]. According to the first law of thermodynamics, the energy relationship in this case is as follows: The energy loss on the downhill road is theoretically equal to the difference between the gravitational potential energy and the energy consumed by driving resistance. Compared to the propulsive energy needed to travel uphill at the balance gradient, the increase in propulsive energy when traveling uphill (which overcomes the increased elevation difference) is equal to the energy loss of the vehicle when traveling downhill.
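A small sketch of the balance-gradient idea follows: the gradient at which the grade force just offsets aerodynamic plus rolling resistance at the cruise speed, together with a helper that classifies which of the three downhill cases applies. Reverse-transmission drag is neglected, and the resistance forms and example parameters are the same illustrative assumptions used in the flat-road sketch above.

```python
RHO = 1.2258    # air density (N*s^2/m^4)
G = 9.8066      # gravitational acceleration (m/s^2)

def total_resistance(mass_kg, c_d, area, c_r, v_kmh):
    """Aerodynamic plus rolling resistance (N) at the cruise speed (assumed forms)."""
    v_ms = v_kmh / 3.6
    f_aero = 0.5 * RHO * c_d * area * v_ms ** 2
    f_roll = G * mass_kg * c_r * (5.3 + 0.044 * v_kmh) / 1000.0
    return f_aero + f_roll

def balance_gradient(mass_kg, c_d, area, c_r, v_kmh):
    """Gradient (decimal) at which the grade force equals the total resistance."""
    return total_resistance(mass_kg, c_d, area, c_r, v_kmh) / (mass_kg * G)

def downhill_case(gradient, mass_kg, c_d, area, c_r, v_kmh, tol=1e-4):
    """Classify the downgrade driving behaviour relative to the balance gradient."""
    i_b = balance_gradient(mass_kg, c_d, area, c_r, v_kmh)
    if abs(gradient - i_b) <= tol:
        return "Case I: coast in gear, no throttle or brake"
    if gradient < i_b:
        return "Case II: throttle needed to hold the cruise speed"
    return "Case III: braking needed, energy lost as brake-drum heat"

# Example: a Malibu-like car (assumed 1500 kg) at 90 km/h on good asphalt (Cr = 1.25)
# gives a balance gradient of roughly 3%, close to the measured values reported later.
print(round(balance_gradient(1500, 0.35, 1.8, 1.25, 90), 3))
```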
Symmetrical Slope Combination Road
The energy of a combination of symmetrical uphill-downhill roads and a flat road was compared to explore the impact of the road gradient on vehicle carbon emissions. The uphill-downhill combination is required to be symmetrical in gradient and slope length, and the slope length is equal to the flat road length. The energy provided by the effective tractive effort of the vehicle on the flat, uphill, and downhill road sections in this case is W 1 , W 2 , W 3 , respectively. The propulsive energy formulations of the vehicle on flat and uphill sections are expressed as follows: The vehicle's energy conversion formulations on downhill road sections in Case I, Case II, and Case III are reflected in Equations (13) and (14), Equation (15), and Equations (16) and (17), respectively.
The respective energy relationships of the vehicle on the three types of symmetrical slope combination roads are as follows.
Equations (11) and (19) show that the vehicle carbon emissions of the longitudinal slope combination road II and the flat road are equivalent. The difference between energy consumed by the transmission system and energy loss caused by reverse tractive force was usually ignored in derivation of vehicle longitudinal power [4]. This indicates that the carbon emissions of the symmetrical slope combination road I and the flat road are basically equivalent. The energy provided by the engine on symmetrical slope combination roads I, II and the flat road is basically equal. The difference in fuel consumption between the uphill section and flat road is equal to the difference in fuel consumption between the flat road and downhill section, as shown in Equations (11), (18) and (19).
In the symmetrical slope combination road III, the gradient is greater than the balance gradient. The energy loss during braking on the downhill section is equal to the difference in energy provided by the tractive effort between the uphill section and the flat road, as shown in Equations (11) and (20).
When the gradient of the longitudinal slope is not greater than the balance gradient, the vehicle emits roughly the same amount of carbon on the symmetrical slope combination road as on the flat road. When the gradient of the downhill slope exceeds the balance gradient, a steeper gradient and longer grade length cause more energy to be lost through braking. This energy loss is equal to the difference in propulsive energy between the uphill section and a flat road of equal mileage. In other words, on symmetrical slope combination roads the brake energy loss corresponds to the excess propulsive energy required on the uphill section compared with a flat road of equal mileage. This is not conducive to energy saving or emission reduction for two-way traffic on the longitudinal slope. The balance gradient is thus the minimum gradient that affects the fuel consumption and carbon emissions of two-way traffic, and gradients greater than the balance gradient should be avoided in low-carbon longitudinal slope design.
Continuous Longitudinal Slope
The energy on continuous longitudinal slope sections and single longitudinal slope sections was compared to explore the carbon emission rules of passenger vehicles on the vertical profile. The propulsive energy corresponding to the continuous uphill road section can be calculated by Equations (21) and (22). This energy is equal to the propulsive energy on the slope with the average gradient and equal mileage, as reflected in Equation (23).
When the gradient of each downhill section on the continuous downhill road is not greater than the balance gradient, the propulsive energy can be calculated by Equations (24) and (25). The energy is equal to the propulsive energy on the slope with the average gradient and equal mileage as derived by Equation (26): where i j , x j represent the gradient and length of each slope section, respectively. The number of single slope sections on the continuous longitudinal slope is n; i is the average gradient (%) of the continuous longitudinal slope.
A quadratic parabola is usually used as a vertical curve to connect the two longitudinal slopes at the turning point of the slope sections (Equation (27)) [26]. It is necessary to consider the vertical curve section in predicting the vehicle's carbon emissions on longitudinal slopes.
where x is the lateral distance, that is, the distance from any point to the starting point of the vertical curve; i is the gradient at any point of the vertical curve, i = x/R + i 1 . At the beginning of the vertical curve, x = 0 and i = i 1 . At the end of the vertical curve, x = L and i = i 1 + L/R = i 2 . i 1 and i 2 are the gradients of the longitudinal slopes connected to the ends of the vertical curve, and L is the length of the vertical curve.
The vertical curve is divided into n longitudinal slopes each with length of ∆x. The fuel consumption of the vehicle in the vertical curve section is the sum of the fuel consumption of the vehicle on the longitudinal slope of different gradients corresponding to each step, as shown in Equation (28).
The vertical curve section can be regarded as the connection of multiple slope sections. The energy of vehicles on the vertical curve satisfies the above energy conversion relationship.
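A minimal sketch of this discretisation is shown below: the parabolic curve is split into n short constant-gradient segments whose point gradients follow i = i1 + x/R, and per-segment results can then be summed as in Equation (28). The numerical values in the example are illustrative, not taken from the test roads.

```python
def vertical_curve_segments(i1: float, curve_length: float, radius: float, n: int):
    """Approximate a parabolic vertical curve by n constant-gradient segments.
    Returns (midpoint gradient, segment length) pairs, with i = i1 + x / R."""
    dx = curve_length / n
    return [(i1 + (k + 0.5) * dx / radius, dx) for k in range(n)]

# Example: a 400 m vertical curve with R = 10,000 m joining grades of 1% and 5%
# (since i2 = i1 + L/R = 0.01 + 0.04 = 0.05), split into 8 segments of 50 m.
for gradient, length in vertical_curve_segments(0.01, 400, 10_000, 8):
    print(f"gradient = {gradient:.4f}, length = {length:.0f} m")
```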
For continuous longitudinal slopes with a fixed height differences, when the gradient of each downhill section is not greater than the balance gradient, the propulsion energy on the continuous longitudinal segment is equal to the propulsive energy on the slope with average gradient and equal mileage regardless of the design index (gradient and slope length) of each slope section.
Conversion Among Energy, Fuel Consumption, Carbon Emissions
Fuel is the source of propulsive energy for vehicles. In recent years, China's motor vehicles have been required to meet the National Fifth-phase Motor Vehicle Pollutant Emission standards [27]. Gasoline vehicles are generally equipped with a spark ignition gasoline engine. Moreover, the characteristics of fuels supplied to gasoline vehicles must meet the National Fifth-phase Standards of Gasoline for Motor Vehicles [28]. The density of gasoline (p) for types 92#, 95#, and 98# is 0.725 kg/L, 0.737 kg/L, and 0.753 kg/L, respectively [29]. The average lower calorific value of gasoline (A) is 44,800 kJ/kg. The calorific value of one liter of gasoline at these densities is therefore 32,480 kJ/L, 33,017.6 kJ/L, and 33,734.4 kJ/L, respectively. According to the first law of thermodynamics, energy is conserved when thermal energy and mechanical energy are converted. The fuel utilization rate is the proportion of the total fuel energy that provides driving energy, as presented in Equation (29). The fuel utilization rate of a spark ignition gasoline engine, n m , is generally 27% [5,10,30]. The remaining energy is converted into heat and lost. The fuel consumption of gasoline of different densities in China can be calculated according to Equations (30)-(32).
where, Fuel indicates fuel consumption (L).
The IPCC accounting method [31] was adopted here to observe the relationship between vehicle fuel consumption and carbon emissions, as shown in Equation (33). This prevented measurement error otherwise caused by various factors when directly detecting gas from the exhaust pipe of the test vehicle. The conclusions of this study are comparable with extant results from similar work in other countries; the universal applicability of the research results presented here is guaranteed [15,16]. Data relevant to the extant vehicle fuel characteristics in China can be found in the "National Communication on Climate Change of China" [32,33]. The potential carbon emission factor (C) is 18.9 (t/TJ), the carbon conversion factor(K) is 44/12, and the carbon oxidation rate (B) is 98%. It can be obtained that the carbon emission from burning one liter of gasoline type 92#, 95#, and 98# is 2.206 kg/L, 2.242 kg/L, and 2.291 kg/L, respectively, as shown in Equations (34)-(36).
where CO 2 represents carbon emissions (kg). The vehicle engine burns a certain amount of fuel to provide the effective tractive effort to overcome aerodynamic resistance and rolling resistance while traversing a flat road [5,22]. The effective tractive effort of the vehicle on the uphill section must overcome slope resistance, aerodynamic resistance, and rolling resistance [4,5]. Greater tractive effort results in greater fuel consumption and more carbon emissions [3]. On the downhill section, when the vehicle's gravitational potential energy is equal to or greater than the energy consumed by aerodynamic resistance and rolling resistance, the vehicle does not need to provide any additional tractive effort and no fuel is consumed or carbon emitted beyond idling. When aerodynamic resistance and rolling resistance consume more energy than the gravitational potential energy, the vehicle must provide tractive effort to maintain the cruise speed and consumes a certain amount of fuel while emitting a certain amount of carbon [4,5,9]. A constant idle fuel consumption is also present while the vehicle is operating [3].
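As a worked check of the per-litre figures quoted above, the sketch below applies the IPCC-style chain (energy content, carbon emission factor, 44/12 conversion, oxidation rate) to the three gasoline grades; it reproduces the stated 2.206, 2.242 and 2.291 kg CO2 per litre.

```python
CARBON_FACTOR_T_PER_TJ = 18.9   # potential carbon emission factor C (t C / TJ)
K = 44 / 12                     # carbon-to-CO2 conversion factor
OXIDATION_RATE = 0.98           # carbon oxidation rate B

def co2_per_litre(calorific_kj_per_litre: float) -> float:
    """CO2 (kg) from burning one litre of gasoline with the given energy content."""
    energy_tj = calorific_kj_per_litre * 1e-9            # kJ -> TJ
    carbon_tonnes = energy_tj * CARBON_FACTOR_T_PER_TJ   # tonnes of carbon
    return carbon_tonnes * K * OXIDATION_RATE * 1000     # tonnes CO2 -> kg CO2

for grade, kj_per_litre in (("92#", 32480.0), ("95#", 33017.6), ("98#", 33734.4)):
    print(grade, round(co2_per_litre(kj_per_litre), 3), "kg CO2/L")
# -> 92# 2.206, 95# 2.242, 98# 2.291
```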
Considering the heat generation of different fuel types, transmission efficiency and fuel utilization rate, the fuel consumption of different fuel types can be determined as the engine provides a certain driving energy, as indicated in Equations (37)-(39). When fuels of different densities produce equal driving energy, they correspond to uniform quantities of carbon emissions, as demonstrated in Equations (40)-(42).
where, Q is the constant idle fuel rate (L/h). t is the travel time (h).
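A minimal sketch of this conversion follows, assuming, as in the text, a 27% fuel utilisation rate, 85% transmission efficiency and 92# gasoline (32,480 kJ/L); the wheel-side energy, idle rate and travel time in the example are placeholder values. Since Equations (37)-(42) are not reproduced here, this is an illustrative approximation rather than the paper's exact formulation.

```python
FUEL_UTILISATION = 0.27        # spark-ignition engine fuel utilisation rate n_m
TRANSMISSION_EFF = 0.85        # passenger-car transmission efficiency n_tf
CALORIFIC_KJ_PER_L = 32480.0   # 92# gasoline energy content
CO2_KG_PER_L = 2.206           # 92# gasoline carbon emissions per litre

def fuel_and_co2(wheel_energy_kj: float, idle_rate_l_per_h: float = 0.0,
                 travel_time_h: float = 0.0):
    """Fuel (L) and CO2 (kg) for a given wheel-side propulsive energy plus idle fuel."""
    fuel_l = wheel_energy_kj / (FUEL_UTILISATION * TRANSMISSION_EFF * CALORIFIC_KJ_PER_L)
    fuel_l += idle_rate_l_per_h * travel_time_h
    return fuel_l, fuel_l * CO2_KG_PER_L

# Example: 40,000 kJ of wheel-side work plus 0.8 L/h of idle fuel over 0.1 h.
print(fuel_and_co2(40_000, idle_rate_l_per_h=0.8, travel_time_h=0.1))
```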
Model Verification
A field test can directly reflect the vehicle's carbon emissions under actual road conditions. A field test was therefore carried out in this study to validate the proposed theoretical carbon emission model. The carbon emission rules for longitudinal slope sections described above were also validated against the real-world data from the field test.
Field Test
Typical flat road, single slope, and continuous longitudinal slope road sections were selected to gather vehicle speed and fuel consumption data. The IPCC carbon emission conversion methodology was adopted to convert the fuel consumption into carbon emissions. There are different balance gradients for vehicles with different loads [9]. In the balance gradient test, the vehicle was allowed to slide downhill in gear with the accelerator released, in order to measure the speed at which the vehicle could cruise. In the fuel consumption test, the driver was required to enable the cruise control function of the test vehicle to maintain a constant speed on the test roads. These tests formally began once the vehicle speed stabilized. To ensure comparability across the test results, the vehicle's fuel consumption and carbon emissions are expressed here per 100 km traveled.
Test Instrument and Vehicle
The explanatory variable in the field test is velocity, and the dependent variables are the vehicle's fuel consumption and carbon emissions. An Ecan analyzer was used to analyze the raw CAN bus data from the passenger car and thus derive the velocity and fuel consumption data. An anemometer monitoring instrument (Xima AS8336, Guangdong, China) was used to measure wind speed. The weather was mild, with stable and relatively slow wind. As breezes below 6.0 m/s have almost no effect on the movement of ground objects, the wind speed was restricted to 6.0 m/s and allowed to fluctuate within a range of 2.0 m/s during the test [5,25]. The average measured wind speed was taken as the wind speed value. An AxleLightRLU11 vehicle classification statistical instrument was placed on the test roads prior to the test to identify the passenger-car types with the largest traffic volume. Based on the traffic volume data on the test road and the development prospects of passenger cars, two types of common passenger cars were selected as the dominant vehicle types. One small (Chevrolet Malibu) and one large (Chevrolet Captiva SUV) passenger car were selected for analysis. The Malibu (Vehicle I) is equipped with a four-stroke naturally aspirated gasoline engine, 154 horsepower, 1.5 L displacement, 1.8 m 2 frontal area, and an aerodynamic resistance coefficient of 0.35 [23]. The Captiva (Vehicle II) is equipped with a four-stroke naturally aspirated gasoline engine with a power of 167 hp, a displacement of 2.4 L, a frontal area of 2.0 m 2 , and an aerodynamic resistance coefficient of 0.40 [23]. Ten small passenger cars and ten large passenger cars were used for field testing to ensure data reliability. The characteristics of the test vehicles were the same as those of the dominant vehicle types. The test vehicles were common and economical passenger cars, and both were supplied with 92# gasoline [15,29]. In both the balance gradient and fuel consumption field tests, every group of data was collected and averaged over at least 30 runs on the longitudinal slope with the specified gradient.
Route Selection
Drivers were required to maintain a cruise speed (from 40 to 120 km/h, in increments of 10 km/h) throughout the test. According to the road grade and speed limit regulations, the vehicle was required to maintain a cruise speed from 60 km/h to 120 km/h on the flat straight road sections of the Xibao Expressway, and a uniform speed from 40 km/h to 60 km/h on the flat straight road sections of Provincial Road 306. Provincial Road 306 is a secondary road. The length of the flat straight road sections exceeds 1.5 km, and the absolute values of their gradients range from 0.3% to 0.5%. The test vehicle's fuel consumption was measured on the flat road as it made a round trip; the average value was taken as the fuel consumption of the flat road. Vehicle fuel consumption tests for single and continuous longitudinal slopes were carried out on the Hanzhong to Mianxian first-class road, the Xunyi to Qiupotou second-class road, and the Xianxu Expressway.
The Xibao Expressway and Xianyang-Xunyi Expressway are composed of asphalt pavement in excellent condition with rolling resistance coefficients of 1.25 and wind speeds of 1.0 m/s. The Hanzhong to Mianxian first-class road is also asphalt pavement but in fair condition, with rolling resistance coefficient of 1.5 and wind speed of 3.0 m/s. Provincial Road 306 and the Xunyi to Qiupotou second-class road are asphalt pavement in poor condition with rolling resistance coefficient of 2.5 and wind speed of 6.0 m/s.
Drivers
Driver performance varies depending on personal driving preferences and experience, so drivers were screened before the field test to ensure they were sufficiently experienced and familiar with the road. Each driver was given 10 days of training and testing to prevent any incorrect driving operations from affecting the test results. The driver was required to maintain a cruise speed normally, avoid any aggressive driving behavior, and maintain a safe distance from the vehicle in front. The driving behavior of the test vehicle was not affected by the other vehicles on the road. Twenty male drivers with 15-20 years of driving experience passed the test. Ten drivers were assigned to drive the small passenger cars and the remaining ten to test the large passenger cars.
Other Factors
Other factors (e.g., traffic flow patterns, pavement condition, emergencies) were controlled throughout the test to prevent undue effects on vehicle carbon emissions. Traffic was in a free-flow pattern throughout the test with no emergencies occurring. The road surface was consistent and free of deformation or damage.
Test Results
Test results were gathered for the flat, single longitudinal slope, and continuous longitudinal slope road sections. The characteristics of the vehicles, fuel quality, and road conditions during the field tests were considered in testing the proposed model's ability to predict the carbon emissions of vehicles on the test roads. Predicted and measured values were compared to determine the accuracy of the established model and the validity of the proposed carbon emission rules.
Flat Road
Different road grades have different speed limit regulations. Vehicle I and vehicle II were required to maintain a cruise speed from 40 km/h to 80 km/h on the flat straight sections of Provincial Road 306 (Flat Road I) and maintain a cruising speed of 60 km/h to 120 km/h on the flat straight sections of Xibao Expressway (Flat Road II). Figure 1 shows the vehicle speed and fuel consumption data collected in the field test on the flat road sections. A higher vehicle speed appears to produce greater fuel consumption and higher carbon emissions. The field test data on flat road sections was used for model verification. The maximum relative error values of the predicted and measured fuel consumption values of the two types of test vehicles were 7.89% and 8.33%, respectively. This indicates that the proposed carbon emission model for the passenger car traversing the flat road is highly accurate and reliable.
Single Slope Road
The balance gradients satisfying Vehicles I and II at 60 km/h, 70 km/h and 80 km/h are 3.6% and 4.0%, 4.0% and 4.5%, and 4.5% and 5.0%, respectively, on the longitudinal slope sections of the Xunyi to Qiupotou secondary road (Road I). The balance gradients are 2.8% and 3.0%, and 3.2% and 3.5%, respectively, for maintaining cruise speeds of 70 km/h and 80 km/h on the longitudinal slope sections of the Hanzhong to Mianxian Road (Road II). On the longitudinal slopes of the Xianyang-Xunyi Expressway (Road III), the test vehicles were driven at 100 km/h and 90 km/h, respectively, for a balance gradient of 3.5%. The balance gradients are 3.0% for Vehicles I and II maintaining cruise speeds of 90 km/h and 80 km/h, respectively.
The balance gradients of different grades of road were calculated according to Equation (13) as shown in Table 1. The test results are consistent with the calculated balance gradient. The balance gradient from shallow to steep grades versus velocities is basically consistent with the research results of Chang and Morlok [9]. The balance gradient can be used as a boundary condition to explore the differences in carbon emission levels between flat and round-trip longitudinal road sections. A round-trip test was run on the single slopes of Road I and Road III as shown in Table 2. Between the predicted and measured carbon emissions on the single slope, the maximum relative errors of Vehicle I and Vehicle II are 9.97% and 9.33%, respectively. The carbon emission model on the single longitudinal slope thus has a high reliability. The measured carbon emissions indicate that the main factor affecting the carbon emissions of both vehicles on the round-trip longitudinal slope is speed. A higher speed indeed yields higher carbon emissions, which is consistent with the carbon emission rule on the flat road.
As Vehicles I and II traveled downhill on a gradient close to the balance gradient (Nos. 9-12 and 13-16, respectively), the data shows that the energy consumed by the running resistance is balanced with the gravitational potential energy and no carbon is emitted. Only idling fuel consumption is provided, and the carbon emissions generated are equal to those under idling conditions. When the two types of test vehicles traveled downhill on a gradient less than the balance gradient (Nos. 1-8 and 1-12, respectively), the test drivers needed to slightly step on the gas pedal to maintain the cruise speed, resulting in a small amount of carbon emissions. When the gradient was not greater than the balance gradient, the average vehicle's carbon emissions on the symmetrical slope combination were basically equivalent to the vehicle's carbon emissions on a flat section with equal mileage. Compared to the vehicle's carbon emissions on the flat road, the reduction in carbon emissions on the downhill section of the symmetrical slope combination can offset the increased emissions attributable to the uphill section alone. The tested carbon emission data of vehicles on flat road and on the round-trip longitudinal slope sections in this case are consistent with this rule with a maximum relative error of 9.81%.
As Vehicle I and Vehicle II traversed a longitudinal slope with a gradient greater than the balance gradient (Nos. 13-20 and 17-20, respectively), the energy consumed by aerodynamic resistance and rolling resistance was lower than the gravitational potential energy, so the vehicle's speed tended to increase. As the driver stepped on the brake pedal to maintain the cruise speed, the brake drum friction generated heat, resulting in a certain amount of energy loss. More propulsive energy is needed on uphill sections with the same gradients as these downhill sections, and a portion of that propulsive energy numerically offsets the energy lost to braking on the downhill. This results in more carbon emissions during uphill climbing. The field test results of this study support this inference. The balance gradient can be used to determine whether energy loss occurs during downhill driving, which is directly related to the low-carbon design of longitudinal slope sections composed of an integral subgrade.
The above conclusions indicate that restricting the road gradient below the balance gradient is conducive to energy savings and CO 2 emissions reduction on round-trip longitudinal road sections. Additionally, the results support the fuel consumption rules obtained from field trials conducted by Kanok and Barth [1] that plain routes have better fuel economy than mountainous routes.
Continuous Longitudinal Slope
Passenger cars usually travel longitudinal slopes at a uniform speed, and the speed is the average speed of all vehicles on the road [19,20]. A continuous longitudinal slope of Road III was selected for subsequent analysis. The average speed on Road III is 100 km/h as-measured by the traffic volume survey method. The maximum gradient is 3.5% and is equal to the measured balance gradient corresponding to the average speed. It shows that the design of this continuous longitudinal slope section meets relevant low-carbon requirements in effect. To clearly observe the various driving behaviors that emerge as a vehicle travels downhill, Vehicle I was required to maintain a cruise speed of 90 km/h and its fuel consumption was gathered on the round-trip continuous longitudinal slope. The vertical profile for the continuous longitudinal slope as well as predictions of vehicle propulsion, energy, fuel consumption, and carbon emissions are shown in Figure 2. R in Figure 2a represents the radius (m) of the vertical curve. The brake energy loss in certain road sections and the equivalent fuel consumption corresponding to the energy loss were calculated according to the proposed model.
The main indicator affecting vehicle carbon emissions on a one-way continuous longitudinal slope is the average gradient. Vehicle carbon emissions are independent of the single longitudinal slope and vertical curve designs. This conclusion applies to cases where all downhill gradients in the continuous longitudinal slope are not greater than the balance gradient. On the uphill section, the main cause of carbon emissions is the elevation difference; the carbon emission rate is equal to that on a longitudinal slope with the average gradient corresponding to the elevation difference. If no sections of the continuous downhill road have a gradient greater than the balance gradient, the carbon emissions of the vehicle traveling downhill are equal to those on a downhill with the average gradient and equal mileage. When the average gradient of the continuous downhill section is greater than the balance gradient, there must be certain slope sections with a gradient greater than the balance gradient. Even if the average gradient of the continuous downhill slope is less than the balance gradient, there may still be certain slope sections with gradients greater than the balance gradient. In a slope section whose gradient exceeds the balance gradient, vehicle braking produces energy loss when traveling downhill. The difference in propulsive energy between the round-trip continuous longitudinal slope and a flat road with equal mileage is equal to the energy loss from vehicle braking as it travels downhill. The shaded area in Figure 2b illustrates this rule.
The fuel consumption of the vehicle on the round-trip continuous longitudinal slope is not less than the vehicle's fuel consumption on a flat road with equal mileage. The difference in fuel consumption between them is equal to the fuel consumption corresponding to the energy loss of the vehicle braking as it travels downhill. There is no energy loss when all gradients on the continuous longitudinal slope are below the balance gradient. In this case, the vehicle has the lowest carbon emissions on the round-trip continuous longitudinal slope, which is equivalent to the carbon emissions on a flat straight road with equal mileage.
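To make the round-trip energy argument concrete, the following Python sketch estimates the brake energy lost on downhill segments steeper than the balance gradient and converts it to equivalent extra fuel and CO 2 . It is not the authors' implementation: the assumption that road resistance exactly balances the downhill gravity component at the balance gradient, as well as the vehicle mass, heating value, powertrain efficiency, and CO 2 factor, are all illustrative.

```python
# Minimal sketch (not the paper's model): extra round-trip fuel/CO2 caused by
# downhill segments whose gradient exceeds the balance gradient.
G = 9.81  # gravitational acceleration, m/s^2


def brake_energy_loss(downhill_segments, i_bal, mass_kg):
    """downhill_segments: list of (length_m, gradient) pairs.
    Assumes resistance exactly balances gravity at the balance gradient i_bal,
    so the surplus gravity work on steeper segments is dissipated by braking."""
    loss_j = 0.0
    for length_m, grade in downhill_segments:
        if grade > i_bal:
            loss_j += mass_kg * G * length_m * (grade - i_bal)
    return loss_j


def extra_fuel_and_co2(loss_j, lhv_j_per_l=32.0e6, eta=0.30, co2_kg_per_l=2.3):
    """Convert brake-energy loss to equivalent extra fuel (L) and CO2 (kg);
    heating value, efficiency, and CO2 factor are assumed, not from the paper."""
    fuel_l = loss_j / (lhv_j_per_l * eta)
    return fuel_l, fuel_l * co2_kg_per_l


downhill = [(800, 0.045), (1200, 0.030), (600, 0.050)]  # (length m, gradient)
loss = brake_energy_loss(downhill, i_bal=0.035, mass_kg=1600)
fuel, co2 = extra_fuel_and_co2(loss)
print(f"brake loss {loss / 1e6:.2f} MJ, extra fuel {fuel:.3f} L, extra CO2 {co2:.3f} kg")
```

Only the two segments steeper than the 3.5% balance gradient contribute to the loss; within the balance gradient range, the sketch returns zero extra fuel, matching the rule stated above.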
The propulsive energy, fuel consumption, and carbon emissions of the vehicle are proportional to each other. The specific proportional relationship can be observed in the proposed model and is shown in Figure 2c. The rules of propulsive energy, fuel consumption, and carbon emissions of the vehicle are consistent.
Discussion
A theoretical carbon emission model for passenger cars traveling at cruising speed on uphill, downhill, and flat roads was established in this study based on the law of conservation of mechanical energy, the first law of thermodynamics and the vehicle longitudinal dynamics theory. The impact of highway longitudinal profile design indicators on the vehicle's carbon emissions was also revealed. Other findings can be summarized as follows.
1. When all gradients of the downhill in the continuous longitudinal slope are below the balance gradient, then for a fixed height difference, the vehicle carbon emissions are equal to those on the slope with the average gradient and equal mileage, regardless of the design features of the vertical profile. There would be no difference in CO 2 emissions among different routes with different road grade profiles within the balance gradient range.
2. When the average gradient of the continuous longitudinal slope is greater than the balance gradient, the gradient of certain longitudinal slope sections must be greater than the balance gradient. Braking during downhill travel on these slope sections causes energy loss, and excess carbon emissions are produced in the uphill direction on these sections compared to a slope at the balance gradient. The gradient should be kept below the balance gradient to avoid this unnecessary energy loss on the downhill and the excess carbon emissions on the uphill. When the gradient of a longitudinal slope with an integral subgrade is larger than the balance gradient, a longer slope length results in greater cumulative carbon emissions, which is not conducive to energy savings or carbon emission abatement.
3. Under the same driving conditions (e.g., driving speed, road conditions, vehicle type, road environment, traffic flow), driving downhill is generally more fuel-efficient than driving uphill. This perception is correct for a one-way trip on a longitudinal slope; for a round trip, however, the picture is more complex. According to the law of conservation of mechanical energy, the first law of thermodynamics, the vehicle longitudinal dynamics theory, and the results presented in this paper, when all gradients of the longitudinal slope sections are not greater than the balance gradient, the carbon emissions of the vehicle on the round-trip longitudinal slope are equal to those on a flat road. When the gradient of a single slope section in the continuous longitudinal slope exceeds the balance gradient, the energy lost to braking while traveling downhill on that section equals the additional propulsive energy required for the round trip on the longitudinal slope compared with a flat road of equal mileage. This is not conducive to energy saving or emission reduction for two-way traffic on the longitudinal slope.
4. The road gradient should be kept below the balance gradient in the vertical profile design to save energy and minimize CO 2 emissions from two-way traffic. In a low-carbon design tailored to the longitudinal slope and based on the balance gradient, the carbon emissions of vehicles on the two-way longitudinal slope section are approximately the same as those on an equal-mileage flat road. Controlling all gradients of the longitudinal slope can therefore achieve fuel economy and low-carbon performance for vehicles traveling at cruising speed on round-trip continuous longitudinal slope sections.
This study centers on the carbon emission rules of passenger cars on longitudinal slopes. The carbon emission model was established in accordance with real-world driving conditions, but the vehicles were required to maintain a uniform operating speed in the field test; the speed was idealized as uniform to prevent carbon emissions from being confounded by speed fluctuations. The proposed model thus has limited ability to forecast the carbon emissions of vehicles whose speed fluctuates during travel, and this factor should be considered in future studies to represent real-world driving conditions more accurately. Additionally, only two common passenger cars were selected as typical vehicle types for testing, so the proposed model may not reflect the carbon emissions of other vehicles. In the future, the proposed models may be modified to account for specific parameters (e.g., vehicle type, engine type, fuel type, vehicle load, frontal area, tire type) for application to other vehicle categories. Further, the mechanical efficiency and fuel utilization rate discussed in this paper are idealized for a specific gasoline engine and current gasoline fuel characteristics; other vehicle categories and other fuel-powered vehicles should be taken into consideration. The default values of mechanical efficiency and fuel utilization rate are fixed. These assumptions may have affected the outcomes of the tests presented in this paper.
Amplified Production and Export of Dissolved Inorganic Carbon During Hot and Wet Subtropical Monsoon
Understanding the origins and processes of riverine dissolved inorganic carbon (DIC) is crucial for predicting the global carbon cycle with projected, more frequent climate extremes yet our knowledge has remained fragmented. Here we ask: How and how much do DIC production and export vary across space (shallow vs. deep, uphill vs. depression) and time (daily, seasonal, and annual)? How do the relative contributions of biogenic (soil respiration) and geogenic (carbonate weathering) sources differ under different temperature and hydrological conditions? We answer these questions using a catchment‐scale reactive transport model constrained by stream flow, stable water isotopes, stream DIC, and carbon isotope data from a headwater karstic catchment in southwest China in a subtropical monsoon climate. Results show climate seasonality regulates the timing of DIC production and export. In hot‐wet seasons, high temperature accelerates soil respiration and carbonate weathering (up to a factor of three) via elevating soil CO2 and carbonate solubility, whereas high discharge enhances export by two orders of magnitude compared to cold‐dry seasons. Carbonate weathering is driven more by soil CO2 than water flow. At the annual scale, 92.9% and 7.1% of DIC was produced in shallow and deep zone, respectively, whereas 64.5% and 35.5% of DIC was exported from shallow and deep zone, respectively. These results highlight the uniqueness of subtropical karst areas as synchronous reactors and transporters of DIC during the hot‐wet monsoon, contrasting the asynchronous production and export in other climate regions. A future hotter and wetter climate with more intensive storms in the region may further intensify DIC production and export, accentuating the potential of subtropical karst regions as global hot spots for carbon cycling.
Riverine DIC originates from two major sources: biogenic carbon via soil respiration and geogenic carbon via chemical weathering (Campeau et al., 2017;Gaillardet et al., 1999;Mayorga et al., 2005).Soil respiration transforms organic matter into soil CO 2 , which further dissolves in water to become DIC and reduces pH.Carbonate weathering consumes soil CO 2 , increases DIC concentrations, often elevates pH, and further modifies the speciation of DIC (bicarbonate HCO − 3 , carbonate CO 2− 3 , and dissolved CO 2 (aq)) (Keller, 2019;Z. Liu et al., 2018).The potential sources of DIC can be deciphered from the distinct signatures of stable carbon isotopes in δ 13 C, with a typical range of −26‰ to −18‰ for organic matter from C3 plants and 0‰ for carbonate minerals of marine sedimentary origin (Land et al., 1980).The relative contributions of different carbon sources can therefore lead to variations in stream δ 13 C from −26 to 0‰.For example, in South China with large karst area (∼550,000 km 2 ), ∼35%-60% of DIC concentrations was estimated to derive from carbonate weathering, much higher than the percentage in non-karst systems (Qin et al., 2019;Zhong et al., 2017).A generic ratio of 50% (i.e., one half of DIC) has been commonly used to designate the contribution from carbonate weathering in karstic catchments and at regional scales (Z.Liu et al., 2018;S. Zeng et al., 2019).
Rates of soil respiration and carbonate weathering are regulated by a multitude of interacting drivers.Soil respiration rates generally increase with temperature but often peak at intermediate soil water content of 50%-70%, depending on organic matter characteristics, soil types, and biological activity (Davidson & Janssens, 2006; Z. F. Yan et al., 2018).Low soil water content facilitates CO 2 emission back to the atmosphere, and therefore reduces soil CO 2 and DIC concentrations (Ilstedt et al., 2000).For carbonate weathering, in addition to the general influence of subsurface spatial heterogeneities (Wen andLi, 2017, 2018), laboratory studies in well-controlled, small-scale reactors have identified the direct control of temperature on reaction affinity, kinetics, and thermodynamics (Kirstein et al., 2016;Plummer et al., 1978).Recent field studies highlighted the indirect control of climate conditions on carbonate weathering via soil CO 2 that in turn depends on water availability, microbial activity, and root respiration (Calmels et al., 2014;Romero-Mujalli et al., 2019a;Wen, Sullivan, et al., 2021;White & Blum, 1995).Global carbonate weathering rates are often estimated assuming carbonate weathering at equilibrium with dependence on soil CO 2 (Gaillardet et al., 2019;Romero-Mujalli et al., 2019b).These multiple, interacting drivers challenge the identification of most influential factors and the assessment of the impact of individual factors.
Understanding the sources, timing, and magnitude of riverine DIC export presents another challenge (Duvert et al., 2020;Qin et al., 2019).During wet periods, especially during storms, biogenic dissolved carbon is primarily transported through shallow soils, promoting lateral export to streams (Qin, Li, et al., 2020;Wen et al., 2020;Zhong et al., 2017).Additionally, soil water rich in CO 2 can recharge rapidly into the deeper subsurface, increasing water-rock interactions and geogenic carbon fluxes into the stream (Clow & Mast, 2010;Wen et al., 2022).During dry times, deeper groundwater becomes the major venue for DIC export to streams (e.g., Stewart et al., 2022).The export of DIC is also influenced by factors such as mineral abundance, weathering agents (e.g., carbonic acid), solute transport, and water residence time (Raymond & Hamilton, 2018).Given the limited accessibility of the subsurface, it remains challenging to pinpoint when, where, and how much stream DIC comes from different subsurface zones and how the flow paths and sources change with flow regimes, among other conditions.
Here we ask the questions: How and how much do DIC production and export vary across space (shallow vs. deep, uphill vs. depression) and time (daily, seasonal, and annual)?How do the relative contributions of biogenic (soil respiration) and geogenic (carbonate weathering) sources differ under changing temperature and hydrological conditions?Previous work has shown that in a forested catchment under temperate climate, the production and export of dissolved organic carbon (DOC) are asynchronous, with production mostly occurring in dry summers whereas its export primarily in wet springs and winters with high discharge (Wen et al., 2020).Here we draw upon the recent development of a watershed-scale reactive transport model (RTM) BioRT-Flux-PIHM (L.Li, 2019;Zhi et al., 2022).The model integrates hydro-biogeochemical processes and can distinguish the role of individual processes and elucidate mechanisms at the catchment scale (L.Li et al., 2017;Steefel et al., 2015).The recently developed isotope-enabled feature in RTMs complements the traditional mixing approach that often neglects the complex interactions of flow and biogeochemical reactions (Druhan, Guillon, et al., 2021;Druhan, Lawrence, et al., 2021).The study catchment is Chenqi, a catchment within the karst Critical Zone observatories in southwestern China.Field data was used to calibrate BioRT-Flux-PIHM, from which we quantified flow paths and reaction rates over time and space and provided insights on hydro-biogeochemical processes.The catchment offers a test bed to understand inorganic carbon cycling and biogeochemical processing in limestone terrain and under subtropical monsoonal climate.
Study Site: A Karstic Catchment With a Subtropical Monsoonal Climate
Chenqi is located in central Guizhou, China (Figure 1), experiencing a subtropical monsoonal climate with the mean annual precipitation of 1,308 ± 244 mm and mean annual temperature of 15.4 ± 0.5°C.The catchment has an underground stream in a peak-depression karstic landscape.Encircled by star-distributed conical hills, it can be divided into two units (Chen et al., 2018;Zhang et al., 2019): the flat depression areas at low elevation (1,320-1,340 m) and the steeper uphills at high elevation (1,340-1,520 m).The uphill (0.88 km 2 , ∼83% of the catchment area) is covered by thin soils (<0.5 m), deciduous forest, and shrub-grassland (Figure 1).The depression area (0.37 km 2 ) has much thicker soils (0.5-2.0 m) and is dominated by farmland (corn and rice paddy).Carbonate bedrock, predominantly limestone, situates above a formation of impermeable marlite and spreads extensively with a thickness ranging from 150 to 200 m (Zhang et al., 2011).Quaternary soils are irregularly developed on carbonate bedrock, with outcrops of carbonate rocks covering 10%-30% of the catchment.
Measurements
All measurements were undertaken at the outlet of the underground stream (Figure 1).Discharge was measured daily with v-notch weirs from 1 September 2007 to 31 December 2008.Stream chemistry samples were collected monthly between September 2007 and August 2009.Rainwater chemistry was measured for individual precipitation events.Water samples were filtered through 0.45 μm using Nylon syringe filters, transferred to acid-washed plastic bottles, and stored immediately in darkness at 4°C prior to lab analysis.
The pH was measured at the time of sampling with a hand-held water quality meter (WTW Multiline P3 pH/LF-SET). Alkalinity was measured by in situ titration with the Aquamerck Alkalinity Test and Hardness Test, with an analytical precision of ±0.1 mM C. Major cation (e.g., Ca) concentrations were measured using an inductively coupled plasma optical emission spectrometer. Stable isotopic ratios of carbon in DIC (δ 13 C) were measured using a GV Isoprime continuous flow mass spectrometer (CF-IRMS). All δ 13 C values were reported as per mil (‰) deviations from the Vienna Pee Dee Belemnite standard, with a standard deviation (1σ) of <0.2‰. Stable isotopic ratios of oxygen in water (δ 18 O) were analyzed using CF-IRMS and calibrated to the Vienna Standard Mean Ocean Water, with a standard deviation of less than 0.2‰. Further details about sampling and measurements are available in previous publications (Hao et al., 2019; Zhang et al., 2011; Zhao et al., 2010, 2015, 2018).
Watershed Reactive Transport Model
We used the recently developed model BioRT-Flux-PIHM (Zhi et al., 2022).The code has been applied to understand solute transport, chemical weathering, and microbe-mediated redox reactions at the catchment scale (Bao et al., 2017;Saberi et al., 2021;Wen et al., 2020;Zhi & Li, 2020;Zhi et al., 2019).The code includes three modules: the surface hydrological module PIHM, the land surface module Flux, and the multicomponent reactive transport module BioRT.Flux-PIHM calculates water variables (e.g., water storage, soil saturation, and water table depth) in shallow soil water and deep groundwater zones (Figure 2).The soil water zone (SZ) is most conductive to water flow (e.g., interflow) and responsive to hydroclimatic forcing; the deep groundwater zone (deeper zone [DZ]) is the deeper, less weathered, and lower permeability layer that harbors the old groundwater.The Flux-PIHM module uses daily meteorological data (precipitation, temperature, radiation, etc.) as input, simulates hydrological processes, and outputs daily evapotranspiration (ET), surface runoff Q surf , shallow soil water interflow Q SZ , and the deeper groundwater flow Q DZ .The BioRT module uses water and temperature output from Flux-PIHM at each time step and integrates transport and biogeochemical reactions through the governing mass conservation equations.
The model uses watershed characteristics including topography (e.g., soil depth, surface elevation), subsurface properties in SZ and DZ (e.g., hydraulic conductivity, porosity, macropore fraction, van Genuchten parameters), and vegetation properties to set up the domain.Geochemical characteristics include the geochemistry of rainwater, soil water, deep groundwater, soils, and bedrocks as initial conditions, as well as their thermodynamics and kinetics of geochemical reactions.The model can produce outputs at the timescales of hours to years, including water state variables (e.g., soil saturation, water table depth), water fluxes (e.g., Q), geochemical reaction rates, and solute concentrations in the SZ, DZ, and stream.
Reaction Networks
Here DIC, the sum of CO 2 (aq), HCO 3 − , and CO 3 2− , is assumed to derive primarily from soil respiration and carbonate weathering (Figure 2). Soil respiration includes heterotrophic respiration, that is, the decomposition of soil organic carbon (OC), and autotrophic root respiration (Ekblad & Högberg, 2001; Jones et al., 2004). Soil CO 2 can diffuse vertically into the atmosphere and/or dissolve in water and be exported to streams as DIC. We do not have data to track CO 2 emission back to the atmosphere. Therefore, Reaction 1 in Table 1 represents the net production of CO 2 from soil respiration that eventually dissolves in water and contributes to DIC, which likely underestimates the total rate of soil respiration because we ignored the vertical soil CO 2 fluxes. With abundant OC and O 2 in soils serving as electron donors and acceptors, a simple rate law is applied to describe the net CO 2 production rate from soil respiration, r p_bio = k bio × A × f(T) × f(S w ), where k bio is the kinetic rate constant of the net biogenic CO 2 production (mol C/m 2 /s) and A is a lumped "surface area" (m 2 /m 3 ) that quantifies OC content, biomass, and root abundance. The function f(T) describes the rate dependence on temperature in a Q 10 form, where Q 10 quantifies the rate increase with T and is set at 4.0, within the typical range for karst areas (Davidson & Janssens, 2006; Wang et al., 2020). The function f(S w ) describes the dependence on soil saturation (S w ) with a threshold form (Z. F. Yan et al., 2018).
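As a reading aid, a minimal sketch of this rate law is given below. The Q 10 reference temperature of 20 °C and the exact shape of the threshold saturation function are not specified in the excerpt above, so both are assumptions here; only the multiplicative structure r = k bio × A × f(T) × f(S w ) follows directly from the text.

```python
def f_temperature(t_c, q10=4.0, t_ref=20.0):
    """Q10-type temperature dependence; the 20 degC reference is an assumption."""
    return q10 ** ((t_c - t_ref) / 10.0)


def f_saturation(s_w, s_thresh=0.7):
    """Threshold-type dependence on soil saturation: increases up to the
    threshold, then declines under very wet conditions (illustrative shape)."""
    if s_w <= s_thresh:
        return s_w / s_thresh
    return max(0.0, (1.0 - s_w) / (1.0 - s_thresh))


def r_p_bio(k_bio, area, t_c, s_w):
    """Net biogenic CO2 production rate: r = k_bio * A * f(T) * f(Sw)."""
    return k_bio * area * f_temperature(t_c) * f_saturation(s_w)


# hot-wet versus cold-dry soil conditions (parameter values are illustrative)
print(r_p_bio(k_bio=1e-10, area=100.0, t_c=22.0, s_w=0.65))
print(r_p_bio(k_bio=1e-10, area=100.0, t_c=5.0, s_w=0.45))
```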
Soil CO 2 acidifies water and accelerates carbonate weathering (Reactions 2-6). Carbonate weathering follows a transition state theory rate law (Lasaga, 1984), r cal = k cal × a H+ × A × (1 − IAP/K eq ), where the local carbonate rate r cal (i.e., the local DIC production rate from the geogenic source, r p_geo ) is determined by the kinetic rate constant (k cal , mol/m 2 /s), the activity of the hydrogen ion (a H+ ), the carbonate bulk surface area (A, m 2 /m 3 ), and the extent of disequilibrium, represented as the ratio of the ion activity product (IAP) to the equilibrium constant (K eq ). The reaction parameters were calibrated using water chemistry data (Sections 2.6 and 3.2).
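A corresponding sketch of the carbonate weathering rate law is shown below; the affinity term (1 − IAP/K eq ) is the standard transition-state form implied by the description above, and the parameter values are purely illustrative.

```python
def r_cal(k_cal, a_h, area, iap, k_eq):
    """Carbonate dissolution rate, r = k_cal * a_H+ * A * (1 - IAP/K_eq).
    Negative values would indicate precipitation, which is ignored here."""
    return k_cal * a_h * area * (1.0 - iap / k_eq)


# far from equilibrium (CO2-rich, fast-flowing soil water) vs. near equilibrium (deep zone)
print(r_cal(k_cal=5e-7, a_h=10 ** -6.5, area=50.0, iap=0.20, k_eq=1.0))
print(r_cal(k_cal=5e-7, a_h=10 ** -7.4, area=50.0, iap=0.95, k_eq=1.0))
```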
To differentiate the contribution from soil respiration and carbonate weathering, carbon isotopes ( 12 C and 13 C isotopes) were explicitly simulated as independent, individual "species" that are distributed in appropriate ratios based on δ 13 C values and radiocarbon compositions measured at Chenqi (Hao et al., 2019;Qin et al., 2022).The stoichiometry in soil respiration and carbonate weathering (Reaction 1 and 6 in Table 1) therefore reflects the distribution of carbon isotopes, resulting in a reported δ 13 C value of −22.0 and 0‰, respectively.Note that the value of −22.0‰ refers to the observations in soil solution as the eventual outcome of soil CO 2 dissolution and evasion (Campeau et al., 2017;Qin et al., 2019Qin et al., , 2022)).Values of δ 13 C can vary via complex interactions between plants, microbes, and environmental conditions.The two end-member δ 13 C values simplify isotope variations in natural systems but capture the key characteristics between biogenic and geogenic sources without overburdening the model with too many parameters.The equilibrium expressions of DIC (Reaction 2-5) are repeated twice to keep track of both 12 C and 13 C isotopes.This multi-isotope simulation approach has been benchmarked and applied in 1D soil and karst cave systems (Druhan, Guillon, et al., 2021;Druhan, Lawrence, et al., 2021).
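A quick consistency check on these end members can be done with simple concentration-weighted mixing; this linear approximation ignores fractionation during degassing and speciation, so it is only indicative.

```python
def delta13c_mix(c_bio, c_geo, d_bio=-22.0, d_geo=0.0):
    """Concentration-weighted delta-13C of stream DIC from the two end members
    (-22.0 permil biogenic, 0 permil geogenic); fractionation is neglected."""
    return (c_bio * d_bio + c_geo * d_geo) / (c_bio + c_geo)


# 60% biogenic / 40% geogenic DIC gives about -13.2 permil, close to the
# wet-season stream values reported later in the paper
print(delta13c_mix(c_bio=0.6, c_geo=0.4))
```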
The Four Grid-Cell Domain Setup
Following Figure 1b, we conceptually simplified the watershed into four prismatic grid cells representing the depression and uphill on each side of the stream (Figure 2). The areas of the depression and uphill grid cells are 0.18 and 0.44 km 2 , respectively. The corresponding slopes are 7° and 24°, averaged based on the digital surface elevation map. The hydrometeorological data are from the China meteorological forcing data set at hourly frequency (He et al., 2020); the leaf area index is from the Moderate Resolution Imaging Spectroradiometer; soil properties are from the soil survey database of the Puding Karst Ecohydrological Observation (Chen et al., 2018).

(Notes to Table 1: (a) all parameters refer to the condition at 25°C; (b) values of K eq were interpolated using the EQ3/6 database (Wolery et al., 1990), except for Reactions 3, 5, and 6, whose 13 C-related K eq values were from Druhan, Guillon, et al. (2021) and Druhan, Lawrence, et al. (2021); (c) the kinetic rate parameters in Reactions 1 and 6 for soil respiration and carbonate weathering (i.e., biogenic and geogenic sources of DIC) were from Palandri and Kharaka (2004) and Plummer et al. (1978), respectively; (d) the specific surface areas were calibrated by fitting stream chemistry data, and the calibrated values are much lower than field measurements because they reflect only the effective surface area that is truly contacting and reacting with water (Wen & Li, 2017); (e) the stoichiometry in soil respiration and carbonate weathering reflects the distribution of carbon isotopes and is calculated based on field measurements at the study site (Hao et al., 2019; Qin et al., 2022).)
Each prismatic land element within the modeling domain was discretized into two layers to represent the shallow soil and the deeper underlying bedrock (the SZ and DZ in Figure 2), based on characterization and measurement data (Bai & Zhou, 2020;Chen et al., 2018;Qin, Li, et al., 2020;Qin et al., 2022;Zhang et al., 2019).Soil depths (i.e., the SZ) were set at 0.20 and 1.75 m for the uphill and depression, respectively, while corresponding depths of the deep groundwater zone (i.e., the uphill and depression DZ) were set at 125.0 and 28.0 m.The OC and root content in depression soils were set at 8% (v/v solid phase) compared with 4% of uphill soils and 0.8% of the DZ.Considering the widely observed outcrops of carbonate rocks across the catchment, carbonate abundance was set at 50% (v/v solid phase) in SZ and 75% in DZ.The subsurface matrix properties were parameterized based on the reported median values at Chenqi (Zhang et al., 2011).
Model Calibration
We first calibrated the hydrological module to reproduce stream discharge, followed by the calibration of biogeochemical module using stream chemistry, stable water isotope (i.e., nonreactive tracer 18 O), and carbon isotope data.Stream discharge (daily) and chemistry (monthly) data from September 2007 to August 2009 were used for model calibration.The year 1977-2006 was used as spin-up until a "steady state" for both discharge and water chemistry was reached.The "steady state" here refers to a state where the inter-annual difference between stored mass within the catchment is less than 5% of the total mass.To reduce the potential bias from monthly chemistry data, such as missing dynamics in response to intense precipitation, we further validated the model using high-frequency (hourly) stream discharge and chemistry data collected during rainfall events in June 2018 from Qin, Ding, et al. (2020) and Qin, Li, et al. (2020).The model performance was evaluated using the Nash-Sutcliffe efficiency (NSE), RMSE-observations standard deviation ratio (RSR), and percent bias (PBIAS) (Gupta et al., 1999;Nash & Sutcliffe, 1970;Singh et al., 2005).We used the general satisfactory range of NSE > 0.5, RSR ≤ 0.7, |PBIAS| < 25% (Moriasi et al., 2007), similar to those used in Wen et al. (2020).
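For reference, the three goodness-of-fit statistics used here can be computed as follows (standard definitions, e.g., Moriasi et al., 2007; this is not code from the authors).

```python
import numpy as np


def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)


def rsr(obs, sim):
    """RMSE-observations standard deviation ratio."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))


def pbias(obs, sim):
    """Percent bias: 100 * sum(obs - sim) / sum(obs)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)


# satisfactory ranges used in this study: NSE > 0.5, RSR <= 0.7, |PBIAS| < 25%
```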
Two types of parameters were adjusted (Table S1 in Supporting Information S1), including Zilitinkevich coefficient related to the land-atmosphere interaction, and subsurface properties including van Genuchten parameters α and n, porosity, hydraulic conductivity, and macropore fraction.To reproduce stream chemistry data (pH, Ca, alkalinity, and δ 13 C), we adjusted the specific surface area for the DIC production rates from soil respiration and carbonate weathering (Reaction 1 and 6).Because not all water is fully in contact with OC, roots and carbonate minerals, the calibrated surface area here (Table 1) represents the effective solid-water contact area at the catchment scale, and can be orders of magnitude lower than those measured in labs and fields (Pennell et al., 1995;Wen & Li, 2017, 2018).The stream alkalinity and δ 13 C data together constrained the contribution from biogenic and geogenic sources.
Water Transit Time, Stream Concentrations, and Catchment-Scale Rates
Water transit time. We estimated the transit time of water from the rainfall to the catchment outlet via a reactive tracer-based approach developed by Wen et al. (2022). The tracer concentration in the rainfall (C 0 at 0.9 mol/L) is assumed to decay following zero-order kinetics, dC/dt = −k decay , where k decay is the decay rate (=0.1 mol/m 3 /yr) and t is time (years). Consequently, the mean transit time τ i can be determined from the corresponding tracer concentration C i as τ i = (C 0 − C i )/k decay . Following the above equation, the mean transit times in the shallow soil (τ SZ ), deep groundwater (τ DZ ), and stream (τ stream ) were determined using the tracer concentrations coming out of the SZ (C SZ ), DZ (C DZ ), and stream (C stream ), respectively.
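The tracer-based transit time calculation reduces to a one-line conversion; in the sketch below the rainfall concentration of 0.9 mol/L is expressed as 900 mol/m3 so the units match the stated decay rate, and the example value for the deep-groundwater tracer is illustrative.

```python
def mean_transit_time(c_i, c_0=900.0, k_decay=0.1):
    """Mean transit time (years) from zero-order tracer decay:
    C_i = C_0 - k_decay * tau  ->  tau = (C_0 - C_i) / k_decay
    (concentrations in mol/m^3, k_decay in mol/m^3/yr)."""
    return (c_0 - c_i) / k_decay


# a deep-groundwater tracer 3.0 mol/m^3 below the rainfall value implies ~30 years
print(mean_transit_time(c_i=897.0))
```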
Stream DIC from distinct sources.Stream DIC concentrations (C) reflect the contributions of different source waters from distinct flow paths including surface runoff, soil water flow from SZ and deep groundwater flow from DZ (Stewart et al., 2022).The source water chemistry is influenced by both soil respiration and carbonate weathering in these subsurface zones.To differentiate their contribution to stream DIC, we carried out two additional simulations based on the calibrated case: (a) the transport-only model (i.e., no reactants) without reactions (C r ); and (b) soil respiration only without carbonate weathering (C bio+r , k cal = 0 in Reaction 6).Comparison of these two cases with the calibrated base case with both soil respiration and carbonate weathering can quantify the effects of different reactions.Thus, the DIC concentrations from soil respiration and carbonate weathering can be estimated using C bio = C bio+r − C r and C geo = C − C bio+r , respectively.
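The source separation amounts to differencing the three simulations; a minimal sketch (with illustrative concentrations in mM) is:

```python
def split_dic_sources(c_full, c_bio_plus_r, c_r):
    """Split stream DIC into rainfall, biogenic, and geogenic parts from the
    full, respiration-only, and transport-only simulations."""
    c_bio = c_bio_plus_r - c_r      # added by soil respiration
    c_geo = c_full - c_bio_plus_r   # added by carbonate weathering
    return c_r, c_bio, c_geo


print(split_dic_sources(c_full=3.9, c_bio_plus_r=2.4, c_r=0.05))  # illustrative values
```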
Production and export rates. Reaction rates R at the catchment scale were calculated as the sum of local rates in individual grid cells (r i ) multiplied by their corresponding volume (V i ). The DIC production rate from the biogenic source is R p_bio = ∑(r p_bio,i × V i ). The DIC production rate from the geogenic source is the carbonate weathering rate: R p_geo = R cal = ∑(r cal,i × V i ). The overall DIC production rate (R p ) is the sum of R p_bio and R p_geo . The DIC input from the rainfall, R r (mol/d), is the precipitation rate (m/d) times the rainfall DIC concentration (5.0 × 10 −2 mol C/m 3 ) and the catchment drainage area (m 2 ).
The overall DIC export rate R e is the product of discharge and the DIC concentration at the stream outlet, including contributions from rainfall, soil respiration, and carbonate weathering.The export rate from these sources was calculated using their corresponding export concentrations (C r , C bio , and C geo ).
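Putting the production and export definitions together, a minimal sketch (with made-up local rates, volumes, discharge, and concentration) is:

```python
def production_rate(local_rates, volumes):
    """Catchment-scale production rate: R_p = sum_i(r_i * V_i)."""
    return sum(r * v for r, v in zip(local_rates, volumes))


def export_rate(discharge_m3_per_d, conc_mol_per_m3):
    """Catchment-scale export rate: R_e = Q * C at the stream outlet."""
    return discharge_m3_per_d * conc_mol_per_m3


r_p = production_rate([1e-3, 5e-5], [2.0e5, 1.1e6])  # mol C/m^3/d times m^3
r_e = export_rate(discharge_m3_per_d=1.2e4, conc_mol_per_m3=3.8)
print(r_p, r_e)  # mol C/d
```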
Water Balance and Hydrological Processes From Daily to Annual Time Scales
The annual precipitation was 1,361 mm from 1 September 2007 to 31 August 2008 and 1,462 mm from 1 September 2008 to 31 August 2009.The average discharge approximated 10.0 mm/d in the hot and wet seasons (summer and fall), compared to about 0.2 mm/d in the cold and dry seasons (spring and winter) (Figure 3a).Stream discharge was highly responsive to intense precipitation events in the hot and wet seasons.The model captured the dynamics of daily discharge, with NSE, RSR, |PBIAS| of 0.57, 0.65, and 0.01, respectively.
The model estimated that the average annual ET is 442 mm, about 31.3% of precipitation, with the rest contributing to stream discharge of about 921 mm.The discharge originated from three flow paths: surface runoff (Q surf ), shallow soil water (Q SZ ), and deep groundwater (Q DZ ), contributing about 0.4%, 75.6%, and 24.0%, respectively, consistent with field estimation (C.Zeng et al., 2014).During dry seasons, discharge mostly originated from the old Q DZ at a water age of around 30 years; during wet seasons, the relatively young Q SZ (<1 year old) responded faster than Q DZ and contributed to up to ∼95% of streamflow during intense rainfall events (Figures 3b and 3c), leading to young streamflow (∼2 years).The simulated daily soil saturation and T followed the temporal trends of discharge and ET, respectively (Figure 3d).
Biogeochemical Dynamics: Stream Water Chemistry Varies Most in the Hot and Wet Monsoon
Stream chemistry varied substantially between cold-dry and hot-wet seasons. Due to large variations in the rainfall isotopic signature, stream δ 18 O values were more variable during the hot-wet season, alternating between enriched and depleted from −9.5 to −6.6‰ (Figure 4a). In contrast, stream δ 18 O during the cold-dry season remained relatively constant around −8.0‰. Stream pH, Ca, alkalinity, and δ 13 C values varied in similar ways. They were relatively high during the cold-dry season (7.4 ± 0.2, 3.0 ± 0.7 mM, 4.2 ± 0.2 mM C, and −11.1 ± 1.0‰), and dropped during the hot-wet season (7.2 ± 0.3, 2.7 ± 0.4 mM, 3.7 ± 0.3 mM C, and −13.9 ± 1.7‰; Figures 4b-4d and 4f). The model captured the general trends of the stream isotope (δ 18 O and δ 13 C), pH, Ca, and alkalinity data (Figure 4; see Figure S1 in Supporting Information S1 for scatterplots comparing model outputs to measurements), with NSE, RSR, and |PBIAS| values of 0.51, 0.69, and 0.20, respectively. The simulated stream chemistry exhibited more temporal dynamics than the monthly measurements in 2007-2009 because measurements occurred on non-rain days and do not provide the full picture of stream chemistry during intense rainfalls. The consistency between model output and hourly data from the 2018 summer (Figures 4d and 4f) further validated the model performance in wet seasons. The stream water chemistry reflected the mixing of SZ and DZ, as demonstrated by model outputs (lines in Figure 4). Solute concentrations were stable in DZ (dark dashed lines) and fluctuated significantly in SZ (light dashed lines), with high values in the cold-dry spring and winter and low values during rainfall events and in the hot-wet summer and fall. The variation of the non-reactive tracer 18 O was pronounced in rainwater and was much damped in groundwater. Relatively low stream concentrations of solutes and relatively negative δ 13 C during hot-wet seasons were attributed to greater contributions from SZ.
Spatial Patterns of Reaction Rates and Water Chemistry: Uphill Versus Depression and Shallow Versus Deeper Zones (SZ and DZ)
Spatial patterns of modeled variables at uphill versus depression in SZ generally followed similar cold-dry and hot-wet seasonal dynamics. In the cold-dry season (from December to May), soil saturation at depression was constantly around 0.62, about 30% higher than that at uphill (Figure 5a), whereas the difference in soil T between the two zones was much smaller (<2°C). Both uphill and depression soils were hydrologically disconnected from the stream, with the lateral flow Q SZ less than 0.1 mm/d (Figure 5b). Spatial distributions of soil respiration rates r p_bio in SZ resembled that of OC and soil saturation, and were higher in depression due to higher water content. With minimal water flow and export to the stream, high r p_bio in depression led to the accumulation and high concentrations of soil CO 2 (aq) and H + , reaching a maximum of 5 times higher than that of the uphill at the end of dry periods (Figures 5e-5g). Spatial patterns of Ca and DIC followed that of soil CO 2 , indicating its predominant control on carbonate dissolution. High soil CO 2 and low pH effectively reduced bicarbonate concentration and increased carbonate solubility, leading to higher weathering rates r cal in depression (Figures 5d and 5h).

(Figure 5 caption, partial: panels include (e) pH, (f) DIC, (g) aqueous and gaseous soil CO 2 — gaseous CO 2 estimated using Henry's law — and (h) IAP/K eq ; the caption notes that soil CO 2 and r p_bio were high in OC- and water-rich depression soils, with correspondingly high H + , DIC, and weathering rates relative to the uphill, that these contrasts were largest in the cold-dry season when the two zones were hydrologically disconnected, that the carbonate-rich DZ varied much less, and that SZ-DZ rate contrasts (panels c, d) span several orders of magnitude.)
In the hot-wet season (June to November), soil T increased to ∼18°C in both depression and uphill.Soil saturation at depression increased by 20%-40% responding to intense rainfall events while variations in uphill were less than ∼10%.R p_bio increased by over four times across the entire catchment but dropped sharply at depression under very wet conditions when soil saturation exceeded 0.7.Soil CO 2 and DIC concentrations dropped by a factor of more than four times and was relatively homogeneous from uphill to depression.This is because the fast soil water flow (Q SZ, uphill and Q SZ, depression in Figure 5b) exported soil CO 2 and DIC from uphill into the stream rapidly, leading to less accumulation and lower concentrations.Correspondingly, carbonate weathering rates were high as the water was at disequilibrium with low IAP/K eq .
The DZ showed much less temporal and spatial variation in water chemistry (dashed lines) than the SZ (solid lines). With low OC in DZ, soil respiration rates r p_bio were about two orders of magnitude lower than those in SZ. The deeper flows (Q DZ, uphill and Q DZ, depression in Figure 5b) were relatively constant compared to shallow flow and had higher concentrations of CO 2 , H + , and DIC (Figures 5e-5g). The carbonate in DZ was in contact with water that was close to equilibrium (IAP/K eq = ∼0.95, Figure 5h) and had low r cal at ∼10 −5 mol C/m 3 /d (Figure 5d). With faster groundwater flow into the stream (Q DZ, depression , dark dashed line in Figure 5b), soil CO 2 (aq), H + , Ca, and DIC concentrations at depression were slightly lower than those at uphill. The carbonate weathering rate r cal at the depression DZ fluctuated more and was higher in wet than in dry seasons, because of the elevated, fluctuating recharge that brought soil CO 2 -enriched water from SZ in wet seasons. Compared to these uphill-depression differences, the SZ versus DZ contrasts were much more pronounced. For example, rates in SZ and DZ can differ by several orders of magnitude (Figures 5c and 5d).

(Figure 6 caption, partial: panel (B) shows the corresponding rates of soil respiration (R p_bio , R e_bio ) and carbonate weathering (R p_geo , R e_geo ) in SZ and DZ; DIC input from rainfall (R in_r ) was negligible, about 1.0% of R p ; production rates varied within an order of magnitude and generally followed soil T, whereas export rates followed discharge and rose by up to two orders of magnitude with Q; the overall DIC storage peaks in cold-dry seasons with minimal flow and reaches minima during hot-wet export periods; rates and mass in SZ fluctuated more and were one to two orders of magnitude higher than in DZ.)
Amplified DIC Production and Export in the Hot and Wet Monsoon
The catchment-scale rates varied significantly over time. The daily production rate R p (= R p_bio + R p_geo , 8.6 × 10 3 mol C/d) was much higher than the daily rainfall input R in_r (9.3 × 10 1 mol C/d), such that rainfall input was negligible. R p_bio and R p_geo were 5.1 × 10 3 and 3.5 × 10 3 mol C/d, respectively, accounting for 59.1% and 40.9%, respectively, of the overall DIC production rate R p . These reactions mostly occurred in SZ (Figures 6B1 and 6B2), typically one to two orders of magnitude higher than those in DZ. They followed the general trend of soil T, except that R p_geo also increased during big storms as excessive water flow drove carbonate dissolution far from equilibrium. The export rate R e (= R e_r + R e_bio + R e_geo ), R e_bio , and R e_geo primarily followed the seasonality of discharge. During the dry-to-wet transition, DIC was flushed out mostly through SZ as the catchment became wetter (Figures 6B3 and 6B4), substantially reducing the overall DIC storage to ∼1.35 × 10 7 mol. During the cold-dry periods, although R p was relatively low, the DIC pool increased, reaching as high as ∼1.55 × 10 7 mol due to the minimal flow and export at the time. During the longer dry periods (Q << 1.0 mm/d) from November 2007 to May 2008, DIC accumulated more compared to the same period in 2008-2009. The biogenic rates varied significantly in SZ with seasonal variations but were almost constant in DZ (Figure 6B1). The geogenic rates had similar variations in SZ but varied also in DZ (Figure 6B2), especially during the wet season, following the temporal patterns in SZ. The annual DIC storage changed by about 2.6 × 10 5 mol between the two years, about 0.4% of the overall DIC production, indicating a general mass balance at the catchment scale.

(Figure 7 caption, partial: panels (d-f) plot R e against daily soil T, soil saturation S w , mean water transit time τ stream , and discharge; all dots are daily model outputs, with symbol size in panels (a, b) indicating S w and T; production rates generally increased with soil T, whereas export rates depended more on τ stream and S w , except in cold-dry periods (soil T < 15°C, S w < 0.53) when R p_geo decreased with drying soil and export rates stayed nearly constant because streamflow came mostly from the old deeper zone.)
Production Rates Depend More on Temperature Than Water Content
The daily production rates R p_bio , R p_geo , and R p generally depended more on soil T (R 2 > 0.80) than on water content, as illustrated by water variables (S w , discharge and τ stream with R 2 < 0.20) (Figures 6 and 7).All production rates correlated positively with soil T, and the R 2 values decreased from 0.98 to 0.94 to 0.81 for R p_bio , R p , and R p_geo , respectively.At low T and S w conditions (<15°C and 0.53), R p_geo depended more on S w than T and decreased with decreasing S w even when T increased (circled region in Figures 7a and 7b).Export rates correlated more with S w , Q, and τ stream than soil T, rising by over two orders of magnitude as Q increased (or τ stream decreased) by about two orders of magnitude.And under low S w (<0.53) when most stream DIC was contributed by the old DZ (∼30 years), DIC export rates were almost constant at ∼10 3 mol C/d.
Stream Water Sources Switching From Primarily Old, Deep Water in the Cold Dry Season to Young, Shallow Water Under Wet Conditions
Stream DIC exhibited a slight dilution behavior with decreasing concentrations at high discharge, with a power-law CQ slope of b = −0.12 (C = aQ^b, Figure 8). The CQ relationship indicated that the stream water sources switched from primarily old water in DZ almost at equilibrium with carbonate (IAP/K eq close to 1, large dots in Figure 8A1) under dry conditions, to primarily young water in SZ far from equilibrium (IAP/K eq << 1.0, small dots) under wet conditions. The pattern of δ 13 C versus Q exhibited a similar decreasing trend (Figure 8A2), indicating stream DIC primarily from DZ with high δ 13 C of geogenic origin (−10.8 ± 0.1‰) under low discharges and from SZ with more negative δ 13 C of biogenic origin (−13.0 ± 0.7‰) under wet conditions. Correspondingly, the relative contribution of geogenic DIC export to the overall daily DIC export (R e_geo /R e ) was almost constant at around 0.43 under dry conditions (Q < 1 mm/d) when R e predominately originated from DZ (Figure 8B1). Beyond that, as R e increased with more contributions from the soil CO 2 -abundant young SZ, R e_geo /R e gradually decreased to ∼0.37.

(Figure 8 caption, partial: (A) daily stream discharge versus stream DIC concentration (A1) and δ 13 C (A2); (B) relative contribution of carbonate weathering to daily export rates R e (B1) and production rates R p (B2) as a function of soil temperature and water-related measures; all dots are daily model outputs, with dot size in panel (A) indicating the carbonate saturation state; CQ relationships were quantified with the power law C = aQ^b; with increasing discharge the proportion of CO 2 -abundant soil water increased, diluting biogenic DIC less than geogenic DIC, so R e_geo /R e decreased from ∼0.43, and R p_geo /R p decreased with soil T and under extremely dry conditions (S w < 0.53).)
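The power-law CQ slope quoted above can be recovered by a log-log regression; the snippet below uses synthetic data for illustration only.

```python
import numpy as np


def cq_fit(q, c):
    """Fit C = a * Q^b by linear regression in log-log space; returns (a, b)."""
    log_q, log_c = np.log(np.asarray(q, float)), np.log(np.asarray(c, float))
    b, log_a = np.polyfit(log_q, log_c, 1)
    return np.exp(log_a), b


q = np.array([0.2, 0.5, 1.0, 5.0, 10.0, 20.0])  # discharge, mm/d
c = 4.0 * q ** -0.12                            # DIC, mM (synthetic, mild dilution)
print(cq_fit(q, c))  # ~ (4.0, -0.12)
```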
The geogenic carbon (calcite weathering) contribution to the production, R p_geo /R p , showed more variations with soil T and hydrological regimes compared to the export ratio R e_geo /R e (Figure 8B).The ratio R p_geo /R p was generally lowest under warmest conditions when soil respiration dominated DIC production (Figure 8B2).Minimal geogenic contribution (low ratio) can occur and reach as low as 0.25 under cold conditions (T < 15°C), when the cold conditions coincided with the very dry conditions (S w < 0.53) such that carbonate weathering was mostly at equilibrium with slow flow.
Discussion
Understanding sources (biogenic vs. geogenic) and processes (production and export) of riverine DIC is essential for comprehending global carbon cycling and carbon fluxes.It is however challenging to understand the reactions and transport processes in the subsurface that control when, where, and how much DIC production and export occur (Campeau et al., 2017;Zamanian et al., 2016).These processes vary with climate forcing, landscape, and subsurface characteristics.DIC from karst catchments is often thought to be about 50% geogenic, because carbonate weathering consumes 1 mol of soil CO 2 (aq) and produces 2 mol of DIC.Here we use hydrometeorological, stream flow and chemistry data to constrain a catchment-scale RTM.This integrated model-data approach illuminates the coupled reactive transport processes in the subsurface and distinguishes the effects of individual factors, including the monsoon climate, on the sources, timing, and magnitude of DIC production and export (Figure 9).Results here additionally highlight the distinct contributions of depression-versus-uphill and shallow-versus-deep carbon sources, portraying catchments as hydro-biogeochemical reactors that have been conceptually proposed but not explicitly quantified (L.Li, 2019;L. Li et al., 2021).
Amplified DIC Production and Export During Subtropical Monsoon Season
The subtropical monsoonal climate is characterized by alternating cold-dry winter and spring and hot-wet summer and fall (Green et al., 2019;Zhang et al., 2011).Transitioning from the cold-dry to hot-wet season, soil saturation (S w ) increased by almost a factor of two (0.45-0.75), and soil T rose from 4 to 22°C (Figures 3 and 7).In addition, daily DIC production rates R p escalated by an order of magnitude, and seasonal average increased by almost two times (3.8 × 10 −3 to 5.9 × 10 −3 mol C/m 2 /day).High soil saturation (S w > 0.7) only reduced the rates by less than 10%.Under extremely dry conditions, carbonate weathering is limited by low effective rock-water contact and approaches to equilibrium, regardless of temperature conditions and biological activities.Under hot-wet conditions, elevated soil CO 2 promotes carbonate weathering by increasing water acidity and therefore carbonate solubility.Additionally, the rapid water flow drives carbonate weathering away from equilibrium.This indicates that carbonate weathering is largely driven by biological activity instead of water flow under wet conditions but by water flow under dry conditions.This supports the hypotheses that weathering rates are limited by the availability of reactive gas such as CO 2 in humid climates and by accessible water in arid environments (Calmels et al., 2014;L. Li et al., 2017).DIC production rates R p , whether biogenic or geogenic, depend more on T (R 2 > 0.8) than hydrological conditions (R 2 < 0.2, as shown in Figures 6 and 7), similar to those observed in catchments in temperate climate (Wen et al., 2020).The export rates R e , however, largely depend on discharge into the stream.In cold-dry spring and winter, DIC export is mostly from the slow flow at the deep subsurface.In the hot-wet season, uphill and depression soils are hydrologically connected to the stream, flushing out accumulated DIC produced in spring and winter and enhancing export by orders-of-magnitude.
The temporal patterns of DIC production and export rates therefore are predominantly controlled by the climate seasonality (Figures 6 and 9).Both production and export rates peak in the hot-wet summer and fall, and plummet in the cold-dry winter and spring.This is similar to catchments in boreal climates characterized by moderately warm-moist summers and extremely cold-dry winters.They have been shown to behave similarly due to their synchronous temperature and precipitation patterns (Bowering et al., 2023).This contrasts with catchments in temperate climates, where warmer seasons often coincide with drier conditions, resulting in high production but low export, as shown in the asynchronous pattern of production and export for DOC in Shale Hills in Pennsylvania, USA (Wen et al., 2020).In Shale Hills, the production rates peak in summer but export peak in spring during snow melt events.The catchment acts as a reactor (producer) in hot-dry summer and as a transporter (exporter) in cold-wet winter (L.Li et al., 2022;Wen et al., 2020).In Chenqi, the catchment is a reactor and transporter simultaneously in the hot-wet summer and fall.In tropical ecosystems, where temperature variation is negligible but precipitation varies significantly across seasons, the production of dissolved carbon was observed to be predominantly influenced by water content (Ilstedt et al., 2000;W. J. Zhou et al., 2016), with elevated concentrations and fluxes of dissolved carbon (DIC, DOC) during rainy seasons (Neu et al., 2016).
More Significant Vertical (Shallow-Deep) Contrasts Than Landscape Positions (Uphill-Depression)
Uphill versus depression. Under hot-wet conditions, high soil water saturation and fluxes (>0.1 mm/d) enhance hydrological connectivity and minimize differences between uphill and depression. Under cold-dry conditions, when depression and uphill soils are hydrologically disconnected, the differences are amplified. Although the temperatures are similar, depression soils with high OC abundance and soil water content have higher production rates (up to a factor of two) than the uphill (Figure 5), especially in the cold-dry season. Similarly, solute concentrations in depression and uphill differ more during the cold-dry season. The depression zone generally is more acidic (lower pH) and has higher DIC and CO 2 concentrations, by a factor of 1.2 to 6.3 compared to the uphill, due to its generally higher soil moisture. In DZ, however, the landscape position difference is dampened and concentrations are similar in these two zones.

(Figure 9 caption: conceptual representation of the catchment under cold-dry and hot-wet conditions; the subtropical monsoonal climate produces synchronous daily DIC production and export rates (R p and R e ) that peak in the hot-wet season and plummet in the cold-dry season; R p is driven predominantly by temperature and varies by about one order of magnitude across the year, whereas R e is driven mostly by discharge and is more than two orders of magnitude higher in the hot-wet than in the cold-dry season; soil respiration dominates production (R p_bio /R p > 0.5), and high soil CO 2 at high T also elevates carbonate solubility and accelerates weathering; in the hot-wet season stream DIC is low and 13 C-depleted, indicating biogenic carbon from biogeochemically active soils, whereas in the cold-dry season stream DIC comes mostly from old (∼30 years) deep groundwater, where geogenic carbon contributes constantly and about half of stream DIC.)
Shallow versus deep zones.The shallow-deep contrasts are more pronounced than the uphill-depression differences (Figures 5 and 6).On a daily to seasonal scale, compared to SZ, DZ is more stable, generally with higher pH and DIC concentrations and close to equilibrium.Similarly, respiration rates in DZ are stable and orders of magnitude lower than SZ.Surprisingly, rates of carbonate weathering in DZ are low but vary substantially.This is especially the case during the hot-wet season when carbonate weathering is predominantly driven by large water events and recharge of high soil CO 2 abundant-water.In contrast, soil respiration rates in SZ vary substantially with seasons (Figures 5c, 5d, and 6B1).The annual DIC production rate (R p ) in SZ is about 2.9 × 10 6 mol C/yr, over one order of magnitude higher than that in DZ (2.2 × 10 5 mol C/yr).The DIC export rate through DZ is also lower, at about 1.1 × 10 6 mol C/yr, about half of that through SZ (2.0 × 10 6 mol C/yr) due to generally lower water flow (∼24.0% of Q).
The carbonate weathering rates in the DZ are typically one to two orders of magnitude lower than those in the SZ (Figures 5 and 6), contrasting with the common perception of high carbonate weathering rates at the deep weathering front in karst systems (G. Zhou et al., 2015). This can be attributed not only to the abundant carbonate rocks in shallow soils (i.e., outcrops, or a shallow reaction front) in Chenqi, but also to the high flow that maintains relatively persistent disequilibrium conditions such that carbonate can dissolve and take up CO2 in soil zones. This combination of a shallow carbonate weathering front and rapid flow in shallow soil accelerates carbonate weathering in the SZ, in contrast to carbonate weathering in deep zones, which is typically limited by slow flow and at-equilibrium water (Figure 5h).
Carbonate weathering is at disequilibrium with soil CO2 most of the time in the SZ (Figure 5h), especially in the hot-wet season. This contrasts with the general perception from small-scale laboratory systems that carbonate dissolution reaches equilibrium rapidly, often within hours (Plummer et al., 1978). The persistent disequilibrium likely arises from the combination of high carbonate content in karst formations, high temperature that promotes the generation of soil CO2, and high discharge that drives the water toward disequilibrium. Together they enhance carbonate weathering and consume more soil CO2 than expected. This indicates that the common equilibrium assumption used to estimate riverine concentrations in regional, continental, or global models may not hold and can potentially lead to estimation biases (Maher & Chamberlain, 2014; Romero-Mujalli et al., 2019a; S. Zeng et al., 2019).
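To make the disequilibrium argument concrete, the saturation state of calcite can be checked by comparing the ion activity product (IAP) with the equilibrium constant, as in Figure 5h. The sketch below is a simplified worked example rather than the model's actual speciation routine: it assumes dilute water (activity coefficients of 1), literature equilibrium constants at 25°C, and hypothetical pH, Ca, and DIC values chosen to fall within the range reported for the shallow zone.

```python
import math

# Assumed equilibrium constants at 25 degC (standard literature values)
K1 = 10**-6.35    # H2CO3* <-> H+ + HCO3-
K2 = 10**-10.33   # HCO3- <-> H+ + CO3 2-
KSP = 10**-8.48   # calcite solubility product

# Hypothetical shallow-soil-water composition (illustrative only)
pH = 7.0
ca = 1.0e-3       # mol/L
dic = 2.0e-3      # mol/L

h = 10**-pH
# Carbonate fraction of DIC from standard speciation (activities ~ concentrations)
co3 = dic / (1.0 + h / K2 + h * h / (K1 * K2))

iap = ca * co3
omega = iap / KSP             # saturation ratio; <1 means undersaturated
si = math.log10(omega)        # saturation index

print(f"IAP/Keq = {omega:.2f}, SI = {si:.2f}")
# Omega < 1 here: the water remains undersaturated, so calcite keeps dissolving
# and consuming soil CO2, consistent with the persistent disequilibrium in the SZ.
```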
Although the results highlight the significant differences in solute concentrations and rates between shallow and deep zones, the magnitude of this contrast may vary with a variety of factors that influence DIC production, including soil properties and land cover (Calmels et al., 2014; Pokrovsky et al., 2015; Shan et al., 2021). Low DIC (~3.0 mM) in thin soils compared to higher concentrations (~5.0 mM) in thick OC-rich soils has been attributed to limited soil respiration in karst terrains in southeast China (Green et al., 2019; J. H. Yan et al., 2011). DIC levels in shallow zones can also vary tremendously with land cover, with concentrations as low as ~0.5 mM in bare soils and as high as ~6.0 mM in grasslands (Macpherson et al., 2008; Q. Zeng et al., 2017).
Biogenic Versus Geogenic DIC Production
This work reveals that, on average, about 60% of the riverine DIC is derived from soil respiration. The annual production rate is 2.5 mol C/m2/yr, of which 1.5 mol C/m2/yr is biogenic. The daily contributions vary significantly with weather conditions (Figures 7-9). Under cold-dry conditions, when most stream water is from the old, deep groundwater, the relative contribution of geogenic carbon peaks and can reach almost half of the total DIC production. In the hot-wet season, geogenic carbon can contribute as little as ~34%. In other words, as the catchment becomes wet and hydrologically connected, stream DIC is mostly from the shallow soil and is therefore more biogenic. Similar dry-wet shifts between biogenic and geogenic sources were observed in the Congo basin and the Howard River catchment in Australia (Bouillon et al., 2014; Duvert et al., 2020), suggesting such changes may be characteristic of dry-wet climates in (sub)tropical regions. The overall mean contribution of geogenic carbon is around 40%, lower than the 50% generally reported for karst formations (Hartmann, 2009; Z. Liu et al., 2010; S. Zeng et al., 2019), indicating that the geogenic contribution is generally overestimated, especially under hot-wet conditions. The δ13C signatures in stream DIC ranged from −15.2‰ to −10.8‰. Similar ranges of δ13C signatures in stream DIC were observed in catchments primarily composed of carbonate rocks (Jin et al., 2014; J. Liu & Han, 2020), potentially indicating a typical dominance of biogenic carbon in karst systems. The main sources of DIC in headwater streams have been observed to change with a variety of factors, including lithology and climate, resulting in a broader range of δ13C signatures (Calmels et al., 2014; Pokrovsky et al., 2015; Shan et al., 2021). Sulfuric acid from the oxidation of pyrite and from acid rain, as well as other acids (such as nitric acid and organic acids), can enhance carbonate weathering and produce 13C-enriched DIC (Qin et al., 2019; Song et al., 2020). Strong enrichment from −13.5‰ to −4.8‰ has been observed in southwest China due to the oxidation of pyrite and acid rain (S.-L. Li et al., 2008; Xu & Liu, 2007). Silicate weathering accelerated by soil CO2 can result in depleted δ13C signatures between −22.0‰ and −16.0‰ (Duvert et al., 2020; W. Wu et al., 2008). Co-existing carbonate and silicate rocks often result in carbonate precipitation in dry seasons and in arid and semi-arid regions, but dissolution under wet conditions (Monger et al., 2015; Wen et al., 2022; Zamanian et al., 2016), which can further complicate DIC sources and δ13C signatures.
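One common way to read such δ13C signatures is a simple two-endmember mixing calculation between a biogenic and a geogenic source. The sketch below is a generic illustration, not the isotope treatment actually implemented in the model: the endmember values (around −22‰ for soil-respiration-derived DIC in a C3 landscape and about 0‰ for marine carbonate) are assumptions made for the example and would need to be constrained for the site, and fractionation during degassing and speciation is ignored.

```python
# Generic two-endmember mixing of stream DIC based on delta 13C.
# Endmember values below are assumptions for illustration, not site-calibrated.
D13C_BIO = -22.0   # per mil, assumed soil-respiration (biogenic) endmember
D13C_GEO = 0.0     # per mil, assumed carbonate (geogenic) endmember

def geogenic_fraction(d13c_stream: float) -> float:
    """Fraction of stream DIC attributed to the geogenic endmember."""
    f = (d13c_stream - D13C_BIO) / (D13C_GEO - D13C_BIO)
    return min(max(f, 0.0), 1.0)   # clamp to [0, 1]

# Evaluate across the observed range of stream d13C (-15.2 to -10.8 per mil)
for d13c in (-15.2, -13.0, -10.8):
    f_geo = geogenic_fraction(d13c)
    print(f"d13C = {d13c:6.1f} per mil -> geogenic {f_geo:.0%}, biogenic {1 - f_geo:.0%}")
# With these assumed endmembers, the observed range maps to roughly 30-50%
# geogenic DIC, in the same ballpark as the ~40% mean reported above.
```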
Future climate projections indicate that southwest China will become warmer and wetter, with increases of ~190 mm in precipitation and 3.5°C in temperature during 2051-2100 (Duan et al., 2021). The wetter and hotter conditions may further elevate soil respiration, carbonate weathering, and overall DIC production. If the relationship between temperature and rates in Figure 7a holds, the increase in DIC production is expected to be around 10^5.2 mol C/yr. Meanwhile, extreme precipitation and temperatures are predicted to increase at much higher rates than their mean values (S.-Y. Wu et al., 2019), further amplifying DIC production and export during the hot-wet monsoon season. More frequent and intense precipitation extremes can also reduce riverine DIC concentrations while enhancing lateral DIC export at higher discharge.
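The order of magnitude of such a warming response can be illustrated with a simple exponential (Q10-style) temperature scaling of the production rate. This is only a rough sketch under assumed parameters: the Q10 value of 2 is a common default for soil respiration and is not the temperature sensitivity fitted in Figure 7a, so the resulting number differs from the paper's estimate and is meant only to show the form of the calculation.

```python
# Rough Q10-style scaling of annual DIC production under warming.
# Assumptions (illustrative): Q10 = 2, current production from the annual
# SZ + DZ rates quoted earlier, warming of +3.5 degC from the projection.
Q10 = 2.0
CURRENT_PRODUCTION = 2.9e6 + 2.2e5   # mol C/yr (SZ + DZ, from text)
DELTA_T = 3.5                        # degC warming (Duan et al., 2021)

factor = Q10 ** (DELTA_T / 10.0)
increase = CURRENT_PRODUCTION * (factor - 1.0)

print(f"scaling factor: {factor:.2f}")
print(f"implied increase: {increase:.2e} mol C/yr")
# With Q10 = 2 this gives roughly a 27% increase (~8.6e5 mol C/yr); the paper's
# ~10^5.2 mol C/yr estimate is based on the fitted relationship in Figure 7a,
# which implies a weaker temperature sensitivity than the Q10 assumed here.
```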
Global Hot Spots for Soil CO2 Sink But Riverine CO2 Sources?
The consumption of soil CO2 by carbonate weathering, equivalent to the carbonate weathering rates here, has been considered a sink for atmospheric CO2 at short timescales (years to millennia) (Gaillardet et al., 1999; Hartmann, 2009; Z. Liu et al., 2018; J. H. Yan et al., 2011; S. Zeng et al., 2019). The carbonate weathering rate is estimated to be 1.0 mol C/m2/yr in Chenqi, higher than most estimates in the literature (Figure 10). Reported carbonate weathering rates are 0.48 mol C/m2/yr at the global scale (Amiotte-Suchet et al., 2003), 0.43 mol C/m2/yr in Chinese karst systems (0.67 mol/m2/yr for tropical and subtropical karst, 0.34 mol/m2/yr for plateau karst, 0.06 mol C/m2/yr for semiarid karst, and 0.02 mol C/m2/yr for temperate humid karst) (Jiang & Yuan, 1999; Z. Liu & Zhao, 2000), 0.50 mol C/m2/yr for Alpine river basins (Donnini et al., 2016), 0.66 mol/m2/yr in the Konza Tallgrass Prairie (Macpherson & Sullivan, 2019), and 0.58-1.53 mol/m2/yr for five karst hydrosystems in France (Binet et al., 2020). This indicates that karst formations in subtropical or tropical climates with abundant precipitation may be global hot spots for the soil CO2 sink (Figure 10). On the other hand, high DIC export rates at high discharge can also lead to high CO2 evasion in rivers and streams (Duvert et al., 2018; Hotchkiss et al., 2015), which again imposes large uncertainties on whether karst regions are CO2 sources or sinks.
Model Limitations and Implications for Future Research
The soil respiration rates estimated here correspond to the rates of producing DIC. They do not include processes such as soil CO2 efflux back to the atmosphere on land, or in-stream processes such as CO2 evasion and stream respiration. As such, the soil respiration rates here likely underestimate the total soil respiration rates significantly. For example, the soil CO2 efflux measured at the ground surface is usually considered an approximation of the soil respiration rate (Jian et al., 2021). These rates are usually within the range of 3.2-208.3 mol C/m2/yr in subtropical regions, much higher than the soil respiration estimated here. In addition, riverine CO2 evasion and stream respiration are known to affect DIC concentrations and the fractionation of δ13C (Duvert et al., 2018; Hotchkiss et al., 2015; Marx et al., 2017; Solano et al., 2023). About 11% of dissolved carbon was estimated to evade from streams in karst formations of southwest China (S.-L. Li et al., 2010). CO2 evasion and stream respiration can also lead to a more enriched δ13C signature. CO2 evasion and stream respiration, however, usually occur at slower rates in small catchments with underground streams than in aboveground rivers with direct interaction with the atmosphere (Butman & Raymond, 2011; Finlay, 2003). In the future hotter and wetter climate (Duan et al., 2021), storms, high discharge, and a larger contribution of biogenic sources (e.g., CO2(aq)) to riverine DIC can potentially promote CO2 evasion. Detailed, co-located measurements of soil CO2 efflux to the atmosphere, riverine CO2 evasion, and dissolved carbon in streams are often lacking but are essential to gain a comprehensive understanding and quantification of carbon cycling and fluxes in terrestrial systems and at terrestrial-aquatic-atmospheric interfaces (Wen et al., 2022).
The four-grid-cell model here aims to capture the key dynamics of the system without overparameterizing the model given limited data (Wen, Brantley, et al., 2021). The model simplifies the representation of the watershed to two cells, depression and hillslope, in the landscape direction and two layers, soil and bedrock, in the vertical direction. This structure aligns more closely with a hillslope-based model than with spatially distributed models that usually consist of hundreds or more cells. Although with some uncertainties (Figure 4), the model reproduced the water chemistry data satisfactorily in both dry and wet seasons, indicating that the essential features leading to the key stream dynamics are captured. This supports the hypothesis that representing essential landscape functioning units (e.g., hillslopes) using simple or lumped models may be sufficient to capture key dynamics (Wen, Brantley, et al., 2021). In particular, model results show that the shallow-deep contrasts are much more pronounced than the uphill-depression contrasts. This suggests that representing the shallow and deep features accurately might be more crucial than depicting variations in the landscape direction. It shows the promise of using relatively simple hydro-biogeochemical models to represent salient features of watersheds in order to capture the key dynamics. Similar ideas have been implemented to simulate complex hydrological dynamics in karstic catchments through multiple conceptual boxes representing fast, medium, and slow flow paths (Husic et al., 2019; Zhang et al., 2019, 2020, 2021).
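To illustrate what such a lumped structure looks like in code, the sketch below sets up four conceptual reservoirs (shallow and deep layers under an uphill and a depression cell) and steps water and DIC through them with simple first-order transfers. This is a toy box model for intuition only; it is not BioRT-Flux-PIHM, and all rate coefficients, storages, and forcings are invented placeholders.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    """One conceptual reservoir: water storage (mm) and DIC mass (mol)."""
    water: float
    dic: float

    def conc(self) -> float:                 # mol per mm of stored water
        return self.dic / self.water if self.water > 0 else 0.0

# Four lumped cells: shallow (SZ) and deep (DZ) under uphill and depression.
cells = {
    "up_sz":  Cell(water=100.0, dic=0.3),
    "up_dz":  Cell(water=500.0, dic=2.5),
    "dep_sz": Cell(water=150.0, dic=0.6),
    "dep_dz": Cell(water=500.0, dic=2.5),
}

# Placeholder coefficients (per day); purely illustrative, not calibrated.
K_LATERAL  = 0.05   # SZ interflow toward the stream
K_RECHARGE = 0.01   # SZ -> DZ vertical recharge
K_BASEFLOW = 0.005  # DZ -> stream groundwater flow
RESP = {"up_sz": 0.02, "dep_sz": 0.05, "up_dz": 0.001, "dep_dz": 0.001}  # mol/d

def step(rain_mm: float) -> dict:
    """Advance the toy model one day; return stream water and DIC fluxes."""
    stream_q = stream_dic = 0.0
    for name, cell in cells.items():
        cell.dic += RESP[name]                       # biogenic DIC production
        if name.endswith("_sz"):
            cell.water += rain_mm                    # rainfall enters the shallow zone
            dz = cells[name.replace("_sz", "_dz")]   # matching deep cell
            q_rech = K_RECHARGE * cell.water         # vertical recharge to DZ
            dz.water += q_rech
            dz.dic += q_rech * cell.conc()
            q_lat = K_LATERAL * cell.water           # lateral interflow to stream
            stream_q += q_lat
            stream_dic += q_lat * cell.conc()
            cell.dic -= (q_rech + q_lat) * cell.conc()
            cell.water -= q_rech + q_lat
        else:
            q_base = K_BASEFLOW * cell.water         # slow deep flow to stream
            stream_q += q_base
            stream_dic += q_base * cell.conc()
            cell.dic -= q_base * cell.conc()
            cell.water -= q_base
    return {"Q_mm": stream_q, "DIC_mol": stream_dic}

# Wet day versus dry day: shallow, biogenic-rich water dominates when it rains.
print("wet day:", step(rain_mm=30.0))
print("dry day:", step(rain_mm=0.0))
```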
Conclusion
This work asks questions that have not been well addressed for subtropical karst regions: How, and by how much, do the sources, production, and export of DIC vary across space (shallow vs. deep, uphill vs. depression) and time (daily, seasonal, and annual)? What is the relative contribution of biogenic (via soil respiration) and geogenic (via carbonate weathering) sources? Multiple observations (weather, stream discharge and chemistry, and carbon isotopes) and a catchment-scale RTM enable the illumination of the subsurface hydrological and biogeochemical processes that determine the timing and magnitude of DIC production and export in a small karst catchment.
Results show that, annually, soil respiration was the primary contributor of DIC, accounting for about 60% of the total DIC production (the remaining 40% from carbonate weathering), which exceeds the typically estimated 50% in karst formations. The hot-wet monsoon season simultaneously enhances soil respiration and carbonate weathering via disequilibrium conditions amplified by high flow and high soil CO2 levels. Compared to the deeper subsurface zone, the shallow soil typically has significantly higher production and export rates, often by up to two orders of magnitude. The disparity between the depression near the stream and the uphill area is much less pronounced, typically within a factor of two. Carbon isotope data and model analysis indicate that the sources of riverine DIC shift from more biogenic (~58%-66% of the total) from the shallow young soil water (<1 year old) in hot-wet seasons to about half biogenic (~50%) from the deeper old groundwater (~30 years old) in cold-dry seasons. These findings highlight the significance of timing (hot-wet vs. cold-dry) and the distinct contributions of the shallow and deep subsurface to carbon production and lateral export. They also suggest that, in the future, hotter and wetter climates and more intense storms may further elevate biogenic DIC production and lateral export.
Figure 2 .
Figure 2. A schematic representation of major processes in the watershed reactive transport model BioRT-Flux-PIHM.The Chenqi watershed is discretized into four prismatic grid cells, representing the steep uphill and flat depression (green and yellow) on each side of the stream.Each prismatic land element has two layers, representing the shallow soil (shallow zone [SZ]) and the deeper underlying bedrock (deeper zone [DZ]).Stream discharge Q is the sum of surface runoff Q surf , soil water interflow Q SZ , and groundwater flow Q DZ .Dissolved inorganic carbon (DIC) in the stream comes from rainfall, soil respiration (biogenic source), and carbonate weathering (geogenic) with distinct isotope signatures.Soil CO 2 from soil respiration (root respiration and organic carbon mineralization) can dissolve in water and become DIC or further react with carbonate minerals.DIC can export laterally via SZ into stream (Q SZ ), or vertically recharge into DZ and eventually enter stream via a longer flow path (Q DZ ).
Figure 4 .
Figure 4. Temporal dynamics of stream chemistry in 2007-2009, including (a) δ 18 O, (b) pH, (c) Ca, (d) alkalinity, (e) dissolved inorganic carbon, and (f) δ 13 C, as well as the model output and the hourly data during the 2018 summer from Qin, Ding, et al. (2020) and Qin, Li, et al. (2020) (the right side of panels (d, f)).In cold-dry spring and winter, stream discharge was mostly derived from deeper zone with almost constant stream chemistry.Under hot-wet summer and fall, stream chemistry resembled soil water chemistry from soil CO 2 -abundant shallow zone with more fluctuations and negative δ 13 C.
Figure 3 .
Figure 3. Temporal series of hydrological variables at the catchment scale in 2007-2009, including (a) daily precipitation, discharge, and evapotranspiration (ET), (b) normalized flux at the outlet, (c) mean water transit time, and (d) soil water saturation and soil temperature (soil T).Dots are data; lines are model outputs.Stream discharge was primarily from the young soil water Q SZ (<1 year old) in summer and fall with intense rainfall and high ET and from the old deeper water Q DZ (∼30 years) in spring and winter.
Figure 5 .
Figure5.Modeled variables in the shallow zone (SZ) and deeper zone (DZ) at uphill and depression, including (a) soil saturation and T, (b) flow rates, (c) soil respiration rates r p_bio , (d) carbonate weathering rates r p_geo (= r cal ), (e) pH, (f) dissolved inorganic carbon (DIC), (g) aqueous and gaseous soil CO 2 , and (h) ion activity product/K eq .Gaseous soil CO 2 (g) in panel (g) was estimated using Henry's law.Soil CO 2 and r p_bio were high in depression soils that had relatively high organic carbon (OC) and water content, leading to correspondingly high H + (lower pH), DIC and carbonate weathering rates compared to the uphill.These spatial contrasts were more significant during the cold-dry season when the uphill and depression was mostly disconnected and most products accumulated in soils.The DZ with more abundant carbonate had much less temporal and spatial variations because of less abundant OC content, slower groundwater flow, and longer water transit time.Compared to uphill-depression differences, the shallow SZ versus deeper DZ contrasts were much more pronounced.For example, rates in SZ and DZ can reach several orders of magnitude differences (c, d).
Figure 6 .
Figure6.Model output of temporal dynamics for (A) temperature, discharge, dissolved inorganic carbon (DIC) storage, production and export rates at the catchment scale (including both shallow zone [SZ] and deeper zone[DZ]) and (B) corresponding rates of soil respiration (R p_bio , R e_bio ) and carbonate weathering (R p_geo , R e_geo ) in SZ and DZ.The DIC input from rainfall (R in_r ) was negligible, about 1.0% of R p .Temporal patterns of R p , R p_bio and R p_geo varied within an order of magnitude and generally followed that of soil T, whereas R e , R e_bio and R e_geo mostly followed that of discharge, increasing by up to two orders of magnitude with Q.The relative magnitude of production and export rates in different seasons determines the temporal patterns of the overall DIC storage, which peaks in cold-dry seasons with minimal flow and reached minima in the hot moments of export during hot-wet periods.Rates and mass in SZ generally fluctuated more and were one-to-two orders of magnitude higher than those in DZ.
Figure 7 .
Figure 7. Relationships between daily (a-c) R p and(d-f) R e and daily soil T, soil saturation S w , mean water transit time τ stream , and discharge.All dots are daily model outputs; the symbol size in panels (a, b) represents the magnitude of S w and T. R p_bio , R p_geo , and R p all generally increased with soil T, whereas R e_bio , R e_geo , and R e all depended more on τ stream and S w , except the cold-dry periods with low soil T and S w (<15°C and 0.53).In the cold-dry periods, R p_geo decreased with decreasing S w even when T increased; export rates were relatively constant because stream water was mostly contributed by the old deeper zone.
Figure 10 .
Figure 10. Carbonate weathering rates (R_cal) in karst formations as a function of annual precipitation. Data in different regions are from Amiotte-Suchet et al. (2003), Binet et al. (2020), Donnini et al. (2016), Jiang and Yuan (1999), Z. Liu and Zhao (2000), and Macpherson and Sullivan (2019). The dashed line represents the linear regression trend. Karst formations in subtropical or tropical climates (red circles) with abundant precipitation tend to have high carbonate weathering rates and therefore may act as global hot spots for the soil CO2 sink.
|
2024-01-06T16:15:17.793Z
|
2024-01-01T00:00:00.000
|
{
"year": 2024,
"sha1": "6bfd28d4a729d4bd662a99243794ea378ec2660d",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2023WR035292",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "bcc27bcd8677cbd2c6958e98cc807e2f6a832d06",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
}
|
265447855
|
pes2o/s2orc
|
v3-fos-license
|
Exploring exchange students’ global minds in a study abroad project
This study was conducted in the context of an international exchange project which introduced the participating students to curricular and instructional aspects of global education and to the diversity of school systems. The aim of the research was to investigate how the exchange students constructed and re-constructed their cultural and intercultural skills, knowledge, beliefs, and identities. The research data were collected using interviews, an online survey, and students' messages posted on a Facebook group. Semi-structured interviewing was used as the major data gathering method, as this made it possible to better explore the extent and qualities of the students' sense-making and learning about their exchange experience. Five themes emerged, indicating that the exchange students used a range of approaches to interact and communicate with people to gain intercultural perspectives. They made sense of educational systems, and developed their selfhood and social identities in a framework of negative and positive experiences.
Introduction
Globalization accentuates the need to make sense of the changing world.Today's global interdependence has given rise to new concerns of globalization, demanding enhanced knowledge and capabilities as well as authentic empathy and solidarity to acknowledge the variability in human beings and their communities (Deardorff 2006;Hansen 2010;Mansilla & Jackson 2011).Higher education institutions commit to educating graduates who are able to learn new ways of thinking, working, and living in the world (ATC21S 2011;Tarrant 2010;Witte 2010).The growing global interdependence requires young people to engage in solving global issues and problems, and to participate in local, national and global life (Feast, Collyer-Braham & Bretag 2011;Mansilla & Jackson 2011;Reimers 2009).The young educational (professional) leaders for the future need to be aware of global issues and to acquire competencies in intercultural sensitivity, which is essential to develop global minds and perspectives.This is what global education pursues, especially in higher education.
Recently, global education has received a lot of attention from educational practitioners and policy makers.For example, the Council of Europe states that global education can open students' eyes to the realities of the world and motivate them to bring about a world of greater justice, equity and human rights for all (Council of Europe 2004).However, the concept of global education is often found to be complex, diffuse and lacking connections to the lived, everyday life as Reimer and McLean (2009) pointed out in their study.They concluded the vague meaning of global education can influence how individuals implement this concept in real settings.Thus, global education programs must connect to personal experiences and social issues.Lucas (2010) also suggested educators should have a clear understanding of global education and how to incorporate it into school curricula.One of the important approaches to define global education is to identify competences developed by global education.Goodwin (2010) provides five main areas of knowledge that a globally minded teacher educator must demonstrate: personal, contextual, pedagogical, sociological and social knowledge.Mansilla and Jackson (2011) also proposed that global competence is the capacity and disposition to understand and act on issues of global significance.Global competence associates with four core capacities: investigating the world, recognizing perspectives, communicating ideas, and taking action.
In this study researchers decided to use the term global minds to describe the competences that can be developed in global education and the essential change in individuals who are becoming globalized.The global mind is constructed and reconstructed by shared sense-making, negotiation of meaning, and individual change.It is mediated with the help of other people, symbols, and artefacts such as computers (Linell 2003;Wertsch 1991).Thus the global mind does not only comprise a person's knowledge, skills, and attitudes, but, as we propose in this study, the global mind is related to several important aspects of the person, covering beliefs, identities, and interpersonal relations (Kim 2001;Kim 2008;Korthagen 2004;Wenger 1998).
The purpose of the exploratory study is to investigate how exchange students can develop and enhance their global minds in an international exchange project.The research question is as follows: How do the exchange students construct and re-construct their cultural and intercultural skills, knowledge, beliefs, and identities?
The research study is situated in a bilateral cooperation project based on joint European Union (EU)-Republic of Korea funding and a mutual agreement on the selection of higher education partnerships. The project aims to promote competencies of global education for student teachers in Korea and three EU countries [1]. The major activities of this three-year project are the exchange of undergraduate students with a focus on pedagogy from each consortium, a mobility program for faculty members, and curriculum development for global education. In the student exchange programme, the students spend an academic term at the hosting institutions taking courses in global education, culture-generic and culture-specific courses, as well as a foreign language course, all of these making up the cultural learning component. In addition, they are involved in teaching practice at local schools during their stay of 4 to 5 months. The participating universities provide the students with the academic programme and student services, including housing, mentoring, and opportunities for social integration.
Context of the study
During the first student exchange in the KE-LeGE [2] project, 10 European and 15 Korean students in total were sent to the partner universities in the fall semester of 2011, lasting 16 weeks on average. Table 1 gives basic information about the exchange students. The students enrolled in regular academic programs and took 4-5 undergraduate courses taught in English. They also engaged in teaching practice at K-12 schools and participated in cultural and language programs according to their own preferences. Each university offered the students services including housing arrangements and mentoring. One or two faculty members at each university, as key persons in the project, were in charge of planning, implementing, and evaluating the process and outcomes of the project.
Data collection and analysis
Research data were collected from three sources: interviews, an online survey, and messages posted on a Facebook group. Semi-structured interviews were employed as the major data collection method, since they enabled the researchers to explore each participant's experiences and to understand the scope and levels of their intercultural learning. Seven interviewers in all interviewed the exchange students twice, in the first week and in the last week of their exchange period. The interview questions were developed based on the holistic model of student teachers' change levels suggested by Korthagen (2004), modified for the present study, and used as guidelines for the thematic interviews conducted at the beginning and end of the study abroad period. The interviews were audio-recorded and subsequently transcribed. The online survey, a secondary source of data collection, was conducted to study the exchange students' views about the project. The questions in the survey covered the students' satisfaction and opinions about the academic and extracurricular programs as well as the educational services available during the exchange period. Three experts in the education field reviewed the questionnaire and suggested minor revisions of the questions. The survey was conducted in the last two weeks of the exchange period and was completed within a month with a 100% response rate.
A range of online messages posted in a closed Facebook group were collected and analyzed for grasping the temporal aspect of the exchange students' personal sense-making and behaviors.The Facebook group was set up by the researchers and used by the students and staff for the whole period of the exchange project.The participants of the Facebook group were encouraged to ask and answer questions, and to share thoughts and feelings.
The data collected by the various methods in this study were analyzed using the method of meaning condensation suggested by Kvale and Brinkmann (2008).For this research, we chose 24 interviews of 12 students whose interviews were more extended than the others, ranging from 30 to 60 minutes.These students were taking their exchange period in four host institutions.First of all, the interviews were categorized independently by the researchers.Each researcher looked at and read the interview transcriptions, determined natural meaning units in the interviews, and stated their central themes (see Table 2).
Table 2: Examples of the natural meaning units and their central themes

Natural meaning unit: "Yes. I attend 'Teaching and Learning' and in the class I have to do team subject, and I made a team with two Austrian students, so I could be a friend with them on Facebook and we have an appointment with our subject and so I can meet them, and actually I had an appointment in front of the railway station last Sunday, but I had to go to the Youth Olympic explanation, so I cancelled the appointment, but it is possible when more actively and (…)" — Central theme: Communication using ICT

The results of coding were compared and condensed into more essential meanings in terms of the purpose of the study. As shown in Table 3, the themes identified by the researchers were reviewed and compared, and condensed into essential, non-redundant themes, aligning with the notion by Kvale and Brinkmann (2008).
Subsequently, the survey results and online messages were reviewed and analysed by the three researchers in order to verify the findings from the interview data.
Five themes of meaning construction became apparent through the analysis, ranging from intercultural perspectives to issues of self-making.In the following section we will present and discuss these themes.
Intercultural perspectives for gaining insights on and outside campus
In an ideal case, the exchange student focuses on reflectively developing his or her attitudes, behaviours, and skills. Analyzing their own intercultural insights and experience, the exchange students should critically review and question the practices, conventions, and values that they 'naturally' acquired in the home context. They should be able to reflect on the relationships among and with 'local' groups and on the experience of those relationships. Furthermore, they should be able to move from one of the many in-groups to which one belongs to one of the many out-groups that contrast with them (Byram 1997; Kim 2001; Wenger 1998). Related to the above attitudes and skills, the exchange students of this study seemed to focus on several issues of cultural identity and the challenge of pluralism (for these constructs, see e.g. Kim 2008; also Dervin 2010 for a post-structuralist critique). This was shown in several interviews of the study.
The crucial insights gained by exchange students were related not only to higher education and teacher education, which made up the exchange students' mission and roles in their context, but also to the host country citizens' attitudes to foreigners. Based on the data, it was concluded that a majority of the students were able to successfully reflect on and explore the culture of the host society, and thus acquire and elaborate genuinely novel meaning structures, most often in an empathetic way. Furthermore, the exchange students' interpretations concerned practice and/or policies related to the host's educational and other institutional systems. The insights also included communicating and interacting with non-locals/foreigners in public and in private, and the status of religions and ideologies. From a wider perspective, the exchange students reflected on the impact of various denominations in the host country.
To put it simply, through making sense of what they had learned in their courses at campus and informally outside of campus, the exchange students attempted to walk in the shoes of the locals -each in their own ways, and with varying degree of success.
This conclusion is in line with Pence & Macgillivray's (2008) findings.They studied the impact of an international practicum period on pre-service teachers and established "professional and personal changes such as increased confidence, a better appreciation and respect for differences of others and other cultures, and an awareness of the importance that feedback and reflection play in professional and personal growth." On the topic of boundary-crossing (Wenger 1998), perspectivising, and developing insights about other cultures (Byram 1997), we should definitely refer to the many contacts, friendships, and social networks that the exchange students typically created and maintained with international non-host students.These connections and bonds might more critically be seen as international student bubbles, for the purpose of just hanging around while avoiding serious academic work.However they were more important and more productive for intercultural learning than only trying to make sense of the locals and their ways.This was also shown by the diary study of Irish students' cultural adjustment in Japan (Pearson-Evans 2006).The researcher concluded that in some cases, establishing relationships with the locals turned out to be problematic and difficult whereas international non-host student networks were productive in facilitating adjustment.
Making sense of educational systems and practice
The interview findings showed that the university campus is the most important place for students' formal and informal learning, for the effort of sense-making, and for the goal of fostering intercultural understanding and sensitivity.Many Facebook group updates confirmed this; for instance, a student wrote: "Yesterday we visited a school and attended three classes.I felt a little bit surprised.It is quite different from our country.I am happy that I am here."(Student, Kokkola, 17 September 2011).
Students learned how to differentiate the pros and cons of different educational approaches and to reflect on educational systems.The online survey completed by all of the exchange students confirmed this finding, with observations and reflections on differences found between the host and the home education systems.In addition, the exchange students reflected on the transferability of methods and techniques from the host to the home country.
Interviewer 3: What did you experience in learning in the different courses that you are taking?Do you think that these courses are helpful for your future?
Student C: The courses gave me the opportunity to think about how I teach and how the class happens and how they talk to each other and it also gives me a chance to concentrate on education, and so I can obtain between the [host] and [home country] school system and so I see some advantages of each school system so I can adapt in the future and it is really helpful I think, because for the teacher training it is really important, I can learn a lot of things, even after I become teacher it is really important and helpful. (First interview, Innsbruck)

Similar research findings were established in a student journal study by Brindley, Quinn & Morton (2009), where a group of pre-service teachers taking a study abroad internship program made sense of their experiences in the host schools based on their reflective observations of customs and practices. Brindley et al. (2009) concluded that the international experience made the pre-service teachers stop and ponder fundamental aspects of school, teaching, and learning. These included teachers' duties and teacher-to-student and student-to-student interactions. In a foreign internship setting, the students have "a set of experiences that cause them to fundamentally confirm or question some of their foundational presumptions about teaching."

When answering the online survey question "What have you learned during your exchange program; about the society, culture, customs and traditions of the host country?", one of the exchange students in Innsbruck wrote: To prepare our practicum lectures, we spent lots of time to think about teaching unit and methods and I was totally interested in this assignment because I could share my own teaching methods with other friends and also get some ideas of it. After giving a lesson I got some reflection of our teaching unit, and I realized what's good and bad, so how can I deal with for the next time.
Exposure to such differences made several students clarify their career goals and further commit to seeking a specific teaching job: "working in international high school as a mathematics teacher is my goal and my dream, so I want to achieve my goal." Of course, we need to understand that not all exchange students' perceptions of differences in educational systems and practices will automatically lead to questioning their own priorities and values, and thus result in transformative learning. However, experiences in real settings prompt individuals to strengthen existing values and to create new beliefs and meanings (Mezirow 2000; Tarrant 2010).
Self-identity and social identity
In this study, we defined identity as the range of meanings that the exchange students assigned to themselves as individuals (i.e., self-identity) and as members of a group or groups (i.e., social identity). Further, we assumed that the two identities would be neither fixed nor 'hidden' but rather changeable and articulated, largely because of the recurring identity negotiation and re-construction boosted by the exchange period. Again, we saw this identity negotiation and re-construction as closely related to exposure and to significant moments of intercultural communication and learning (Kohonen 2005; Ricoeur 1990/1992).
As to their identities, the main findings of the interviews indicated that the exchange students worked closer towards (but in some cases also walked away from) defining their emerging teacher professional identity, often with some aspects of the global mind.
Interviewer 3: What was your opinion about (the course [?]) "Global Education"?
Despite this, we should acknowledge that identities are not negotiated and constructed harmoniously, nor in a linear way without difficulty. Instead, the process inevitably brings in fluctuations of stress, adaptation, and growth (Kim 2001). Uncertainty and stress are typically related to encounters, events, and turning points in the social arena where one's self-concept is on trial, with more or less intense sensations of inadequacy, uncertainty, strangeness, or loss of control. In the adaptation stage, the experience may become part of the totality of one's life history.
Interviewer 1: How is it with you as a private person, why are you learning and going to so many places and having so high expectations?

Another student told us she had previously learned while staying abroad that her country was regarded as inferior and much less developed than the host country. For her, this was amplified by the categorization and imagery related to her country as presented in the international media: "When I watch American drama or western movie, the way they describe Asian people is not good. Asian people have lower quality culture than theirs. So I thought all western people don't like Asian culture." (Student B, second interview, Kokkola.) This had made her quite depressed, as this negative experience of social identity was further combined with her own language problems and apprehension about speaking in both public and private arenas. This finding is supported by a case study by Marx (2011), which portrayed a student teacher taking a semester-long intercultural program abroad. The student in that study seemed to encounter similar experiences of finding herself a cultural outsider or becoming the "other" while in the host context. However, in Marx's study these moments turned out to be productive for critical reflective thinking about cultures, particularly as she was supported by the host teacher in the process.
Proficiency in the host language and in English seems to be part of the exchange student's identity, since it either blocked or enabled boundary crossing and socializing with people of the host country. For linguistic self-confidence (Noels, Pon & Clement 1996), it also seemed to be important how others (locals, peer international students, etc.) perceived and gave meanings (implicitly or explicitly) to the exchange students' communication capabilities.
The participants of this study were able to meet a lot of people from various parts of the world, and thus could negotiate their social and personal identities, in most cases arriving at a much more positive outcome.In the interview, the participants typically said they thought of themselves as global persons.
Stress related to intercultural exposure and situations
According to several studies (Bochner 1982;Kim 2001;Kim 2008;Kohonen 2005;Ward 2003), both positive feelings (e.g.fun, joy, sensations of beauty and exotic experiences) and negative feelings (e.g.despair, anger, frustration, and homesickness) are part and parcel of intercultural adaptation and learning.However, stress is seen as crucial to the intercultural adaptation process, as it "allows for self-(re)organization and self-renewal (…) the stress-adaptation-growth process continues as long as there are new environmental changes" (Kim 2008:364).On a more drastic note, Kohonen (2005), who studied the experience of expats, referred to the proverb that "moving is second to death from the stress point of view."Non-formal intercultural learning outside of the campus requires students to make sense of surprises and paradoxes, and to adapt to stressors of various kinds (cf.Kim 2001).
While the exchange students reported meaningful and successful experiences during their exchange period in general, their negative emotions and occasional stressful experiences were related to reactions and verbalised perceptions by locals on campus and in the wider host context, as well as by peer international students. Hurtful categorising, expressed below as 'racism', and experiences of social isolation were not uncommon in the students' accounts (cf. Ayano 2006).
Interviewer 3: When you think back, before you were coming to this country as an exchange student, what beliefs about local people did you have and what has changed since your stay here?
Student A: The only thing I had a problem with here (…) was racism, because I experienced it [elsewhere], and of course there is also racism [in that country], but they are used to having lots of people from other countries, so we cannot see the racism, and even if they are, they don't express it like that. And they are also really friendly. And also the [host country people] are really friendly, but not in a deep way. (First interview, Innsbruck)

In addition, the negative emotions were related to situations where the exchange students had to adapt to new, unfamiliar academic practices, which was a challenge for them. In the case below, the student was faced with an uncomfortable situation during a group activity.
Interviewer KR: What differences of working style could you identify in the courses that you took?
Student A: When we had a project (…) we were three.And X and me were trying to do it part by part, like she does this part and I do that part, but he wanted us to do the whole work together.It was so stressful for us, we always do it like that, I do chapter one and you do chapter two, and then we collaborate with these, but he wanted us to do step by step all together, because we don't have the same timetables, it's hard to find a time to work together.And he had really strong opinions, and we said ok (…).But we preferred it the other way.
That was the problem with group work. (First interview, Innsbruck)

Also, the exchange students experienced negative emotions when a certain situation was contrary to their expectations. As shown in the excerpt below, the students expected to meet friends through a buddy program, as well as to receive support from the university. However, they were disappointed by the services provided.
Interviewer 1: Have your made friends with the locals at the university?
Student D: In [my country] we usually have the buddy system (…) the exchange student came to our university and then we meet the friend (…) there are many Student Union tutors [at the host university], but they don't care. (First interview, Kokkola)

Interestingly, negative and stressful emotions allowed individuals to reflect on the differences and similarities among cultures, to endure and be tolerant of stressful periods, and to develop a more positive understanding of the context. It indicates how the students' emotional dissonance triggered cognitive recognition in their contexts (Golombek & Johnson 2004), and contributed to building intercultural and global perspectives.
Interactions and communication for interpersonal understanding
Interacting and communicating are the key components for global experiences (Byram 1997;Kim 2008;Linell 2003;Mansilla & Jackson 2011).Truly dialogical interactions and communication allow individuals to share their feelings and thoughts with others, and to understand different cultures, perspectives and relationships.Dialogue is essential for making sense of other people's perspectives and experience; in many cases, people going abroad face a challenge to communicate with local people even though they use English as a common language.Using nonverbal expressions and communication technologies is an additional way for a deeper understanding in interpersonal communication (Byram 1997;Witte 2010).
Interviewer 2: How did you interact with others, especially in group work?How many roles did you do?
Student A: Yes, we used numbers to communicate. I said give me paper, give me pen, I was drawing, they were drawing, I was drawing. This? You mean this? No… this, ah… yes, this this, so it came out very very good idea but drawing everything, writing is the out, asking Google translator. (First interview, Korea)

As shown in the above case, the exchange students integrated a range of modes (e.g., gestures, facial expressions, numbers, and drawings) to communicate with local students when communicating in English did not work well. They also used Google Translate, which turns texts written in local languages into English and vice versa.
We also observed that the students actively used the Facebook group to visually narrate their process of learning.Experiences could be shared more richly when audio and video were used.There was, however, little reflection evident in the articulation on the Facebook site.Even narration directed to the small virtual community is primarily of a statement nature: what happened, or what was experienced and seen.
What these events have meant for the individual and for the growth of his or her global mind can be interpreted at some level, but such interpretations otherwise remain rather general.
In the host context, the exchange students had many chances to understand the local cultures as well as reflect on their own cultures, which is related to how globally competent individuals understand perspectives and communicate with diverse people (Mansilla & Jackson 2011).
The interpersonal and intellectual growth is more evident in learning and working in groups.In many cases, group activities which involve solving real problems and developing authentic products can enhance individuals' communication skills (Witte 2010) and knowledge construction (Liu & Dall'alba 2012;Palincsar 1998).The findings of the interviews showed that the exchange students were respectful to group members who have a different culture, and were willing to share opinions and to learn new things from different points of view.The exchange students had opportunities to work with local students during their regular courses.They needed to discuss relevant topics, to develop common understanding, and to make artifacts and products together as learning outcomes.During the processes, they tried to be open minded by listening to others and collaborating together to achieve their goals, which allowed them to improve their interpersonal understanding and skills.It is also evident in a response to one of the questions in the online survey, which asked "How much did the programs/events listed above help you to get a better understanding of the culture and society of the target country?Please explain the major reason".The respondent wrote "When we did a group works, I realised how different we are".
Managing various ways of communication and overcoming the challenges of working with others who have different cultural backgrounds give individuals more chances to enhance their global mind and competences (Mansilla & Jackson, 2011).In particular, group work that must achieve a common goal with culturally diverse members may be a critical context to build and enhance interpersonal and cultural understanding.
Conclusions
This study investigated how exchange students from Korea and the EU constructed and re-constructed their global minds in a study abroad project.Interviews were employed to gather evidence of participants' experiences and analyzed using the meaning condensation method.Subsequently, survey results and Facebook posts were reviewed.As a result, five themes emerged which provide insight and understanding of the students' experiences.
Personal change and growth cannot be observed easily and explicitly.However, an unfamiliar context in a different culture can significantly affect the personal growth of exchange students, as well as enhance their intercultural and global understanding.The five themes in the study showed how their experiences allow students to construct and develop a global mind.
Firstly, in the host context which was most often totally new to them, the exchange students were able to reflect on the host society and develop novel meaning structures.They used the potentials of the context, including social networks, to develop their intercultural perspectives, each in their own ways.This enabled them to identify with the locals.
The second theme of the study, which overlaps with the first one, relates to how the exchange students made sense of the educational systems and practices which they encountered in their host school classes and exchange universities.They reflected on their own career goals within the teaching profession, and how they could use some of the instructional approaches in their home context.
The third theme opens up vistas to understanding the complexity of exchange students' identity development.These identities were typically negotiated when meeting friends and other people both on and off campus as well as in the wider social arena.Reflections about themselves as (more) global persons became apparent.
The fourth theme is about unexpected experiences and unfamiliar situations.When the exchange students were confronted with a situation contrary to their beliefs and expectations, all of their senses and emotions were actively engaged in the context, and stressful and negative feelings could dominate their emotions.However, these experiences provide students with the momentum to reflect deeply and to develop their perspective of intercultural understanding.The experiences might provide a push towards extended learning processes and transformative learning outcomes, especially as they made the students think of their identities as individuals and group members.
Last but not least, we established the fifth theme, which is communicating and interacting with people using diverse approaches to construct and reconstruct global minds.Dialogue is an essential way to share ideas and feelings, and to make sense of other people's perspectives and experiences.Also, it is evident that nonverbal expressions and technological support effectively facilitated interpersonal and cultural understanding in this study.In particular, interaction with team members to achieve common learning goals allowed the participants to be more aware of cultural differences, to be more open and tolerant of these differences, and to act in an integrative way, which is critical to the globally competent individuals.
As shown in this research study, constructing a global mind is a complex process; it happens consciously and unconsciously, it is affected by the social context, and it affects in various ways the exchange student's awareness and sense of oneself as an individual.From this perspective, global education focuses on the student's personal growth in relation to their intercultural skills, attitudes and identities.Global education is about more than obtaining knowledge and skills; it takes a holistic approach where intellectual growth combines with emotional and interpersonal understanding, and provides the continuity and interaction of educative experience (Dewey 1938).The accumulation of practical and theoretical professional knowledge with increasing diversity, sophistication and depth is certainly one of the great affordances of study abroad in order to enhance global minds.
As a team of practitioner-researchers representing a fairly wide range of cultural and educational contexts as designated by the bilateral KE-LeGE project, the researchers undertook a holistic approach to explore the exchange students' study abroad experiences.The fact that the researchers involved exchange students as research participants across two continents and interviewed each of them twice in situ can be seen as a strength of this study.The researchers also think that it was significant that they tried to be open-minded and did not look at the students' accounts of study abroad experiences based on where they came from.Thus the researchers did not compose a grammar of culture (Holliday 2011) to understand students' experiences.As Dervin (2010) puts it critically, categorizing study participants into groups defined by nationality, ethnicity, or religion may lead to promoting cultural differentialism and thus not gaining a coconstructive horizon in the research effort.
As to the weaknesses of this research study, the researchers acknowledge that an exploratory study like this is limited to formulating hypotheses and offering thematic insights as if from a bird's eye view, instead of reaching more in-depth research claims and explanations.Also, the fact that all the interviews were conducted in a foreign language (English) can be seen as a limitation when seeking rich description and trying to tap the deeper meanings of the individual research participants.
The next steps of this research will include additional interviews with the next student cohorts and triangulation with the survey data.Specific focus will be put on trying to better trace the temporal and networking aspects of the exchange experience, for instance by utilizing narrative inquiry as a methodological approach.Also, it would be very worthwhile to explore how the students, upon returning to the home context, enhanced their career goals and developed their professional and intercultural identities as part of life-wide and life-long learning.
Student D: I don't experience many things, yeah. (…) But when I went to [another country] I met the people and they were quite different. I feel weird, and it was my turning point (…) After I experienced the local cultures, I felt there are many kinds of people. I need to experience and I need to challenge myself, and I have to change myself. I expect that when I go back to [my country] and my family and my friends say to me, you are really changed and you are now in confidence. (First interview, Kokkola)
Table 1: Basic information about the exchange students

Major study area: primary education and general education, 10 students; subject education, 15 students; total, 25 students.
Table 3: Examples of theme comparison and condensation
|
2023-11-27T16:13:59.471Z
|
2015-07-10T00:00:00.000
|
{
"year": 2015,
"sha1": "7587dd98d77f0a9fb855ba0725379a25894b46d2",
"oa_license": "CCBY",
"oa_url": "https://immi.se/intercultural/article/download/Johnson-2015-2/745",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b4998210924e0718b564662136410059cb35b746",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": []
}
|
249877445
|
pes2o/s2orc
|
v3-fos-license
|
Deep Q-Learning with Bit-Swapping-Based Linear Feedback Shift Register fostered Built-In Self-Test and Built-In Self-Repair for SRAM
Including redundancy is popular and widely used in a fault-tolerant method for memories. Effective fault-tolerant methods are a demand of today’s large-size memories. Recently, system-on-chips (SOCs) have been developed in nanotechnology, with most of the chip area occupied by memories. Generally, memories in SOCs contain various sizes with poor accessibility. Thus, it is not easy to repair these memories with the conventional external equipment test method. For this reason, memory designers commonly use the redundancy method for replacing rows–columns with spare ones mainly to improve the yield of the memories. In this manuscript, the Deep Q-learning (DQL) with Bit-Swapping-based linear feedback shift register (BSLFSR) for Fault Detection (DQL-BSLFSR-FD) is proposed for Static Random Access Memory (SRAM). The proposed Deep Q-learning-based memory built-in self-test (MBIST) is used to check the memory array unit for faults. The faults are inserted into the memory using the Deep Q-learning fault injection process. The test patterns and faults injection are controlled during testing using different test cases. Subsequently, fault memory is repaired after inserting faults in the memory cell using the Bit-Swapping-based linear feedback shift register (BSLFSR) based Built-In Self-Repair (BISR) model. The BSLFSR model performs redundancy analysis that detects faulty cells, utilizing spare rows and columns instead of defective cells. The design and implementation of the proposed BIST and Built-In Self-Repair methods are developed on FPGA, and Verilog’s simulation is conducted. Therefore, the proposed DQL-BSLFSR-FD model simulation has attained 23.5%, 29.5% lower maximum operating frequency (minimum clock period), and 34.9%, 26.7% lower total power consumption than the existing approaches.
Introduction
The primary purpose of memories in systems-on-chips is "to store huge amounts of data" [1]. Memories occupy most of the space in SOC designs developed in CMOS technology with ever smaller feature sizes [2], which makes them decisive for yield [3,4]. The memory repair principle includes either row repair, column repair, or both [5]. Traditionally, memory repair is executed in two stages. In the first stage, the failures identified by the memory built-in self-test controller are examined through the test of the repairable memory. The second stage determines the repair signature used to repair the memory [6,7]. Each repairable memory has repair registers, which hold the repair signature. Additionally, the BIRA method calculates the repair signature according to the memory failure data and executes the memory redundancy model; it determines whether the memory can be repaired on the production testing platform. Therefore, the repair signature is stored in the Built-In Redundancy Analysis registers for processing by the MBIST controllers [8-11].
The repair signature is sent to the scan chain of the repair registry for the chip programming placed at the chip design level [12]. Three existing approaches are compared with the proposed method: first, memory BIST with optimized BISR for fault detection on FPGA (MBIST-OBISR-FD) [28]; second, the SRAM-based Physically Unclonable Function (PUF) with Hot Carrier Injection (HCI) for fault detection on FPGA (PUF-HCI-FD) [29]; and third, Essential Spare Pivoting (ESP)-based Local Repair Most (LRM) on FPGA (ESP-LRM-FD) [30].
The remaining manuscript is structured as follows: Section 2 delineates the related works, Section 3 elaborates the proposed methodology, Section 4 demonstrates the results and discussion, and Section 5 concludes the manuscript.
Related Work
Numerous MBIST strategies were used in the previous literature. Some of the most recent research works are reviewed here.
In 2020, Kalpana et al. [26] presented an Extreme Learning Machine-based Autoencoder for parametric fault diagnosis (ELM-AE-FD) in analog circuits. Single and multiple parametric fault analyses were considered in simulation before the test model. The features were collected to create a fault dictionary containing different faulty and fault-free configuration values. Subsequently, the transfer function model was simulated using Monte-Carlo analysis, and the model's performance was compared with self-adaptive evolutionary ELM approaches. The ELM-AE-FD method achieved higher diagnosis accuracy, but its power consumption was high.
In 2021, Gopalan and Pothiraj [28] presented memory BIST with optimized BISR for fault detection in SOCs design. The BIST model checks the memory array circuit, and the optimized BISR repairs the faulty memory cells. Additionally, faults were inserted into the memory using saboteurs fault injection and mutants fault injection methods by optimized test pattern logic. After that, the fault memory was repaired by the counting threshold algorithm. Therefore, the performance of the MBIST-OBISR-FD model improved in this method, but the device utilization was very high.
In 2020, Liu [29] presented SRAM-based Physically Unclonable Function (PUF) with Hot Carrier Injection (HCI) for fault detection with high stability and low power. The PUF model utilizes hybrid operations based on CMOS-SRAM mode and Enhancement-Enhancement (EE) Static Random Access Memory mode. The PUF model was fabricated in standard CMOS at 130 nm, which achieved a high bit error rate (BER). Additionally, the accelerated aging test performs a long-term reliability operation, which enhances the model's performance. However, fault diagnosis was not reasonable using the PUF-HCI-FD model.
In 2020, Ryabtsev and Volobuev [30] presented a BISR framework to restore RAM operability on SOCs. The model handles various failures through reconfiguration of the backup and main memory. The technical solution reduces the product's weight compared with fully redundant devices, because spare memory is not allocated for every block but only for the vital components most susceptible to failures. The functional state of the digital SOC's memory is thereby restored automatically. The BISR and BIST RAM were used for industrial and special purposes and achieved high performance in failure detection, but the area of the model was comparatively high.
In 2020, Zhou et al. [31] presented the design of the Spin-Transfer Torque-based magneto-resistive random access memory (STT-MRAM) model for the BIST process. In this approach, the tunneling magneto-resistance (TMR) was checked, and a real-time built-in self-test was triggered during the sensing operation for lasting damage in the magnetic tunnel junction (MTJ) stack. Further, the presented design was involved in magnetoresistive random access memory array execution for calculating the feasibility of the test scheme. Therefore, the designed model achieved lower power consumption and higher reliability, but the operating time of the model was high.
In 2021, Park et al. [27] presented the BIST model to process post-package inspections (PPI) for attaining fault-free Dynamic Random Access Memory (DRAM). Here, the compact and higher test coverage structures for in-DRAM-BIST were considered for resolving the area issues when applied to commodity Dynamic Random Access Memory. Subsequently, the built-in self-test model safeguards the test coverage for a short period, diminishing the test time and improving the area overhead, but still, the power utilization of the model was high.
In 2019, Pundir [25] presented an improved modified memory Built-In Self-Repair (MM-BISR) for Static Random Access Memory utilizing hybrid redundancy analysis (HRA). The augmented version of ESP and LRM provides a better solution for selecting an optimized set of rows and columns appropriate for the repair process. Additionally, the fault dictionary was updated or fixed concurrently in the redundancy analysis (RA) based on MBIST-provided control signals. Row and column pivots and restoration requests were serviced based on a precedence list arranged from the compared activities. The presented algorithm increased the repair rate by up to 4%, with only a nominal area penalty compared with Essential Spare Pivoting.
Proposed Built-In Self-Test and Built-In Self-Repair Methodology for SRAM
Faults are injected and tested to improve a system's performance before it is marketed; the process of adding defects to the system is known as fault injection. In system-on-chip (SOC) design, memory occupies a large share of the area, and any memory defect affects the overall output of the system-on-chip. Thus, spare rows and columns are included in the memory. This manuscript proposes Deep Q-learning (DQL) and BSLFSR for the built-in self-test and Built-In Self-Repair of SRAM. The proposed DQL-based MBIST is used to check the memory array unit, and faults are inserted into the memory using the DQL fault injection process. Subsequently, the faulty memory is repaired after inserting faults into the memory cells using the BSLFSR-based Built-In Self-Repair model. The proposed block diagram of the built-in self-test and Built-In Self-Repair for embedded memories is shown in Figure 1. The notation used in the built-in self-test and Built-In Self-Repair block is summarized in Table 1.

Table 1. Description of the notation used in the Test and Repair process.
Operations   Description
Wr-data      Write the data to the memory location whose address is given.
addr         Indicates the address of the memory location at which the memory data are accessed.
wr           Write enable signal to write into the memory.
rd           Read enable signal indicating the read operation from memory.
Rd-data      The read data bus contains the data read from the given memory location.
w0           Write the logic value '0' to the memory location.
w1           Write the logic value '1' to the memory location.
r0           Read the logic value '0' from the memory cell.
r1           Read the logic value '1' from the memory cell.
The proposed architecture model involves various blocks such as a start register, test pattern generator, test controller, comparator, memory under Test (MUT), output response recorder, and BISR. The test pattern generator and test controller are performed based on the proposed DQL method. Moreover, faults are injected using the proposed DQL model, and the fault memory is repaired using the proposed BSLFSR.
Built-In Self-Test (BIST) Using Deep Q-Learning Algorithm
Typically, BIST is a practical, low-cost circuit integrated with SRAM memories to test for faults occurring during the memory's read or write operations. It eliminates the need for expensive and time-consuming external hardware known as Automated Test Equipment (ATE). A built-in self-test circuit involves several blocks: a buffer circuit that acts as a level shifter, an amplifier circuit for amplifying the fault signal, an operational amplifier that boosts weak signals and operates as a second-stage amplifier, and a comparator circuit for evaluating the outputs of fault-free and faulty SRAM memory. Furthermore, the memory test controller works based on the Deep Q-learning mechanism to improve fault coverage and to detect coupling faults. The proposed Deep Q-learning mechanism generates the test patterns and acts as a test controller for injecting faults into the memory. After injection of defects into the memory cells, the faulty memory is repaired using the BSLFSR-based BISR model.
Test pattern generator
It works on the DQL model, which creates the patterns needed for injecting faults and propagating their effect to the outputs. Let the initial state of the DQL be K_0, and let the agent receive the state observation K_s in step s and take an action according to the policy π(K_s, A_s). The Q-function (state-action value function) for state K and action A under the policy π is described in Equation (1):

Q^π(K, A) = E[ Σ_s φ^s R(K_s, A_s) | K_0 = K, A_0 = A ],   (1)

where R denotes the reward function, which assigns a scalar reward to a given state and action, and φ represents the discount factor used to prefer the test patterns. Additionally, the model's loss function during training is reduced using Equation (2):

L(ϕ) = E[ (y(ϕ⁻) − Q(K, A; ϕ))² ],   (2)

where L(ϕ) is the loss function of the DQN parameters ϕ and y(ϕ⁻) represents the training target for the generated test patterns given in Equation (3):

y(ϕ⁻) = R + φ max_A' Q(K', A'; ϕ⁻).   (3)

To track the test patterns, the target parameters ϕ⁻ are updated in the DQN as in Equation (4):

ϕ⁻ ← g ϕ + (1 − g) ϕ⁻,   (4)

where g denotes the parameter for tracking the test patterns, with g << 1.
Thus, to enhance the training stability, the weights of the target network are periodically updated through repetitions. The generated test patterns are stored and applied during BIST execution. In this scheme, the patterns produced by the DQL-based test pattern generator act as the test patterns. Moreover, an essential emphasis of the register design is low area while creating as many different patterns as possible.

Test Controller
In this work, the memory test controller involves registers for starting the test controller and recording the failure information. The test controller starts once the start signal is programmed in the start register. The start register contains reset, resume, clock, start, stop, halt-on-error, and memory ID fields. The test pattern generator works based on the DQL method and generates patterns that are applied to the MUT block during the read operation. The read data are compared with the generated patterns during the testing operation. Finally, the memory ID, faulty cells, defective cell counts, and failed addresses of the memory under test are recorded in the output response recorder. The failure data are passed to the built-in repair block so that spare memory can be used in place of the faulty cells.
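To make the pattern-selection idea above concrete, the following is a minimal sketch in Python. It uses a tabular Q-learning agent as a simplified stand-in for the DQN described in the text; the toy reward function, pattern set, and all parameter values are illustrative assumptions rather than the authors' actual design.

```python
import random

# Tabular Q-learning stand-in for the DQL-based test pattern generator.
# States: index of the current test-sequence step; actions: candidate 8-bit patterns.
# The reward is a toy coverage score; everything here is illustrative only.

PATTERNS = list(range(256))            # candidate 8-bit test patterns
N_STATES = 4                           # toy number of test-sequence steps
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.2  # discount, learning rate, exploration

Q = [[0.0] * len(PATTERNS) for _ in range(N_STATES)]

def coverage_reward(pattern, covered):
    """Toy reward: a pattern and its complement 'expose' faults not yet covered."""
    exposed = {pattern, pattern ^ 0xFF} - covered
    covered |= exposed                 # mutate the shared coverage set in place
    return len(exposed)

def select_patterns(episodes=200):
    best = []
    for _ in range(episodes):
        covered = set()
        chosen = []
        for s in range(N_STATES):
            if random.random() < EPSILON:                      # explore
                a = random.randrange(len(PATTERNS))
            else:                                              # exploit
                a = max(range(len(PATTERNS)), key=lambda i: Q[s][i])
            r = coverage_reward(PATTERNS[a], covered)
            nxt = max(Q[s + 1]) if s + 1 < N_STATES else 0.0
            Q[s][a] += ALPHA * (r + GAMMA * nxt - Q[s][a])     # Q-learning update
            chosen.append(PATTERNS[a])
        best = chosen
    return best

if __name__ == "__main__":
    print("selected test patterns:", [format(p, "08b") for p in select_patterns()])
```

In the hardware design the selected patterns would be stored in registers and replayed by the test controller; the sketch only illustrates how reward-driven selection can bias the generator toward patterns that add coverage.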
Memory under Test (MUT)
Additionally, the generated patterns are applied to the MUT during the read and write operations. Various states are considered: the idle state, write 0 to the memory location (w0), read 0 write 1 (r0w1), read 1 write 0 read 0 (r1w0r0), write 0 read 0 write 1 (w0r0w1), read 1 write 0 (r1w0), read 0 (r0), and the fail-record status. The test controller waits in the idle state for the start signal; once it is received, the controller jumps to the write 0 state (w0) and starts the memory test operation. The MUT is then filled with zero patterns before the controller jumps to the next state, r0w1. Read and write operations are performed by the test controller, which executes every operation successively in each state.
Comparator
In this model, the comparator compares the output and the pattern from the MUT block. Additionally, the read data are compared with preferred patterns while performing the testing under-read operation. If the comparison outcome fails, the test controller jumps to the failure-registration level, storing the comparison outcome and returning to another address location.
Output Response Recorder
It is responsible for verifying the output responses, meaning that the memory's response to the applied test vectors is examined and a decision is made as to whether the system is faulty or not. The faulty cells, fail addresses, memory ID, and defective cell count of the MUT are stored in the output response recorder. The predefined patterns are applied to the memory under test, and the obtained response is noted in the output response recorder.
Every operation is completed, and the outcome is shown in the fail-record state. Additionally, the test controller records the failing memory ID, fault location, faulty cell, and faulty cell count. Subsequently, the simulation results of the proposed model are used to compute various performance metrics.
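As a rough illustration of the test flow just described, the following Python sketch walks a march-like element sequence (w0, r0w1, r1w0r0) over a simulated one-bit-per-address memory, compares each read against the expected value, and records failing addresses. The memory size and the injected stuck-at-0 cell are assumptions made only for this sketch, not values from the paper.

```python
# Behavioural sketch of the BIST flow: controller walks march-like elements,
# compares reads with expected values, and logs failures (output response recorder).

SIZE = 16                      # toy memory with 16 one-bit cells
STUCK_AT_0 = {5}               # illustrative fault: cell 5 is stuck at logic '0'

memory = [0] * SIZE

def write(addr, value):
    memory[addr] = 0 if addr in STUCK_AT_0 else value

def read(addr):
    return 0 if addr in STUCK_AT_0 else memory[addr]

def run_march_test():
    fail_log = []                        # acts as the output response recorder
    for addr in range(SIZE):             # element w0: fill memory with zeros
        write(addr, 0)
    for addr in range(SIZE):             # element r0w1: read 0, then write 1
        if read(addr) != 0:
            fail_log.append((addr, "r0"))
        write(addr, 1)
    for addr in range(SIZE):             # element r1w0r0: read 1, write 0, read 0
        if read(addr) != 1:
            fail_log.append((addr, "r1"))
        write(addr, 0)
        if read(addr) != 0:
            fail_log.append((addr, "r0"))
    return fail_log

if __name__ == "__main__":
    print("faulty cells:", run_march_test())   # the stuck-at-0 cell fails the r1 check
```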
Fault Modeling of SRAM
The description of the possible ways in which an SRAM can fail is referred to as the fault modeling of SRAM. The proposed DQL model aims to inject the following categories of faults into the SRAM memory.

Stuck Fault (SF)
SF is a single-cell fault in which the logic value of the SRAM cell is stuck at 0 or 1.

Coupling Faults (CF)
CF is a category of fault that occurs in an SRAM cell due to its interaction with other cells. It is a double-cell fault with two transition states: a rising transition (0 to 1) and a falling transition (1 to 0).

Read Destructive Fault (RDF)
RDF is a single-cell fault in which the value stored in the SRAM cell is inverted by a read operation and the resulting incorrect value is returned. If the cell holds zero, a read of zero flips the cell to one and returns one, specified as 0r0/1/1; if the cell holds one, a read of one flips the cell to zero and returns zero, specified as 1r1/0/0.

Write Destructive Faults (WDF)
WDF is a single-cell fault in which a non-transition write operation causes the cell to flip. Two cases exist: if the cell holds zero and zero is written, the cell becomes one, specified as 0w0/1/; if the cell holds one and one is written, the cell becomes zero, specified as 1w1/0/.

Transition Coupling Faults (TCF)
TCF is a double-cell fault in which the expected transition does not occur when a transition write operation is applied to cells of the victim word. The fault is 0W1/0/ for the up transition and 1W0/1 for the down transition.

Static Coupling Faults (SCF)
SCF is a double-cell fault in which a cell of the victim word is forced to 0 or 1 by the state of the aggressor word.

Disturb Cell Coupling Faults (DCCF)
DCCF is a double-cell fault that occurs when a write or read operation performed on the aggressor word disturbs a cell of the victim word.

Incorrect Read Faults (IRF)
IRF is a double-cell fault in which a read operation on the SRAM cell returns an incorrect value while the state of the memory cell remains stable. If the cell holds zero, the read returns one although the cell remains zero, specified as 0r0/0/1; if the cell holds one, the read returns zero although the cell remains one, specified as 1r1/1/0. The read operation thus yields the aggressor value.

Deceptive Read Destructive Faults (DRDF)
DRDF is a single-cell fault in which the value in the cell is inverted by a read operation while the correct value is still returned. If the cell holds zero, a read of zero returns zero but the cell becomes one, specified as 0r0/1/0; if the cell holds one, a read of one returns one but the cell becomes zero, specified as 1r1/0/1. In other words, the cell value is reversed after the read operation.

Idempotent Coupling Faults (ICF)
ICF is a double-cell fault in which a rising (0 to 1) or falling (1 to 0) write transition in the aggressor word forces a cell of the victim word to a fixed final value (0 or 1).
Finally, the fault information (the various fault types) and the memory failure information (faulty cell, fault location, and memory ID) are transmitted to the BISR block.
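The fault classes above can be expressed as small behavioural models. The Python sketch below models a stuck-at fault, a read destructive fault, and a simple coupling fault as wrappers around a plain cell array; which cells are faulty, and the array size, are illustrative assumptions only.

```python
# Illustrative behavioural models of three of the fault classes listed above.

class FaultySRAM:
    def __init__(self, size=16, stuck0=(), rdf=(), couple=()):
        self.cells = [0] * size
        self.stuck0 = set(stuck0)     # Stuck Fault: cell forced to '0'
        self.rdf = set(rdf)           # Read Destructive Fault: read flips the cell
        self.couple = dict(couple)    # Coupling Fault: aggressor write flips victim

    def write(self, addr, value):
        self.cells[addr] = 0 if addr in self.stuck0 else value
        victim = self.couple.get(addr)
        if victim is not None:        # write to the aggressor disturbs the victim cell
            self.cells[victim] ^= 1

    def read(self, addr):
        if addr in self.rdf:          # destructive read: cell inverts, wrong value returned
            self.cells[addr] ^= 1
        return self.cells[addr]

if __name__ == "__main__":
    mem = FaultySRAM(stuck0=[2], rdf=[7], couple={3: 4})
    mem.write(3, 1)                   # aggressor write to cell 3 flips victim cell 4
    print(mem.read(2), mem.read(4), mem.read(7))
```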
3.6. Built-In Self-Repair (BISR) Utilizing Bit-Swapping-Based Linear Feedback Shift Register (BSLFSR)
This manuscript deals with two individual processes, described in the flowchart in Figure 2. The first is the Bit-Swapping-based linear feedback shift register, and the second is the BSLFSR-BISR method that performs the Built-In Self-Repair process. The BSLFSR improves the performance of the conventional LFSR; spare rows and columns are then used in place of faulty cells by the BSLFSR-BISR method. The BS-LFSR framework focuses primarily on minimizing power dissipation by reducing the number of transitions in the conventional LFSR without compromising its efficiency and effectiveness. Owing to its low power consumption, the proposed BSLFSR acts as the repair analyzer. Initially, the memory failure information, such as the faulty cell, fault location, memory ID, and fault type, is transferred to the Built-In Self-Repair block to repair the defective cells of the failed memory. BSLFSR-BISR involves a spare memory (row-column) block and redundancy logic (RL).
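The bit-swapping idea can be illustrated with a short Python sketch: a conventional Fibonacci LFSR generates the pattern sequence, and pairs of adjacent output bits are swapped whenever a designated select cell is '1', which tends to reduce output transitions and hence dynamic power. This is a simplified software model of the bit-swapping concept only; the width, tap positions, and select cell are assumptions, and the paper's gate-level and pre-charge optimizations are not reproduced here.

```python
# Simplified model of a bit-swapping LFSR (BS-LFSR): standard LFSR plus
# conditional swapping of adjacent output bits to reduce transitions.

def lfsr_steps(seed, taps, width, n):
    state = seed
    for _ in range(n):
        fb = 0
        for t in taps:                       # XOR of tap positions -> feedback bit
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
        yield state

def bit_swap(state, width, select_bit=0):
    """Swap each adjacent bit pair whenever the select cell is '1'."""
    if not (state >> select_bit) & 1:
        return state
    swapped = 0
    for i in range(0, width, 2):
        pair = (state >> i) & 0b11
        swapped |= ((pair >> 1) | ((pair & 1) << 1)) << i
    return swapped

def transitions(seq):
    return sum(bin(a ^ b).count("1") for a, b in zip(seq, seq[1:]))

if __name__ == "__main__":
    width, taps, seed = 8, (7, 5, 4, 3), 0xCE   # illustrative 8-bit LFSR
    plain = list(lfsr_steps(seed, taps, width, 64))
    swapped = [bit_swap(s, width) for s in plain]
    print("transitions plain  :", transitions(plain))
    print("transitions swapped:", transitions(swapped))
```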
The redundancy management logic is utilized to store the faulty addresses found during the memory test process. In the case of multiple faults, it compares the defective addresses with the addresses previously saved in the fault table. When the read and write operations of the memory match, the BSLFSR-BISR method starts working and the information is accessed via the spare memory. Simultaneously, the repair analysis block accesses the failure data and computes the repair signature, which is stored in a register. An address is stored only for read and write memory operations that are not already present in the fault table. Built-In Redundancy Analysis decides the spare row or column allocation based on the number of faulty cells at a specific address. The Built-In Self-Repair block works on the proposed Bit-Swapping-based linear feedback shift register principle, as shown in the flowchart.
In BSLFSR-BISR, the pre-charge technique is used for repair analysis and fault diagnosis. Here, the XOR gate is replaced, which lessens the delay and power of the circuit, so a higher operating frequency is attained along with increased circuit performance. The standard repair works by a simple rule: if a row contains several faults it is repaired with a spare row, and if a column contains several faults the corresponding rows and columns are repaired. The faulty cell count established by the memory test controller is taken as the reference by the BSLFSR for computing the repair signature.
Moreover, spare rows and columns are allocated based on a predefined threshold applied to the count of defective cells in the affected rows or columns. The predefined threshold value is '2'. If the defect count of a row is greater than or equal to '2', a spare row is allocated first; otherwise, a spare column is allocated. The checking process continues until the spare memory (row or column) allocation is exhausted; while the remaining fault count is non-zero, spare allocation continues until the fault list reaches the null state. The spare memory required therefore grows with the faulty cell count. Hence, this method provides a complete memory test and fault repair solution following the control flow shown in the flowchart.
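A compact way to express the threshold rule above is the following Python sketch. It counts reported faults per row, allocates a spare row when a row has two or more defects, and otherwise falls back to a spare column until the spare budget runs out. The array geometry, spare budget, and example fault list are assumptions for illustration only.

```python
# Threshold-based redundancy analysis sketch: faults per row >= threshold -> spare row,
# otherwise spare column, until the spare budget is exhausted.

from collections import Counter

def allocate_spares(faulty_cells, spare_rows=2, spare_cols=2, threshold=2):
    row_count = Counter(r for r, _ in faulty_cells)
    repaired_rows, repaired_cols = [], []
    for (r, c) in sorted(faulty_cells):
        if r in repaired_rows or c in repaired_cols:
            continue                                  # already covered by a spare
        if row_count[r] >= threshold and len(repaired_rows) < spare_rows:
            repaired_rows.append(r)                   # replace the whole row
        elif len(repaired_cols) < spare_cols:
            repaired_cols.append(c)                   # replace the column instead
        else:
            return None                               # unrepairable with this budget
    return repaired_rows, repaired_cols

if __name__ == "__main__":
    faults = [(1, 3), (1, 6), (4, 2), (7, 2)]         # (row, column) fail addresses
    print(allocate_spares(faults))                    # -> ([1], [2])
```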
Results and Discussion
This section describes the performance of the DQL-BSLFSR-FD method. The simulation and synthesis outcomes are obtained using Mentor Graphics and the Xilinx ISE 14.5 Design Suite. The FPGA architecture of the MBIST and Built-In Self-Repair hardware for Static Random Access Memory is designed, implemented, and tested using Verilog. The performance metrics analyzed are area, power, delay, slice registers, maximum operating frequency, and clock period. The FPGA performance of the DQL-BSLFSR-FD method is compared with existing approaches: memory BIST with optimized BISR for fault detection (FPGA-MBIST-OBISR-FD) [28], the SRAM-based Physically Unclonable Function (PUF) with Hot Carrier Injection (HCI) for fault detection (FPGA-PUF-HCI-FD) [29], and Essential Spare Pivoting (ESP)-based Local Repair Most (LRM) (FPGA-ESP-LRM-FD) [30], respectively.
Performance Metrics
The performance metrics, such as area, delay, power, slice register, maximum operating frequency, and clock period, are analyzed to validate the efficiency of the proposed method.
Calculation of Power Consumption
The average power consumption of the proposed DQL-BSLFSR-FD model is calculated using Equation (5):

P_avg = λ · C_L · V_DD² · C_f,   (5)

where C_f is the clock frequency of the model, C_L the load capacitance, λ the activation factor, and V_DD the supply voltage.
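For reference, the relation can be evaluated directly; the Python snippet below shows a worked example with placeholder values, not measurements from the paper.

```python
# Worked example of the dynamic power relation P_avg = lambda * C_L * Vdd^2 * f_clk.
# All numeric values below are placeholders for illustration only.

def average_dynamic_power(activation, c_load, vdd, f_clk):
    return activation * c_load * vdd ** 2 * f_clk

if __name__ == "__main__":
    p = average_dynamic_power(activation=0.1, c_load=2e-12, vdd=1.0, f_clk=100e6)
    print(f"P_avg = {p * 1e6:.1f} uW")   # 0.1 * 2 pF * (1 V)^2 * 100 MHz = 20 uW
```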
Simulation Outcomes
The proposed DQL-BSLFSR-FD method achieves better performance in terms of area, power, and delay when compared with other existing methods. In this work, the simulation and synthesis outcomes are obtained using Mentor Graphics and the Xilinx ISE 14.5 Design Suite, with the design implemented on a Virtex-5 FPGA. During testing, various faults are injected into the memory using DQL-based test pattern generation. The test patterns and the fault injection into the memory are controlled through the test benches written to exercise the memory. Various test cases are considered by applying different test patterns and injecting various faults for negative testing. There are 256 different test patterns required for an 8-bit memory word, and the generated patterns are used to detect every possible fault in the memory. The faults injected and the test patterns generated by the proposed DQL-BSLFSR-FD approach are recorded, and the number of defective cells identified by the approach is used to allocate the spare rows and spare columns. Figure 3 shows the test pattern generation outcome of the proposed method; various test patterns are generated to test the MUT for faults. The DQL-BSLFSR-FD model tests the memory and creates the fault report. It also counts the number of faulty cells at each memory address and simultaneously updates the fault count register. The row- and column-wise defective cell counts are computed, and the resulting faulty-cell information stored in the fault count register by the proposed DQL-BSLFSR-FD method is shown in Figure 4. Once the faulty cell information is obtained from the test controller, the repair process begins. The simulation output for the repair of the defective cells in rows and columns is shown in screenshots taken from the simulator. The spare rows and columns are allocated according to the faulty cell information received by the DQL-BSLFSR-FD block. The spare row allocation in the SRAM memory using the DQL-BSLFSR-FD method is shown in Figure 5, and the spare column allocation is shown in Figure 6. Table 2 shows the comparison of simulation outcomes for the proposed and existing approaches.
Next, the slice count of the proposed method is 24.6% and 32.7% lower than that of the ELM-AE-FD and BIST-PPI-FD approaches, and the slice flip-flop count is 18.9% and 26.7% lower than that of the ELM-AE-FD and BIST-PPI-FD approaches. The LUT count of the proposed method is 15.7% and 26.7% lower than that of the ELM-AE-FD and BIST-PPI-FD methods. The minimum clock period of the proposed approach is 11.85% and 18.7% lower than that of the existing ELM-AE-FD and BIST-PPI-FD methods, and the maximum operating frequency of the proposed method is accordingly 23.5% and 29.5% higher. Moreover, the total power consumption of the proposed method is 34.9% and 26.7% lower than that of the existing ELM-AE-FD and BIST-PPI-FD techniques.

Comparative Analysis of Performance Metrics Using FPGA

The proposed structure is designed and tested using Verilog descriptions targeted at the Virtex-5 xc5vlx30 FPGA. The performance of the FPGA implementation of the proposed DQL-BSLFSR-FD method is compared with the existing approaches, namely memory BIST with optimized BISR for fault detection (MBIST-OBISR-FD) [28], PUF-HCI-FD [29], and ESP-LRM-FD [30]. Figure 7 shows the comparison of area for the proposed DQL-BSLFSR-FD method against the existing MBIST-OBISR-FD [28], PUF-HCI-FD [29], and ESP-LRM-FD [30] FPGA approaches. The area of the proposed FPGA-DQL-BSLFSR-FD method is 85%, 72%, and 79% lower than that of the existing MBIST-OBISR-FD, PUF-HCI-FD, and ESP-LRM-FD FPGAs, respectively. The FPGA implementation of the proposed method also provides a 67.8%, 33.33%, and 56.7% higher maximum operating frequency than the existing MBIST-OBISR-FD, PUF-HCI-FD, and ESP-LRM-FD FPGAs, respectively.

In line with today's consumer demand for a large volume of memory in their devices, this research study provides a solution to test the memories embedded in an SoC and attempts to improve the product yield. The study offers a solution for testing SRAM in SOC-based development. It proposes a memory test method, through the proposed MBIST controller, to find more and possibly all kinds of defects. Further, it provides a solution to repair the faulty memory cell or faulty memory location reported by the test block, using a spare memory cell to replace the detected defective cells. The design is targeted to the FPGA platform, and the obtained results concern power consumption, speed, and area overhead. The results are reasonable when compared with existing studies targeting the same parameters. As discussed in the literature review and the results above, and as observed from the results obtained on the FPGA, the proposed method is better in terms of delay, power, and area. The comparison with the existing studies [28-30] is carried out, the obtained results are tabulated, and the performance graphs are plotted in the charts of the results section. The contribution to memory test and repair is significant, and the performance is favourable compared with the studies considered in this research.
Conclusions
This manuscript proposes Deep Q-learning (DQL) with a Bit-Swapping-based linear feedback shift register (BSLFSR) for fault detection in SRAM. The proposed DQL-based MBIST effectively generates test patterns and injects faults into the memory, and the BSLFSR-based BISR model is utilized to repair the injected faults. The BSLFSR model performs the redundancy analysis to detect the faulty cells and use the spare rows and columns. Finally, the proposed DQL-BSLFSR-FD model attains an 85%, 72%, and 79% lower area and an 85%, 80%, and 50% lower utilization of slice resources than the existing MBIST-OBISR-FD, PUF-HCI-FD, and ESP-LRM-FD approaches, respectively.
The Arabidopsis apyrase AtAPY1 is localized in the Golgi instead of the extracellular space
Background The two highly similar Arabidopsis apyrases AtAPY1 and AtAPY2 were previously shown to be involved in plant growth and development, evidently by regulating extracellular ATP signals. The subcellular localization of AtAPY1 was investigated to corroborate an extracellular function. Results Transgenic Arabidopsis lines expressing AtAPY1 fused to the SNAP-(O6-alkylguanine-DNA alkyltransferase)-tag were used for indirect immunofluorescence and AtAPY1 was detected in punctate structures within the cell. The same signal pattern was found in seedlings stably overexpressing AtAPY1-GFP by indirect immunofluorescence and live imaging. In order to identify the nature of the AtAPY1-positive structures, AtAPY1-GFP expressing seedlings were treated with the endocytic marker stain FM4-64 (N-(3-triethylammoniumpropyl)-4-(p-diethylaminophenyl-hexatrienyl)-pyridinium dibromide) and crossed with a transgenic line expressing the trans-Golgi marker Rab E1d. Neither FM4-64 nor Rab E1d co-localized with AtAPY1. However, live imaging of transgenic Arabidopsis lines expressing AtAPY1-GFP and either the fluorescent protein-tagged Golgi marker Membrin 12, Syntaxin of plants 32 or Golgi transport 1 protein homolog showed co-localization. The Golgi localization was confirmed by immunogold labeling of AtAPY1-GFP. There was no indication of extracellular AtAPY1 by indirect immunofluorescence using antibodies against SNAP and GFP, live imaging of AtAPY1-GFP and immunogold labeling of AtAPY1-GFP. Activity assays with AtAPY1-GFP revealed GDP, UDP and IDP as substrates, but neither ATP nor ADP. To determine if AtAPY1 is a soluble or membrane protein, microsomal membranes were isolated and treated with various solubilizing agents. Only SDS and urea (not alkaline or high salt conditions) were able to release the AtAPY1 protein from microsomal membranes. Conclusions AtAPY1 is an integral Golgi protein with the substrate specificity typical for Golgi apyrases. It is therefore not likely to regulate extracellular nucleotide signals as previously thought. We propose instead that AtAPY1 exerts its growth and developmental effects by possibly regulating glycosylation reactions in the Golgi.
Background
The term "apyrase" (adenosine pyrophosphatase) for an enzyme cleaving the phosphoanhydride bonds of ATP and ADP was coined by Otto Meyerhof in 1945 [1]. Decades later, the alternative name "NTPDase" (nucleoside triphosphate diphosphohydrolase) was officially proposed [2] because apyrases hydrolyze a wide range of nucleoside triand diphosphates (reviewed in [3]). Apyrases have been found in many pro-and eukaryotes (reviewed in [3]), and they all share highly conserved regions [4]. In plants, the postulated functions are diverse and include nodulation [5][6][7][8][9], resistance to xenobiotics [10], phosphate scavenging [11] and growth [12][13][14][15][16]. Each eukaryotic genome screened for the presence of apyrase genes holds at least two candidates. In Arabidopsis thaliana, a total of seven apyrase gene candidates exist. Our research focused on the function of the two Arabidopsis apyrase genes AtAPY1 and AtAPY2, whose corresponding proteins share an identity of 87% amino acids. Knocking out one of the two apyrase genes by T-DNA (transfer DNA) insertion resulting in an apy1 or apy2 single knockout (SKO) caused no obvious differences in phenotype compared with the wild type (WT) [17], but knocking out both AtAPY1 and AtAPY2 inhibited pollen germination [17] and was seedling-lethal [18]. Overexpression of either AtAPY1 or AtAPY2 led to more vigorous growth of hypocotyls and pollen tubes [12]. Suppression of expression, however, by RNA interference targeting AtAPY1 in the apy2 SKO background, inhibited growth throughout the whole plant and especially in the hypocotyls and roots [12]. Several lines of evidence suggested that these growth effects are mediated by AtAPY1 and AtAPY2 regulating extracellular ATP (eATP) signals [12]: Apyrase activity, measured in the extracellular matrix (ECM) of growing pollen tubes, could be reduced by adding chemical inhibitors or polyclonal antibodies directed against AtAPY1. The reduction in activity simultaneously raised eATP levels and reduced pollen tube growth [12]. These findings explained the inhibition of growth when the expression of AtAPY1 and AtAPY2 is suppressed or shut off and provided the first direct evidence that apyrases function as regulators of extracellular nucleotides such as eATP in plants. In the animal field, the direct link between ecto-apyrases and [eATP] had already been shown [19]. Similarly, eATP was already known to serve as signaling molecule in animals (reviewed in [20]) before it became recognized as such in plants in the past decade (reviewed in [21][22][23]).
The objective of this study was to confirm the extracellular function of the two Arabidopsis apyrases AtAPY1 and AtAPY2 by their localization to the plasma membrane or the apoplast. Since AtAPY1 and AtAPY2 were shown to be functionally redundant in their ability to rescue pollen germination of double knockout apyrase (DKO) pollen [17] and seedling viability in DKO mutants [18], an overlapping subcellular localization of the two apyrases was likely. Therefore, this study focused on the localization of only one apyrase.
Stable Arabidopsis lines were generated expressing AtAPY1 fused to either one of two tag sequences, SNAP or GFP. For the identification of the AtAPY1-positive compartments, organelle-specific marker proteins were co-expressed and immunogold labeling was used. Unexpectedly, the apyrase was not localized to the plasma membrane or cell wall, but to the Golgi apparatus.
Plant material and growth conditions
For all experiments, the A. thaliana ecotype Wassilewskija was used as the WT control. Seedlings were grown for one week under sterile conditions on agar plates (4.3 gL -1 Murashige Skoog (MS) salts, 0.5 g L -1 MES, pH 5.7 (adjusted with KOH), 1% (w/v) sucrose, 0.8% (w/v) agar) or in liquid medium (see above, without agar) under shaking (80 rpm). After one week on agar plates, seedlings were transferred to soil (Einheitserde, type P, Pätzer Inc., Sinntal-Jossa, Germany) and grown at 24°C and a 16-h photoperiod at 100 μmol photons m -2 s -1 .
Generation of AtAPY1-SNAP-complemented apyrase DKO mutants (= DKO-SNAP)
The open reading frame (ORF) of AtAPY1 fused to the SNAP-tag sequence was cloned under the control of the native AtAPY1 promoter region (nt −10 to −1959; with −1 corresponding to the first nucleotide upstream of the adenine of the AtAPY1 start codon) with the Gateway technology (Invitrogen). The AtAPY1-SNAP sequence is shown in the Additional file 1. For the SNAP-tagging, the stop codon was removed from the AtAPY1 sequence to create a C-terminal fusion to the tag. The vector pSNAP-tag(m) (New England Biolabs) was used as the PCR template for the SNAP-tag sequence. The primer pair for the amplification produced a 531-bp product and contained the following SNAP-specific sequences: 5'-GACTGCGAAATGAAGCGCA-3' (SNAPlokattB3F; forward) and 5'-TTAAGGCTTGCCCAGTCTGTG-3' (SNAPlokattB2R; reverse). The reverse primer introduced the stop codon. The entry clone for each of the three DNA elements was generated by recombining the respective PCR product with the matching pDONR vector (Invitrogen). The necessary recombination sites were introduced into the PCR product through the primer sequences. The three entry clones were recombined with the binary destination vector pGWB501 [25] to form the final construct AtAPY1::AtAPY1-SNAP. The sequences of the entry clones and the expression clone were confirmed by sequencing. The Agrobacterium tumefaciens strain GV3101 [26] was transformed with the expression clone and then used for transformation of apyrase mutants which were hemizygous for the apy1 mutation (= +/apy1), homozygous for the apy2 mutation (= apy2/apy2; SKO) and contained the construct SPIK::AtAPY2. The plants were transformed by the floral dip method [27]. Transgenic lines (T1 generation) were grown on agar plates containing hygromycin (50 μg mL -1 ), phosphinothricin (PPT) (10 μg mL -1 ) and kanamycin (30 μg mL -1 ). Hygromycin selected for the presence of AtAPY1::AtAPY1-SNAP, PPT for the presence of SPIK::AtAPY2 and kanamycin for the presence of apy1 or apy2.
Generation of AtAPY1-GFP-complemented apyrase DKO mutants (= DKO-GFP)
The cloning of the 35S::AtAPY1-GFP construct is described in detail in [28]. The ORF of AtAPY1 without the stop codon was amplified by PCR and cloned into the pQE-30 vector (Qiagen). For the amplification of the GFP cDNA, the pBIN mGFP5-ER [29] served as the template. The PCR product was cloned in frame with the AtAPY1 sequence already present in pQE-30 to enable a translational fusion of GFP with the C-terminus of AtAPY1. The resulting conjugated AtAPY1-GFP ORF was amplified and subcloned into the TOPO pCR2.1 vector (Invitrogen). The AtAPY1-GFP cDNA was released by EcoRI digestion and cloned into the binary vector pLBJ21 [30]. The AtAPY1-GFP sequence is available as Additional file 2. The transgenic line expressing the GFP-tag alone is available from the Arabidopsis Resource Center (stock number CS9114). WT Wassilewskija plants were transformed with the help of agrobacteria containing the recombinant construct by the floral dip method [27]. Transformants (T1 generation) were selected on agar plates containing kanamycin (30 μg mL -1 ). For genetic complementation experiments, homozygous AtAPY1-GFP transgenic lines were crossed with apy1 SKO plants hemizygous for the apy2 mutation and carrying the AtAPY2 cDNA under the control of the SPIK promoter (= apy1/apy1; +/apy2; SPIK::AtAPY2). Kanamycin-resistant progeny were genotyped by PCR. Apy1 SKO plants containing the AtAPY1-GFP construct (= apy1/apy1; +/+; 35S::AtAPY1-GFP) were crossed with apy2 SKO plants hemizygous for the apy1 mutation and expressing AtAPY2 under the control of the SPIK promoter (= +/apy1; apy2/apy2; SPIK::AtAPY2). The progeny were selected on kanamycin and PPT.
Generation of transgenic lines co-expressing AtAPY1-GFP and either YFP (yellow fluorescent protein)-Rab E1d, -SYP32, -Got1p homolog or RFP (red fluorescent protein)-MEMB12
Four transgenic lines generated by Geldner et al. [32] expressing either the marker YFP-Rab E1d, YFP-Got1p homolog, YFP-SYP32 or RFP-MEMB12 under the control of the UBQ10 promoter were obtained from the Nottingham Arabidopsis Stock Centre. The lines were validated by antibiotic selection and genomic PCR as suggested by Geldner et al. [32]. Homozygous transgenic 35S::AtAPY1-GFP plants were crossed with either homozygous UBQ10::YFP-Got1p homolog, YFP-SYP32 or RFP-MEMB12 plants. The F1 progeny were selected on agar plates containing kanamycin (30 μg mL -1 ) and either PPT (10 μg mL -1 ) for the crosses with the YFP fusion constructs or hygromycin (50 μg mL -1 ) for the crosses with the RFP fusion construct. Double resistant F1 seedlings were genotyped for the existence of the desired fusion constructs before analyzing them by confocal microscopy.
Whole mount immunofluorescence of AtAPY1-SNAP
Ten-day-old DKO-SNAP seedlings were fixed in 4% (w/v) paraformaldehyde (PFA) for 1 h and treated with 1% (w/v) cellulase Onozuka RS (Duchefa) and 1% (w/v) macerozyme R-10 from Rhizopus sp. (Serva) for 15 min at 37°C. After washing twice with phosphate buffered saline (PBS), the samples were treated with 1% (v/v) Triton X-100 for 1 h at room temperature (RT). To inactivate all endogenous peroxidases, the samples were incubated in methanol:hydrogen peroxide (200:1, v/v) for 30 min at RT in the dark. After washing with water, the seedlings were treated with 96% (v/v) ethanol for 1 min, washed twice with PBS and blocked with 1% (w/v) skim milk (Fluka) for 30 min. Incubation with polyclonal rabbit α-SNAP antibody (Open Biosystems, Huntsville, Alabama, USA) (1:50 in 1% (w/v) skim milk) followed for at least 1 h at RT. After washing three times with PBS, the seedlings were incubated for at least 1 h at RT with goat α-rabbit-IgG conjugated with horseradish peroxidase (HRP) (GE Healthcare) diluted 1:800 in 1% (w/v) skim milk. After removing the secondary antibody by washing three times with PBS, the seedlings were treated with 2 μL fluorescein isothiocyanate (FITC) tyramides (1 mg mL -1 ) in 50 mL amplification buffer [34] for at least 1 h at RT or at 4°C overnight in the dark. FITC tyramides were prepared according to Pernthaler et al. [34] and kindly provided by Kerstin Röske. The samples were washed three times with PBS and stored for imaging in PBS.
Treatment with FM4-64 and alkaline pH
Seven-to ten-day-old 35S::AtAPY1-GFP transgenic seedlings were vacuum-infiltrated with 15 μM FM4-64 (Invitrogen) in 1 M Tris-HCl pH 8.0. For the alkali treatment, two approaches were taken: (1) 35S::AtAPY1-GFP seedlings were cultured in regular liquid MS medium (pH 5.7) for five days, transferred to alkaline MS medium (pH 8.1) and grown for three more days before imaging [38] or (2) the seedlings were grown in regular liquid MS medium (pH 5.7) for eight days and infiltrated with tap water or Tris buffered saline (TBS) pH 7.5 for at least 2 h before imaging.
CLSM (confocal laser scanning microscopy)
For the imaging of DKO-SNAP plants (10 d old), the Leica confocal and multiphoton microscope TCS SP5 MP was used. To avoid bleaching of FITC, the samples were mounted with antifade (0.233 g 1,4-diazabicyclo (2.2.2)octane (Sigma) in 200 μL 1 M Tris-HCl pH 8.0, 800 μL water, 9 mL glycerine) [39]. The FITC signals were captured using a Leica water immersion objective (HCX PL APO 63x/1.2 Water Lbd.Bl.). The detector range was set to 494 to 530 nm. The autofluorescence of the WT seedlings (10 d old) was captured with the same parameters and settings as described for FITC. Spectral images of the WT and the DKO-SNAP samples were analyzed by linear unmixing with the dye separation tool of the Leica software (LAS1.8.2) to identify FITCspecific signals.
Transgenic plant material containing the 35S::AtAPY1-GFP construct were imaged with the Zeiss Axio Imager connected to the laser scanning microscope LSM 710 or 780 (Carl Zeiss). Six-to 10-d-old seedlings were imaged in water and protoplasts in sorbitol buffer pH 7.6 (see Protoplast preparation) without the enzymes. The images were analyzed with the Zeiss Zen 2009 and Fiji [40] software. Zeiss water immersion objectives (C-Apochromat 40x/1.20 W Korr M27 or C-Apochromat 63x/1.20 W Korr M27) were used. Chlorophyll fluorescence was detected between 601 to 708 nm after excitation with the 594-nm excitation line of a helium-neon (HeNe) laser. GFP was excited with the 488 nm argon laser multiline and the emission was collected between 490 and 520 nm. YFP was excited with the 514-nm excitation line of an argon laser (multibeam splitter 514/561) and the emission was collected between 535 and 580 nm. RFP was excited at 561 nm and its fluorescence detected in the 570 to 630-nm range. For coimaging of GFP with either YFP or RFP, the line sequential imaging mode was chosen with rapid switching between the two exciting laser lines. For detection of GFP and YFP in one sample, GFP was excited at 458 nm and YFP as described above (multi-beam splitter 458/514/594). The fluorescence of GFP and FM4-64 was imaged simultaneously by using the exciting laser line 488 nm, but separate emission detectors (490 -543 nm and 667 -746 nm, respectively). For all dual labelings, narrow detector entrance slit bandwidths were chosen to avoid bleed-through of fluorescence emissions. Bright field-type images were acquired with the transmitted light detector.
Immunogold labeling of AtAPY1-GFP
Root tips of six-day-old 35S::AtAPY1-GFP transgenic seedlings were fixed as described under "Whole mount immunofluorescence of AtAPY1-GFP". Fixed samples were processed for Tokuyasu cryo-sectioning as described [41]. In brief, root tips were washed several times in PB, infiltrated stepwise into gelatine and cooled down on ice. Blocks with single root tips were cut on ice, incubated in 2.3 M sucrose in water for 24 h at 4°C, mounted on Pins (Leica no. 16701950) and plunge-frozen in liquid nitrogen. One-hundred-nm-thin sections were cut on a Leica UC6 equipped with a FC6 cryo-chamber and picked up in methyl cellulose sucrose (1 part 2% (w/v) methyl cellulose (Sigma M-6385, 25 centipoises) + 1 part 2.3 M sucrose). For immunogold labeling, the grids were placed upside down on drops of PBS in a 37°C incubator for 20 min, washed with 0.1% (w/v) glycine in PBS (5x1min), blocked with 1% (w/v) bovine serum albumin in PBS (2x5min) and incubated with primary antibodies for 1 h (α-GFP: TP 401 from Torrey Pines, 1:50, or ab290 from Abcam, 1:50). After washes in PBS (4x2min), the sections were incubated with Protein A conjugated to 10-nm or 6-nm gold for 1 h, washed again in PBS (3x5s, 4x2min) and postfixed in 1% (v/v) glutaraldehyde (5 min). The sections were washed with distilled water (10x1min), stained with neutral uranyl oxalate (2% (w/v) UA in 0.15 M oxalic acid, pH 7.0) for 5 min, washed briefly in water and incubated in methyl cellulose uranyl acetate (9 parts 2% (w/v) MC + 1 part 4% (w/v) UA, pH 4) on ice for 5 min. Finally, grids were looped out, the MC/UA film was reduced to an even thin film and air-dried. Sections were analyzed on a Philips Morgagni 268 (FEI) at 80 kV and images were taken with the Mega-View III digital camera (Olympus). Areas were calculated using the ITEM-software (Olympus). Alternatively, 200nm-thin sections were mounted on glass slides and stained with α-GFP and goat α-rabbit Alexa Fluor 488 for fluorescence analysis on a Keyence BZ 8000 fluorescence microscope.
Co-localization analysis
Transgenic plants co-expressing AtAPY1-GFP and either RFP-MEMB12, YFP-SYP32, YFP-GOT1p homolog or YFP-Rab E1d were imaged by confocal microscopy and the obtained dual-channel images analyzed with ImageJ [40]. The corresponding scatterplots and Pearson's correlation coefficients were generated with the "Colocalization Threshold" and "Coloc2" tool of ImageJ.
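For context, Pearson's correlation over the two channels can be computed in a few lines of Python; the sketch below uses synthetic arrays in place of real confocal data and only illustrates the calculation that the "Coloc2" tool performs, not the authors' exact pipeline. In practice the two channels would be loaded from the microscope export (e.g., with tifffile or scikit-image).

```python
# Pixel-wise Pearson's correlation between two image channels (co-localization sketch).

import numpy as np

def pearson_colocalization(channel_a, channel_b):
    a = channel_a.astype(float).ravel()
    b = channel_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gfp = rng.random((64, 64))                         # stand-in for the AtAPY1-GFP channel
    marker = 0.8 * gfp + 0.2 * rng.random((64, 64))    # partially co-localizing marker channel
    print(f"Pearson's r = {pearson_colocalization(gfp, marker):.2f}")
```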
Purification of AtAPY1-GFP
For cultivation of starting material, 50 mg of seeds were grown in 50 mL of liquid medium in a 250-mL flask for 10 to 12 d. Seedlings were ground to a fine powder in liquid nitrogen using a mortar and pestle. For each gram of plant material, 250 to 375 μL ice-cold Tris-MES buffer (10 mM Tris, 2 mM MgCl 2 , 30 mM KCl, pH 6.5, adjusted with 1 mM MES pH 3) were added. The cell homogenate was allowed to thaw at RT and filtered through a fine mesh (Miracloth, Calbiochem). The filtrate was subjected to centrifugation at 1,000 g and 4°C for 10 min to remove debris. The supernatant was centrifuged at 8,000 g and 4°C for 10 min and the pellet was discarded. The supernatant was mixed 1:1 with 100% (v/v) glycerol to retain enzymatic activity and stored at −80°C. For purification of AtAPY1-GFP, 96-well microtiter plates coated with α-GFP antibodies (GFP-multiTrap plates, ChromoTek, Planegg-Martinsried, Germany) were used. Two hundred microliters of protein extract (4-6 μg μL -1 ) were added per well and incubated at 4°C under shaking (500 rpm) for 2 h or overnight. Unbound proteins were removed by washing the wells three times with 300 μL of ice-cold Tris-MES buffer.
Apyrase activity assay
To determine the apyrase activity, an assay based on Tognoli et al. [42] was used. The nucleotide substrates were purchased from Sigma and the stock solutions were prepared in water. The nucleotides were diluted in Tris-MES buffer (pH 6.5 or pH 5.5) to the desired concentration and added as 130-μL aliquots to each well of immobilized AtAPY1-GFP on the GFP-multiTrap plate. The reaction was incubated under shaking (500 rpm) at 30°C for 1 h. The released phosphate was assayed by transferring 60 μL of each reaction mixture to two separate wells on a new transparent 96-well microtiter plate (Greiner Bio-One, Kremsmünster, Austria) and by adding 120 μL of freshly prepared stopping solution. After a 10-min incubation at RT, the absorbance of the samples was read at 740 nm. To determine the background from phosphate contaminations and unspecific phosphatase activities, the reactions were run in parallel with WT protein extracts. The background absorbance readings were subtracted from the readings assayed with AtAPY1-GFP.
Solubilization of microsomal membrane proteins
Seedlings from 2 mg of 35S::AtAPY1-GFP transgenic seeds were grown in 60 mL of liquid medium for two weeks and then ground in liquid nitrogen. The plant powder was suspended in 3 mL ice-cold protein extraction buffer (50 mM HEPES KOH pH 6.5, 5 mM EDTA, 0.4 M sucrose, 1 mM AEBSF (4-(2-aminoethyl)-benzensulfonyl fluoride hydrochloride) (Sigma), complete EDTA-free protease inhibitor cocktail (Roche)). AEBSF and the inhibitor cocktail were added right before use. The protein suspension was filtered through a single layer of miracloth and centrifuged at 14,000 g for 10 min at 4°C. The supernatant was ultracentrifuged at 100,000 g for 1 h at 4°C to pellet microsomal membranes. Equal amounts of the membranes were either treated with 2 M NaCl, 0.2 M Na 2 CO 3 , 0.2% (w/v) SDS, 4 M urea or protein extraction buffer alone for 30 min on ice. Then, the samples were centrifuged at 100,000 g for 1 h at 4°C producing a supernatant with solubilized proteins (S100) and the microsomal membrane fraction (P100). The supernatants were centrifuged in Vivaspin 2 concentrators (polyethersulfone membrane, 10-kDa cut off; Sartorius) for circa 20 min at 12,000 g. The protein concentrations of the S100 and P100 fractions were determined with the BCA protein assay kit (Thermo Scientific). Equal amounts (approximately 40 μg) of the membrane fraction and of the solubilized proteins were loaded on a SDS gel and immunoblotted.
Results
Rescue of the seedling-lethal apyrase double knockout phenotype with tagged AtAPY1
One objective was to localize AtAPY1 at the subcellular level to learn how the protein exerts its function in plant growth. Tagging AtAPY1 was chosen over raising antibodies against it because AtAPY1 and AtAPY2 are nearly identical in their amino acid (aa) sequence: There is only one six-aa-stretch in AtAPY1 (aa 44-49) that has four different and two similar aa to the corresponding sequence in AtAPY2 [43]. All other stretches of differences between the two sequences comprise only one or two aa.
Among the tags available, the SNAP-tag [44,45] seemed the most suitable. As an O 6 -alkylguanine-DNA alkyltransferase, SNAP binds covalently to benzylguanine-based substrates. There are a large number of substrates coupled to different fluorescent dyes and other labels commercially available making the SNAP-tag a versatile tool for localization studies. The expression of AtAPY1-SNAP was placed under the control of the native promoter region, because overexpression can lead to localization artifacts. Despite this risk, another tagged AtAPY1 version, AtAPY1-GFP, was fused to the strong cauliflower mosaic virus 35S promoter because expression levels of NTPDases are generally low [46].
The SNAP- or GFP-tag was fused to the C-terminus of AtAPY1 to avoid losing the tag by a possible N-terminal cleavage in a subcellular targeting process. Since tags can impair protein function and lead to mislocalization [47], a complementation strategy was performed. The knockout of AtAPY1 and AtAPY2 (DKO) is seedling-lethal [18]. A DKO seedling should survive if it is complemented with a tagged AtAPY1 that is functional and correctly localized. However, the use of the 35S promoter made the rescue of the DKO mutant with AtAPY1-GFP impossible, as confirmed experimentally, because this promoter is turned off in pollen [48]. Without AtAPY1-GFP in the DKO pollen, no progeny will form, because the presence of either AtAPY1 or AtAPY2 is a prerequisite for pollen to germinate [17]. In order to overcome this hurdle, partially complemented apyrase apy2 SKO plants (= +/apy1; apy2/apy2; SPIK::AtAPY2) were used as the genetic background for transformation with each tagged AtAPY1 construct. These plants carried AtAPY2 under the control of the pollen-specific promoter SPIK, which ensured the survival of the DKO pollen.
DNA was isolated from progeny of the partially complemented SKO plants containing either AtAPY1::AtAPY1-SNAP or 35S::AtAPY1-GFP and used for genotyping by PCR. Several DKO plants without a WT AtAPY1 and AtAPY2 gene, but with a tagged apyrase construct were identified hereafter called DKO-SNAP and DKO-GFP, respectively. The PCR analysis of two such mutants is shown in Figure 1A. The SPIK::AtAPY2 construct was always present in the DKO-GFP mutants as expected, but interestingly also in the DKO-SNAP mutants. One possible explanation is that some regulatory elements necessary for optimal expression in pollen were missing in the chosen promoter region. The promoter region used previously for AtAPY1::GUS analyses [12,17,18] was 1 kb longer at the 3'end including almost the entire gene (At3g04090) upstream of AtAPY1. Since the gene At3g04090 was deemed unnecessary for successful complementation, it was mostly excluded in the AtAPY1::AtAPY1-SNAP construct.
To confirm that the SNAP-and GFP-tagged AtAPY1 could rescue the lethal DKO seedling phenotype, the seedling phenotype of complemented SKO and DKO plants in comparison with the WT and DKO seedlings was analyzed ( Figure 1B). SKO-SNAP (+/+; apy2/apy2; SPIK::AtAPY2; AtAPY1::AtAPY1-SNAP) and SKO-GFP (apy1/apy1; +/+; SPIK::AtAPY2; 35S::AtAPY1-GFP) plants were included in the study to check for possible dominant negative effects of the tagged apyrase on the WT phenotype. DKO seedlings without a construct coding for a tagged AtAPY1 had an abnormal phenotype with fleshy cotyledons and no root ( Figure 1B; [18]). These seedlings did not develop beyond this stage. DKO plants expressing AtAPY1-SNAP or AtAPY1-GFP, on the other hand, showed no phenotypical differences to WT plants ( Figure 1B) and SKO mutants (data not shown).
AtAPY1 is present in punctate structures, but not at the plasma membrane or extracellular space
For localization of AtAPY1 at the subcellular level by confocal microscopy, living DKO-SNAP seedlings were incubated with SNAP-compatible fluorescent substrates to label the AtAPY1-SNAP fusion protein. Two cell-permeable, fluorescent substrates were used: red fluorescent tetramethylrhodamine-Star and the green fluorescent BG-505 (both kindly provided by Andreas Brecht, formerly Covalys Biosciences, Basel, Switzerland). Although specific labeling of fusion proteins in vivo was successful in yeast [49] and animal as well as human cell cultures [50][51][52], a high background made the detection of AtAPY1-SNAP-specific signals in Arabidopsis seedlings impossible. The tested dyes passed the cell wall and entered the cell, but even 14-h washing steps could not remove the excess fluorescent substrate (data not shown).
Therefore, indirect immunofluorescence was chosen as a different approach. DKO-SNAP seedlings were fixed. After cell wall digestion and plasma membrane permeabilization, they were incubated with primary antibodies against the SNAP-tag. Following several washing steps, FITC-labeled secondary antibodies were added to visualize AtAPY1-SNAP for the CLSM. The background was low, but no specific signals could be detected (data not shown). To increase the fluorescent signal, the tyramide signal amplification (TSA) technique was applied [53]. This technique employs peroxidase activity to covalently couple a large number of labeled substrates in the immediate vicinity. Therefore, instead of FITC-labeled secondary antibodies, HRP-labeled antibodies were added in combination with FITC-coupled tyramides. TSA improved the signal-to-noise ratio and intracellular dotlike structures became visible in DKO-SNAP seedlings ( Figure 2A) which were not found in the WT ( Figure 2B). Root hairs were selected as suitable cell types for localization, because promoter-glucuronidase analyses suggested that AtAPY1 is expressed strongly in root hairs as well as in guard cells among other cell types [12,18].
To overcome the weak expression levels of AtAPY1, the indirect immunofluorescence approach was repeated with transgenic plants expressing AtAPY1-GFP under the control of the strong 35S promoter. We used primary antibodies against GFP and secondary Alexa Fluor 488-coupled antibodies in two different approaches: 1. Post-embedding labeling of 200-nm-thin Tokuyasu cryosections ( Figure 2C, D), and 2. Pre-embedding labeling followed by embedding in Technovit 7100 resin and sectioning (Additional file 3). In both experiments, the intracellular punctate signals could be confirmed in the root ( Figure 2C) and in root hairs (Additional file 3). No signals were detected in the cell wall and at the plasma membrane and in the control without primary antibody ( Figure 2D).
To verify the imaging data obtained by immunofluorescence, a second detection method was used. The transgenic plants expressing AtAPY1-GFP were imaged in vivo by CLSM. Here, the same intracellular punctate pattern as found before in AtAPY1-SNAP and AtAPY1-GFP expressing plants was observed in guard cells ( Figure 2E), cotyledon epidermis (Additional file 4A), hypocotyls (Additional file 4B) and roots (Additional file 4C). The WT control did not present this punctate pattern as shown exemplarily for WT guard cells ( Figure 2F). Expression of the GFP-tag alone led to cytoplasmic staining [28].
The method of live imaging of GFP-tagged proteins is suitable to detect apyrase in the cell wall as shown for apoplastic apyrases in other plant species [16,54]. But since AtAPY1 was expected to be localized extracellularly and since GFP does not fluoresce at pH ≤5.0 [55], a weak AtAPY1-GFP signal could be missed if the tag were exposed to the acidic environment of the cell wall. To provide an optimal pH for GFP fluorescence in the extracellular environment, WT protoplasts and those expressing AtAPY1-GFP were prepared from cotyledons and imaged at pH 7.6. As before, intracellular GFP signals were found ( Figure 2G) which did not appear in the WT control ( Figure 2H), but the plasma membrane of the transgenic protoplasts did not fluoresce ( Figure 2H). This result ruled out the possibility that AtAPY1 was anchored in the plasma membrane. However, protoplastation represents severe stress for the cells which could have caused down regulation of AtAPY1 and/or degradation or internalization of AtAPY1. In addition, AtAPY1 was a possibly soluble protein in the cell wall and in this case the digestion of the cell wall during protoplast preparation would have led to a loss of the AtAPY1-GFP signal. Therefore, cells with intact walls were imaged at a pH suitable for GFP fluorescence ( Figure 2I, J). Seedlings expressing 35S::AtAPY1-GFP were grown in liquid culture at pH of 8.1 instead of 5.7. The higher pH in the culture medium is known to recover GFP fluorescence in the apoplast [38]. However, even under these conditions, no extracellular GFP signal was detectable ( Figure 2I). In addition, 35S::AtAPY1-GFP seedlings were cultured under normal conditions to minimize any impact the alkaline culture medium may have on AtAPY1 distribution and infiltrated with buffer of pH 7.5 just for imaging. Again no extracellular signals were found in more than 30 independent experiments (data not shown).
Both detection methods, immunofluorescence and in vivo imaging, revealed the same punctate structures, but no signals at the plasma membrane or in the extracellular space.
AtAPY1 is localized in the Golgi apparatus
At higher magnifications, some of the AtAPY1-specific punctate signals appeared as doughnut- or horseshoe-shaped structures (see Figure 2I). This morphology is typical of Golgi stacks viewed in the middle of the main cisternae from the top [56]. In addition, the observed size between 0.5 and 1 μm across matched the expected size of Golgi stacks [57]. In order to corroborate that AtAPY1 was localized in these organelles, the dye FM4-64 was applied. FM4-64 is endocytosed by the cell, sequentially staining the plasma membrane, endosomes and the trans-Golgi network, but not the Golgi apparatus [58]. Imaging FM4-64-infiltrated 35S::AtAPY1-GFP seedlings did not reveal any co-localization of the fluorescent dye and the GFP signal, even 120 min after the infiltration (Figure 3A and corresponding scatterplot in Figure 3E).
As an additional negative endosomal control, colocalization with the GTPase Rab E1d was studied. There is some controversy in the literature over the designation of Rab E1d as a marker protein for the Golgi [59] or the post-Golgi/endosomal compartment [32]. However, there is consensus that Rab E1d primarily co-localizes with rat sialyltransferase [59,60], a trans-Golgi and/or trans-Golgi network marker protein [61], and that it also associates with the plasma membrane (PM) [60,62]. Therefore, Rab E1d is believed to play a role in the trafficking of secretory vesicles from the Golgi to the PM [59,60].
In transgenic plants co-expressing YFP-Rab E1d and AtAPY1-GFP, no overlap of the YFP and GFP fluorescence was found ( Figure 3B and corresponding scatterplot in Figure 3E). The lack of overlap not only suggests the absence of AtAPY1-GFP in endosomes, but also ruled out crosstalk between the GFP and YFP detection channels. In order to confirm that the chosen GFP and YFP settings were specific for the detection of the GFP and YFP fluorescence, respectively, transgenic plants expressing only one of the two fluorophores were imaged sequentially with the GFP and YFP settings. When imaging epidermal cells from the AtAPY1-GFP expressing plant with the GFP settings, the familiar dotlike structures appeared in the GFP detection channel (Additional file 5A). Taking an image of the identical epidermal section with the YFP settings, however, did not deliver this punctate pattern (Additional file 5A). The same imaging experiment with a plant expressing YFP-SYP32 only, showed negligible bleed-through of the YFP fluorescence into the GFP detection channel (Additional file 5B). These control experiments demonstrated the specificity of the GFP and YFP detection settings.
In a direct localization approach, the occurrence of AtAPY1-GFP in the Golgi apparatus was investigated by co-localization with three known Arabidopsis Golgiresident proteins. MEMB12 (Membrin 12) and SYP32 (Syntaxin of plants 32) are SNARE proteins localized in the Golgi apparatus [63]. Got1p (Golgi transport 1 protein) was found in the Golgi membranes of Saccharomyces cerevisiae [64] and its homolog in Arabidopsis was also localized in the Golgi [32]. Transgenic plants co-expressing AtAPY1-GFP and either the Golgi marker RFP-MEMB12 ( Figure 3C), the YFP-SYP32 ( Figure 3D) or YFP-Got1p homolog (Additional file 6) were analyzed by confocal microscopy. The fluorescence of all three Golgi marker proteins overlapped with the AtAPY1 fluorescence, localizing AtAPY1-GFP to the Golgi apparatus.
To assess co-localization quantitatively rather than only by eye, the ImageJ software was applied. The overlays of the images from the two detection channels were used to generate scatterplots and to calculate the Pearson's correlation coefficient (Rr) of the two fluorescent signals (Figure 3E and Additional file 6). Rr values > 0.5 indicate co-localization [65], verifying co-localization of all three marker proteins RFP-MEMB12, YFP-SYP32 and YFP-Got1p homolog with AtAPY1-GFP.
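To illustrate the quantification step, the sketch below computes a Pearson's correlation coefficient from two single-channel images in the spirit of ImageJ's "Coloc2" analysis; the simple intensity threshold and the toy images are assumptions, and the real plugin uses more elaborate background handling.

# Simplified sketch of pixel-intensity correlation between two channels.
# The fixed intensity cut-off is a placeholder; ImageJ's "Coloc2" applies
# more sophisticated thresholding (e.g. Costes) that is omitted here.
import numpy as np

def pearson_rr(channel_green, channel_magenta, min_intensity=10):
    g = np.asarray(channel_green, dtype=float).ravel()
    m = np.asarray(channel_magenta, dtype=float).ravel()
    keep = (g > min_intensity) | (m > min_intensity)   # drop empty background pixels
    return np.corrcoef(g[keep], m[keep])[0, 1]

# Toy example: strongly co-localizing signals yield Rr well above 0.5
green = np.random.randint(0, 256, size=(64, 64))
magenta = 0.8 * green + np.random.normal(0, 5, size=(64, 64))
print(round(pearson_rr(green, magenta), 2))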
In order to confirm the co-localization results, AtAPY1-GFP was labeled with gold particles using α-GFP primary antibodies and gold-coupled secondary antibodies. Electron microscopy of Tokuyasu cryo-sections through roots revealed weak, but specific staining of Golgi stacks (Figure 4A, B). In sections through 56 Golgi compartments (48 labeled, 8 unlabeled) equaling an area of 11.8 μm², 8.3 gold particles per μm² were counted. Considering only labeled Golgi compartments increased the value to 9.8 particles per μm². By comparison, only 0.37 and 0.07 gold particles per μm² were found in sections through 88 mitochondria (= 14 μm²) and 12 nuclei (= 27 μm²), respectively. Multivesicular bodies (MVB) were also positively immunolabeled by gold particles (Figure 4C), which most likely reflects the transport of some AtAPY1-GFP to the vacuole as seen in other GFP-overexpressing plants [59] rather than a functional role of AtAPY1 in this prevacuolar compartment. No immunolabeling of any other cellular compartment including the cell wall was found (Figure 4D). In summary, the immunogold labeling studies of AtAPY1-GFP confirmed its localization in the Golgi and gave no indication of an extracellular occurrence.
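The quoted labeling densities reduce to a simple particles-per-area calculation, reproduced in the sketch below; the particle counts are back-calculated from the reported densities and areas and are therefore approximate illustrations, not the original raw counts.

# Approximate re-calculation of immunogold labeling densities (particles per square µm).
# Counts are back-calculated from the densities and areas given in the text,
# so they are illustrative rather than the authors' raw data.
compartments = {
    # name: (approx. gold particles, sectioned area in µm²)
    "Golgi (56 compartments)": (98, 11.8),
    "mitochondria (88 sections)": (5, 14.0),
    "nuclei (12 sections)": (2, 27.0),
}

for name, (particles, area_um2) in compartments.items():
    print(f"{name}: {particles / area_um2:.2f} particles per µm²")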
AtAPY1 has the substrate specificity typical of an endo-, not an ecto-apyrase
If the Golgi localization of AtAPY1 were correct, AtAPY1 should exhibit the substrate specificity typical of Golgi apyrases. Therefore, the activity of AtAPY1-GFP was tested in the presence of known apyrase substrates at pH 6.5, which equals the pH found in the Golgi [55]. AMP is not a substrate for apyrases and for this reason served as a negative control. AtAPY1-GFP did not hydrolyze ATP and ADP, the typical substrates of ecto-apyrases [46], but did hydrolyze the nucleotides uridine diphosphate (UDP), guanosine diphosphate (GDP) and inosine diphosphate (IDP) (Figure 5). No activity was detectable in the presence of all other NTPs or NDPs (Figure 5) or AMP (data not shown). Hydrolysis of these three NDPs matches the substrate specificity of other plant Golgi apyrases, e.g. from rice (Oryza sativa) [66] and sycamore (Acer pseudoplatanus) [67].
In order to investigate the possibility that the substrate specificity was pH-dependent and that it would change in favor of ATP and ADP once AtAPY1 reached the apoplast, the activity assay of AtAPY1-GFP was repeated at pH 5.5, the pH typically found in the cell wall [68]. However, the three diphosphates UDP, GDP and IDP remained the only substrates (Additional file 7). Therefore, the determined substrate specificity substantiated the results that AtAPY1 was not observed in the cell wall, but in the Golgi.
AtAPY1 is an integral membrane protein
One objective was to determine if AtAPY1 was a soluble protein in the lumen of the Golgi or, as implied by the TMHMM prediction program [69], a Golgi membrane protein with an uncleaved signal sequence at the N-terminus serving as a transmembrane anchor (Figure 6A). This prediction was supported by the immunogold labeling results, which suggested a membrane association of the protein (Figure 4A-C).
[Figure 3E legend: The distribution of the green and magenta pixels in the dual-channel overlay images in A-D was analyzed with the ImageJ "Colocalization Threshold" and "Coloc2" tools. The x-axes represent the intensities of the green pixels from the GFP channel (AtAPY1) and the y-axes those from the magenta channel (FM4-64, Rab E1d, MEMB12 or SYP32). For each scatterplot, the intensities are given as pixel grey values ranging from 0 to 255. Co-localization clusters the pixels from both channels along a diagonal line. The maximal theoretical value for the Pearson's correlation coefficient (Rr) is 1.0.]
Microsomal membranes were isolated from transgenic AtAPY1-GFP seedlings and their purity was verified with antibodies against marker proteins for cytosolic and insoluble proteins (Additional file 8). The microsomal membranes were treated with various solubilizing agents and then analyzed for any solubilized proteins. Using α-GFP antibodies, a major signal was observed at the expected molecular mass of AtAPY1-GFP (80 kDa) and a minor signal at 27 kDa (Figure 6B). The smaller protein most likely represents a degradation product of AtAPY1-GFP resulting from the protein extraction procedure. If the microsomal membranes were left untreated, AtAPY1-GFP was detected in the membrane fraction (Figure 6B), suggesting that the protein was membrane-bound. In support of this finding, the detergent SDS released the majority of the AtAPY1-GFP protein from the membranes (Figure 6B). In order to differentiate between AtAPY1 being a peripheral or an integral membrane protein, the microsomal membranes were subjected to high salt (2 M NaCl), alkaline (0.2 M Na2CO3) and denaturing (4 M urea) conditions. Peripheral proteins are removed from membranes by urea, which disturbs protein-protein interactions, or by high salt and alkaline treatments, which disrupt electrostatic and hydrophobic interactions, respectively. Except for trace amounts, the salt and Na2CO3 treatments did not shift AtAPY1 from the pellet fraction to the supernatant (Figure 6B), as expected for an integral membrane protein. AtAPY1 is also not a soluble Golgi protein, because AtAPY1 remained in the pellet fraction after Na2CO3 treatment, which is known to leach soluble proteins from the microsomal lumen into the supernatant fraction as shown in [70]. Urea released some AtAPY1-GFP protein into the supernatant, but most of the protein remained in the membrane fraction (Figure 6B). Although transmembrane proteins are generally not extracted by urea at all, type II integral proteins seem to be less tightly associated with the membrane than proteins with multiple transmembrane domains [71].
In summary, AtAPY1 showed the characteristics of a single-pass type II membrane protein.
Previous evidence for extracellular localization of AtAPY1
Programs predicting the subcellular localization of AtAPY1 gave inconclusive results: the program SubLoc version (v) 1.0 [72] suggested the cytoplasm, TargetP v. 1.1 [73] the mitochondria and WoLF PSORT v. 2.0 [74] the chloroplast. Predotar v. 1.3 [75] could not define a specific localization and PSORT v. 6.4 [76] predicted an extracellular localization. Therefore, experimental data became invaluable. Experiments with in vitro germinated pollen detected apyrase activity in the liquid germination medium [12]. This extracellular activity was inhibited by the addition of polyclonal antibodies raised against the N-terminally truncated (aa 36-471), denatured and recombinant AtAPY1 protein [12]. This result indicated that AtAPY1 was an extracellular protein. However, these antibodies might have targeted apyrases other than AtAPY1 in the cell wall. The apyrase conserved region (ACR) 1 of AtAPY1, for example, is 100% identical to the ACR1 of AtAPY2. Therefore, it is not surprising that the α-AtAPY1 antibodies also recognize AtAPY2 [12]. The similarity of the AtAPY1 ACRs with the other five Arabidopsis apyrases is not as high, but these conserved regions could still be binding targets for the α-AtAPY1 antibodies. The possibility of cross-reactivity is supported by experiments using the same α-AtAPY1 antiserum to successfully inhibit extracellular apyrase activity in a different plant species, namely cotton [14].
The experiment with the in vitro germinated pollen also showed that the protein exhibiting apyrase activity in the pollen germination medium was soluble, because the activity was measurable in siphoned-off aliquots [12]. Since AtAPY1 showed the solubilization characteristics of an integral membrane protein, it is unlikely to account for the previously detected extracellular apyrase activity.
Arabidopsis apyrases as regulators of eATP signals
AtAPY1 and AtAPY2 have been implicated as the regulators of eATP signals in Arabidopsis [12,14,15,[77][78][79] and were therefore expected to function in the cell wall. However, the substrate specificity of AtAPY1 does not fit this model: AtAPY1-GFP did not hydrolyze ATP and ADP, the typical substrates of ecto-apyrases, contradicting a role of AtAPY1 in regulating eATP signals.
[Figure 6B legend: After a 30-min treatment, the proteins were subjected to centrifugation at 100,000 g. The pellet fractions (P) and the supernatants (S) were separated in an 8% SDS polyacrylamide gel, blotted on a nitrocellulose membrane and incubated with α-GFP antibodies. The molecular weights of the protein standard are given in kDa. The AtAPY1-GFP protein (80 kDa) and a second band at 27 kDa (arrow) were detected with α-mouse IgG coupled with horseradish peroxidase. Similar results were obtained in three experiments with independent protein extracts.]
Even though AtAPY1 did not agree with the profile of an ecto-apyrase, AtAPY2 remains a candidate. However, Dunkley et al. [80] noted a Golgi localization of AtAPY2. They prepared membrane fractions from Arabidopsis callus cultures by density centrifugation and identified AtAPY2 in the Golgi protein pool by mass spectrometry. Recently, the Golgi localization of AtAPY2 was confirmed in another proteome analysis of enriched Golgi membranes [81]. Nevertheless, the Arabidopsis Information Resource database holds entries for two different splice variants of AtAPY2 that could result in different localizations of the corresponding proteins. The fact that transgenic Arabidopsis plants overexpressing AtAPY2 show less eATP-mediated superoxide production than WT [79] points to an extracellular localization of (a variant of) AtAPY2.
Since AtAPY1 was not detected in the cell wall by three subcellular detection methods and did not exhibit the substrate specificity of an ecto-apyrase, we hypothesize that some apyrase other than AtAPY1 is the regulatory enzyme observed in eATP signaling of Arabidopsis.
Possible AtAPY1 function(s) in the Golgi apparatus
A diphosphatase activity was described in the Golgi of Pisum sativum [82]. UDP was rapidly degraded to uridine monophosphate and P i by an integral membrane protein facing the Golgi lumen with its active site. It was also shown in P. sativum that UDP inhibits glycosyltransferases [83], which function in the assembly of primary plant cell wall components [84] in a feedback mechanism. So it was proposed that a diphosphohydrolase such as apyrase prevents inhibition of polysaccharide synthesis in the Golgi by constantly removing the by-product UDP from the glycosyl transfer reaction [85]. A role of the Golgi apyrase in polysaccharide biosynthesis was supported by their finding that the enzyme was only found in the elongation zone of the pea seedling stem. This zone constantly needs new cell wall material. The apyrase AtAPY1 could assume the same role in Arabidopsis. The developmental and growth phenotypes of the apyrase DKO mutants such as lack of pollen germination [17], no root and shoot growth, and distorted cell shapes [18] could be explained by defects in the biosynthesis of cell wall material. Such a role is supported by the substrate specificity of AtAPY1 because UDP and GDP are produced by glycosyltransferases in the Golgi.
More evidence for a function in glycosylation comes from a complementation experiment with the yeast Saccharomyces cerevisiae mutant Δgda1. This yeast mutant lacks the Golgi apyrase guanosine diphosphatase 1 (GDA1), leading to a glycosylation defect [86] which is abolished by complementation with AtAPY1 [81].
It is also believed that Golgi apyrases provide the nucleoside monophosphates which serve as the substrates in exchange for nucleotide sugars from the cytosol in an antiport mechanism [85]. Therefore, a shortage of nucleoside monophosphates would prevent or reduce the transport of nucleotide sugars into the Golgi [87] and increase the nucleotide sugar concentration in the cytosol. As a "release valve", the chloroplasts would import the sugar and convert it into starch. In agreement with this hypothesis, a pronounced increase in transitory starch was observed in the apyrase DKO mutants compared with the WT [18]. An accumulation of starch in the chloroplasts was also seen when the Golgi function was generally disrupted and explained by an indirect effect due to elevated cytosolic sugar concentrations [88].
Conclusions
The use of two different transgenic lines expressing either AtAPY1-SNAP or -GFP allowed specific labeling of AtAPY1 at the subcellular level. The cell organelles exhibiting the AtAPY1-specific fluorescent signals were identified as the Golgi apparatus by the following criteria: (i) morphology, (ii) size, (iii) lack of co-localization with the endocytic marker stain FM4-64 and trans-Golgi marker protein Rab E1d, but (iv) co-localization with the three Golgi marker proteins MEMB12, SYP32 and Got1p homolog. In addition, Golgi stacks were immunolabeled with α-GFP antibodies. While this paper was under review, the Golgi localization of AtAPY1 was independently confirmed by proteome analysis of Golgi membranes and by co-localization of AtAPY1-YFP with the CFP-labeled cis-Golgi marker α-mannosidase I in transiently transformed onion peels [81].
Although it can never be ruled out that the failure to detect extracellular AtAPY1 is a matter of methodological sensitivity, our results show that AtAPY1 is primarily present in the Golgi. Therefore, AtAPY1 is highly unlikely to represent the regulatory enzyme of eATP levels [12,14,15,77,78], and the true identity of the Arabidopsis ecto-apyrase(s) is yet to be found. Furthermore, the growth defects caused by the absence [18] and by the reduced amounts of AtAPY1 and AtAPY2 [12] are probably not directly linked to eATP signaling. Instead, the localization of AtAPY1 in the Golgi should be the basis for future investigations to understand how AtAPY1 in particular and plant Golgi apyrases in general affect plant growth and development.
Additional files
Additional file 1: AtAPY1-SNAP DNA sequence. The AtAPY1-SNAP DNA sequence present in the AtAPY1::AtAPY1-SNAP transgenic lines is shown.
Additional file 3: Immunofluorescence of pre-embedding-labeled AtAPY1-GFP in a root hair. Roots from AtAPY1-GFP-expressing seedlings were immunostained whole mount, embedded in resin and sectioned. The sections were successively incubated with α-GFP antibodies and secondary α-rabbit Fab fragments coupled with Alexa Fluor 488. A 3-μm cross-section of a root hair is shown. The Alexa Fluor 488 fluorescence is shown in green. Scale bar equals 20 μm.
Additional file 4: Live imaging of AtAPY1-GFP in various cell types. CLSM images of various tissues in AtAPY1-GFP expressing seedlings show the GFP signals in green overlaid with bright field view. (A) Cotyledon epidermis, (B) hypocotyl and (C) root tip. Scale bars = 20 μm.
Additional file 5: Specificity of the imaging settings for the detection of GFP and YFP fluorescence. The epidermis of cotyledons from transgenic plant lines was imaged with the GFP and YFP settings outlined under Methods for the "CLSM". The GFP and YFP fluorescence is shown in green and magenta, respectively. Scale bars = 20 μm. (A) The identical epidermal section with two guard cells from plants expressing AtAPY1-GFP only was imaged sequentially with the GFP and YFP settings. Dot-like signals appeared in the GFP detection channel only. Only weak autofluorescence of the thickened cell wall around the stomate was visible in the YFP detection channel. (B) The identical epidermal section with two guard cells from plants expressing YFP-SYP32 only was imaged sequentially with the GFP and YFP settings. Here, only very weak signals were detectable in the GFP detection channel, but strong YFP fluorescence appeared with the YFP-specific excitation and detection.
Additional file 6: Co-localization analysis of AtAPY1-GFP and YFP-Got1p homolog. CLSM images of epidermal cells of cotyledons from transgenic lines co-expressing AtAPY1-GFP and YFP-Got1p homolog were taken. The GFP fluorescence is shown in green and the YFP fluorescence in magenta. The bright field-type image was acquired with the transmitted light detector. The fluorescence signals for AtAPY1-GFP and YFP-Got1p homolog were detected separately and merged for co-localization with the "Co-localization Finder" plugin of ImageJ. Co-localization of the two proteins is depicted as white signals. The corresponding scatterplot was analyzed with the ImageJ "Colocalization Threshold" and "Coloc2" tools. The x-axis represents the pixel intensities from the GFP channel and the y-axis those from the YFP channel. Rr = Pearson's correlation coefficient. Scale bar = 20 μm.
Additional file 7: Substrate specificity of AtAPY1-GFP at pH 5.5. The activity of AtAPY1-GFP at pH 5.5 in the presence of various substrates was measured. No activity was detectable with AMP as substrate (data not shown). Different letters above the columns indicate mean values that are significantly different from one another (p<0.05, Tukey test). Error bars represent standard deviations of two phosphate measurements from one reaction (see Methods). Abbreviation: Pi, inorganic phosphate.
Additional file 8: Analysis of the purity of microsomal membrane and soluble protein fractions. Protein extracts from transgenic plants expressing 35S::AtAPY1-GFP were treated with either extraction buffer, 0.2% SDS, 2 M NaCl, 0.2 M Na 2 CO 3 or 4 M urea and then centrifuged at 100,000 g to obtain microsomal membrane fractions (P100) and supernatants (S100). Proteins from each fraction (40 μg each) were subjected to Western blot analysis. The enrichment of microsomal and insoluble proteins in the P100 fractions and of soluble proteins in the S100 fractions was confirmed with antibodies against marker proteins. The 37-kDa cytosolic fructose-1,6-bisphosphatase (cFBPase) served as a marker protein for soluble proteins. Actin (45 kDa) was used as a marker protein for insoluble proteins under actin polymerizing conditions found in the extraction buffer and 2 M NaCl [89] and as a soluble marker under actin depolymerizing conditions such as 0.2 M Na 2 CO 3 , 0.2% SDS and 4 M urea [89,90].
Maresin 1 inhibits transforming growth factor-β1-induced proliferation, migration and differentiation in human lung fibroblasts
The myofibroblast has been implicated to be an important pathogenic cell in all fibrotic diseases, through synthesis of excess extracellular matrix. Lung fibroblast migration, proliferation and differentiation into a myofibroblast-like cell type are regarded as important steps in the formation of lung fibrosis. In the present study, the effect of maresin 1 (MaR 1), a pro-resolving lipid mediator, on transforming growth factor (TGF)-β1-stimulated lung fibroblasts was investigated, and the underlying molecular mechanisms were examined. The results of the present study demonstrated that MaR 1 inhibited TGF-β1-induced proliferative and migratory ability, assessed using MTT and scratch wound healing assays. The TGF-β1-induced expression of α-smooth muscle actin (α-SMA) and collagen type I, the hallmarks of myofibroblast differentiation, was decreased by MaR 1 at the mRNA and protein levels, determined using the reverse transcription-quantitative polymerase chain reaction and western blot analysis, respectively. Immunofluorescence demonstrated that MaR 1 downregulated the TGF-β1-induced expression of α-SMA. In addition, phosphorylated mothers against decapentaplegic homolog 2/3 (Smad2/3) and extracellular signal-related kinases (ERK) 1/2 were upregulated in TGF-β1-induced lung fibroblasts, and these effects were attenuated by MaR 1 administration. In conclusion, the results of the present study demonstrated that MaR 1 inhibited the TGF-β1-induced proliferation, migration and differentiation of human lung fibroblasts. These observed effects may be mediated in part by decreased phosphorylation of Smad2/3 and ERK1/2 signaling pathways. Therefore, MaR 1 may be a potential therapeutic approach to lung fibrotic diseases.
Introduction
Pulmonary fibrosis occurs in various clinical conditions, including chronic obstructive pulmonary disease, asthma, idiopathic pulmonary fibrosis, interstitial pneumonia, radiation-induced lung injury and end-stage lung failure (1)(2)(3). Pulmonary fibrosis is characterized by destruction of the lung architecture, and the production and excessive accumulation of a collagen-rich extracellular matrix (ECM), eventually leading to respiratory insufficiency. The primary etiology of fibrosis remains poorly understood, and there is a lack of therapeutic interventions and novel therapies for pulmonary fibrosis (4,5).
Previous studies have demonstrated that fibroblast proliferation and activation serve a role in the pathogenic fibrotic process. When stimulated by transforming growth factor (TGF)-β1 or other local cytokines, fibroblasts proliferate, migrate, and differentiate into myofibroblasts, which drive lung fibrogenesis (6). Fibroblasts and myofibroblasts produce numerous cytokines and growth factors in addition to the ECM, which are the hallmarks of fibrotic pulmonary disease (7). A previous study demonstrated that dysregulation of fibroblast activation and differentiation into myofibroblasts may be a therapeutic strategy for pulmonary fibrosis (8).
The aim of the present study was to evaluate the impact of MaR 1 on TGF-β1-induced human lung fibroblast (MRC-5) proliferation, migration and differentiation in vitro. In order to obtain an increased understanding of the underlying mechanisms, the effects of MaR 1 on mothers against decapentaplegic homolog 2/3 (Smad2/3) and extracellular signal-regulated kinase (ERK) 1/2 phosphorylation signaling pathways were investigated in TGF-β1-treated MRC5 cells.
Materials and methods
Cell culture and stimulation. MRC5 lung fibroblasts (human fetal lung fibroblasts; American Type Culture Collection, Manassas, VA, USA) were grown in Dulbecco's modified Eagle medium (Hyclone; GE Healthcare Life Sciences, Logan, UT, USA) with 4.5 g/l glucose, 10% fetal bovine serum (Thermo Fisher Scientific, Inc., Waltham, MA, USA) and 1% penicillin/streptomycin solution. Cells were maintained at 37˚C in a humidified incubator, in the presence of 5% CO 2 . Following incubation for 3 days, the cells were cultured to ~80% confluence.
Cell viability assay. Cell viability was assessed using an MTT assay according to the manufacturer's protocol (Cayman Chemical Company). MRC5 cells (2.5x10³/100 µl) were seeded into the wells of 96-well culture plates and treated as described above. At 24 h after treatment, cells were incubated with 50 µg MTT solution for 4 h. The supernatants were removed and dimethyl sulfoxide was added to each well. The quantity of aqueous soluble formazan, produced from the tetrazolium salt by viable cells, was measured spectrophotometrically as the absorbance (A) at 570 nm and is proportional to the number of living cells. Optical density A570 was measured in six samples of each group.
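A minimal sketch of how viability is commonly expressed from such A570 readings is given below; the blank correction and the normalization to the untreated control are assumptions, since the manuscript only states that A570 was measured in six wells per group.

# Hypothetical viability calculation from MTT A570 readings (six wells per group).
# The blank value and normalization to the untreated control are assumptions,
# and all readings below are invented for illustration.
from statistics import mean

def percent_viability(a570_treated, a570_control, a570_blank=0.05):
    signal_treated = mean(a570_treated) - a570_blank
    signal_control = mean(a570_control) - a570_blank
    return 100.0 * signal_treated / signal_control

control = [0.82, 0.79, 0.85, 0.80, 0.83, 0.81]
tgfb1 = [1.10, 1.05, 1.12, 1.08, 1.11, 1.07]
print(f"{percent_viability(tgfb1, control):.1f}% of control")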
Cell migration. In order to evaluate the migratory response of MaR 1 pretreated cells following TGF-β1 exposure, a scratch wound healing assay was performed. Cells were seeded at confluent status for 24 h and pre-incubated with MaR 1 (10 nM) for 30 min prior to the addition of TGF-β1 (10 ng/ml) as a chemotactic factor. A cell-free area (900-µm scratch wound) in each well was created using a 200-µl pipette tip. A total of 24 h subsequently, the migratory cells in the gap were counted using a light microscope at x200 magnification. All of the data were obtained in six independent experiments.
RNA extraction and reverse transcription-quantitative polymerase chain reaction (RT-qPCR). Cells were pretreated with or without MaR 1 (10 nM) for 30 min, followed by the addition of TGF-β1 (10 ng/ml) for 24 h. Total cellular RNA was extracted using TRIzol reagent (Invitrogen; Thermo Fisher Scientific, Inc.), and 2 µg RNA was reverse transcribed into cDNA using the RevertAid™ First Strand cDNA Synthesis kit (Thermo Fisher Scientific, Inc.), according to the manufacturer's protocol. The specific primer sequences (Invitrogen; Thermo Fisher Scientific, Inc.) used were as follows (5'-3'): α-SMA forward, GAC AAT GGC TCT GGG CTC TGT AA and reverse, ATG CCA TGT TCT ATC GGG TAC TT; collagen α-1(I) chain (COL1A1) forward, GAG GGC CAA GAC GAA GAC ATC and reverse, CAG ATC ACG TCA TCG CAC AAC; and GAPDH forward, AGT GCC AGC CTC GTC TCA TAG and reverse, CGT TGA ACT TGC CGT GGG TAG. Each specific gene product was amplified by real-time PCR using SYBR-Green Master Mix in the StepOne™ Real-Time PCR system (Thermo Fisher Scientific, Inc.). PCR thermal cycling was conducted at 95˚C for 15 sec, followed by denaturing at 95˚C for 5 sec, annealing at 60˚C for 60 sec and a final extension step at 72˚C for 10 sec, for a total of 40 cycles. The reaction was duplicated for each sample. The relative expression levels of each transcript were normalized to GAPDH expression. The quantitative fold changes in gene expression were calculated using the 2 -ΔΔCq (18) method following normalization to GAPDH.
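The 2^-ΔΔCq normalization cited above can be written out explicitly as below; the Cq values are invented solely to show the arithmetic, with GAPDH as the reference gene as in the study.

# Worked example of the 2^-ΔΔCq method with GAPDH as the reference gene.
# All Cq values are invented for illustration only.
def fold_change(cq_target_treated, cq_ref_treated, cq_target_control, cq_ref_control):
    delta_treated = cq_target_treated - cq_ref_treated     # ΔCq of the treated sample
    delta_control = cq_target_control - cq_ref_control     # ΔCq of the control sample
    delta_delta = delta_treated - delta_control            # ΔΔCq
    return 2 ** (-delta_delta)

# e.g. α-SMA after TGF-β1 vs. untreated control (hypothetical Cq values)
print(round(fold_change(22.0, 18.0, 25.0, 18.5), 2))       # prints 5.66 (≈ 5.7-fold induction)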
Immunofluorescence assay. Cells were fixed with 4% paraformaldehyde for 30 min and permeabilized using 0.5% Triton X-100 for 5 min, followed by blocking with 5% bovine serum albumin (Dingguo Changsheng Biotechnology Co., Ltd., Beijing, China) at room temperature for 60 min. The fixed cells were incubated with a specific primary α-SMA monoclonal antibody (1:200, cat no. AA132; Beyotime Institute of Biotechnology, Haimen, China) for 12 h at room temperature, washed with PBS, and incubated with streptavidin biotin complex-fluorescein isothiocyanate (Boster Biological Technology, Pleasanton, CA, USA) at 37˚C for 30 min. The cells were incubated with DAPI (BestBio, Shanghai, China) at room temperature for 1 min; immunofluorescence was then observed and images were captured using an Olympus immunofluorescence microscope (Olympus Corporation, Tokyo, Japan) at x40 magnification.
Statistical analysis. All experimental data are presented as the mean ± standard error of the mean and were analyzed using SPSS software (version 17.0; SPSS Inc., Chicago, IL, USA). Statistical analysis was performed using one-way analysis of variance and Student-Newman-Keuls post hoc analysis. P<0.05 was considered to indicate a statistically significant difference.
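A sketch of the described analysis in Python is shown below; note that SciPy/statsmodels do not ship a Student-Newman-Keuls test, so Tukey's HSD is used here as a stand-in post hoc comparison, and the group values are placeholders.

# One-way ANOVA with a post hoc comparison, roughly mirroring the described analysis.
# Tukey's HSD stands in for Student-Newman-Keuls; the data are placeholders.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([0.80, 0.82, 0.79, 0.84, 0.81, 0.83])
tgfb1 = np.array([1.08, 1.12, 1.05, 1.10, 1.11, 1.07])
tgfb1_mar1 = np.array([0.90, 0.93, 0.88, 0.95, 0.91, 0.92])

f_stat, p_value = f_oneway(control, tgfb1, tgfb1_mar1)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, tgfb1, tgfb1_mar1])
groups = ["control"] * 6 + ["TGF-b1"] * 6 + ["TGF-b1 + MaR1"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))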
MaR 1 inhibits TGF-β1-induced proliferation in MRC5 cells.
As presented in Fig. 1B, TGF-β1 significantly increased MRC5 cell viability measured by MTT assay (P<0.05). Incubation with MaR 1 alone exerted no effect on MRC5 cell viability compared with the untreated group (P>0.05). When cells were pretreated with MaR 1, the TGF-β1-induced proliferation of MRC5 cells was significantly decreased at 10 and 100 nM (P<0.05). The lowest concentration of MaR 1 (1 nM) exerted no effect on viability. As presented in Fig. 1B, the inhibition of TGF-β1-induced proliferation was dose-dependent. Concentrations of 10 ng/ml and 10 nM were selected as the working concentrations of TGF-β1 and MaR 1, respectively.
MaR 1 attenuates TGF-β1-induced migration in MRC5 cells.
The scratch assay was used to measure the effect of MaR 1 on TGF-β1-induced fibroblast migration. TGF-β1 significantly increased the migration of fibroblasts compared with the control and MaR 1-only treated groups (P<0.05). MaR 1 alone exerted no apparent influence on the migration of fibroblasts. However, administration of MaR 1 significantly decreased fibroblast migration and scratch closure induced by TGF-β1 (P<0.05; Fig. 2).
MaR 1 inhibits MRC5 cell differentiation into myofibroblasts, induced by TGF-β1 in vitro.
In order to determine whether MaR 1 exerts an inhibitory effect on the TGF-β1-induced differentiation of fibroblasts into myofibroblasts, the expression of α-SMA and collagen type I was examined in MRC5 cells treated with TGF-β1, in the presence or absence of MaR 1. RT-qPCR analysis demonstrated that administration of TGF-β1 resulted in a significant increase in α-SMA and COL1A1 (the gene that encodes the pro-α1 chains of collagen type I) mRNA levels compared with the control and MaR 1-only treated groups. By contrast, treatment with MaR 1 decreased the mRNA level of α-SMA induced by TGF-β1. It was observed that MaR 1 decreased the TGF-β1-induced expression of COL1A1 mRNA, an additional marker of myofibroblast differentiation (P<0.05; Fig. 3). Immunofluorescence staining analysis demonstrated that treatment with TGF-β1 increased the intensity of α-SMA staining in MRC5 cells; whereas, treatment of MRC5 cells with TGF-β1 following MaR 1 significantly inhibited this enhancement (P<0.05). In addition, MaR 1 alone exerted no effect on α-SMA expression (Fig. 4A). Consistent with the results of the RT-qPCR and immunofluorescence staining assays, western blot analysis demonstrated that MaR 1 decreased the TGF-β1-induced expression of α-SMA and collagen type I (P<0.05). However, MaR 1 alone exhibited no effect on protein expression ( Fig. 4B and C).
MaR 1 decreases TGF-β1-induced Smad2/3 and ERK1/2 phosphorylation in MRC5 cells.
TGF-β1 signals were followed by the activation of cytoplasmic effectors, including Smad and non-Smad mediator proteins. In order to elucidate the possible mechanism involved in the effect of MaR 1 on TGF-β1-stimulated profibrotic activity, the expression of p-Smad2, p-Smad3, and p-ERK1/2 was evaluated. As presented in Fig. 5, treatment with TGF-β1 resulted in significantly increased phosphorylation of Smad2, Smad3 and ERK1/2 compared with the control group. Application of MaR 1 decreased the TGF-β1-induced expression of p-Smad2, p-Smad3 and p-ERK1/2 in MRC5 cells, compared with treatment with TGF-β1 alone. Treatment with MaR 1 alone exhibited no effect on Smad and ERK mediator proteins.
Discussion
In the present study, it was demonstrated that pretreatment with MaR 1 was able to attenuate TGF-β1-dependent lung fibroblast proliferation and migration, TGF-β1-dependent increases in lung fibroblast α-SMA and collagen type I expression, and TGF-β1-induced Smad2/3 and ERK1/2 phosphorylation in lung fibroblasts. It was additionally observed that administration of MaR 1 post-TGF-β1 stimulation was able to inhibit human lung fibroblast proliferation, migration and differentiation in vitro (data not presented). The fibroblast/myofibroblast has been identified as an important pathogenic cell in all fibrotic diseases through the secretion of excess ECM, cytokines and growth factors, and tissue contraction. Myofibroblasts have three biological origins: tissue-resident fibroblasts, epithelial to mesenchymal transition, and bone marrow-derived mesenchymal precursors (19). Previous evidence has suggested that myofibroblasts may be a target for anti-fibrotic therapy (6)(7)(8). Lung fibroblast migration, proliferation and differentiation into a myofibroblast-like cell type are regarded as important steps in lung fibrogenesis (5). It is well-known that TGF-β1 is the most potent stimulator of the activation and the differentiation of fibroblasts into myofibroblasts (20). Preventing myofibroblast differentiation in migratory and proliferative fibroblasts has been demonstrated to attenuate lung fibrosis (21).
Recently discovered pro-resolving lipid mediators, including lipoxins, protectins, resolvins and maresins, have been reported to exhibit potent anti-inflammatory and pro-resolving effects in multiple disease models, by acting on neutrophils, macrophages and other inflammatory cells (9,10). Previous research demonstrated that pro-resolving lipid mediators exhibit anti-fibrotic and anti-proliferative abilities. Lipoxin A4 was demonstrated to inhibit the proliferation of human lung fibroblasts evoked by connective tissue growth factor, and suppressed the tumor necrosis factor-α-induced proliferation of rat mesangial cells (22,23). Aspirin-triggered-15R-lipoxin A4 and lipoxin A4 inhibited endothelial cell proliferation stimulated with vascular endothelial growth factor or leukotriene D4 (24). Systemic administration of resolvin 2 and MaR 1 attenuated neointimal hyperplasia and vessel remodeling in a mouse model of arterial injury, by reducing early proliferation in the vessel wall and inhibiting vascular smooth muscle cell migration mediated by platelet-derived growth factor (16). As demonstrated by the MTT and scratch wound healing assays, the present study demonstrated that MaR 1 is able to inhibit fibroblast proliferation and migration stimulated by TGF-β1. Differentiated myofibroblasts are characterized by increased expression of collagen and α-SMA, the most commonly used molecular markers (6). The results of the present study demonstrated that MaR 1 prevents TGF-β1-induced myofibroblast differentiation in cultured lung fibroblasts, as exhibited by the underexpression of collagen type I and α-SMA at the mRNA and protein levels. It was hypothesized that, in addition to inhibiting epithelial-mesenchymal transition in lung alveolar epithelial cells (17), MaR 1 may inhibit the effects of TGF-β1 on fibroblasts, demonstrating a second, synergistic mechanism whereby MaR 1 exerts its anti-fibrotic effects. To the best of our knowledge, the present study is the first to directly demonstrate that MaR 1 inhibits TGF-β1-induced myofibroblast differentiation from lung fibroblasts.
TGF-β1 exhibits numerous biological functions associated with airway remodeling, fibrosis and myofibroblast differentiation, including enhanced cellular migration, proliferation and ECM synthesis (25,26). TGF-β1 elicits its biological activities by signaling through a receptor complex of serine/threonine kinase type I and type II receptors, which induces signal transduction through receptor-mediated Smad signaling pathway in addition to non-Smad signaling pathways (27). In order to investigate the possible signaling cascade by which MaR 1 may suppress human lung fibroblast cell activation, the effects of MaR 1 on the Smad2/3 and ERK1/2 signaling pathways in TGF-β1-stimulated fibroblasts were investigated. It has been demonstrated that the TGF-β1/Smad pathway is activated in myofibroblast differentiation, and inhibiting its phosphorylation may attenuate myofibroblast differentiation and the fibrogenic response. Dong et al (28) demonstrated that interleukin-27 inhibited the proliferation, differentiation and collagen synthesis of lung fibroblasts by reducing the activation of the TGF-β1/Smad signaling pathways. In cystic fibrosis lungs, TGF-β1 signaling was markedly increased, as observed by p-Smad2 expression, increased myofibroblast differentiation and tissue fibrosis (29). Tumelty et al (30) reported that aortic carboxypeptidase-like protein increased α-SMA and collagen protein expression by inducing Smad2/3 phosphorylation in primary lung fibroblasts and promoted myofibroblast differentiation. As expected, the results of the present study demonstrated that p-Smad2 and p-Smad3 were highly expressed in TGF-β1 treated cells compared with controls, and that this effect was inhibited by treatment with MaR 1. The results of the present study suggested that MaR 1 inhibited myofibroblast differentiation, in part through the inhibition of Smad2 and Smad3 phosphorylation.
In addition to Smad signaling pathways, previous studies have demonstrated that non-Smad pathways, including ERK1/2, may serve a role in TGF-β1-mediated biological activities. The ratio of p-ERK to ERK indicates the degree of activation of the ERK pathway, which serves a role in a number of fibroblast/myofibroblast functions, including proliferation, migration and differentiation. A previous study demonstrated that MaR 1 may protect against liver injury induced by carbon tetrachloride by inactivating mitogen-activated protein kinase/ERK1/2 signaling pathways (15). An additional previous study demonstrated that the downregulation of ERK1/2 inhibited high mobility group protein B1-induced myofibroblast differentiation and migration, using the human lung fibroblast cell line WI-38 (31). In addition, early reports indicated that IM-412 inhibited TGF-β1 induced expression of the fibrotic markers α-SMA and fibronectin, and collagen accumulation in human lung fibroblasts by decreasing Smad2 and Smad3 phosphorylation, in addition to ERK activity (32). Chung et al (33) reported that resistin like α induced myofibroblast differentiation by increasing the rapid phosphorylation of the ERK signaling pathway. Consistent with this report, the results of the present study demonstrated that the effects of MaR 1 were partly mediated by decreasing ERK1/2 phosphorylation in TGF-β1-treated human lung fibroblasts. A number of growth factors stimulate ERK1/2, and it has been reported that activated ERK phosphorylates Smad proteins within their linker regions, resulting in the maintenance of Smad-mediated biological activities (34). However, the precise mechanisms through which MaR 1 attenuates the TGF-β1-induced proliferation, migration and differentiation of human lung fibroblasts requires further clarification.
In conclusion, the results of the present study demonstrated that MaR 1 inhibited TGF-β1-dependent profibrotic activity in lung fibroblasts when administered prior to or following TGF-β1 stimulation, and that the protective effect of MaR 1 may be associated with inactivation of the Smad2/3 and ERK1/2 signaling pathways in lung fibroblasts. The present study provided evidence for the potential role of MaR 1 in fibrotic lung disease. In addition to lung fibrosis, it is hypothesized that MaR 1 may act on other types of fibroblasts, representing a potentially useful therapeutic strategy in other fibrotic diseases.
Distribution and fluctuations of backswimmers (Notonectidae) in a tropical shallow lake and predation on microcrustaceans
Notonectids are widely distributed in freshwaters and can prey on zooplankton in temperate lakes. However, their role in structuring the zooplankton community of tropical lakes is unknown. Thus, our objective was to study the notonectid Martarega uruguayensis in a Brazilian tropical shallow lake to evaluate its potential as a zooplankton predator. Its horizontal distribution in the lake was analyzed throughout one year in fortnightly samplings. Backswimmers were more abundant (mean density 162.9 ± 25.8 ind.m–2) in the cool-dry season, with a strong preference for the littoral zone (mean density 139.9 ± 17.5 ind.m–2). Laboratory experiments were undertaken with young and adult notonectids and the two most abundant cladocerans, Daphnia gessneri and Ceriodaphnia richardi, as prey. Predation by backswimmers in the laboratory showed that only juveniles fed on microcrustaceans (mean ingestion rate of 1.2 ± 0.2 Daphnia and 1.0 ± 0.2 Ceriodaphnia per predator per hour), without size selectivity. Adult insects probably have difficulties in detecting and manipulating small planktonic organisms. On the other hand, young instars might influence the zooplankton community, especially in the littoral zone of the lake. This study contributes to a better understanding of trophic interactions in tropical shallow lakes and is the first to investigate predation by a notonectid on microcrustaceans from Lake Monte Alegre.
Introduction
Predation in lentic ecosystems is one of the most important ecological interactions directly influencing zooplankton community structure (Hall et al., 1970;Zaret, 1980;Kerfoot and Sih, 1987;Arner et al., 1998).The role of predation in prey communities would depend on many factors, such as the joint effect of vertebrate and invertebrate predators, duration of predation pressure, prey size, density of predators and prey selectivity (Brooks and Dodson, 1965;Brooks, 1968;Hall et al., 1976;Peckarsky, 1982;Hanazato and Yasuno, 1989;Eitam and Blaustein, 2010).In addition, predators can influence prey communities, not only by direct effects of consumption, but also through sub lethal effects, such as injuries that might kill prey.
The aquatic insects known as backswimmers (Heteroptera: Notonectidae) are invertebrate predators that can play a major role in shaping the structure and the abundance of zooplankton population in several freshwater environments (Nesbitt et al., 1996;Blaustein, 1998;Hampton et al., 2000).They are mainly associated to the littoral areas with stands of macrophytes, although they can also inhabit the limnetic zone, as their distribution pattern depends on both biotic and abiotic factors (Bennett and Streams, 1986;Bailey, 1987;Streams, 1987a;Gilbert et al., 1999;Foltz and Dodson, 2009).Notonectids normally explore the water surface, although they are also able to dive to at least 0.5 m (Streams, 1992).They usually attack prey by grabbing them with their fore and mid legs, piercing them with the rostrum, and injecting digestive enzymes before sucking the inner content (Streams, 1987b).They have a broad diet that includes several aquatic organisms, such as rotifers, crustaceans, mosquito larvae, tadpoles and aquatic insects (Hirvonen, 1992;Blaustein, 1998;Gilbert and Burns, 1999;Hampton and Gilbert, 2001;Saha et al., 2010;Jara et al., 2012;Fischer et al., 2012Fischer et al., , 2013)).A few papers have shown that there is a decreased appetite associated with adults (Scott and Murdoch, 1983;Murdoch and Scott, 1984), while Gilbert and Burns (1999) showed the opposite.Furthermore, they often feed on terrestrial organisms trapped on the water surface that become vulnerable to predation, such as bees, ants, and mosquitos.Their ability and preference to feed on zooplankton and insect larvae have been observed in the laboratory (Murdoch and Sih, 1978;Sih, 1982;Scott and Murdoch, 1983;Murdoch et al., 1984;Murdoch and Scott, 1984;Streams, 1987b;Gilbert and Burns, 1999;Hampton and Gilbert, 2001;Walsh et al., 2006;Gergs and Ratte, 2009;Gergs et al., 2010;Saha et al., 2010;Fischer et al., 2012Fischer et al., , 2013)), in outdoor containers (Murdoch and Sih, 1978;Murdoch et al., 1984;Arner et al., 1998;Eitam and Blaustein, 2010) and natural habitats (Nesbitt et al., 1996;Blaustein, 1998;Hampton et al., 2000).
Most studies on invertebrate predation were carried out in temperate lakes, so new insight on tropical lakes may be of great interest to know the role of predation on the structuring of prey communities.Studies carried out on the biotic and abiotic aspects of the tropical shallow Lake Monte Alegre, in southeastern Brazil, resulted in the knowledge of the major factors involved in structuring the zooplankton community in this ecosystem.Predation by water mite (Cassano et al., 2002) and Chaoboridae larvae has emerged as the most important factor in structuring the zooplankton community (Arcifa et al., 1992(Arcifa et al., , 2015;;Arcifa, 2000;Castilho-Noll and Arcifa, 2007a, b), whose impact is stronger during the warm season (Arcifa et al., 1992(Arcifa et al., , 2015)), and influencing the vertical migration of microcrustaceans (Minto et al., 2010).However, there are other invertebrate predators that have never been studied in this environment and might affect zooplankton community, such as notonectids.Therefore, this study does contribute to a better understanding of the trophic interactions in tropical shallow lakes and also of the ecology of a notonectid.
This study is part of a larger project on interactions in the lake and the structuring of its communities. The aim was to determine experimentally the potential predation of young and adult backswimmers Martarega uruguayensis (Berg) on the most abundant and frequent zooplankton species in Lake Monte Alegre, the cladocerans Daphnia gessneri Herbst and Ceriodaphnia richardi Sars. Fluctuations of the backswimmer population, its age structure and its spatial distribution were evaluated to detect periods and zones of the lake where predation upon microcrustaceans would be potentially more intense. The hypotheses are that predation by young M. uruguayensis on the two cladoceran species would be more intense than predation by adults, and that the notonectid population is more abundant in the littoral zone, where it could exert a potentially higher predation pressure.
Study site
Lake Monte Alegre (21° 10' 04" S, 47° 51' 28" W) is a small, shallow, tropical and eutrophic reservoir (area = 7 ha; maximum depth = 5 m; altitude 500 m a.s.l.) (see Figure 1). It is located in southeastern Brazil, in the town of Ribeirão Preto (SP), inside the campus of the University of São Paulo. The reservoir was formed in 1942 by the damming of the Laureano Creek, which belongs to the Pardo River basin. The lake was used initially for irrigation and recreation, but since the 1980s it has been used for research and teaching, besides having an ornamental value. Although it is a reservoir, Lake Monte Alegre functions similarly to a natural lake owing to the absence of dam manipulation and a residence time that is relatively high for its dimensions (~45 days). Currently, the margins and surroundings are covered by dense terrestrial vegetation, mostly trees and herbaceous plants. The aquatic vegetation is predominantly composed of the emergent macrophyte Ludwigia sp., distributed in narrow stands and occupying some regions of the littoral area. The region has a tropical climate, with two well-defined seasons: a warm-wet season (October-April) and a cool-dry season (May-September) (Arcifa et al., 1990). The only filter-feeding planktivore is the adult of the exotic cichlid Tilapia rendalli Boulenger, which is not abundant (Arcifa and Meschiatti, 1993). The main invertebrate predators are the dipteran Chaoborus brasiliensis Theobald, the water mite Krendowskia sp. and Martarega uruguayensis, the only notonectid species.
Field sampling
Population fluctuations and spatial distribution of M. uruguayensis were studied in fortnightly samplings over one year, from December 2011 to December 2012. On each sampling date, backswimmers were collected in the littoral zone, near the edge of macrophyte stands (1 m deep, with stands of Ludwigia sp.), and in the limnetic zone (5 m deep, without macrophytes). Samplings were carried out by superficial sweeping with a dip net (37 × 28 cm; 500 µm mesh) along three longitudinal transects, 10 m long each, in both zones (see Figure 1). The transects were sufficiently separated to ensure independence of samples. After each sampling, surface water temperature was measured with a Yellow Springs™ Model 95 probe. Insects were preserved in 80% ethanol; individuals were counted to calculate population densities on each date and were measured under a stereomicroscope to identify instars and estimate their relative abundance in the population. The cladocerans used as prey in the experiments were collected with a plankton net (60 µm mesh) in three vertical hauls. They were cultivated in the laboratory to obtain a sufficient number of individuals for the experiments.
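As a rough illustration of how transect counts translate into the densities reported below (ind.m⁻²), the following sketch is ours, not the authors' code; it assumes the area swept per transect can be approximated by the net-frame width multiplied by the transect length, which is only one plausible convention.

```python
# Illustrative sketch (not the authors' code): converting one transect count
# into a density (ind. per m^2), assuming the swept area is approximated by
# the net-frame width (0.37 m) times the transect length (10 m).
NET_WIDTH_M = 0.37      # dip-net frame width (37 cm)
TRANSECT_LENGTH_M = 10  # length of each longitudinal transect

def density_per_m2(count):
    """Density of backswimmers in one transect sweep (individuals per m^2)."""
    swept_area = NET_WIDTH_M * TRANSECT_LENGTH_M
    return count / swept_area

# Example: 520 individuals counted in one hypothetical littoral sweep
print(round(density_per_m2(520), 1))  # ~140.5 ind. m^-2
```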
Laboratory experiments
Experiments were carried out in an environmental chamber (FANEM™, model CDG), at 25 °C and under diffuse light. Overall, four experimental assays were performed following the general conditions and procedures shown in Table 1. After field sampling, cladocerans were cultured in glass bottles attached to a plankton wheel, with addition of 1 mgC.L⁻¹ of the chlorophycean Desmodesmus spinosus (Chodat) (formerly Scenedesmus spinosus) every other day. The backswimmers used in the experiments, collected with the dip net described in the previous section, were kept in the laboratory in the environmental chamber, in 80 mL beakers filled with filtered lake water (glass fiber, Millipore™ AP20). They were deprived of food for 24 h prior to the experiments to standardize the level of hunger. Feeding trials were set up separately to analyze the consumption of cladocerans by young and adult instars of the notonectid. In each experiment, only one prey species was used. We set up two experimental treatments with six replicates each: 1. predator + prey (P+); and 2. control with prey only (P-). Each replicate contained 20 prey as the Initial Density (ID) and 2 predators in the (P+) treatment (according to Scott and Murdoch, 1983; Cassano et al., 2002). Densities in the experimental trials were close to the low densities found in the lake for Martarega (38 ind.m⁻²) and for microcrustaceans (10 ind.L⁻¹).
After the acclimation time, the notonectids were placed in 1800 mL beakers filled with 500 mL of filtered lake water. Then, 1 mgC.L⁻¹ of D. spinosus was added to the beakers as food for the cladocerans, and the beakers were arranged in a systematic way to avoid pseudoreplication (Hurlbert, 1984). After 2 hours, predators were removed from the beakers and the following variables were evaluated: Intact Prey (IP), Natural Prey Death (NPD), Experimental Error (EE) and Ingestion Rate (IR). IP represents the individuals that remained alive and NPD the individuals that died without any predator influence. These individuals are easily recognizable because they show no damage, such as a crushed carapace or holes, and their internal parts are intact. EE was calculated only for the (P-) treatments and represents the mean error in prey counting at the end of the experiment, since the number of individuals in (P-) should be the same at the beginning and the end of the experiment; it was calculated as the Initial Density (ID) minus the Intact Prey (IP) in the control treatment (P-). The estimated IR of prey eaten per predator per hour was calculated following Gilbert and Burns (1999) (Equation 1):

IR = (IPc - IPe) / (T × N)    (1)

where (IPc) is the Intact Prey in the control treatment (P-), (IPe) is the Intact Prey in the experimental treatment with predators (P+), (T) is the time in hours and (N) is the number of predators in the treatment.
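To make the bookkeeping of Equation 1 concrete, the minimal sketch below (ours, not the authors' code; the example counts are invented) computes EE and IR from the prey counts of one replicate pair.

```python
# Minimal illustrative sketch of the prey-count bookkeeping described above.
# Example counts are invented, not the authors' data.
INITIAL_DENSITY = 20   # prey per replicate (ID)
N_PREDATORS = 2        # predators per (P+) replicate
DURATION_H = 2         # experiment duration (T), hours

def experimental_error(ip_control):
    """EE = ID - IP in the control treatment (P-)."""
    return INITIAL_DENSITY - ip_control

def ingestion_rate(ip_control, ip_experimental):
    """IR (prey eaten per predator per hour), Equation 1."""
    return (ip_control - ip_experimental) / (DURATION_H * N_PREDATORS)

# Example replicate pair: 20 intact prey in the control, 14 in the (P+) beaker
print(experimental_error(20))   # 0 -> no counting error
print(ingestion_rate(20, 14))   # 1.5 prey per predator per hour
```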
Statistical analyses
The mean densities of M. uruguayensis in the littoral and limnetic zones and in the warm-wet and cool-dry seasons were compared with the non-parametric Mann-Whitney test. One-way analysis of variance (ANOVA) was applied to compare the mean relative abundances of instars (I to VI) over the whole period in the lake. To determine the size of the instars, a histogram was built with the frequency of each size class of the notonectid using the field collections; the histogram peaks thus corresponded to the sizes of the instars. The dispersion index (DI = σ²/μ) was used to infer the natural aggregation of the notonectid in each lake zone and sampling period. The index was used to test the null hypothesis that the observed distribution pattern is random. The DI with n-1 degrees of freedom is approximately distributed as χ²; small values indicate random distribution, while large values (≥ 5.99; P ≤ 0.05) indicate aggregation.
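As a small illustration of the aggregation criterion, the sketch below (ours, with invented counts) computes DI from replicate transect counts and compares it with the χ² critical value for n-1 degrees of freedom cited above.

```python
import numpy as np
from scipy.stats import chi2

# Illustrative sketch (ours): dispersion index DI = variance / mean from
# replicate transect counts; large DI suggests aggregation, small DI a
# random pattern, following the criterion described in the text.
def dispersion_index(counts):
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

littoral_counts = [610, 180, 420]   # invented counts for three transects
di = dispersion_index(littoral_counts)
crit = chi2.ppf(0.95, df=len(littoral_counts) - 1)  # 5.99 for 2 df
print(f"DI = {di:.1f}; aggregated: {di >= crit}")
```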
The Mann-Whitney test was used to compare IP and NPD between the (P+) and (P-) treatments, and the Kruskal-Wallis test was applied to compare EE among the experiments (1 to 4). Two-way analysis of variance (ANOVA) was used to test for differences in the IR of young and adult predators on the two prey species (C. richardi and D. gessneri). All statistical analyses were conducted in the software Statistica™ 8.0 at a 5% significance level (P < 0.05).
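For readers who prefer an open-source route, the same comparisons can be reproduced roughly as follows (a sketch with invented numbers; the authors used Statistica™, not Python).

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu, kruskal
import statsmodels.formula.api as smf
import statsmodels.api as sm

# Mann-Whitney: intact prey (IP) in control vs. predator treatment (invented data)
ip_control = [20, 20, 19, 20, 20, 20]
ip_predator = [14, 13, 15, 12, 16, 14]
print(mannwhitneyu(ip_control, ip_predator))

# Kruskal-Wallis: experimental error (EE) across the four experiments
ee = [[0, 1, 0, 0, 0, 1], [0, 0, 0, 1, 0, 0], [1, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]]
print(kruskal(*ee))

# Two-way ANOVA: ingestion rate ~ predator instar * prey species
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "IR": rng.normal(1.0, 0.3, 24),
    "instar": np.repeat(["young", "adult"], 12),
    "prey": np.tile(np.repeat(["Daphnia", "Ceriodaphnia"], 6), 2),
})
model = smf.ols("IR ~ C(instar) * C(prey)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```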
Fluctuations, spatial distribution and age structure of Martarega uruguayensis
In the littoral zone, the highest densities of M. uruguayensis occurred during the cool season (mean surface temperature 23.7 ± 1.2 °C) and the lowest during the warm season (mean surface temperature 28.8 ± 1.3 °C) (see Figure 2). The mean density of the insects in the cool season (162.9 ± 25.8 ind.m⁻²) was significantly higher than in the warm season (87.3 ± 14.7 ind.m⁻²) (Mann-Whitney, U = 39.00, P = 0.02). The high standard errors of the means may be explained by insect aggregation in the environment, resulting in high density variation among replicates. In the limnetic zone, densities were almost nil in all periods (see Figure 2). A significantly higher mean density of backswimmers was found in the littoral zone (139.9 ± 17.5 ind.m⁻²) than in the limnetic zone (0.05 ± 0.01 ind.m⁻²) (Mann-Whitney, U = 0.00, P = 0.00).
The trend in age structure was similar throughout the sampling period (see Figure 3). Adults comprised a substantial percentage of the population sampled throughout the year, with a mean relative abundance ranging from 39.0 ± 3.3% to 65.8 ± 5.7%. For the other instars, the mean relative abundance ranged from 0.2 ± 0.2% to 22.2 ± 1.4%. There were significant differences among the relative abundances of the instars (ANOVA, F(5, 210) = 215.9, P = 0.00), with the adult means differing from those of all young instars (Tukey HSD, P = 0.00).
In the littoral zone, backswimmers were aggregated on all sampling dates, with the dispersion index (DI) ranging from 20.7 to 1457.0. In the limnetic zone, there was a trend towards random distribution in all samplings, with DI ranging from 1.0 to 2.0.
Laboratory experiments
We counted the number of Intact Prey (IP) and Natural Prey Death (NPD) and calculated the Experimental Error (EE) and Ingestion Rate (IR) at the end of the experiments (as shown in Table 2). The average numbers of IP did not differ significantly between the treatments of experiments 1 (D. gessneri vs. adult notonectid) and 2 (C. richardi vs. adult notonectid). In experiments 3 (D. gessneri vs. young notonectid) and 4 (C. richardi vs. young notonectid), the average number of IP in the (P-) treatment differed significantly from that in the (P+) treatment (Mann-Whitney, U = 0.0, P = 0.00 for both).
Analysis of the effects of predator instar (adult and juvenile) and prey species (the larger Daphnia gessneri and the medium-sized Ceriodaphnia richardi) on the IR showed that young instars preyed significantly on both cladocerans (see Figure 4).
There was no effect of prey species on predator IR, and young instars fed on similar numbers of D. gessneri and C. richardi (as shown in Table 3).
Occasionally, cladocerans died naturally without any evidence of predator attack. Such individuals did not pose a problem for estimating IP, since they were easily recognized (conspicuous milky appearance with intact outer and inner structures). Natural Prey Death (NPD) occurred in all experiments and did not differ between the (P+) and (P-) treatments in any experiment (as shown in Table 2). In the control treatments, the EE was null in 62.5% of the replicates, and there was no statistical difference in mean EE among the four experiments (Kruskal-Wallis, H = 0.51, P = 0.91) (as shown in Table 2).
Discussion
The hypothesis that higher population densities of notonectids would be found in the littoral zone of Lake Monte Alegre was confirmed, in agreement with other studies showing that backswimmers tend to occupy the littoral zone where aquatic vegetation is abundant (Bennett and Streams, 1986; Bailey, 1987; Gilbert et al., 1999). Significant factors affecting the distribution of backswimmers are habitat size, amount of shade, depth, substrate type, water temperature, characteristics of macrophytes, presence of predators and prey diversity (Giller and McNeill, 1981; Foltz and Dodson, 2009; Schilling et al., 2009). Prey abundance and diversity is a plausible explanation in Lake Monte Alegre, since the diversity of microhabitats in the littoral and the proximity of the terrestrial environment can provide a greater supply of resources than the limnetic zone, which might explain the notonectid preference for the littoral. The littoral harbors a large variety of potential prey from the terrestrial habitat, such as insects that fall onto the water and become vulnerable to predation (A.R. Domingos, pers. obs.). In the littoral, invertebrates are also abundant within macrophyte stands (Meschiatti and Arcifa, 2002), besides planktonic organisms at their edge (Arcifa et al., 2013) and in the benthos of shallow areas (Cleto-Filho and Arcifa, 2006). Another aspect is that backswimmers are distributed near the upper layer of the water (Streams, 1992) and, therefore, the shallow littoral zone would favor the search for and capture of prey such as cladocerans and other aquatic organisms inhabiting the water column. Macrophytes in the littoral also play an important role in the reproduction of backswimmers, since they can use the stems to lay eggs (Nessimian and Ribeiro, 2000). The aggregated distribution of the backswimmers in the littoral zone may result from the quality of oviposition sites, the amount of prey, better refuge conditions in macrophyte stands and anti-predatory behavior (Gilbert et al., 1999; Bailey, 1987, 2010). The occurrence of such aggregation explains the wide variation in abundance among our samples.
The predominance of older instars raises the question of microhabitat preference among the different instars of the notonectid. As adults were always more abundant than all other instars, it is possible that juveniles do not live in the same microhabitat as adults and were therefore underrepresented in the samples, owing to niche partitioning and to horizontal and vertical stratification between adults and juveniles. Young instars may be distributed within the macrophyte stands rather than at their edge, where sampling was conducted. Several studies indicate that segregation between young instars and adults usually occurs in some species of notonectids, with juveniles migrating to or remaining in areas with fewer adults (Murdoch and Sih, 1978; Sih, 1982; Bailey, 1987; Gilbert et al., 1999; Hampton, 2004). We were unable to sample quantitatively in the middle of the macrophyte stands to compare with the sampling made at their edge, because the macrophyte structure prevented superficial sweeping with a dip net along longitudinal transects. However, we observed, without quantification, that in this microhabitat young instars were more frequent than adults. Therefore, owing to the limitations of the sampling method used, it was impossible to accurately detect juveniles at the sampling site, resulting in a higher relative frequency of adults.
The hypothesis that young M. uruguayensis would be a predator of the two planktonic species was confirmed, since only young instars effectively preyed on C. richardi and D. gessneri. Higher predation pressure by juveniles than by adults on Daphnia and Ceriodaphnia was also found in other studies (Scott and Murdoch, 1983; Murdoch and Scott, 1984), differing from Gilbert and Burns (1999), who found that large instars of notonectids preyed on more cladocerans than small ones. Murdoch et al. (1984) also showed that adult Notonecta sp. preferred at least one size class of mosquito larvae over Ceriodaphnia sp. On the other hand, young instars of Notonecta sp. selected Ceriodaphnia sp. over other surface prey, such as Drosophila. Adult backswimmers are not morphologically adapted to feed on small prey (Ellis and Borden, 1970). Thus, they are likely to capture more easily prey larger than microcrustaceans, such as terrestrial insects on the water surface, aquatic insects and larvae (Quiroz-Martínez and Rodríguez-Castro, 2007; Fischer et al., 2012), which are abundant in the littoral zone of Lake Monte Alegre, such as Chironomidae (Cleto-Filho and Arcifa, 2006). For adults, the capture of small microcrustaceans may not be worth the energy expended in the attack, given the greater difficulty of detecting and handling such prey and their low nutritional content.
There is strong evidence that prey size is a key factor in predator preference in aquatic environments, leading to the generalization that aquatic invertebrate predators are size selective (Brooks and Dodson, 1965; Zaret, 1980). In fact, several authors have observed that notonectids tend to select zooplanktonic prey based on size, with a clear preference for the largest, indirectly favoring smaller and supposedly less competitive zooplankton species (Cooper, 1983; Scott and Murdoch, 1983; Murdoch and Scott, 1984; Gilbert and Burns, 1999; Walsh et al., 2006; Lindholm and Hessen, 2007; Gergs and Ratte, 2009). Blaustein (1998) showed that Notonecta structured the community through size-selective predation, reducing the density of the larger Daphnia without affecting the density of the smaller Ceriodaphnia. In our study, there was no size-selective predation on the larger zooplankton prey offered (Daphnia), since no differences were observed in the ingestion rates of C. richardi and D. gessneri by young M. uruguayensis. This result, which differs from the studies mentioned above, may have been influenced by the structural simplicity of the experimental containers compared with lake conditions. The containers were small and homogeneous, without refuge for prey. Since environmental heterogeneity can provide spatial refuges with reduced predation risk, the homogeneous environment could facilitate prey capture. Thus, any encounter of young instars with prey probably produced a stimulus resulting in attack, regardless of prey size. In this case, despite the differences between laboratory and lake conditions, these results show, at least qualitatively, the potential predation of M. uruguayensis on the cladocerans C. richardi and D. gessneri from Lake Monte Alegre.
Another important point is that the influence of Natural Prey Death was negligible in the experiments, so no effect of predators that might cause an imperceptible injury culminating in prey death was detected in our study. Since the experimental error was constant, it did not interfere with the experimental results.
In conclusion, this study broadens the knowledge of invertebrate predation on the zooplankton community of the tropical shallow Lake Monte Alegre and of the ecology of notonectids. The younger individuals of M. uruguayensis potentially feed on the microcrustaceans D. gessneri and C. richardi in the lake. The predation pressure is concentrated in the littoral zone, where the highest densities of notonectids were found, with a greater impact during August and September.
Figure 1. Map of Lake Monte Alegre. The sampling sites are transects of 10 m (3 in the littoral and 3 in the limnetic zone).
Table 3. Two-way ANOVA for effects of predator instar (juvenile and adult) and prey species (Daphnia gessneri and Ceriodaphnia richardi) on the ingestion rate (IR) of Martarega uruguayensis.
Blood concentrations of vitamins B1, B6, B12, C and D and folate in palliative care patients: Results of a cross-sectional study
Objective The main purpose of palliative care is symptom relief. Frequently, the symptoms of patients requiring palliative care are the same as common symptoms of vitamin deficiency (e.g. pain, weakness, fatigue, depression). The study aim was to investigate whether patients in palliative care are vitamin deficient. Method This was a monocentre cross-sectional study. Patients attending the palliative care unit of a general hospital in Germany from October 2015 to April 2016 were examined for vitamin blood concentrations and symptoms. Data were analysed using univariate analysis and bivariate correlations. Results Data were available from 31 patients. Vitamin D3 deficiency (<62.5 nmol/L) affected 93.5% of patients, vitamin B6 deficiency (<4.1 ng/mL) 48.4%, vitamin C deficiency (<4.5 mg/L) 45.2%, vitamin B1 deficiency (<35 µg/L) 25.8% and vitamin B12 deficiency (<193 pg/mL) 12.9%. There was a significant negative correlation between vitamin B1 ranges and pain (r = −0.384) and depression (r = −0.439) symptoms. Conclusion All patients showed a deficiency in at least one of the measured vitamins; 68% had concurrent deficiencies in >1 vitamin. A follow-up study using validated questionnaires and a larger sample is needed to investigate the effects of targeted vitamin supplementation on quality of life and symptom burden.
Introduction
The main aim of palliative care is to provide relief from pain and other distressing symptoms, thereby improving the quality of life (QOL) of patients facing problems associated with life-threatening illness. A recent nationally representative, longitudinal cohort study of community-dwelling US residents aged 51 years and older revealed that the goal of pain relief was not achieved during the last year of life. 1 According to this study, the prevalence of pain and other critical symptoms during the final year of life increased despite efforts to improve end-of-life care. Between 1998 and 2010, the final-year-of-life prevalence of any pain in all decedents had actually increased by 11.9% (95% confidence interval (CI), 3.1% to 21.4%), and the prevalence of depression and periodic confusion lasting at least 1 month increased by 26.6% and 31.3%, respectively. 1 According to their metabolic function, vitamins such as B1, B6, folate, B12, C and D are associated with characteristic deficiency symptoms. Unspecific deficiency symptoms, which can also occur in subclinical deficiency conditions, include weakness, fatigue, pain, shortness of breath and loss of appetite. [2][3][4] These symptoms are consistent with common symptoms of patients requiring palliative care. The symptoms of seriously ill people cannot be explained solely by vitamin deficiency, but it may be useful to investigate the frequency of vitamin deficiency in patients with severe illness to better understand the contribution of vitamin deficiency, if any, to symptom burden. In addition, the potential need for supplements could be identified.
Malnutrition in palliative care patients owing to reduced food intake, malabsorption or metabolic disorders is common [5][6][7] and may result in vitamin deficiencies. However, except for studies on vitamin D, this problem has been inadequately investigated. [8][9][10][11][12][13][14] At present, study data are mainly available from cancer patients, and noncancer patients are still underrepresented in palliative care. 15 However, the number of non-cancer patients in palliative care is increasing. [16][17][18] Folate and vitamin B12 have been mainly investigated in anaemic palliative care and cancer patients; 8 to the best of our knowledge, there are no studies on vitamin B6 (pyridoxal phosphate) concentrations in palliative care patients. Only three studies have considered the possible association between vitamin concentration and symptoms (vitamin B1 and cognitive dysfunction; 9 vitamin D and fatigue, anorexia and QOL 10,19 ). Furthermore, most studies have focused only on a single vitamin. Palliative care patients may be deficient in multiple vitamins, which would have even more pronounced effects on QOL.
Other important issues that should be addressed are optimum vitamin concentrations and their influence on the health of the individual. Although therapeutic vitamin treatment is often administered for clinical deficiency states, some subclinical ailments may also require treatment of vitamin deficiency. [2][3][4]20 Our primary objectives were to a) determine whether patients in palliative care are deficient in folate and vitamins B1, B6, B12, C and/or D and b) investigate whether there are linear correlations between vitamin plasma or serum concentrations and symptoms.
Study design
In this monocentre cross-sectional study, we examined patients shortly after admission to a palliative care unit. Frequency of vitamin deficiency was investigated by examining one blood sample at the start of palliative care. The number of patients in the hospital palliative care unit during the study period determined the sample size.
The secondary outcome was to determine whether there were any linear correlations between plasma/serum vitamin concentrations and the most common symptoms, homocysteine (Hcy) concentration, nutritional risk screening (NRS) score, inflammation (concentration of C-reactive protein (CRP)) and performance (Karnofsky score). As symptoms in this heterogenic patient group are biased by many factors (e.g. age, underlying disease for palliative care) and the sample was small, linear correlation analysis was only conducted to identify preliminary indications of possible links between specific vitamin deficiencies and symptoms for a subsequent study with a larger sample. Ethical approval was obtained from the University of Rostock ethics committee (A2015-0107 04/09/2015).
Study sample
Data were collected from patients attending the palliative care unit of the general hospital Warnow-Klinik, Bützow, Germany, from October 2015 to April 2016, who met the inclusion criteria (≥18 years; Karnofsky score ≥30%; performance status according to the Eastern Cooperative Oncology Group (ECOG) ≤3; capable of providing written informed consent).
Patients with very poor performance status (ECOG 4 or Karnofsky score <30%) were excluded for ethical reasons (e.g. to avoid additional stress, discomfort and blood loss). All participants agreed with the study participation, data processing and publication of the data, and provided written informed consent.
Assessment of symptoms, performance and nutritional status
The following symptoms were assessed by physicians at admission, graded using a 4-point scale (none, mild, medium, severe) and converted to an ordinal scale (0-3): pain, nausea, vomiting, shortness of breath, constipation, weakness, loss of appetite, fatigue, depression, anxiety, tension, periodic confusion, disturbed wound healing and decubitus.
Each patient was questioned about their vitamin supplement intake in the week prior to admission (name of product(s) or substance(s) and dosage) using a written questionnaire.
The recording of symptoms, NRS, CRP and Karnofsky scores are part of the routine basic examination on admission to the Warnow-Klinik hospital.
Additionally, to examine how specific vitamin deficiencies correlated with symptoms and performance, NRS was used to validate the association with nutritional status, and CRP was chosen to screen for a possible association with inflammation.
Hcy plasma concentrations were measured because hyperhomocysteinemia is considered an independent risk factor for cardiovascular and cerebrovascular diseases, and data suggest that it is an important indicator for overall health status. 21 Deficiencies in folate, vitamin B12 and/or vitamin B6 affect Hcy metabolism, resulting in hyperhomocysteinemia. 21
Measurement of vitamins and homocysteine
From each patient, one blood sample was drawn in the early morning (fasting). The testing laboratory (Biovis Diagnostik MVZ GmbH, Limburg-Offheim, Germany) has DIN EN ISO 15189:2014 accreditation and has successfully participated in proficiency testing. Vitamin B1 was estimated in plasma by reversed-phase high-performance liquid chromatography (HPLC) (IC2201rp) with subsequent fluorescence detection, 22 vitamin B6 in plasma by reversed-phase HPLC (IC2100rp) with subsequent fluorescence detection, 23 folate in erythrocytes by electrochemiluminescence (cobas 6000 system, Roche Diagnostics GmbH, Penzberg, Germany), 24 vitamin B12 in serum by electrochemiluminescence (cobas 6000 system, Roche Diagnostics GmbH, Penzberg, Germany), 25 vitamin C in plasma by reversed-phase HPLC (IC2900rp) with subsequent UV detection, 26 25-OH-vitamin D3 in serum by liquid chromatography (Shimadzu Prominence LC system, Shimadzu Corporation, Kyoto, Japan) and mass spectrometry (QTRAP® 5500 system), and Hcy in plasma by electrochemiluminescence (cobas 6000 system, Roche Diagnostics GmbH, Penzberg, Germany). 27 The intra- and inter-assay coefficients were within acceptable ranges.
Normal vitamin ranges are set by reference to vitamin levels in healthy persons; the 'optimal range' is a reference range that is based on concentrations associated with optimal physiological metabolic processes. In principle, it may be more appropriate to use optimal ranges for some vitamins, as apparently healthy patients often show deficient vitamin concentrations. 28 However, a standard method of estimating these ranges does not exist.
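To show how the deficiency frequencies reported below follow from such cut-offs, the sketch below is our illustration, not the study's code; the thresholds are the deficiency cut-offs quoted in the abstract and are used here only for demonstration, while the laboratory-specific optimal ranges are omitted.

```python
# Illustrative sketch (not the study's code): classifying vitamin concentrations
# against a lower normal limit and summarizing deficiency frequency in a cohort.
# The cut-offs below are the deficiency thresholds quoted in the abstract.
DEFICIENCY_CUTOFFS = {
    "vitamin_D3_nmol_L": 62.5,
    "vitamin_B6_ng_mL": 4.1,
    "vitamin_C_mg_L": 4.5,
    "vitamin_B1_ug_L": 35.0,
    "vitamin_B12_pg_mL": 193.0,
}

def classify(vitamin, value):
    """Return 'deficient' or 'normal' relative to the lower normal limit."""
    return "deficient" if value < DEFICIENCY_CUTOFFS[vitamin] else "normal"

def deficiency_frequency(vitamin, values):
    """Percentage of patients below the lower normal limit."""
    deficient = sum(classify(vitamin, v) == "deficient" for v in values)
    return 100.0 * deficient / len(values)

# Hypothetical cohort of five vitamin D3 measurements (nmol/L)
print(deficiency_frequency("vitamin_D3_nmol_L", [20.1, 35.0, 75.2, 18.9, 40.0]))  # 80.0
```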
Statistical analysis
The software IBM SPSS Statistics for Windows, version 21 (IBM Corp., Armonk, NY, USA), was used for the statistical analyses, which comprised univariate analysis (data distribution, dispersion and measure of spread) of the vitamin, Hcy and CRP concentrations and bivariate correlation analysis between vitamin concentrations and Hcy, CRP, NRS, Karnofsky score and symptom intensity, respectively.
Normal distribution of data was tested using the Shapiro-Wilk test. Pearson correlation coefficients were calculated for numerical and parametric data; Spearman's rho values were calculated for graded or non-parametric data.
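A minimal sketch of that decision rule with scipy (our illustration, not the study's SPSS syntax): test each variable for normality and pick Pearson or Spearman accordingly.

```python
import numpy as np
from scipy.stats import shapiro, pearsonr, spearmanr

def correlate(x, y, alpha=0.05):
    """Pearson if both variables pass the Shapiro-Wilk test, else Spearman's rho.

    Illustrative helper only; graded symptom scores (ordinal data) would go
    straight to Spearman, as in the study.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    normal = shapiro(x)[1] > alpha and shapiro(y)[1] > alpha
    if normal:
        return ("Pearson",) + tuple(pearsonr(x, y))
    return ("Spearman",) + tuple(spearmanr(x, y))

# Hypothetical example: vitamin B1 concentration (ug/L) vs. graded pain score (0-3)
b1 = [22, 48, 31, 55, 60, 28, 40, 35, 52, 25]
pain = [3, 1, 2, 0, 1, 3, 2, 2, 1, 3]
print(correlate(b1, pain))
```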
Results
In the observational period, 31 patients fulfilled the inclusion criteria and were invited to participate in the study; no patients refused to participate or dropped out. The ECOG scores of two patients were classified as 4 (bedridden), but after consultation with the attending physician these patients were included. As they were kept bedridden to prevent falls, the Karnofsky score for both patients was ≥30%. Demographic data, NRS, CRP, glomerular filtration rate and performance status are shown in Table 1.
The predominant disease for palliative care was cancer (77.4%). About one-third of patients were at risk of malnutrition (NRS ≥3). Plasma concentrations of CRP and Hcy were in the normal range in four (12.9%) and two (6.5%) patients, respectively. Three patients (9.7%) had recently used vitamin supplements (parenteral vitamin B1, a multivitamin preparation or vitamin D3). The patient taking weekly vitamin D3 supplementation was the only one
Symptoms
Weakness was the only symptom present in all patients (score >0). Together with the symptom tension, it had the highest median intensity (3 = severe). Anxiety, pain, shortness of breath, fatigue and loss of appetite were next; all had a mean intensity value of 2 (moderate). Constipation, depression and confusion/disorientation occurred occasionally in some patients. A total of 64.5% of patients had moderate-to-severe pain and 51.6% had at least one episode of depression.
Concentrations of vitamins and homocysteine
The univariate data analysis of vitamin and Hcy plasma concentrations is shown in Table 2. The vitamin deficiency frequencies are shown in Figure 1, and boxplots of vitamin concentrations in Figure 2a-f. The most frequent deficiency was that of vitamin D3, which affected 93.5% of patients. The mean (± standard deviation) concentration of 35.85 (±53.89) nmol/L vitamin D3 is 43% below the lower limit of the normal range (≥62.5 nmol/L). Vitamin B6 or C deficiencies were present in nearly half of the study population (48.4% and 45.2%, respectively). Two (6.5%) patients had values within the optimal range for vitamin B6, and none had vitamin C concentrations within the optimal range. Four (12.9%) patients displayed 'scurvy' vitamin C concentrations (≤1.8 mg/L).
Vitamin B1 deficiency was present in 25.8% of patients. An optimal range for vitamin B1 was not available from the laboratory that conducted the analyses. Folate concentrations in erythrocytes were in the normal range for all patients, but 16 (51.6%) had values below the optimal concentration.
Vitamin B12 serum values were below normal in 4 (12.9%) patients, below optimal concentrations in 19 (61.3%) patients and above normal in 7 (22.6%) patients. All patients displayed at least one below-normal vitamin concentration of the six assessed vitamins. Nearly one-third (29%) of the study cohort had two concurrent vitamin concentrations below normal, 22.6% had three, 12.9% had four and 3.2% had five concurrent vitamin concentrations below normal.
Correlation analysis
Only vitamin C and Hcy showed a normal distribution according to the Shapiro-Wilk test. The significant correlations are displayed in Table 3. There were significant correlations between vitamin B12 concentrations and Karnofsky score and weakness (both P < 0.05) and between vitamin B12 ranges and weakness (P < 0.05). Vitamin B1 ranges showed a significant negative correlation with both pain (P < 0.05) and depression (P < 0.05).
Discussion
Although participants showed moderately good health-related QOL overall (median Karnofsky score = 50%; one-third of patients were at risk of malnutrition), a high proportion showed significant vitamin deficiencies, particularly for vitamins D3, B6, C and B1. An association between insufficient vitamin status and symptoms was apparent for vitamin B1 and the symptoms pain and depression. The frequencies of the symptoms pain (64.5% had medium-to-severe pain) and depression (51.6%) in our cohort correspond with published data from a larger survey. 15 Deficient vitamin B1 concentrations occurred in one-quarter of our patients, which is comparable to the results of a previous study. 9 We found an association between
vitamin concentrations and the symptoms depression and pain only for vitamin B1. However, this may be because patients with vitamin B1 deficiency often had multiple vitamin deficiencies. Seven of the eight patients with below-normal vitamin B1 concentrations also displayed below-normal vitamin C and vitamin D3 concentrations, and five were also deficient in vitamin B6. Mental changes and neuropathy are symptoms of vitamin B1 deficiency. 3 Pain and depression are both symptoms of subclinical vitamin C 29 and D 30,31 concentrations, and depression occurs in patients with vitamin B6 deficiency. 3 It is possible that multiple vitamin deficiencies contribute to pain and depression symptoms in palliative care patients.
As targeted vitamin supplementation to prevent deficiency states is easy to achieve and inexpensive, a follow-up study is warranted to investigate the therapeutic effects of vitamins on symptom relief. To the best of our knowledge, this study is the first to examine the concentration of vitamin B6 in palliative care patients. Nearly 50% of our patients displayed deficient vitamin B6 concentrations. Although no correlation between vitamin B6 deficiency and symptoms was detected, a targeted treatment for vitamin B6 deficiency should be investigated, as vitamin B6 is the most important coenzyme in amino acid metabolism. It is involved in the synthesis of various neurotransmitters, such as serotonin, noradrenalin, dopamine and gamma-aminobutyric acid, and of myelin, collagen, haemoglobin and other functional proteins. 3,32 Surprisingly, erythrocyte folate concentrations were in the normal range; however, only half the patients had an optimal concentration. The mandatory fortification of staple foods with folic acid, which is common in many countries, cannot explain these data, as this practice is not established in Germany, although some fortified products are available. The adequate folate concentrations in our sample of patients with life-threatening illness are also surprising in light of data from the German Federal Institute for Risk Assessment (Bundesinstitut für Risikobewertung, BfR), which has estimated that two-thirds of the German population are deficient in folate. 33 German women of child-bearing age have a median erythrocyte folate concentration of 266 ng/mL, 33 which is far less than the median in our study cohort (716 ng/mL). However, our data are in accord with data from a New Zealand study that also found normal red blood cell folate concentrations in palliative care patients. 8 Vitamin B12 concentrations were more frequently above normal (22.6%) than below normal (12.9%). Similar data were derived from a large study of cancer patients in palliative care, 34 which concluded that there was a functional vitamin B12 deficiency despite the high vitamin B12 values. This conclusion was based on the presence of elevated concentrations of Hcy or methylmalonic acid. 35 Although the exact cause is unknown, the involvement of oxidative stress is likely, as reactive oxygen species oxidize the cobalt atom of vitamin B12, preventing conversion to the active coenzymes (methyl- and adenosylcobalamin). Therefore, vitamin B12 supplementation in cases of above-normal concentrations seems to be warranted. 35 Several studies have demonstrated that above-normal vitamin B12 concentrations are associated with high CRP concentrations, which predicts increased mortality in palliative care cancer patients. [36][37][38][39][40][41] In our study cohort, a linear negative association between vitamin B12 concentrations and the Karnofsky score was found, with reduced performance in patients with above-normal vitamin B12 concentrations. This is consistent with our finding of a positive correlation between vitamin B12 concentrations and weakness, which disappeared following exclusion of above-normal vitamin B12 values. The reason for the elevated vitamin B12 concentrations in severely or critically ill patients is still unclear, although several plausible causes (kidney, liver or myeloproliferative diseases and IgG-B12-immune complexes) have been discussed. 36,37,[42][43][44]
Subnormal plasma concentrations of vitamin C (<4.5 mg/L) were found in nearly half our patients, and 13% showed a clinical deficiency (concentrations associated with scurvy). The proportion of patients with subnormal vitamin C concentrations was lower than that reported in other studies that included only cancer patients in palliative care (62% 14 and 72% 13 deficiency). One explanation could be a higher frequency of concomitant chemotherapy in the other study groups, which may generate oxidative stress, causing reduced vitamin C concentrations. [45][46][47] The frequency of chemotherapy was not documented in our study group. Because 22.6% of the patients received palliative care owing to a non-oncological disease, it can be assumed that the frequency of chemotherapy was lower compared with purely oncological studies.
The mean vitamin D3 concentration was only half the minimum requirement (≥62.5 nmol/L). The median value was even lower, and below the values reported in similar studies. 11,12 One explanation could be the time of evaluation (October to April), when solar radiation (UV index <3) in Germany is low.
The high proportion of patients with vitamin D3 deficiency matches well with data from a German cross-sectional study of 1,343 patients visiting general practices for various health problems. 48 That study also revealed that the proportion of patients with deficient vitamin D3 concentrations increases with age, 48 which might explain why a higher proportion of our cohort had deficient vitamin D3 concentrations; the mean age of participants was 73 years, compared with 58 years in the German cross-sectional study. The positive correlation between vitamin D3 blood concentrations and the symptoms weakness, anxiety, pain and depression should be interpreted as non-meaningful owing to two outlying high values: the rest of the patients displayed severe deficiency. Weakness and pain are well-known symptoms of vitamin D deficiency. 49,50 Furthermore, vitamin D deficiency has been linked to an increased incidence of depression. 49 A recent study of 30 patients in palliative care with advanced solid cancer found vitamin D3 deficiency in 90% of patients. A positive correlation between serum vitamin D concentration and patient-reported absence of fatigue and physical and functional wellbeing has also been reported. 19 To our knowledge, the use of vitamin supplements in palliative care patients has not yet been investigated. Less than 10% of our patients took supplements. This proportion is lower than in patients undergoing acute oncological treatment (28% in melanoma patients and 31% in breast cancer patients). 51,52 The rural location and low socioeconomic status of the region in which the study centre is located may also help to explain these findings. 53,54

Study strengths and weaknesses

Strengths of the study were the simultaneous measurement of several vitamins, the investigation of vitamin B6 concentrations, the focus on the potential use of vitamin supplements and the measurement of normal and optimal vitamin ranges in palliative care patients for the first time.
A weakness of this study is the small sample size, which could limit the representativeness of our findings. In this study, vitamin concentrations were determined only once (on the day of admission to the palliative care unit), so no control or follow-up values were available. Symptoms were measured using a simple 4-point scale (none, mild, medium, severe). A follow-up study is needed that measures blood values at several points in time, and that uses validated questionnaires to evaluate symptoms and QOL.
Conclusion
Although the median Karnofsky performance score of the study population was 50%, and only one-third of patients were at risk of malnutrition, all patients exhibited below-normal concentrations of at least one vitamin; 68% of patients had concurrent deficiencies of several vitamins.
Almost all patients displayed a vitamin D3 deficiency. Vitamin B6 and C deficiencies each affected nearly half the population. A quarter of patients had below-normal vitamin B1 concentrations, and 13% had deficient vitamin B12 concentrations.
Because patients in palliative care are severely vitamin deficient, targeted vitamin supplementation of such patients is warranted. A follow-up interventional study is needed to determine vitamin status and to investigate the effects of targeted vitamin supplementation on QOL and symptom burden.
Declaration of conflicting interests
CV, PWG, KK and IF declare that they have no competing interests and received no funding for this cross-sectional study. CV is employed part-time by Pascoe Pharmazeutische Präparate GmbH (Giessen, Germany).
Funding
Pascoe Pharmazeutische Präparate GmbH (Giessen, Germany) paid the fee for open access publication.
Deadly Rural Road Traffic Injury: A Rising Public Health Concern in I.R. Iran
Background: The 5th Iran National Development Plan, 2011-2015, has emphasized the expansion of rural asphalt roads. This article aims to illustrate the trend of deaths caused by rural road traffic crashes (RTCs) and its association with the length of rural roads in Iran. Methods: We carried out a retrospective analysis of secondary data for the period from 2005 to 2010. The Iranian Forensic Medicine Organization, the High Commission for Road Safety and Iran's Statistical Center were the sources for the number of RTC deaths, the length of roads and population data, respectively. Results: The number of RTC deaths on rural roads increased from 1,672 in 2005 to 2,206 in 2010. This was associated with the expansion of rural asphalt roads (P = 0.04). The construction of urban asphalt roads was also on an increasing trend, but the number of traffic deaths on these roads decreased from 26,083 in 2005 to 21,043 in 2010. Adjusted per 100,000 population, the number of traffic deaths on urban roads decreased from 37.0 to 28.0, while this number increased from 2.4 to 2.9 on rural roads during the study period. Conclusions: Although the expansion of rural roads contributes to economic development in rural areas, it exposes people to the risk of severe RTCs if effective preventive actions are not taken. To prevent this threat, Iranian policy makers need to take the following into consideration: public awareness, improving the safety of roads and vehicles, law enforcement, and increasing the coverage of police and Emergency Medical Services.
INTRODUCTION
Road traffic crash (RTC) is the main cause of non-intentional injuries and the second cause of death after ischemic heart diseases in Iran. [1] It accounts for more than 10% of total mortality [2] and about 23,000 deaths, on average, per year during the last decade. [3] This is approximately equal to the number of deaths caused by the 2003 Bam earthquake, one of the worst natural disasters of recent decades. [4] These facts have made RTC a man-made disaster and a real public health concern in Iran.
In line with the fourth I.R. Iran National Development Plan (NDP), the fifth NDP for 2011-2015 [5] has emphasized the development of rural infrastructure, including rural asphalt roads. Although this development has led to more traffic on rural roads, no significant improvement has been observed in safe driving behavior or road construction in rural areas. In addition, compared with urban areas, rural people use older cars with lower safety standards. [6] Monitoring by traffic police and coverage of Emergency Medical Services (EMS) are also lower on rural roads than on main roads. [2] This article aims to provide Iranian policy makers with information on the burden of deaths caused by rural RTCs and demonstrates its association with the expansion of rural roads.
METHODS
We applied a retrospective analysis of secondary data to demonstrate the association of rural RTC deaths with the expansion of rural roads in Iran. Our study covered the period from 2005 to 2010. The variables of interest included length of road (rural and urban), number of deaths (caused by rural and urban RTCs) and population size. The sources used for data collection were Iran's High Commission for Road Safety (HCRS) for length of road (km), Iran's Forensic Medicine Organization (FMO) for the number of RTC deaths and Iran's Statistical Center for population data. The International Classification of Diseases-10 is the basis for coding the causes of death by the FMO.
For the purpose of analysis, we illustrated the trends in the length of rural roads from 2005 to 2010 and their corresponding mortality. Rates of traffic death on rural and urban roads were also estimated per 100,000 population. χ² for trend was the test of significance and P < 0.05 was considered statistically significant. SPSS statistical software for Windows (Version 11.0, Chicago, IL) was used for the statistical analysis.
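As a rough illustration of the two calculations described above, the sketch below (ours, not the authors' SPSS syntax) computes death rates per 100,000 population and a χ² test for trend in one common Cochran-Armitage-style formulation; the yearly figures are placeholders, not the study data.

```python
import numpy as np
from scipy.stats import chi2

# Illustrative sketch with placeholder numbers (not the study data).
years = np.arange(2005, 2011)
deaths = np.array([1672, 1750, 1850, 1980, 2100, 2206])          # rural RTC deaths
population = np.array([69.0, 70.0, 71.0, 72.0, 73.0, 74.0]) * 1e6

rate_per_100k = deaths / population * 1e5
print(np.round(rate_per_100k, 2))

# Chi-square test for trend in proportions (one common Cochran-Armitage form):
scores = years - years.min()
n, r = population, deaths
N, R = n.sum(), r.sum()
p_bar = R / N
num = np.sum(scores * (r - n * p_bar))
den = p_bar * (1 - p_bar) * (np.sum(n * scores**2) - np.sum(n * scores)**2 / N)
chi2_trend = num**2 / den
print(chi2_trend, chi2.sf(chi2_trend, df=1))   # statistic and P value, 1 df
```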
RESULTS
The number of RTC deaths on rural roads increased from 1,672 in 2005 to 2,206 in 2010. This paralleled the construction of rural asphalt roads, which increased from 56,424 to 86,519 km over the same period [Figure 1]. The trend analysis showed a positive association between these two variables (χ² for trend = 3.93, df = 1, P = 0.04).
The expansion of urban asphalt roads was also on an increasing trend, from 71,711 to 77,964 km, but the number of traffic deaths on these roads decreased from 26,083 in 2005 to 21,043 in 2010.
From 2005 to 2010, adjusted per 100,000 population, the number of traffic deaths on urban roads decreased from 37.0 to 28.0, while this number increased from 2.4 to 2.9 on rural roads [Figure 2].
DISCUSSION
Our study showed an increasing trend of RTC deaths on Iran's rural roads and its positive association with the expansion of rural roads, while, over the same period, the rate of traffic deaths on urban roads was on a decreasing trend. Therefore, although expansion of the road network is one of the most important prerequisites for economic and infrastructural development in rural areas, policy makers should take preventive measures on rural roads to improve safe driving conditions. In this regard, the highest level of political commitment would undoubtedly enhance the fight against RTC deaths. [7] Iran's Ministry of Road and Urban Construction is the lead agency for the HCRS. Traffic police, EMS and car manufacturers are the main stakeholders of this national coordination body. To reduce the rate of deadly traffic crashes on rural roads, coordinated measures by the HCRS members are necessary. In this regard, the HCRS may also rely on its successful experience in decreasing the rate of urban RTC deaths.
According to the available literature, the risk factors for deadly RTCs on the rural roads of Iran are unsafe roads, excessive speed, risky driving caused by inadequate knowledge and inappropriate risk perception, non-use of seat belts and the use of old cars with little safety equipment. [2] Crashes with livestock that move freely across the roads are another factor. [6] The effectiveness of enforcement measures for the control of RTCs in Iran is rated six out of 10 by the World Health Organization. [8] This is expected to be even lower on rural roads, as they are less controlled by traffic police compared with freeways, highways and main roads.
Timely and high-quality provision of medical care is essential for saving lives in an RTC. In 2010, the national Emergency Medical Service was only able to cover 15% of minor roads, including rural areas. According to the national EMS, in addition to the current 1,028 EMS stations, 5,000 more stations are needed for effective coverage of roads all over the country. [9] This requires considerable financial investment.
[12] For instance, in Germany, Britain and the US, about two-thirds of all RTC-related fatalities occur on rural roads, which score badly when compared with the high-quality motorway networks in those countries. That said, there is, fortunately, evidence showing that preventive measures can be effective in reducing rural RTCs. For example, in Germany, enforcement of speed limits and the development of additional passing lanes were found to be effective. [11]
CONCLUSIONS
The expansion of rural asphalt roads exposes the people of rural areas to the risk of severe RTCs if effective preventive measures are not taken. To prevent this threat, Iranian policy makers need to take the following into consideration: public awareness, improving the safety of roads and vehicles, law enforcement, and increasing the coverage of police and EMS.
Figure 1: Length of rural asphalt roads and number of deaths caused by rural road traffic crashes, Iran, 2005-2010.
Figure 2: Rates of deaths caused by road traffic crashes per 100,000 population on urban and rural roads, Iran, 2005-2010.
|
2017-06-07T01:49:46.010Z
|
2014-02-01T00:00:00.000
|
{
"year": 2014,
"sha1": "e6bf1d3ba8ca93c0ec58893a90faaf4265ebd7f8",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e6bf1d3ba8ca93c0ec58893a90faaf4265ebd7f8",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
5168845
|
pes2o/s2orc
|
v3-fos-license
|
The Entropy of Laughter: Discriminative Power of Laughter's Entropy in the Diagnosis of Depression
Laughter is increasingly present in biomedical literature, both in analytical neurological aspects and in applied therapeutic fields. The present paper, bridging between the analytical and the applied, explores the potential of a relevant variable of laughter's acoustic signature—entropy—in the detection of a widespread mental disorder, depression, as well as in gauging the severity of the disorder. In laughter, the Shannon–Wiener entropy of the distribution of sound frequencies, which is one of the key features distinguishing its acoustic signal from the utterances of spoken language, has not been a specific focus of research yet, although the studies of human language and of animal communication have pointed out that entropy is a very important factor regarding the vocal/acoustic expression of emotions. As the experimental survey of laughter in depression herein undertaken shows, it was possible to discriminate between patients and controls with an 82.1% accuracy just by using laughter's entropy and by applying the decision tree procedure. These experimental results, discussed in the light of the current research on laughter, point to the relevance of entropy in the spontaneous bona fide extroversion of mental states toward other individuals, as the signal of laughter seems to imply. This is in line with recent theoretical approaches that rely on the optimization of a neuro-informational free energy (and associated entropy) as the main "stuff" of brain processing.
Introduction
It is a fact that biomedical research on laughter has increased markedly during the last decades, not only in the volume of publications but also in the number of specialized disciplines involved. In PubMed, under the heading of laughter, there were 32 publications in the 1940s and 1950s; 119 in the 1960s and 1970s; 585 in the 1980s and 1990s; and 1044 publications in the 15 years since 2000. Nowadays, laughter has become an interesting research topic under an ample variety of perspectives: biomedical, biophysical, neuro-computational, cognitive, psychological, social, evolutionary, philosophical, engineering... As always happens, with the augmented volume of research and the very different points of view, the list of unanswered questions to investigate keeps growing. One of these questions, the role of entropy, is the focus of the present paper.
Laughter, as a conspicuous interpersonal communicative signal (Kierkegaard noted that a solitary laugh was "a little more than queer"), is ordinarily related to the use of language [1]. From the evolutionary point of view, however, laughter preceded language. Anthropoid ritualized "panting" during play may be considered the closest antecedent of human laughter [2]. The increasing group size of humans and the parallel increase in brain size, both of them crucial for the emergence of linguistic skills [3], projected laughter into a new, more complex social scenario. Rather than being expressed only as an individual's signal of commitment to play, it became an important group display containing a variety of underlying emotional expressions and relational categories. Its sound structures also became more complex and more capable of expressing situational nuances, based on an improved control of breathing as well [1,4-6]. Thereafter, the occurrence of laughter became regularly tied to all kinds of inter-individual relationships, quite often via language, punctuating behavioral situations or linguistic utterances as a sort of emotional valuation of the congruence with the shared background of the other individual or of the group audience.
The Social Meaning of Laughter
Convivially, rather than being the result of clever jokes or sophisticated humorous constructs, most laughter is produced around small talk in a variety of social environments: at home, with friends, at the workplace, during courtship, along children's play, in group coalitions, at the "third place" [1,7-9], etc. Following the strong "grooming" connotations of human language advocated by the "social brain hypothesis" [10-12], laughter conspicuously appears as a physiological intensification of such linguistic grooming. It is a signal of being cooperative, and an indirect way to rebuild and strengthen the memory traces involved in the ongoing interaction. Laughing is about making more robust bonds between the participants in the interaction, crystallizing a shared sense of belonging. Whenever there are human bonds in the making, laughter is instinctively put into action [13,14].
Why does the acoustic signal of laughter have such powerful effects? Apparently, the acoustic signature of laughter is well known; nevertheless, it still contains intriguing elements. It may be summarized [15] as composed of behavioral episodes that contain several bouts (with exhalation parts separated by brief inhalations), which in their turn are composed of several laughter plosives (calls, syllables, pulses). Among the most important sound characteristics are: the fundamental frequency F0 of the emitted sounds, the changes and excursions of this fundamental frequency between plosives, the irregular separation between plosives, the "vowels" of voiced laughter (versus unvoiced laughter), as well as the energy, amplitude and entropy related to the distribution of the intervening frequencies. It is at least intriguing that the entropy of laughter is higher than the entropy of spoken language [16]. As will be discussed later on, this difference might indicate two things: that the neural control involved in laughter emission is "more primitive" (clearly, different neural circuits are involved in the control of the vocal cords and the whole respiratory-phonatory apparatus); and, additionally, that an increased entropic distinctiveness of the received laughter may increase its attractiveness and improve emotional sharing [17]. The theme will also be discussed regarding the interrelationship of neural entropy with the general "stuff" of brain processing [18,19]. The specific phonation involved, the facial counterparts, the diaphragmatic and bodily movements, and the systemic repercussions (cardiovascular, immune, central and autonomic nervous system, etc.) will not be dealt with here [1,4,20], although they are essential to fulfill the "hidden" evolutionary missions of laughter and to explain most of the present therapeutic applications [13,21].
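To make the entropy feature concrete, the sketch below (our illustration, not the authors' Matlab code) computes the Shannon entropy of the normalized power spectrum of a short audio frame, one common way of quantifying how spread out the sound energy is across frequencies; window type, frame length and normalization are assumptions that would have to match the actual analysis protocol.

```python
import numpy as np

def spectral_entropy(frame, eps=1e-12):
    """Shannon entropy (in bits) of the normalized power spectrum of one frame.

    Illustrative sketch only: windowing, frame length and normalization choices
    are assumptions, not the protocol of the cited study.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    p = spectrum / (spectrum.sum() + eps)          # probability distribution over bins
    return float(-np.sum(p * np.log2(p + eps)))

# Example: a noisy burst has a flatter spectrum (higher entropy) than a pure tone
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 220 * t)
noise = np.random.default_rng(0).normal(size=fs)
print(spectral_entropy(tone), spectral_entropy(noise))  # tone << noise
```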
Biomedical Applications
Given the inner repercussions of producing and exchanging laughter, it is no wonder that biomedical applications have multiplied during recent years, as already mentioned in the PubMed literature. Laughter has been widely explored as a therapeutic method to prevent and to treat major medical diseases-the positive effect of laughter and humor in pain relief, autoimmune pathologies, surgical recuperations, psychotherapy interventions, patient empowerment, general resilience, mental wellbeing, etc., is well documented [7,21-24]. In mental pathologies, however, very few works have explored the discriminative potential that laughter might contain. Quite probably, laughter is affected differently within the major neuropsychiatric pathologies, such as depression, schizophrenia, and psychoticism, as well as within dementia and neurodegenerative diseases [22,25-28]. All of these pathologies would have in common the diminished social ability of the individual to participate in group dynamics and to progress along bonding processes, as well as the relative blocking of the hedonistic mechanisms. In all of them, the emission of laughter would be either severely compromised or notoriously disorganized regarding the spontaneous response to humorous stimuli. However, as said, the relative specificity has not been found yet. Therefore, to the extent to which laughter reflects with some accuracy the specific mental condition of individuals [29], a better understanding of the whole sound structures of laughter could have relevant implications in mental-health research, beyond the present therapeutic applications-as, for instance, in diagnostics and prognosis, at detecting the differences between healthy subjects and patients, and at following the recovery progress of patients. Previous work by these authors [30] has at least demonstrated that a simple test based on laughter analysis can be useful in establishing the diagnosis of depression patients and gauging the severity of the disorder.
The Present Study
Our analytical focus in the present study will be specifically on entropy, complementing the previous study already mentioned [30]. In that work, we found that amongst the 10 acoustic variables considered for each plosive, three of them (energy, entropy, and F 0 ) had the highest discriminating power. Furthermore, given the relevance of entropy in the sound structures of animal communication, it makes sense to process anew all of the data obtained and to check the exclusive discriminating power of entropy regarding the physiological-emotional information contained within laughter's sound structures. Is entropy one of the fundamental components of the "emotional code" implicit in the communicative content of laughter? To the extent to which the answer is positive, it could contribute to refine and to improve the proposed use of laughter as a new tool, of easy and fast usage, for the diagnosis and evaluation of neuropsychiatric diseases.
In that previous study, we worked with recorded laughs of depressed patients (n = 30) and healthy controls (n = 20), in total 934 laughs (517 from patients and 417 from controls). All patients were tested by the Hamilton Depression Rating Scale (HDRS). The records were processed using Matlab, evaluating the 10 following variables per plosive: time duration, fundamental frequency mean, standard deviation of the fundamental frequency, first three formants, average power or energy per sample, Shannon's entropy, jitter, shimmer, percentage of voiced/unvoiced signal, and harmonic to noise ratio. By applying general and discriminant analyses conducted in STATGRAPHICS Plus version 5.1 to those variables, we obtained discriminant functions, canonical correlations, Wilk's Lambda, and Fisher's linear discriminant function coefficients, showing that depressed patients and healthy controls differed significantly on the type of laughter, with 88% efficacy. In addition, according to the Hamilton scale, 85.47% of the samples were correctly classified in men, and 66.17% in women. After these results involving the 10 mentioned variables, what performance could be attained by means of the exclusive use of entropy?
In order to advance the present study, we have processed anew the 934 laughs already registered trying to analyze Shannon-Wiener entropy's specific contribution in discriminating power.To do that, we have constructed a decision tree and made a cluster analysis which shows the discriminating role of entropy.It is described below.
Subjects
The 934 laughs registered belonged to 50 individuals, 30 patients and 20 healthy people, comprising men and women aged between 20 and 65; their laughs were obtained in response to humorous videos (which were always watched accompanied by some acquaintance), and were registered individually by means of a directional digital voice recorder. More patients than controls were recruited in order to make possible a classification of depression rating and to correlate it with the laughter registers. All of the individuals were Spanish and none of them suffered from any mental illness that prevented the realization of the task, so they could understand the humour sketches presented and complete the questionnaires. We followed the inclusion criteria described in [30] and the protocol was approved by the Regional Ethics Committee of Aragon.
Psychological Test
To measure the severity of clinical depression symptoms, all patients were tested by the Hamilton Depression Rating Scale (HDRS).The HDRS test is widely used to measure severity of depression and mood disorders both in clinical practice and research settings.In this study, we used the original 21-items version in its Spanish validated translation [31].
Compilation of Laughter
The compilation of humorous videos was made mainly by an Internet search.These videos provided funny circumstances to evoke laughter in most types of people (consisting of cartoon sketches, falls, jokes, famous movie characters, well-known humorists, etc.).A specific protocol was followed, including the different kinds of visual and acoustic stimuli used to generate laughter during sessions of 20 min.A digital voice recorder, Olympus VN-712PC (Olympus Imaging Corp., Tokyo, Japan), was used to capture the sound records.
Spontaneous laughter from each participant was registered in a wav archive encoded in 16-Bit PCM format, and was sampled in the 50-10,000 Hz interval.Every laugh episode was separated by both hearing the recordings and visualizing the waveforms provided by the sound analysis program Adobe Audition.Through this software, we could distinguish each laughter episode, so that the different laughter utterances were analysed, selected and stored separately.The evaluation of whether an audio segment was suitable or not, both for patients and controls, was mainly conditioned by its clarity (overlapped speech-laugh and laugh-laugh segments, as well as all kinds of exclamations and noises were dismissed).The resulting laugh archives were recorded from only one individual, had well defined boundaries, and did not include interfering sounds like coughs, throat clearing or humming-otherwise they were discarded.This evaluation job is too complicated to achieve with programmed laughter detectors, such as machine learning methods and support vector machines.The present manual process is slow but reliable enough.
According to the conventions already mentioned [15], each laughter bout contains a series of discrete elements, called plosives, which may be characterized as energy peaks separated by silences that are repeated every 200-220 ms approximately.This wide range of acoustic shapes requires segmentation in the time domain.At the temporal domain, bouts appear as alternating maxima and minima within the envelope of the waveform amplitude-and all the 10 variables already described may apply.However, in this study, as already mentioned, we will only work with the entropy of each plosive, following a statistical approach more appropriate to this circumstance: a decision tree.
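As a rough illustration of this segmentation step, the following Python sketch locates candidate plosives as energy bursts in the amplitude envelope; the frame length and threshold are assumptions, and the actual Matlab detector used in the study may differ.

import numpy as np

def find_plosives(signal, fs, frame_ms=10, threshold_ratio=0.15):
    """Return (start, end) sample indices of energy bursts, i.e., candidate plosives."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame
    energy = np.array([np.sum(signal[i*frame:(i+1)*frame] ** 2) for i in range(n_frames)])
    active = energy > threshold_ratio * energy.max()    # frames above the energy threshold
    segments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i                                    # a burst begins
        elif not on and start is not None:
            segments.append((start * frame, i * frame))  # a burst ends
            start = None
    if start is not None:
        segments.append((start * frame, n_frames * frame))
    return segments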
Laughter Processing
In total, we compiled 934 well-formed laughs following the selection criteria just described (517 from patients and 417 from controls)-on average, 17 laughs for patients and 21 for controls.The plosive automatic detector was implemented in Matlab version R2014a.As an outcome of this characterization, a data matrix was obtained comprising all plosives sorted by individual laugh archives, each one in a row, with entropy values as the only column.
Statistical Analyses
A decision tree was built using SPSS version 22 (2014).Predictors were selected according to their statistical significance, thus enabling the detection of interactions with the values of those selected variables.For predictor variables, this technique [32] is capable of determining the optimal cut-off that maximizes the association with the entropy measure for each plosive: the entropy of the first (EP1), second (EP2), third (EP3), fourth (EP4), and fifth (EP5) plosives.Statistical significance was set for a probability p < 0.01.
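The original tree was grown in SPSS; an equivalent workflow can be sketched with scikit-learn, assuming a matrix X whose columns are the plosive entropies EP1-EP5 and a binary label y (1 = patient, 0 = control). The random data and tree settings below are purely illustrative, not the study's actual parameters.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((200, 5))                 # rows = laughs, columns = EP1..EP5 (illustrative data)
y = rng.integers(0, 2, size=200)         # 1 = depressed patient, 0 = healthy control

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0)
tree.fit(X, y)

# Inspect the learned cut-offs, analogous to the EP1/EP2/EP5 thresholds reported in Figure 1
print(export_text(tree, feature_names=["EP1", "EP2", "EP3", "EP4", "EP5"]))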
Cluster analysis was conducted in SPAD.N version 8 (2014) using a mixed strategy that combined divisive and agglomerative techniques. The Euclidean squared distance and the minimal variance of Ward were used as the aggregation criterion. The number of clusters was determined by the aggregation criterion that describes a hierarchical tree, followed by a reallocation of each case to the nearest cluster by the k-average method. In this approach the number of clusters used is set by the observer; in our study six was taken as the optimal number of clusters. Finally, the characterization and description of the clusters was conducted based on Lebart's statistical method [33], selecting those variables that are relevant to a cluster.
Decision Tree
The decision tree distinguished depressed patients and healthy controls (Figure 1). The first input variable selected (node 0) is the entropy value measured in the first plosive (EP1), correctly classifying 71.2% of patients (EP1 > 0.00322) and 72.7% of control subjects (EP1 ≤ 0.00322). According to the left nodes (1, 3) and branches of the tree, the probability of being a healthy person increases to 93.8% when the entropy of the first plosive is equal to or less than 0.00085. However, as seen on the right side of the tree, the probability of depression increases to 86.9% and 89.3% (nodes 5, 9) when the entropy measured in the fifth and second plosives is less than or equal to 0.00011 and 0.02342, respectively.
The decision tree (Figure 1) reveals the high discriminant role of the entropies of the first (EP1) and fifth (EP5) plosives. In order to obtain a classification of laughter according to the entropy values, the resulting decision tree allows us to establish the following taxonomy. Considering the aforementioned entropies EP1 and EP5 together with EP2 and EP3, we can get rules that classify nine types of laughter. Each class contains healthy and different percentages of depressed individuals: 6.2%, 7.5%, 13.4%, 18.2%, 45.0%, 50.8%, 60.0%, 73.9% and 89.3%. Based on the "0.5 rule" (a patient is assigned to the most probable class, that is, the one with the highest percentage for the corresponding diagnosis), we obtained a successful classification of depressed patients and healthy controls (Table 1). In general, the process resembles an expert system with certainty factors [34].
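For readers who want the reported cut-offs in executable form, the small function below encodes the main branches of Figure 1 as a plain rule; the probabilities are those quoted above, while the exact nesting of the EP5 and EP2 checks is an assumption, since the complete tree is given only in the figure.

def classify_laugh(ep1, ep2, ep5):
    """Rough rule following the main branches reported for the decision tree (Figure 1)."""
    if ep1 <= 0.00322:                       # left branch: mostly healthy controls (72.7%)
        if ep1 <= 0.00085:
            return "healthy (p ~ 0.938)"
        return "healthy (p ~ 0.727)"
    if ep5 <= 0.00011:                       # right branch: mostly depressed patients (71.2%)
        return "depressed (p ~ 0.869)"
    if ep2 <= 0.02342:
        return "depressed (p ~ 0.893)"
    return "depressed (p ~ 0.712)"

print(classify_laugh(ep1=0.0005, ep2=0.03, ep5=0.002))    # -> healthy
print(classify_laugh(ep1=0.0100, ep2=0.01, ep5=0.0001))   # -> depressed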
Cluster Analysis
The cluster analysis identified six types of laughter (Figure 2). According to the studied variables, the clusters obtained were: (1) healthy men (33% of cases); (2) depressed women (20% of cases); (3) depressed patients (26% of cases); (4) healthy controls (13% of cases); (5) depressed patients (6% of cases); and (6) healthy controls (3% of cases).
Table 2 shows the partition description of each cluster, marking with "−" the low values of entropy on a plosive, with "+" the high values of entropy, and with "++" the very high values of entropy measured in a plosive. If we only consider the first and fifth plosives, and classify the clusters according to their individuals (clusters 1, 4 and 6 contain healthy controls and clusters 2, 3 and 5 contain depressed patients), we conclude the following rule: "an individual is healthy when the value of entropy in the fifth plosive is relevant and is in synch with the value of the first plosive, or when both entropy values are high or both values are low". By contrast, an individual is depressed when the fifth plosive entropy is not relevant or has a low value.
Table 3 shows the summary statistics for each of the entropies studied. It is interesting to note that none of the entropies fits a normal distribution. It is also interesting to observe how the mean and median of the entropy in the fifth plosive are very low or zero. This result would be consistent with what was observed in Table 2: an individual suffering from depression shows a fifth plosive entropy that is not significant or has a very low value. Furthermore, the variability of entropy seems to be lower in depressed than in healthy subjects. In general, the results suggest that in depressed individuals a decrease in entropy occurs from the first to the fifth plosive. Although the entropies do not have a normal distribution, normality is not required in the statistical tests performed since they are non-parametric techniques [32]. When we measure only the entropies and compare them to other statistical techniques, the decision tree method leads to better results in the classification and diagnosis of subjects. Based on decision trees, we obtained a successful classification of depressed patients and healthy controls of 82.1% and 68.2% respectively, whereas using a binary logistic regression these percentages decreased to 79.1% and 64.0%, respectively. Similarly, using discriminant analysis, the success in the classification of depressed patients decreased to 65.7%, increasing to 76.5% in healthy controls. Considering all percentages together, the overall results were 76.0%, 72.5%, and 70.5% when diagnosis was conducted based on decision trees, binary logistic regression, and discriminant analysis respectively. The advantage of decision trees is the ease of interpretation as well as the description and classification efficiency achieved with a simple segmentation of data.
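A clustering step of this kind can be sketched in Python using Ward linkage on the plosive entropies, cutting the hierarchical tree at six clusters as in the SPAD.N analysis; the random data and the SciPy implementation are assumptions made only for illustration.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = rng.random((200, 5))                              # rows = laughs, columns = EP1..EP5

Z = linkage(X, method="ward", metric="euclidean")     # Ward's minimum-variance criterion
labels = fcluster(Z, t=6, criterion="maxclust")       # cut the hierarchical tree at six clusters

for k in range(1, 7):
    print(f"cluster {k}: {np.sum(labels == k)} laughs")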
Discussion
The technique of trees we have used in the present work is simple but powerful: each node is a relevant variable so that, in a subject problem, if a variable is greater or smaller than a threshold, then it goes for a branch or another and stops in the desired node, obviously the node with the highest accuracy.This technique is popular in what is called data mining.It has outperformed neural networks because the trees are able to explain what the networks cannot, as the latter factually work as a black box.Of course, when we compare this technique with the discriminant procedure of our previous work, we can see how the variables work in each case to distinguish a healthy control from a patient "in the blues".Whereas, in the discriminant analysis, the percentage of correctly classified patients with depression was of 85.12%, with the decision tree the percentage is 82.1%, indeed a very similar value.
The present results are significant in several analytical regards.Firstly, it is obvious that comparing with the previous results entropy alone is able to detect differences between patients and control subjects, while in the discriminant analysis such differences are detected by an ample set of variables.However, it is surprising that in the decision tree analysis the fifth plosive is once again the most discriminating factor between healthy subjects and depressed patients; a similar result was obtained in the discriminant analysis when the study was conducted with men only.Secondly, the current study shows that an individual suffers depression when the fifth plosive entropy is not relevant or has a low value.This result supports the relevance of the fifth plosive, suggesting as a surprising novelty the use of just the fifth plosive's entropy for the diagnosis, or more prudently its use as a complement of the results obtained from the other variables in the discriminant analysis.Nevertheless, the use of entropy in medical diagnosis is not new [35].For example, a high value of entropy in the electroencephalography (EEG) of a patient, or a low value of entropy in the electrocardiography (ECG), predict a possible epileptic seizure [36] or a possible arrhythmia [37], respectively.
Regarding the biomedical interpretation, the present results throw further light on the possibility of specific detection of mental disorders.A number of new medical methods are being designed for early diagnosis in the most important mental pathologies and neurodegenerative diseases: new biochemical and molecular detectors, EEG, neuroimaging, ocular-macular exploration, pupillometry, exercise and gait analysis, equilibrium platforms, manual exercises, cognitive trials, memory tests, linguistic trials, etc. [27,35,[38][39][40][41].We think laughter could also be added to that list of biomedical explorations.The emotional-cognitive characteristics of laughter, where ample swaths of brain cortical areas as well as medial and cerebellar regions are involved, make for a promising model system in early diagnostics-looking for conspicuous alterations in entropy, energy and F 0 , the variables where most of the "emotional code" of laughter is ensconced.Another important characteristic, not explored here, would concern the timing of laughter, where non-standard placement of laughter relative to topic boundaries may reveal failure to maintain engagement in dialogue [42,43].The different pathological states would quite probably imprint their specific signature on all those variables and characteristics, although the decoding would not be easy.In addition, there may be methodological difficulties to establish the detection procedures, mostly at standardizing the video probes and the circumstances of social engagement-the social nature of laughter can never be overemphasized.Notwithstanding its interesting research content, making progress in this exploration may also contribute to enlivening and humanizing the environment surrounding the diagnosis and treatment of depression and other mental disorders.The present experience shows that patients participate with gusto in the laughing exercise.
From an evolutionary point of view, the results obtained are coherent with the current views on the acoustic expression of emotions and with their specific ontogenetic/phylogenetic development.
There have been interesting debates on the nature of human emotions and the reliability of their acoustic detection, both in humans and in other primate and non-primate species [4,17,[44][45][46][47].It is relevant that the three essential variables statistically discriminated in our previous study (energy, entropy, and F 0 ) appear repeatedly in the different experimental studies.From another angle, it may be emphasized that our results are compatible with both the discrete states approach to emotions [48] and the continuous bi-dimensional approach [49].In the former, the different emotional states would correspond with the different combinations of essential variables.While, in the latter, emotions (and laughter production) are represented in a continuum of two variables: arousal and valence, respectively meaning the level of neural excitation and the positive or negative connotation inherent to each emotion class.Most of the acoustic counterpart of arousal is conveyed by both energy and entropy (amplitude and dispersion of the frequency spectrum), while the excursions of F 0 and also a relative presence of entropy would correspond to valence.
Why is entropy so important in the acoustic manifestation of emotions as well as in the emission of laughter? The two reasons advanced in the Introduction have to be considered. On the emission side, recent studies corroborate the importance of order and disorder, of energy dispersion, in the production of acoustic signals [17,35,50,51]. There is a juvenile pattern of more entropic calls, which progressively mature into well-formed calls, also influenced by the contact with adults [17]. Evolutionarily, higher entropy is related to poorer or more primitive control by neural circuits; however, at the same time, it attracts more attention from third parties and is more efficient as an emotional mover. In the human case, the neural circuit in charge of producing laughter's inarticulate sounds, partially the vagal system with its laryngeal nerve branch, has less controlling capability than the circuits in charge of spoken language (or contrived laughter for that matter); and something similar would happen with the rest of the physical phonatory-articulatory system involved [4,20,42]. Thus, spontaneous laughter would have higher entropic content with respect to language, and even higher in toddlers and infants-also in accordance with the higher arousal level that usually accompanies their laughter displays. Conversely, lower emotional engagement and lower arousal, as happens in depression patients, should be accompanied by both lower entropy and lower energy in the emitted signal, and that is precisely what our studies show.
On the reception side, the effect of laughter on the receiver leads to a discussion of its nature as an honest signal. Given that it has a considerable social effect on the receiver side, it may be easily subject to manipulation and faked by the emitter. As pointed out [20], contrived laughter is relatively well distinguished from spontaneous laughter by means of a series of differences in duration, pitch, F 0 , loudness, and entropy, and this is an aspect that has to be carefully considered in the case of depression. Quite probably, the depressed patient, with his/her lower arousal, is even "too honest" in the disclosure of this socially unwanted condition. Then, there appears a significant gender difference, as social conventions in most cultures penalize the social evaluation of depressed males more than that of depressed females. Thus, depressed males suffer the full brunt of the social stigma, while females may feel less socially pressured and freer to express their depressed state or not. The gender aspects of depression include some other complexities derived from the different prevalence of the disorder, the different appreciation of humor between the sexes, and the differences involved in neuroanatomic-connectomic matters, as discussed in [30]. In any case, very clear indications of gender differences have surfaced in our two studies.
Finally, recent approaches to brain dynamics are relying on informational/entropic constructs.Following [18,19], the brain unifies its information processing by means of a distributed free-energy variable based on the coupling of excitation and inhibition, the informational entropy of which is maximized (optimized) in the ongoing search of adapting the sensory-motor states to the environmental demands.Given the mappings, gradients, circuit topologies, and self-organizing rhythms in the couplings between excitation and inhibition, the blind optimization of this brute "neural entropy" produces the outcome of well-fitted states.In the human context, laughter would have been co-opted as a generalized information-processing tool, a finalizer accompanying the higher-level cortical processing [52].In some way, we laugh abstractly: when a significant neurodynamic constellation coding for some problematic circumstance suddenly vanishes, i.e., when a relatively relevant behavioral problem becomes unexpectedly channeled in a positive way and vanishes as such problem.Laughter is then spontaneously produced by the subject to display his/her own behavioral competence in an instinctive way.Powerful neurodynamic, neuromolecular, and physiological resources have been internally mobilized without implying any extra computational-cognitive burden upon the subject's ongoing consciousness processes [30].This role of laughter makes a lot of sense in the really complex social world that the "talkative" human brain has to confront, with a myriad of cognitive, behavioral, and relational problems.The conceptual-symbolic world of language is crucial in the making and breaking of social bonds, where emotional-relational problems may dramatically accumulate in extremely short periods of time [3,12].When we laugh, the inner entropy generated is emitted to the outside, reflecting the occurring evolution of the neurodynamic processing gradients.Thus, the entropy of laughter is not only a useful biomedical resource; it may also be an amazing window to our most basic informational operations.
Conclusions
The present paper has explored the potential of a relevant variable of laughter's acoustic signature, entropy, in the detection of a widespread mental disorder, depression, as well as in gauging its severity. By using laughter's entropy and by applying the decision tree procedure, it was possible to discriminate between patients and controls with 82.1% accuracy.
Potentially, laughter appears as a promising model system for early diagnostics, severity, and the recovery course of important mental pathologies as well as neurodegenerative diseases. The variables herein explored (and some other characteristics) could be easily checked, and the methodology followed could also be effortlessly incorporated within the existing medical practices, in a more convivial and patient-friendly way. However, further studies are needed to ultimately establish this new kind of laughter-centered method, so that it could be incorporated into the present stock of diagnostic/therapeutic tools.
As for the main limitations of the study: (i) the sample size is too reduced and may not be representative of the general population of depressed patients; (ii) the existing compilation of humorous videos is too general, irrespective of age, sex, mood, specific cultural backgrounds, etc., and in some cases it may fail to produce laughter; (iii) although the conditions surrounding patient and control subjects are comfortable enough, it is not easy for them to feel at ease and laugh naturally when they know that they may be observed and that all their sounds are recorded; (iv) and finally, in the evaluation of the records, determining whether a laughter episode was suitable or not-"clean" enough-depended on personal inspection, not easily amenable to complete description by a rule system (although the procedure was the same for both patients and controls).
Figure 1. Decision tree showing depressed patients and healthy controls according to entropy values measured at each plosive. This decision tree was conducted using IBM SPSS Decision Trees 22 (2014).
Figure 2. Cluster tree obtained after performing the clustering analysis using a mixed strategy. A total of 960 laughs were classified into six differentiated clusters.
Table 1. Classification table of depressed patients and healthy controls based on the "0.5 rule".
Table 2. Partition description of each cluster, with "−" marking low values of entropy on a plosive, "+" high values of entropy, and "++" very high values of entropy measured in a plosive.
Table 3. Statistical analysis of entropy values.
Design, Synthesis and Characterization of Thiophene Substituted Chalcones for Possible Biological Evaluation
Development of new antimicrobial agents is a promising route to address drug resistance problems in society. In these circumstances, new functionalized sulphur-bearing heterocyclic moieties were designed, synthesized and evaluated for their in vitro antibacterial activity. The present work encompasses the design of a novel series of thiophene-substituted analogues linked to para-aminoacetophenone and different aldehydes, which were successfully synthesized; their biological activity was predicted using various computational software packages such as Chemsketch, Molinspiration, and admetSAR. Among the synthesized thiophene-substituted chalcones, T-IV-I and T-IV-B displayed significant activity against Streptococcus auresis. Compounds T-IV-J, T-IV-H and T-IV-C bearing the sulphur moiety possess better activity against Staphylococcus aureus. Moreover, T-IV-C and T-IV-J exhibit good antibacterial activity against E. coli and Pseudomonas aeruginosa. In general, most of the synthesized compounds exhibited remarkable antibacterial activity due to the presence of the sulphur atom in the heterocyclic moieties as well as their lipophilic character. Molecular docking studies indicated that the synthesized compounds are potent inhibitors of the microsomal enzyme Glutathione-S-transferase (PDB ID: 1GNW), and also identified the different interacting residues, bond distances and nature of bonding between the target and the ligand molecules. The results provide important information for the future design of more effective antibacterial agents.
INTRODUCTION
Among sulphur-containing heterocycles, thiophene-substituted chalcone derivatives are in focus because these candidates have structural similarities with active compounds, making them a basis for developing new potent lead molecules in drug design. The thiophene scaffold is one of the privileged structures in drug discovery, as this core exhibits various biological activities, allowing such compounds to act as antimicrobial, antioxidant, antitubercular and antifungal agents [1]. Further, numerous thiophene-based compounds have been extensively used as clinical drugs to treat various types of diseases with high therapeutic potency, which has led to their extensive development. Due to the wide range of biological activities of substituted thiophenes, their structure-activity relationships (SAR) have generated interest among medicinal chemists, and this has culminated in the discovery of several lead molecules against numerous diseases [2]. The present work endeavours to highlight the progress in the various pharmacological activities of thiophene-substituted chalcone derivatives [3]. Biological studies that highlight the chemical groups responsible for evoking the pharmacological activities of the synthesized derivatives are also studied and compared. In-silico filters are designed to eliminate compounds with undesirable properties (poor activity or poor Absorption, Distribution, Metabolism, Excretion and Toxicity (ADMET)) and to select the most promising candidates. Fast expansion in this area has been made possible by advances in software and in hardware computational power and sophistication, together with the identification of molecular targets and a growing database of publicly available target protein structures such as the Protein Data Bank (www.pdb.org). CADD is being utilized to identify active drug candidates, select lead compounds (the most likely candidates for further evaluation), and optimize lead compounds, i.e., transform biologically active compounds into suitable drugs by improving their physicochemical, pharmaceutical and ADMET/PK (pharmacokinetic) properties. Virtual screening is used to discover new drug candidates from different chemical scaffolds by searching commercial, public, or private 3-dimensional chemical structure databases [4].
In-silico technique is reducing the number of molecules synthesized and helping researchers in the process of drug development. Tools and models available are used to estimate the ADMET properties, and structure-based molecular docking, helps in predicting the possible interactions with the target under study. Major information whether the compound under study can work as a drug at an early stage of development is provided by in-silico physico chemical properties such as saturation, size, lipophilicity, solubility, polarity, and flexibility [5].
The purpose of the current study was to perform simulated screening of molecules through a molecular docking strategy and to identify possible lead molecules which could serve as templates for designing new proposed molecules with improved binding affinities and better molecular interactions with the receptor. Additionally, in-silico ADME and drug-likeness properties of the designed compounds were evaluated for oral bioavailability and safety. Discovery Studio 2020 prediction was performed on selected compounds to assess the probability of antimicrobial activity.
Protein Target
3D Structures of protein were procured from PDB. The protein structures were cleaned (water molecules and other hetero atoms removed), prepared and minimized before docking.
Docking
Docking module LibDock using Discovery Studio 2020 was used to study interaction between the Protein and ligand molecules. The binding site of the protein defined and the docking performed. The LibDock scores, nature of bonding and bond length of the docked ligands were estimated.
Protein Preparation
Prepare the protein structure before docking because, in general, PDB structures contain water molecules; during protein preparation all water molecules are removed except the important ones. Hydrogen atoms are missing in the PDB structure, and many docking programs need the protein to have explicit hydrogens; hydrogens can be added unambiguously, except in the case of acidic/basic side chains, through protein preparation. Some protein side chains in the PDB structure can also be incorrect, since the crystallographic structure gives electron density, not molecular structure. Click on Macromolecules, then Prepare Protein, then Automatic Preparation, followed by Prepare Protein; give the input protein (select the saved protein structure), apply and run. Then save the resultant prepared structure in a new file.
Ligand Preparation
Preparation of the ligand is also needed for several reasons: a reasonable 3D structure is required as a starting point, and the protonation state and tautomeric form of a particular ligand could influence its hydrogen-bonding ability. Click on Small Molecules, then Prepare Ligands; add the input ligand (select the saved ligand structure) and finally run. The resultant prepared structures of the ligands are saved in a new file.
Define Binding Site
After the protein and ligand preparation, next step is to define binding site for docking.
Docking
Click on Receptor-Ligand Interactions, open Dock Ligands, and click LibDock, which was used for docking because a target needs to be docked with multiple ligands. After docking, each pose in the docking result is analysed in detail. The results were then screened based on the presence of hydrogen-bond interactions and the LibDock score, and listed out. The listed ligand poses were screened based on the presence of an H-bond interaction at the GLU230 residue, and the molecular properties of these ligands were calculated in DS by ADMET descriptors and toxicity prediction. The binding energy of the ligands was also calculated.
In silico Molecular Studies
In silico molecular modeling studies were carried out for the different derivatives using different software packages such as ACD Lab Chemsketch, Molinspiration, admetSAR and Biovia Discovery Studio 2020. Analysis of the Lipinski Rule of Five was carried out for the proposed analogues using the Molinspiration software. In silico molecular modeling with ACD Lab Chemsketch was carried out, as ACD Lab Chemsketch is a chemically intelligent drawing interface that allows drawing of almost all chemical structures, including organics, organometallics, polymers and Markush structures, and can be used to produce professional-looking structures and diagrams for reports and publications. Determination of drug-likeness and the Lipinski Rule of Five using the Molinspiration software [6] is indicated in Table 1.
Determination of drug-likeness is an important aspect of drug design. These properties, mainly electronic distribution, hydrophobicity, molecular size, hydrogen-bonding characteristics, flexibility and the presence of various pharmacophoric features, influence the behaviour of a molecule in a living organism, including bioavailability, transport properties, affinity to proteins, reactivity, metabolic stability, toxicity and many others. The drug-likeness score calculated by the Molinspiration software [7] is mentioned in Table 2.
The Lipinski Rule of five provides a measure for determining the oral bioavailability of a compound specified in Table 3.
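Rule of Five checks of this kind can also be reproduced programmatically; the sketch below uses RDKit (an assumed choice, not the software used in this work) on a SMILES string, and the example molecule is a generic chalcone scaffold rather than one of the synthesized T-IV compounds.

from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    props = {
        "MW": Descriptors.MolWt(mol),        # should be <= 500
        "logP": Descriptors.MolLogP(mol),    # should be <= 5
        "HBD": Lipinski.NumHDonors(mol),     # should be <= 5
        "HBA": Lipinski.NumHAcceptors(mol),  # should be <= 10
    }
    violations = sum([props["MW"] > 500, props["logP"] > 5,
                      props["HBD"] > 5, props["HBA"] > 10])
    return props, violations

# Placeholder example: trans-chalcone scaffold, not an actual T-IV structure
props, violations = rule_of_five("O=C(/C=C/c1ccccc1)c1ccccc1")
print(props, "violations:", violations)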
Data Computed from Software
Antimicrobial activity was predicted using Discovery Studio 2020 software
Determination of ADMET Profile Using AdmetSAR
In total, 22 highly predictive qualitative classification models are implemented in the admetSAR software. These models include human intestinal absorption, blood-brain barrier penetration, Caco-2 permeability, P-glycoprotein substrate and inhibitor, CYP450 substrate and inhibitor (CYP1A2, 2C9, 2D6, 2C19, and 3A4), hERG inhibitors, AMES mutagenicity, carcinogens, fathead minnow toxicity, honey bee toxicity, and Tetrahymena pyriformis toxicity. In addition, all classification models give a probability output instead of a simple binary output. In the scientific community of ADMET prediction, quantitative predictions are more useful. The report was summarized in
EXPERIMENTAL
A mixture of the ketone 4-(acetylphenyl)thiophene-2-carboxamide and the appropriate ortho-, meta- or para-substituted benzaldehyde (0.01 m) with 40% aqueous potassium hydroxide was added to 30 ml of ethanol and stirred at room temperature for about 2-6 hrs. The resulting product was kept overnight in a refrigerator. The solid that separated out was filtered, washed with water and recrystallized from ethanol to yield the pure crystalline product. After drying in an oven at about 70 °C the yield was 68%. The melting point was found to be 132 °C. TLC was checked with n-hexane and chloroform (9:1) as the eluent. All compounds were prepared by the same method. The chemical characterization of the synthesized derivatives is summarized in Table 6.
N-(4-(3-(Furan-3-yl)Acryloyl)Phenyl) Thiophene-2-Carboxamide(T-IV-J)
Yield, (70%). IR values (cm-1): ...
The three-dimensional structure of Glutathione S-Transferase was downloaded from the PDB database (PDB ID: 1GNW) with a crystallographic resolution of 2.20 Å (Fig. 1). The protein consists of two polypeptide chains, A and B, with a total of 211 amino acids and a molecular weight of 47860.6 Daltons. The active site of the protein interacting with the standardised ligand molecules was selected as the binding site.
Fig. 12. Spectral images of all synthesized compounds
The result showed that T-IV-I has a high binding affinity to the target compared to the other ligands. All the docked complexes were analysed to study the non-bonded interactions between the target and the ligand molecules. The results are summarized in Table 10 and reveal that all the ligands bind at the same active site.
In vitro Antibacterial Activity
The selected synthesized thiophene derivatives of the present investigation were screened for their antibacterial activity by subjecting the compounds to standard procedures [8]. The antibacterial activity was assessed by the cup and plate method (diffusion technique). A fresh culture of bacteria was obtained by inoculating bacteria in nutrient broth medium and incubating at 37 ± 2 °C for 18-24 hrs. This fresh culture was mixed with nutrient agar medium and poured into sterile petri plates by the pour plate method, following aseptic technique [9]. After solidification of the medium, six bores were made at equal distance using a sterile steel cork borer (8 mm diameter). Into these cups, 100 µg/ml and 200 µg/ml concentration solutions of the standard drug and the synthesized compounds were introduced. Dimethyl formamide [10] was used as a control. After introduction of the standard drugs and synthesized compounds, the plates were left for about 2 hrs for proper diffusion of the drug into the medium. After 2 hrs, the plates were incubated in a BOD incubator maintained at 37 ± 0.5 °C for 18-24 hrs. After the incubation period, the plates were observed for the zone of inhibition [11] using a Hi-antibiotic zone reader. Results were evaluated by comparing the zone of inhibition shown by the synthesized compounds with that of the standard drug [12]; the results are the average values of the zone of inhibition, measured in millimetres, of three sets. The standard drug was dissolved in distilled water and the synthesized compounds were dissolved [13] in a minimum quantity of DMF and diluted with water to get the desired concentrations [14]. All data are summarized in Table 12. Procaine penicillin was used as the standard against the gram-positive organisms, and Streptomycin was used against Escherichia coli and P. aeruginosa [15] (gram-negative organisms).
DISCUSSION
The present study can be summarized as the design of novel thiophene-substituted chalcones as Glutathione-S-Transferase (PDB ID: 1GNW) selective inhibitors and the analysis of the compounds through ADMET filters and molecular docking studies. From the library of designed compounds, those with higher binding energies were chosen, as they could serve as lead compounds for the development of newer potent antibacterial agents. The in vitro study of the synthesized compounds T-IV-B and T-IV-H shows good antibacterial activity against Streptococcus auresis compared to the docking result; since docking is a hypothetical methodology, the experimental result may sometimes differ from the docking report. The zone of inhibition values (mm, from Table 12) against the four test organisms (two gram-positive, two gram-negative) were:
T-IV-B: 15, 12, 11, 11
T-IV-C: 13, 13, 15, 14
T-IV-F: 11, 12, 13, 12
T-IV-H: 14, 13, 11, 12
T-IV-I: 15, 10, 14, 15
T-IV-J: 12, 13, 15, 14
Procaine penicillin: 18, 19, -, -
Streptomycin: -, -, 19, 20
Control (DMF): 00, 00, 00, 00
DISCLAIMER
The products used for this research are commonly and predominantly use products in our area of research and country. There is absolutely no conflict of interest between the authors and producers of the products because we do not intend to use these products as an avenue for any litigation but for the advancement of knowledge. Also, the research was not funded by the producing company rather it was funded by personal efforts of the authors.
CONSENT
It is not applicable.
ETHICAL APPROVAL
It is not applicable.
Irreducibility and r-th root finding over finite fields
Constructing $r$-th nonresidue over a finite field is a fundamental computational problem. A related problem is to construct an irreducible polynomial of degree $r^e$ (where $r$ is a prime) over a given finite field $\mathbb{F}_q$ of characteristic $p$ (equivalently, constructing the bigger field $\mathbb{F}_{q^{r^e}}$). Both these problems have famous randomized algorithms but the derandomization is an open question. We give some new connections between these two problems and their variants. In 1897, Stickelberger proved that if a polynomial has an odd number of even degree factors, then its discriminant is a quadratic nonresidue in the field. We give an extension of Stickelberger's Lemma; we construct $r$-th nonresidues from a polynomial $f$ for which there is a $d$, such that, $r|d$ and $r\nmid\,$#(irreducible factor of $f(x)$ of degree $d$). Our theorem has the following interesting consequences: (1) we can construct $\mathbb{F}_{q^m}$ in deterministic poly(deg($f$),$m\log q$)-time if $m$ is an $r$-power and $f$ is known; (2) we can find $r$-th roots in $\mathbb{F}_{p^m}$ in deterministic poly($m\log p$)-time if $r$ is constant and $r|\gcd(m,p-1)$. We also discuss a conjecture significantly weaker than the Generalized Riemann hypothesis to get a deterministic poly-time algorithm for $r$-th root finding.
Introduction
The problem of finding $r$-th roots in a finite field is to solve $x^r = a$ given an $r$-th residue $a \in \mathbb{F}_q$. Note that, without loss of generality, we can assume $r$ to be prime, otherwise for $r = r_1 \cdot r_2$ we can solve the problem iteratively by first solving $x^{r_1} = a$ and then solving $y^{r_2} = x$. Moreover, we can assume $r \mid (q-1)$, otherwise $x = a^{r^{-1} \bmod (q-1)}$ is an easy solution.
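For the easy case just mentioned, where $r \nmid (q-1)$, the root is a single modular exponentiation; the following Python lines are a purely illustrative sketch over a prime field.

# r-th root in F_p when gcd(r, p-1) = 1: x = a^(r^{-1} mod (p-1)) mod p
p, r, a = 101, 7, 15                 # here r = 7 does not divide p - 1 = 100
r_inv = pow(r, -1, p - 1)            # inverse of r modulo p - 1 (Python 3.8+)
x = pow(a, r_inv, p)
assert pow(x, r, p) == a % p
print(x)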
It can be shown that $x^r = a$ has a solution iff $a^{\frac{q-1}{r}} = 1$. If $a^{\frac{q-1}{r}} \neq 1$ then we call $a$ an $r$-th nonresidue. Interestingly, the problem of finding an $r$-th nonresidue is equivalent to that of finding $r$-th roots in $\mathbb{F}_q$ [2,21,29]. This gives a randomized poly-time algorithm for finding $r$-th roots and, thus, solves the problem for practical applications. Also, assuming the Generalized Riemann hypothesis (GRH) there is a deterministic poly-time algorithm for finding an $r$-th nonresidue in any finite field [3,5,8,13]. For a detailed survey see [6, Chap. 7].
The special case of $r = 2$ is particularly well studied. The problem now is to find square-roots in $\mathbb{F}_q$, which is equivalent to finding a quadratic nonresidue in $\mathbb{F}_q$. There are other randomized algorithms: Cipolla's algorithm [10], algorithms based on singular elliptic curves [18], etc. There are also deterministic solutions for some special cases:
• Schoof [20] gave an algorithm using point counting on elliptic curves having complex multiplication to find square-roots of fixed numbers over prime fields.
• Tsz-Wo-Sze [28] gave an algorithm to take square-roots over $\mathbb{F}_q$, when $q - 1 = r^e t$ and $r + t = \mathrm{poly}(\log p)$.
However, computing square-roots over finite fields in deterministic polynomial time is still an open problem. The best known deterministic complexity for this problem is exponential, namely $\tilde{O}(p^{1/(4\sqrt{e})})$, which is also a bound on the least quadratic nonresidue [9]. The distribution of quadratic nonresidues in a finite field is still mostly a mystery; it relates to some interesting properties of the zeta function, see Thm. 6.7. In 1897, L. Stickelberger [26] proved that if $p$ is a prime, $K$ is an algebraic number field of degree $n$ with discriminant $D$ and integer ring $\mathcal{O}_K$, where the ideal $(p)$ factorizes as $\mathfrak{p}_1 \mathfrak{p}_2 \cdots \mathfrak{p}_s$ into distinct prime ideals, then $\left(\frac{D}{p}\right) = (-1)^{n-s}$. Equivalently, if the number of even degree irreducible factors of a squarefree $f(x) \bmod p$ is odd, then the discriminant of $f$ will be a quadratic nonresidue in $\mathbb{F}_p$. Swan [27] and Dalen [11] gave alternative proofs of Stickelberger's lemma. Stickelberger's lemma is used in the factorization of polynomials over finite fields and in constructing irreducible polynomials of a given degree over finite fields [30,27,12].
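Stickelberger's criterion is easy to check experimentally; the sketch below (SymPy, an assumed library choice) factors a squarefree polynomial modulo $p$, counts its even-degree irreducible factors, and compares the parity with the Legendre symbol of the discriminant. The sample polynomial and prime are arbitrary.

from sympy import Poly, symbols, discriminant
from sympy.ntheory import legendre_symbol

x = symbols('x')
p = 13
f = Poly(x**5 + x + 3, x)              # its discriminant is nonzero mod 13, so f stays squarefree mod 13
fp = Poly(x**5 + x + 3, x, modulus=p)  # the same polynomial over GF(13)

factors = fp.factor_list()[1]          # list of (irreducible factor, multiplicity) mod p
n_even = sum(1 for g, _ in factors if g.degree() % 2 == 0)
D = int(discriminant(f)) % p

# Stickelberger: odd number of even-degree factors  <=>  disc(f) is a quadratic nonresidue mod p
print(n_even % 2 == 1, legendre_symbol(D, p) == -1)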
We generalize this idea of constructing quadratic nonresidues from Stickelberger's lemma to constructing $r$-th nonresidues from "special", possibly reducible, polynomials. Formally, these "special" polynomials are over $\mathbb{F}_q$ and satisfy the following factorization pattern.
Property 1.1. Let $r$ be a prime and $f(x) \in \mathbb{F}_q[x]$ be a squarefree polynomial. $f$ satisfies Stickelberger Property 1.1 if $\exists d$ such that $r \mid d$ and $r \nmid \#\{$irreducible factors of $f(x)$ of degree $d\}$.
Our goal is to show that the construction of such a, possibly reducible, polynomial solves many of the open problems. It is somewhat surprising that a reducible polynomial be related so strongly to non-residuosity and irreducibility.
Our first main result relates Property 1.1 to the construction of $r$-th nonresidues in any field above $\mathbb{F}_p$ (equivalently, finding $r$-th roots there).
Theorem 1.2. Given $\zeta_r \in \mathbb{F}_q$ and any polynomial $f$ satisfying Property 1.1, we can find $r$-th roots in any finite field of characteristic $p$, in deterministic poly$(\deg(f), \log q)$-time.
We get a stronger result in the case when we have $\mathbb{F}_{p^r}$ available and $r = O(1)$. Even $r = 2$ is an interesting special case. Finding an $r$-th nonresidue $a$ in $\mathbb{F}_q$ suffices to construct an extension $\mathbb{F}_{q^r}$. For example, we have $\mathbb{F}_q[a^{1/r}] \cong \mathbb{F}_{q^r}$; equivalently, $X^r - a$ is an irreducible polynomial. However, it is not clear how to find an $r$-th nonresidue given $\mathbb{F}_{q^r}$. Anyway, the question of constructing $\mathbb{F}_{q^r}$ efficiently is of great interest [1,23,24] and still open.
Our second main result relates Property 1.1 to the construction of an irreducible polynomial of degree $m$, where $m$ is any $r$-power.
Theorem 1.4. Given a polynomial $f$ satisfying Property 1.1, we can construct the field $\mathbb{F}_{q^m}$, for any $r$-power $m$, in deterministic poly$(\deg(f), m \log q)$-time.
Note that, if we are given fields $\mathbb{F}_{q^{m_1}}$ and $\mathbb{F}_{q^{m_2}}$ (for coprime $m_1, m_2$), we can combine them to get the field $\mathbb{F}_{q^{m_1 m_2}}$ [22, Lem. 3.4]. Hence, it is sufficient to be able to construct fields whose sizes are prime powers.
Organization of the paper. In this paper, the main results and ideas are presented in Sec.3. Sec.2 has notation and preliminaries. For concreteness, Sec.4 sketches our algorithm for finding an r-th nonresidue in any finite field, given a polynomial (in F p [x]) satisfying Property 1.1. We discuss some special cases of our analysis in Sec.5.
In Sec.6, we discuss few conjectures; particularly in Sec.6.2 we introduce a strictly weaker version of Generalized Riemann hypothesis to get poly-time algorithms.
Preliminaries
We are going to work in the finite field $\mathbb{F}_q$, where $q = p^d$ for some prime $p$. We will assume that $\mathbb{F}_q$ is specified by a degree $d$ irreducible polynomial over $\mathbb{F}_p$. This can be assumed without loss of generality, see [15, Thm. 1.1].
Given a finite field $\mathbb{F}_q$ and its extension $\mathbb{F}_{q^k}$, the multiplicative norm of an element $\alpha \in \mathbb{F}_{q^k}$ is defined as $N_{\mathbb{F}_{q^k}/\mathbb{F}_q}(\alpha) := \alpha \cdot \alpha^{q} \cdots \alpha^{q^{k-1}} = \alpha^{\frac{q^k-1}{q-1}}$. The following properties of finite fields will be useful (for proofs refer to standard texts, e.g. [16]).
Theorem 2.1 (Finite fields). Given a finite field $\mathbb{F}_q$ with characteristic $p$ and algebraic closure $\overline{\mathbb{F}}_p$:
• For any $a \in \overline{\mathbb{F}}_p$, $a^q = a$ if and only if $a \in \mathbb{F}_q$.
• A nonzero polynomial of degree $k$ has at most $k$ roots in $\mathbb{F}_q$.
The notation $Z(f)$ will be used to denote the set of zeros of the polynomial $f(x)$.
We are interested in finding an $r$-th nonresidue in $\mathbb{F}_q$ for a prime $r$. An element $a \in \mathbb{F}_q$ is called an $r$-th nonresidue iff $x^r = a$ has no roots in $\mathbb{F}_q$. This is possible only if $r \mid (q-1)$. In that case, $a$ is an $r$-th nonresidue iff $a^{\frac{q-1}{r}} \neq 1$ [6]. Using this characterization, the following lemma constructs an $r$-th nonresidue in $\mathbb{F}_q$ given an $r$-th nonresidue in $\mathbb{F}_{q^k}$.
Lemma 2.2 (Projection). Let $r$ be a prime which divides $q - 1$. Then, $\alpha \in \mathbb{F}_{q^k}$ is an $r$-th nonresidue iff $N_{\mathbb{F}_{q^k}/\mathbb{F}_q}(\alpha)$ is an $r$-th nonresidue in $\mathbb{F}_q$.
Proof. We know that $N_{\mathbb{F}_{q^k}/\mathbb{F}_q}(\alpha) = \alpha^{\frac{q^k-1}{q-1}}$. Also, $\alpha \in \mathbb{F}_{q^k}$ is an $r$-th nonresidue iff $\alpha^{\frac{q^k-1}{r}} \neq 1$. Hence, the proof follows from the identity $N_{\mathbb{F}_{q^k}/\mathbb{F}_q}(\alpha)^{\frac{q-1}{r}} = \alpha^{\frac{q^k-1}{q-1} \cdot \frac{q-1}{r}} = \alpha^{\frac{q^k-1}{r}}$, so one side differs from $1$ exactly when the other does.
We can define a multiplicative character $\chi_r(a) := a^{\frac{q-1}{r}}$ of $\mathbb{F}_q^*$. Notice that $\chi_r(a) \neq 1$ iff $a$ is an $r$-th nonresidue in $\mathbb{F}_q$. Multiplicativity follows from the definition, i.e., $\chi_r(ab) = (ab)^{\frac{q-1}{r}} = \chi_r(a)\chi_r(b)$. Since $a^{q-1} = 1$, $\chi_r(a)$ is an $r$-th root of unity. We will denote a primitive $r$-th root of unity by $\zeta_r$.
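Computationally, the character $\chi_r$ is a single modular exponentiation; a short Python sketch of the residuosity test over a prime field (an illustration, not part of the paper):

# chi_r(a) = a^((q-1)/r); a is an r-th nonresidue iff chi_r(a) != 1
def is_rth_nonresidue(a, p, r):
    assert (p - 1) % r == 0
    return pow(a, (p - 1) // r, p) != 1

p, r = 31, 5                                   # r = 5 divides p - 1 = 30
print([a for a in range(2, 12) if is_rth_nonresidue(a, p, r)])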
Since $\mathbb{F}_q^*$ is cyclic and $r \mid q - 1$, we have that $\zeta_r$ exists in $\mathbb{F}_q$. Note that $\zeta_r^i$, $i \in \mathbb{F}_r^*$, are the $(r-1)$ primitive $r$-th roots of unity in $\mathbb{F}_q$.
One of the central algebraic tools used in our analysis is the resultant of two polynomials. Let $f(x) = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_0$ and $g(x) = b_n x^n + b_{n-1} x^{n-1} + \cdots + b_0$ be two polynomials over a field $F$.
We will use the following properties of resultant (for proof see [16,Chap.1]). In fact, the property (3) in Lem.2.4 can be taken as the general definition of resultant, as it makes the resultant efficient to compute even when the base ring is not a field.
Another tool, closely related to the resultant, is the discriminant. For a degree-m polynomial f with leading coefficient a_m, it can be defined as D(f) := (−1)^{m(m−1)/2} a_m^{−1} R(f, f′).
Note that although the resultant (resp. discriminant) is defined in terms of the zeros of the polynomials, it can be computed without knowledge of the zeros. This relationship between the zeros and the coefficients is very useful computationally.
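As a concrete illustration (a sketch of ours using SymPy; we compute over the integers with monic polynomials and only reduce mod p afterwards, which is valid for monic inputs), the resultant and discriminant can be obtained directly from the coefficients:

from sympy import symbols, resultant, discriminant

x = symbols('x')
p = 13
f = x**4 + x + 1            # a monic polynomial, thought of as lying in F_13[x]
g = x**2 + 3*x + 5

res_mod_p = resultant(f, g, x) % p       # R(f, g) reduced mod p
disc_mod_p = discriminant(f, x) % p      # D(f) reduced mod p

# Quadratic character of the discriminant (r = 2), cf. the Stickelberger lemma:
chi2 = pow(int(disc_mod_p), (p - 1) // 2, p)   # 1 -> residue, p-1 -> nonresidue
print(res_mod_p, disc_mod_p, chi2)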
Main results
We will prove the main theorems in this section. We are interested in finding r-th nonresidue in the finite field F q . So we will assume that r | q − 1 in Sec.1.3 and Sec.3.2. Moreover, for r = 2 we can assume that 4|(q − 1), otherwise −1 is a quadratic nonresidue and we are done.
Our first step will be to construct an r-th nonresidue using an irreducible polynomial f of degree divisible by r.
The following theorem finds an r-th nonresidue in F q using f .
This implies that L_{f,r} is an r-th nonresidue in F_{q^d}.
Proof. We know that L_{f,r} ∈ F_{q^d} and ζ_r ∈ F_q. Taking the q^k-th power of L_{f,r} and using the above equation, the claim follows.
Proof of Cor.1.3. Since F p m is specified by an irreducible polynomial of degree m (and we know r| gcd(m, p − 1)), we get an r-th nonresidue by Thm.3.1 if we can find ζ r in F p . The latter can be done using Pila's algorithm based on arithmetic algebraic-geometry [19, Thm.D]. Once we have an r-th nonresidue one gets an r-th root finding algorithm [21,29].
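To illustrate the last step (only as a sketch for the classical r = 2 case; the names z and a are ours), here is how a quadratic nonresidue immediately yields square-root extraction in F_p via Tonelli-Shanks; the r-th root algorithms cited above follow the same pattern:

def sqrt_mod_p(a: int, p: int, z: int) -> int:
    """Square root of a in F_p (p odd prime), given a quadratic nonresidue z."""
    assert pow(a, (p - 1) // 2, p) == 1, "a must be a quadratic residue"
    q, s = p - 1, 0          # write p - 1 = q * 2^s with q odd
    while q % 2 == 0:
        q //= 2
        s += 1
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, t2 = 0, t         # find least i with t^(2^i) = 1
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return r

# Example: 10 is a square mod 13 (6^2 = 36 = 10), and 2 is a nonresidue mod 13.
assert sqrt_mod_p(10, 13, 2) in (6, 7)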
Thm. 3.1 also gives us a way to construct an r-th nonresidue in F_{p^n}, for any n, using an irreducible polynomial of degree divisible by r.
Corollary 3.2. Suppose we are given an irreducible polynomial f ∈ F_q[x] of degree d = rk and ζ_r ∈ F_q, where F_q has characteristic p. Then, we can find an r-th nonresidue in any finite field F_{q′} of characteristic p (assuming r|(q′ − 1)).
Proof. Let F p m be the smallest subfield of F q , with r|(p m − 1). Using Thm. 3.1 & Lem. 2.2 on f , we can find an r-th nonresidue in F p m . Now consider the given field F q ′ with, say, p mℓ elements (since r|q ′ −1, ℓ ∈ N). It has a subfield F ′ of size p m , and so by [15,Thm.1.2], we also get an r-th nonresidue in F ′ , say a. We intend to lift this nonresidue to the bigger field F q ′ ; to do that we consider two cases.
• Case 1: r ∤ ℓ. Then a itself remains an r-th nonresidue in F_{q′}, since a^{(q′−1)/r} = (a^{(p^m−1)/r})^{1+p^m+···+p^{m(ℓ−1)}} and the exponent is ≡ ℓ ≢ 0 (mod r); the last inequality holds because r ∤ ℓ.
• Case 2: r | ℓ. Then the irreducible polynomial that defines F_{p^{mℓ}} has degree divisible by r, and on it we can apply Thm.3.1 to get an r-th nonresidue in F_{q′}.
The following lemma relates N F q d /F q (g) to the resultant R(f , g) when f is irreducible.
Lemma 3.3 (Resultant as a norm). If f is a monic irreducible polynomial of degree d in F_q[x] with root α ∈ F_{q^d}, then for any g ∈ F_q[x] we have R(f, g) = N_{F_{q^d}/F_q}(g(α)).
Proof. We know that the roots of the polynomial f are Z(f) = {α, α^q, ···, α^{q^{d−1}}}. Using the definition of the resultant, R(f, g) = ∏_{i=0}^{d−1} g(α^{q^i}) = ∏_{i=0}^{d−1} g(α)^{q^i} = N_{F_{q^d}/F_q}(g(α)).
Using Thm. 3.1 and Lem. 3.3, we immediately get the following information about the character of the resultant of the Lagrange resolvent.
Corollary 3.4 (Resultant of resolvent). In the notation of Thm. 3.1, R(L_{f,r}, f) is an r-th nonresidue in F_q; in particular, χ_r(R(L_{f,r}, f)) = ζ_r^{−1}.
3.2. From a reducible polynomial f - Proof of Thm.1.2. We will now look at the case of reducible polynomials. Thm.1.2 shows that a reducible polynomial satisfying Property 1.1 will give us an r-th nonresidue. Note that an irreducible polynomial of degree divisible by r trivially satisfies Property 1.1. We know that f satisfies Property 1.1. So, the distinct-degree factorization guarantees a factor h_i = f_1 f_2 ··· f_{r′} of f such that the f_j are irreducible of the same degree d = rk and r ∤ r′.
Proof of Thm.1.2. For convenience, we shall denote f_1 f_2 ··· f_{r′} as f from now on. Define g(x) to be the Lagrange-resolvent-inspired polynomial. We will show that R(f, g mod f) is an r-th nonresidue in F_q.
Here g mod f refers to some representative in F q d [x]. We will now show that the resultant is independent of the representative chosen.
Claim 1. Let f, g be two polynomials over any field, with f monic. Then R(f, g mod f) = R(f, g).
Proof. Let g′ := g mod f be a representative. Using the definition of the resultant, R(f, g′) = ∏_{α ∈ Z(f)} g′(α) = ∏_{α ∈ Z(f)} g(α) = R(f, g).
Clm.1 implies that R(f, g mod f) = R(f, g) = ∏_{i=1}^{r′} R(f_i, g). Since χ_r is multiplicative, we have χ_r(R(f, g mod f)) = ∏_{i=1}^{r′} χ_r(R(f_i, g)) = ζ_r^{−r′}. The last step follows from Cor. 3.4 and the fact that the f_i are irreducible. Since r ∤ r′, we get χ_r(R(f, g mod f)) ≠ 1 and hence R(f, g mod f) is an r-th nonresidue in F_q.
The last statement of the theorem (about fields of characteristic p) follows in the same way as in the proof of Cor. 3.2.
The time complexity is straightforward and further discussed in Sec.4.
3.3. Constructing fields - Proof of Thm.1.4. The result in the previous subsection required the existence and knowledge of ζ_r. Now we would like to eliminate those assumptions, hence we will remove the assumption r | q − 1. First, we will show that if we have a reducible polynomial f satisfying Property 1.1 then we can construct F_{q^r} (equivalently, we can construct an irreducible polynomial of degree r). The concepts that we will use are inspired from the proof of [15, Thm.5.2]. The starting idea is to work with a "virtual" ζ_r, i.e. define the ring F_q[ζ] := F_q[Y]/⟨φ_r(Y)⟩, where φ_r(Y) := Σ_{0≤i≤r−1} Y^i, and let ζ be the residue class of Y mod φ_r(Y) in that ring. Let e be the smallest positive integer such that r | q^e − 1, in other words, the multiplicative order of q modulo r. Then φ_r(Y) completely splits over F_{q^e} as φ_r(Y) = ∏_{i=1}^{r−1} (Y − η^i), where η ∈ F_{q^e} is a primitive r-th root of unity, but we may not have access to η and in general not even to F_{q^e}. So we will do computations over the ring F_q[ζ] and try to construct the field F_{q^r}.
Clearly, ζ has order r in the unit group F_q[ζ]^*. For each integer a ∈ F_r^* there is a unique ring automorphism ρ_a of F_q[ζ] that fixes F_q and maps ζ → ζ^a. The set {ρ_a | a ∈ F_r^*} =: ∆ forms a group (under map composition) that is isomorphic to F_r^*. If we consider the elements of the ring fixed under ∆ then we get back F_q. Recall that f = f_1 f_2 ··· f_{r′}, with the f_i's being irreducibles of degree d = rk and r ∤ r′. When we move to F_{q^e}, each f_i factors into ℓ := gcd(k, e) = gcd(d, e) many irreducibles, each of degree d/ℓ = kr/gcd(k, e) =: k′r. Since r ∤ e, we have that r ∤ ℓ.
Our ring F_q[ζ] is a semisimple algebra that decomposes as F_q[ζ] ≅ F_{q^e} × ··· × F_{q^e} ((r − 1)/e copies), and the proof given in Sec.3.2 holds simultaneously over each of the component fields (≅ F_{q^e}) of F_q[ζ]. Hence, simply by Chinese remaindering, we get the equality
R(f, g mod f)^{(q^e−1)/r} = ζ^{−r′ℓ},  (2)
where, as expected, g(x) is the Lagrange resolvent over F_q[ζ] analogous to the one used in Sec.3.2. (Also, note that we are now computing the mod and resultant operations over the base ring F_q[ζ].)
Teichmüller subgroup. Let r′′ be an integer representative of (r′ℓ)^{−1} mod r. Let q^e − 1 = u·r^t with r ∤ u and t ≥ 1. Define δ := R(f, g mod f)^{u r′′}. Then, by Eqn.2, we have δ^{r^{t−1}} = ζ^{−1}. In particular, δ has order r^t in F_q[ζ]^*. Define a function ω that maps any integer a to a^{r^{t−1}} mod r^t. Note that, by binomial expansion, (a + r)^{r^{t−1}} ≡ a^{r^{t−1}} mod r^t. In other words, the value of ω(a) only depends on a mod r. Now we come to the key definition, inspired from [15]; the relevant properties can be easily verified. At this point recall the definition of the Teichmüller subgroup w.r.t. F_q: {ǫ ∈ F_q[ζ]^* | ǫ has r-power order, and ∀ρ_a ∈ ∆, ρ_a(ǫ) = ǫ^{ω(a)}}.
By the properties above and invoking [15, Thm.5.2], we can construct the field F_{q^r}; iterating the construction yields F_{q^m} for any r-power m, which proves Thm.1.4.
Algorithm
For concreteness, we state our algorithm (Algo.1) for constructing r-th nonresidue in this section. The proof of correctness for this algorithm follows directly from Thm.1.2.
The input to this algorithm is a polynomial f (x) ∈ F p [x] satisfying Property 1.1, ζ r ∈ F p , and the finite field F q ′ of characteristic p where we want to construct r-th nonresidue. The algorithm outputs an r-th nonresidue in F q ′ .
Note that, since f (x) satisfies Property 1.1, wlog (by the distinct degree factorization) f = f 1 f 2 · · · f r ′ such that, • f i 's are irreducible of degree d = rk, and • r ∤ r ′ .
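Property 1.1 is used here only through the consequence just stated (some degree class d = rk whose number of irreducible factors r′ is not divisible by r). Before stating the algorithm, here is a sketch (ours; the example polynomial is hypothetical, and we assume a squarefree f) that tests exactly that consequence via a full factorization with SymPy, rather than the distinct-degree factorization that the algorithm itself would use:

from collections import Counter
from sympy import symbols, Poly, factor_list

def has_stickelberger_degree_class(f_expr, p: int, r: int) -> bool:
    x = f_expr.free_symbols.pop()
    _, factors = factor_list(f_expr, modulus=p)
    degs = Counter(Poly(g, x).degree() for g, mult in factors for _ in range(mult))
    # some degree d with r | d whose factor count r' satisfies r does not divide r'
    return any(d % r == 0 and cnt % r != 0 for d, cnt in degs.items())

x = symbols('x')
print(has_stickelberger_degree_class(x**6 + x + 4, 7, 2))   # hypothetical example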
Algorithm 1 BIMS
where v := p n/r and h(x) is the minimal polynomial of F q ′ over F p .
Time complexity analysis. One can refer to [25] for basic arithmetic operations. The polynomial computation in Step 2 takes time Õ(rn log p log q′) using repeated squaring. Similarly, Step 5 can be done in Õ(r^2 k log p · deg(f)). The most expensive part of the algorithm is the resultant computation in Step 6, which can be done in time Õ(deg(f)^{ω_0} log p), where ω_0 < 2.373.
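As a rough illustration of the repeated-squaring step (a naive schoolbook sketch of ours, not the quasi-linear routines the stated bounds assume; polynomials are coefficient lists, low degree first, and f is monic):

def polymulmod(a, b, f, p):
    """Multiply polynomial coefficient lists a*b modulo (f, p)."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for i in range(len(prod) - 1, len(f) - 2, -1):   # reduce modulo monic f
        c = prod[i]
        if c:
            for j, fj in enumerate(f[:-1]):
                prod[i - len(f) + 1 + j] = (prod[i - len(f) + 1 + j] - c * fj) % p
            prod[i] = 0
    return prod[: len(f) - 1]

def x_powmod(e, f, p):
    """Compute x^e mod (f, p) by repeated squaring."""
    result, base = [1], [0, 1]
    while e:
        if e & 1:
            result = polymulmod(result, base, f, p)
        base = polymulmod(base, base, f, p)
        e >>= 1
    return result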
5. Some special case applications
5.1. The special case of r = 2. Notice that for r = 2, we have ζ_2 = −1 available in any finite field of odd characteristic. Thus, using Thm.1.2 and a polynomial f satisfying Property 1.1, we can construct a quadratic nonresidue. The same can also be calculated using the Stickelberger lemma directly.
A striking difference in the case of r = 2 is that, by the Stickelberger lemma (Eqn.1), the discriminant itself is the quadratic nonresidue. This implies that over even-degree finite field extensions, the derivative of the minimal polynomial of the extension is a quadratic nonresidue. We formally state this property below.
Proof. Using the Stickelberger lemma (Eqn.1), we know that the discriminant is a quadratic nonresidue in F_q. Since the discriminant equals (−1)^{d(d−1)/2} R(f, f′), using Lem.3.3 we get that N_{F_{q^d}/F_q}(f′(α)) is a quadratic nonresidue in F_q, and using Lem.2.2 the claim follows.
5.2. Cases for which ζ_r is known. Since our first main theorem, Thm.1.2, requires ζ_r, in this section we state some known methods to construct it. One of the most significant results on this is by Pila [19]. He generalized Schoof's [20] elliptic curve point-counting algorithm to Fermat curves, and as an application gave an algorithm for factoring the r-th cyclotomic polynomial over F_p. The algorithm is deterministic and runs in time polynomial in log p for a fixed r. If r | p − 1 then the factorization of the r-th cyclotomic polynomial will give us ζ_r ∈ F_p.
A limitation of Pila's algorithm is that it can give us ζ r only in prime fields. Below we state few results that can give ζ r in extensions of prime fields.
The following theorem by Bach, von zur Gathen and Lenstra [7] gives an elegant way to construct ζ r ∈ F q using "special" irreducible polynomials.
Theorem 5.2. [7, Thm.2] Given two prime numbers p and r, the h = ord r (p), the explicit data for F p h ; and given for each prime ℓ|(r − 1) but not dividing h, an irreducible polynomial g ℓ of degree ℓ in F p [X], there is a deterministic poly(rh log(p))-time algorithm to construct a primitive r-th root of unity in F p h .
We immediately get the following. There are also some other methods for finding ζ r ∈ F q that are based on the factorization pattern of q − 1. We present one such result and its proof.
Proof. The number of elements whose order is not a multiple of r is t. So if we take t + 1 nonzero elements of F_q, this will give us an element a whose order is a multiple of r. Then a^t is an element of r-power order. Let ord(a^t) =: r^s, where s ≥ 1. Finally, a^{t·r^{s−1}} is an element of order r in F_q.
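A minimal sketch of this argument for a prime field (the function name and the linear scan over candidates are ours; the t + 1 bound is where the scan length comes from):

def primitive_rth_root(p: int, r: int) -> int:
    t = p - 1
    while t % r == 0:
        t //= r
    for a in range(2, p):                 # at most t+1 candidates are needed
        b = pow(a, t, p)                  # b has r-power order
        if b != 1:
            while pow(b, r, p) != 1:      # reduce the order down to exactly r
                b = pow(b, r, p)
            return b
    raise ValueError("r must divide p - 1")

# Example: p = 13, r = 3; the primitive cube roots of unity mod 13 are 3 and 9.
assert primitive_rth_root(13, 3) in (3, 9)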
5.3. Necessary condition for the irreducibility of a polynomial. Our analysis provides a necessary condition for checking the irreducibility of a polynomial.
Lem.5.5 for r = 2 is used by von zur Gathen in his paper to prove properties about irreducible trinomials [30,Cor.3]. We hope that this generalized lemma gives conditions that can help construct additional polynomial families.
6. Some conjectures
6.1. Finding polynomials satisfying Property 1.1. A natural question that arises from our analysis is: how can one construct a polynomial satisfying Property 1.1? An approach would be to come up with a polynomial family F such that at least one of the polynomials in F satisfies Property 1.1. We leave the construction of such a polynomial family as an open question.
This question for r = 2 will also be very interesting. For r = 2, if we can construct a polynomial satisfying Property 1.1 then its discriminant will be a quadratic nonresidue by Stickelberger's lemma.
A well-studied polynomial family for such properties is that of trinomials. Trinomials are univariate polynomials with sparsity three, i.e. of the form x^n + a·x^k + b. An elegant property of trinomials is the closed-form expression for their discriminant; thus, it can be computed efficiently (even if the degree of the trinomial is exponential).
Theorem 6.1 (Swan [27]). Let n > k > 0, let d = gcd(n, k), and write n = n_1 d, k = k_1 d. Then the discriminant of x^n + a·x^k + b equals (−1)^{n(n−1)/2} · b^{k−1} · E^d, where E = n^{n_1} b^{n_1−k_1} + (−1)^{n_1+1} (n − k)^{n_1−k_1} k^{k_1} a^{n_1}.
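A small sketch (ours) comparing the closed form above with a direct SymPy computation, and reading off the quadratic character mod p; the parameter values are arbitrary:

from math import gcd
from sympy import symbols, discriminant

def swan_discriminant(n: int, k: int, a: int, b: int) -> int:
    d = gcd(n, k)
    n1, k1 = n // d, k // d
    E = n**n1 * b**(n1 - k1) + (-1)**(n1 + 1) * (n - k)**(n1 - k1) * k**k1 * a**n1
    return (-1)**(n * (n - 1) // 2) * b**(k - 1) * E**d

x = symbols('x')
n, k, a, b, p = 7, 2, 3, 5, 13
D_closed = swan_discriminant(n, k, a, b)
D_direct = discriminant(x**n + a*x**k + b, x)
print(D_closed, D_direct)                      # the two values should agree (Thm. 6.1)
print(pow(D_closed % p, (p - 1) // 2, p))      # 1 -> residue mod p, p-1 -> nonresidue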
Trinomials are used to construct irreducible polynomials in [30,27]. Based on our experiments we give the following conjecture.
Conjecture 6.2. The following polynomial family has at least one polynomial that satisfies Property 1.1 for r = 2. We leave the proof, or a refutation, of this conjecture as an open question.
6.2. Weaker Generalized Riemann Hypothesis. In 1952, Ankeny [3] proved that if the Generalized Riemann Hypothesis is true then the least quadratic nonresidue in F_p is O(log^2 p). The Generalized Riemann Hypothesis (GRH) says that all the non-trivial roots ρ of the Dirichlet L-function lie on the line Re(s) = 1/2, but what if we consider a weaker form of it? Instead of saying that all the nontrivial roots lie on Re(ρ) = 1/2, we "merely" conjecture that all the nontrivial roots lie in a wider strip [1/2 − ε, 1/2 + ε], for a constant ε.
Conjecture 6.3 (Weak GRH). Let χ be a Dirichlet character, i.e. χ : F_p^* → C^*. There exists a constant 1/2 > ε ≥ 0 such that the Dirichlet L-function L(s, χ) = Σ_n χ(n) n^{−s} has all its nontrivial roots in the strip 1/2 − ε < Re(s) < 1/2 + ε.
We will now use some known facts from analytic number theory; for detailed proofs of these facts see [17, Chap.7]. Let Λ be the von Mangoldt function and ζ(s) be the Riemann zeta function.
Lemma 6.4 (Bounds for ψ(x, χ)). Let ψ(x, χ) = Σ_{i≤x} Λ(i)χ(i) and let χ be a primitive Dirichlet character of F_p^*. Then ψ(x, χ) can be bounded in terms of the nontrivial roots ρ = σ + iγ of the Dirichlet L-function L(s, χ).
Lemma 6.5 (Bounds for ψ(x)). Let ψ(x) = Σ_{i≤x} Λ(i). Then ψ(x) − x can be bounded in terms of the nontrivial roots ρ = σ + iγ of the Riemann zeta function ζ(s).
We will now prove bounds on ψ(x) and ψ(x, χ) assuming Weak GRH.
Using this lemma we will bound the least r-th nonresidue in F p . This elementary analysis, assuming Weak GRH, has remarkable consequences. Ankeny's result has been used in derandomizing many computational problems under the assumption of GRH. Some of them are primality testing [6,Chap.9], rth root finding [2], constructing irreducible polynomials over finite fields [1] and cases of polynomial factoring over finite fields [7,1]. (Also, see [4,14] and the references therein.) Our result implies that, for derandomizing these problems, proving the Weak GRH suffices.
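As a small empirical illustration of the Ankeny-type bound (a sketch; the constant and the prime range below are arbitrary choices of ours, not proven values):

from math import log
from sympy import primerange

def least_qnr(p: int) -> int:
    n = 2
    while pow(n, (p - 1) // 2, p) == 1:
        n += 1
    return n

for p in primerange(10**6, 10**6 + 200):
    assert least_qnr(p) <= 2 * log(p) ** 2   # holds comfortably in this range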
Conclusion
We give a significant generalization of the Stickelberger lemma (Eqn.1): we can construct an r-th nonresidue in F_q given ζ_r ∈ F_q and a polynomial f satisfying the Stickelberger Property 1.1. Using this, we also give an algorithm to find r-th roots in F_{q^m} if r = O(1) and r | gcd(m, p − 1). An interesting open question here is whether one can weaken the Stickelberger property (e.g., can the non-divisibility-by-r condition be removed?).
Our result, along with some known results on finding ζ_r ∈ F_q, gives us some interesting applications. It seems that finding ζ_r ∈ F_q is an inherent requirement in our analysis. We leave removing the requirement of ζ_r from our algorithm as an open question; we have been able to achieve this when the goal is only to construct a degree-r irreducible polynomial (given f) instead of an r-th nonresidue.
The metaverse in medicine
Abstract The metaverse is an alternative digital world, accessed by means of dedicated audiovisual devices. In this parallel world, various forms of artificial intelligence meet, including individuals in the form of digital copies of real people (avatars) able to interact socially. The metaverse may be used in medicine in many different ways. The possibility of performing surgery with thousands of miles separating the patient from the surgeon, who could also visualize the patient's clinical data, including diagnostic images, in real time, is obviously very appealing. It would also be possible to perform medical treatments and to apply pharmacological protocols on human avatars clinically similar to the patients, thus observing treatment effects in advance and significantly reducing the duration of clinical trials. The metaverse may prove an exceptional educational tool, offering interactive digital lessons that allow users to dissect and study an anatomical apparatus in detail, to navigate within it, not only to study it but also to follow the evolution of a pathological process, and to simulate surgical or medical procedures in advance on virtual patients. However, while artificial intelligence is now an established reality in clinical practice, the metaverse is still in its initial stages, and further developments are expected before its potential usefulness and reliability can be assessed.
Introduction
In 1992, for the first time, Neal Stephenson in the novel 'Snow Crash' introduced the term 'metaverse', meaning with it a virtual reality (VR) going beyond the physical reality, that is, an alternative virtual world in which individuals can create activities as they like and share them with other users. 1 The metaverse therefore appears as a parallel 3D reality located in the network, which can be accessed through dedicated technological tools such as glasses or audiovisual devices. It is characterized by a shared collective virtual space, based on social interaction and created by the convergence of physical reality and virtually augmented digital reality. Real individuals can express their personalities and habits in the metaverse, using a digital alternative identity projected into an environment and context created by the convergence of the virtual world and the real world. [1][2][3] The metaverse is based on four technologies: VR, augmented reality (AR), mixed reality (MR), and extended reality (XR). Virtual reality is a technology service that allows users to experience a real-life environment in a virtual world created by digital devices; AR is a technological service that provides an environment in which a virtual object represented in 2D or 3D interacts with a real space; MR is a technology service that combines information from both the real and the virtual world to create a virtual space where the two worlds are combined; XR is a technology service that implements a concept that includes VR, AR, and MR, as well as any other form of reality that will appear in the future. Extended reality is a generic technology that implements the metaverse and is therefore considered the main key to convergence with industry (and with the healthcare industry in particular) to build a new social and industrial ecosystem. [2][3][4]
From artificial intelligence to the metaverse
In order to understand the concept of the metaverse and how it differs from previous forms of technology, it is essential to consider the role that the 'internet of things' (IoT) plays in it. 2,[4][5][6] The concept of IoT consists in the extension of the internet to the world of concrete objects and places, which acquire their own autonomy and digital identity in order to be able to communicate with other objects on the network and thus be able to provide services to users. The objects (the 'things') thus become recognizable and acquire intelligence thanks to the fact of being able, on the one hand, to communicate data about themselves and, on the other, to access aggregate information from other objects. Thus, based on the information received and/or sent via the network, the alarm clock will sound earlier in the event of rain or traffic problems, the accessories worn by athletes will be able to transmit times and performances, speed, and distance travelled, to compete in real time with other athletes in different areas of the world, or intelligent medicine containers will be able to warn the patient or family members if they forget to take the medicine. All objects can acquire an active role, thanks to the connection to the network, and participate in the life of the individual, hence the introduction of the so-called 'smart' objects, which are characterized by certain properties such as identification, connection, localization, the ability to process data, and the ability to interact with the external environment.
For decades, the use of tools and appliances has supported people in their daily lives (mobile phones and computers). The difference is that now, these tools are 'smart', that is they have a certain degree of autonomy of functions and interconnection with reality.
The goal of the IoT is to make the electronic world draw a map of the real one, giving an electronic identity to things and places in the physical environment. Objects and places equipped with radio frequency identification (RFID) labels or QR codes communicate information on the web or to mobile devices such as Personal Digital Assistants, tablets, or mobile phones. In other words, the IoT represents the bridge between the real world and the metaverse, or the tool with which to create the metaverse and from which to find the information necessary for its continuous updating.
The metaverse is therefore configured as a digital world still partially unexplored or in any case to be composed progressively, where people will be able to navigate and move through reproductions, also generated in their likeness (avatars and digital twins), more or less faithful graphic representations of users, able to interact in real time with other digital people, each with their own individuality. A dimension therefore where VR, AR, artificial intelligence, IoT, and various other technologies meet and complement each other to articulate physical and digital worlds.
The metaverse and medicine
Among the fields of application of the metaverse, medicine is a field of great relevance, and the term medical IoT (MIoT) represents the artificial model applied to medicine, which is facilitated by particular equipment such as AR and VR glasses. 2,[4][5][6][7] The recent coronavirus disease 2019 (COVID-19) pandemic has greatly favoured the diffusion and technological development of increasingly sophisticated 'remote' telemedicine. Due to the need to maintain social distancing, people were forced to switch from the direct doctor-patient relationship to an electronic relationship based mostly on videoconferences. This solution can certainly be optimal for those consultations that do not strictly require a physical examination in the presence; however, the lack of physical contact between doctor and patient can reduce the empathy that commonly develops in a traditional medical examination. 8,9 The metaverse can partially overcome this barrier in the context of a 3D reality in which visual and auditory information is combined with tactile sensations thanks to particular devices such as gloves with appropriate sensors. Furthermore, thanks to AR, the physician simply through his smart glasses would be able to visualize, within his field of vision, all the patient's clinical and instrumental data, diagnostic images, and complete electronic records, with no need to get such information from the computer. Furthermore, the system will be able to highlight errors during the diagnostic-therapeutic process or, thanks to specific smart devices (mobile phone, smartwatch, and vital parameters detectors), it would be able to signal the possible inadequate adherence of the patient to the assigned therapy.
In various contexts, MIoT has proven effective in serving the patient community. A case in point is the Asthma Prevention Application (AHA), developed in the USA, which was designed to conduct healthcare research on a large scale and provide real-time monitoring of air pollution. Based on the analysis of patients' electronic asthma diary data, combined with atmospheric data, this application can predict acute asthma attacks, contributing to primary and secondary prevention of the disease. 10 Another example of the MIoT application is provided by the Japanese Toshiba, which has developed an artificial intelligence device made up of wrist sensors and a palmtop, able to analyse and monitor the user's health, activities, and personal habits on a daily basis, providing reminders and advice for an appropriate healthy diet and regular exercise, tailored to the specific individual. Based on the characteristics of the arterial pulse, movement, heart rate, and electro-dermal activity, artificial intelligence was found to play a key role in making behavioural changes and in reducing the risk of lifestyle diseases. The software achieved 90% accuracy in detecting user activities such as eating and exercising. 11 The taking of digital care paths to extremes as a physician-patient interaction tool led to create virtual hospitals in the metaverse, such as the Hospital Alfa in the virtual city called Aimedis Health City, where doctors and patients of different origins and nationalities meet and interact, sharing their experience and knowledge.
Currently, in the 'real world', efforts of the health authorities are aiming at concentrating resources (i.e. technologies and skills) in reference and highly specialized centres. This centralization of care can generate difficulties for patients (especially the elderly and frails with little autonomy) who live in peripheral areas and are geographically distant from the reference centre for excellence of medical care. In this context, the MIoT can be a valid support to the doctor's activity, through its three fundamental characteristics: complete perception, reliable transmission, and intelligent processing.
The possibility of a VR integrated by intelligent medical devices capable of reconstructing a faithful avatar or 'digital twin' of the patient (a twin virtual copy of the patient), supported also by current diagnostic tools (including computed tomography and positron emission tomography), may at least partially compensate the lack of a direct doctor-patient relationship.
Another application of the metaverse in medicine was adopted in China since 2018, where an MIoT model was used for lung cancer screening campaign. The innovative screening system compares, via a network of suitable processors, the tomographic image of the subcentimetre nodules identified in a patient with those of an image archiving system. In this way, an effective screening model was created based on radiological evaluation in real time and on simultaneous comparison with previous exams. By developing PNapp5A, an application based on the five-step assessment of pulmonary nodules, it was possible to improve the early diagnosis of pulmonary nodules using big data-based management technologies. [12][13][14] This artificial intelligence system has been adopted in China in 900 centres adhering to the Chinese Alliance Against Lung Cancer (CAALC) and, according to data from Fudan University -Zhongshan Hospital, has proved to be effective in the early diagnosis of pulmonary nodules, lowering the average age of diagnosis from 63 to 50 years, and obtaining that of all patients undergoing surgery, 60.3% had undergone early diagnosis with this model. Based on this experience, Dr Chunxue Bei introduced the term 'human-computer multidisciplinary team', emphasizing the interaction and collaboration between the physician and the artificial intelligence, the latter considered almost an external entity with its identity and autonomy. This new approach facilitated the standardization of screening, diagnosis, and treatment of early-stage lung cancer and complex cases of nodules of an indeterminate nature. 2,12,13 From these first experiences, we anticipate that the applications of the metaverse in medicine can be multiple with an apparently unlimited potential.
In the field of rehabilitation and psychology and psychotherapy, perspective for applying technologies related to the metaverse seems very real. Techno Village Rehaveware is a VR rehabilitation service focused on recovering impaired motor function in patients with brain diseases such as stroke, Parkinson's disease, and brain surgery. Through the interaction with objects belonging to the IoT typology (smart balls or globes), it is expected that the patient's motivation to exercise would increase, as well as the clinical results. In the future, thanks to the possibility to create virtual entities parallel to the real world, its use is expected to be extended to the psychological treatment of patients with dementia, as well as children and adolescents suffering from family violence or other severe mental disorders. 2,3 Another field of application of artificial intelligence extended to the metaverse is surgery. In Lisbon, at the Breast Unit of the Champalimaud Foundation, the Portuguese surgeon Dr Pedro Gouveia and his Spanish colleague Dr Rogelio Andrés-Luna, thanks to the metaverse, performed an operation simulating of being in the same operating room, despite of being 900 km away. Dr Gouveia wore special AR glasses, namely Hololens. 2 Not only could he see the patient in front of him, but he also had patient's diagnostic images and clinical information projected onto the appropriate lenses. In this context, 5G technology proved to be essential, capable of overcoming the limits (such as latency time) of 4G technology.
Artificial intelligence and VR appear to be very promising in the field of surgery. Not only would they allow interventions to be performed remotely, but they would allow operators to be completely immersed in the intervention itself. In this sense, AR would play a fundamental role. The surgeon will wear smart glasses able to inform him in real time of any vital parameter changes and to provide him with all the information necessary to improve his performance, without the need to take his eyes off the patient and the operating field.
Moreover, the possibility of accessing the VR would also represent a valid tool in surgical planning. Thus, it offers the possibility of performing not only 3D reconstructions of the target organ but, providing a 4D vision, also the possibility of navigating inside of it and analyse any possible anomaly. It would also be possible to simulate the operation in VR on an avatar identical to the patient to be operated on, acquiring experience before performing the real operation in the operating room. 1,[2][3][4]14 Perspectives are also encouraging in the field of clinical research. Thanks to the IoT, through intelligent devices, we might be able to reproduce virtual patients, with characteristics similar to real ones, to treat with innovative therapeutic protocols, considerably speeding up achievement of results of clinical trials and thus reducing time and costs of trials.
The metaverse in medical education and training
The role of the metaverse in healthcare education should also be emphasized. Through VR and through the use of intelligent objects, a series of very useful possibilities in teaching are available. There is the possibility to perform anatomical studies on virtual models of real organs, being able to dissect them, to feel the consistency of the tissues, to evaluate the anatomical relationships with the surrounding structures, up to enter into their interior to better evaluate their anatomical characteristics. It is possible to perform surgical procedures in conditions equal to real ones or to simulate clinical or diagnostic activities on virtual models. In the same way, simulation models of physiological or pathological mechanisms can be recreated. For example, the BRM all-in-one machine adopting holographic emulation technology was used to show students the mechanism of cigarette smoke inducing lung cancer. This pioneering pedagogical practice produced sensational effects as students observed closely the alveolar damage caused by smoking and its relationship to the onset of lung cancer. By means of virtual patients with characteristics similar to real patients, it is possible to safely carry out treatment paths, from the anamnesis, to the diagnostic procedure, to the patient's therapy, representing a valuable teaching tool for the student to approach the clinical situation in total security and still gaining valuable experience.
The possibility of sharing studies with colleagues located in different parts of the world favours group activity and teamwork in an interactive way, from the reconstruction of anatomical systems to the collegial discussion of clinical cases. Sharing experiments gave very positive results. Furthermore, students can be trained to quickly learn various therapeutic techniques as if they were present in clinical practice, such as magnetic navigation, or even procedures such as endoscopy or intubation. Naturally, as already highlighted, the training of young surgeons is facilitated by the possibility to perform simulated operations on virtual patients and to guide the movements of a student even remotely. 1,14 The metaverse would also favour direct experiences regarding the state of patients. In 2006, a virtual psychiatric unit was created, with the aim of simulating the virtual experience of visual and auditory hallucinations, to give trainee doctors the opportunity to learn more about these pathological phenomena by experiencing them directly. 15
Conclusions
The connection between artificial intelligence (in its varied forms) and the metaverse is very complex, and often various models of artificial intelligence can be the basis and the essential components of an innovative model of the metaverse. Transition from the various forms of artificial intelligence to the metaverse is not immediate. The potential of the metaverse appears unlimited, but to reach its effective utility and reliability in the field of medicine, future developments are expected.
Funding
None declared.
Loss of Function Glucose-Dependent Insulinotropic Polypeptide Receptor Variants Are Associated With Alterations in BMI, Bone Strength and Cardiovascular Outcomes
Glucose-dependent insulinotropic polypeptide (GIP) and its receptor (GIPR) are involved in multiple physiological systems related to glucose metabolism, bone homeostasis and fat deposition. Recent research has surprisingly indicated that both agonists and antagonists of GIPR may be useful in the treatment of obesity and type 2 diabetes, as both result in weight loss when combined with GLP-1 receptor activation. To understand the receptor signaling related with weight loss, we examined the pharmacological properties of two rare missense GIPR variants, R190Q (rs139215588) and E288G (rs143430880) linked to lower body mass index (BMI) in carriers. At the molecular and cellular level, both variants displayed reduced G protein coupling, impaired arrestin recruitment and internalization, despite maintained high GIP affinity. The physiological phenotyping revealed an overall impaired bone strength, increased systolic blood pressure, altered lipid profile, altered fat distribution combined with increased body impedance in human carriers, thereby substantiating the role of GIP in these physiological processes.
INTRODUCTION
Glucose-dependent insulinotropic polypeptide (GIP) is a gut-derived hormone that is secreted from the enteroendocrine K cells in the proximal part of the small intestine in response to nutrient intake (Baggio and Drucker, 2007; Sonne et al., 2014). GIP, along with a related hormone, glucagon-like peptide-1 (GLP-1), constitute the incretin hormones that regulate postprandial glucose tolerance by stimulating insulin release from pancreatic β-cells (Gasbjerg et al., 2020a). In contrast to GLP-1, GIP has been demonstrated to enhance glucagon secretion in a glucose-dependent manner in healthy individuals; thus, at low and normal blood glucose levels GIP stimulates glucagon secretion from α-cells, but fails to do so at higher blood glucose levels (Christensen et al., 2011, 2015). GIP has also been ascribed a role in mediating fat deposition (Asmar et al., 2016). The GIP receptor (GIPR) belongs to the class B1 G protein-coupled receptor (GPCR) superfamily and signals through Gαs/adenylyl cyclase activation, leading to increased cyclic adenosine monophosphate (cAMP) concentrations (Holst, 2019).
The GIPR is not only expressed in pancreatic islet cells and adipocytes but has a wide expression profile including, but possibly not limited to, the heart, spleen, lung, central nervous system, and thyroid cells (Baggio and Drucker, 2007). Additionally, the GIP system is important for bone metabolism through GIPR expression on osteoblasts and osteoclasts (Bollag et al., 2000;Zhong et al., 2007;Skov-Jeppesen et al., 2021) through which GIP inhibits bone resorption as well as promotes bone formation (Tsukiyama et al., 2006;Zhong et al., 2007;Berlier et al., 2015;Skov-Jeppesen et al., 2019). Even though it is now getting recognized that GIP/GIPR is involved in bone metabolism, it is largely unknown how genetic alterations, influencing GIPR signaling, affect bone growth and resorption. The potential impact of the GIP-GIPR axis in other organ systems is similarly underinvestigated. A recent review emphasized the potential importance of GIP/GIPR in cardiovascular diseases, although details of the operation of this axis in humans are virtually unknown (Heimbürger et al., 2020).
GIP is associated with the pathophysiology of obesity and type 2 diabetes mellitus (T2D) and have therefore been the focus of therapeutic interest for many years. It is currently debated whether to use GIPR agonists or -antagonists in combination with GLP-1 agonists to treat obesity and T2D, as both combinations show promising results (Holst and Rosenkilde, 2020;Killion et al., 2020;Min et al., 2020). Clearly, there is a need to better understand the biology of the GIPR system to be able to exploit its pharmacological potential.
Genome-wide association studies (GWAS) have revealed that common variants in the GIPR are associated with obesity (Vogel et al., 2009;Speliotes et al., 2010) and impaired glucose-and bone mineral homeostasis (Sauber et al., 2010;Saxena et al., 2010;Torekov et al., 2014). With the exemption of rs1800437 causing the amino acid change E354Q, which leads to longterm functional impairment due to its distinct ligand binding kinetics, signaling and internalization profile (Kubota et al., 1996;Almind et al., 1998;Fortin et al., 2010;Mohammad et al., 2014;Gabe et al., 2019), the GIPR variants have not been functionally characterized. In a recent exome-wide association study designed to discover protein-altering variants associated with body mass index (BMI), two rare variants in GIPR were identified (Turcot et al., 2018). These missense variants result in amino acid changes, R190Q (rs139215588) and E288G (rs143430880). From gnomAD (Karczewski et al., 2020), the frequencies of R190Q and E288G in Europeans are 0.00093 and 0.0017, corresponding to ∼1 in 500 and ∼1 in 300 being heterozygous carriers, respectively. For each variant, heterozygote carriers of the rare allele had a ∼0.15 SD lower BMI compared to non-carriers, corresponding to an effect of ∼0.65 kg/m 2 . Interestingly, one middle-aged woman carried both rare GIPR mutations in heterozygote form and she weighed ∼11 kg less than the average non-carrier of the same height (Turcot et al., 2018).
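As a quick arithmetic check of the carrier estimates quoted above (a sketch of ours; it assumes Hardy-Weinberg proportions):

# Heterozygous-carrier frequency from a minor allele frequency f under
# Hardy-Weinberg equilibrium: P(het) = 2*f*(1-f).
for name, f in [("R190Q", 0.00093), ("E288G", 0.0017)]:
    het = 2 * f * (1 - f)
    print(f"{name}: ~1 in {round(1/het)} heterozygous carriers")
# R190Q: ~1 in 538 (quoted as ~1 in 500); E288G: ~1 in 295 (quoted as ~1 in 300)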
Here we combine molecular pharmacological phenotyping with the physiological consequences of carrying these two rare GIPR variants. First, we investigated experimentally the GIP receptor binding and activation properties of the two variants, and secondly, we linked our findings to human physiology by assessing summary data of previously published studies and online portals.
Human GIP(1-42) was purchased from Caslo ApS (Lyngby, Denmark). HEK293 and COS-7 cells were both purchased from ATTC (Manassas, VA). Cell medium for HEK293 was purchased from Thermo Fisher Scientific (Waltham, MA) and the cell medium for COS-7 cells were prepared in-house. Other chemicals were purchased from standard commercial sources.
Transfection and Tissue Culture
COS-7 cells were cultured at 10% CO2 and 37 °C in Dulbecco's Modified Eagle Medium (DMEM) 1885 supplemented with 10% fetal bovine serum (FBS), 2 mmol/L glutamine, 180 units/mL penicillin and 45 µg/mL streptomycin. HEK293 cells were cultured at 10% CO2 and 37 °C in DMEM GlutaMAX™-I supplemented with 10% FBS, 180 units/mL penicillin and 45 µg/mL streptomycin. Both cell lines were transfected using the calcium phosphate precipitation method (Jensen et al., 2008) for the binding and cAMP assays. For the β-arrestin 2 recruitment assay, the PEI transfection method was used, and the Lipofectamine transfection method was used for the internalization assay.
Transiently transfected COS-7 cells were used in homologous competition binding assay. HEK293 cells were used in cAMP accumulation, β-arrestin 2 recruitment and internalization experiments.
cAMP Experiments
HEK293 cells were transiently transfected with either wild-type GIPR, R190Q, E288G or the double mutant R190Q;E288G, and the cAMP measurements were done with an enzyme fragment complementation (EFC)-based assay (Hansen et al., 2016). In brief, the cells were seeded in white 96-well plates at a density of 35,000 per well 1 day after the transfection. The following day, the cells were washed twice with HEPES-buffered saline (HBS) and incubated with HBS and 1 mM 3-isobutyl-1-methylxanthine (IBMX) for 30 min at 37 °C. The cells were then stimulated with increasing concentrations of GIP(1-42) and incubated for an additional 30 min at 37 °C. The HitHunter™ cAMP XS assay (DiscoverX, Herlev, Denmark) was carried out according to the manufacturer's instructions.
Homologous Competition Binding Assay
Transiently transfected COS-7 cells expressing either wild-type GIPR, R190Q, E288G or R190Q;E288G were seeded in a clear 96-well plate 1 day after transfection. The number of cells added per well was adjusted aiming for 5-10% specific binding of 125 I-GIP(1-42). The following day, the cells were assayed by competition binding for 3-h at 4 • C using ∼15-40 pM of 125 I-GIP(1-42) and increasing concentrations of GIP(1-42) in binding buffer (50 mmol/L HEPES buffer, pH 7.2 supplemented with 0.5% bovine serum albumin (BSA). After incubation, the cells were washed in ice-cold binding buffer and lysed with 200 mmol/L NaOH with 1% SDS for 30 min. The samples were analyzed by the Wallac Wizard 1470 Gamma Counter.
β-Arrestin 2 Recruitment Assay
To measure β-arrestin 2 recruitment, HEK293 cells were transiently transfected with either wild-type GIPR, R190Q, E288G or R190Q;E288G and the donor Rluc8-Arrestin-3-Sp1, the acceptor mem-linker-citrine-SH3 and GPCR kinase 2 (GRK2) to facilitate β-arrestin 2 recruitment. Two days after transfection, the cells were washed with PBS and re-suspended in PBS with 5 mmol/L glucose. Subsequently, 85 µL of the cell suspension solution was transferred to its respective wells on a white 96-well isoplate followed by the addition of PBS with 5 µmol/L coelenterazine-h. After a 10 min incubation of the cells with coelenterazine-h, increasing concentration of endogenous GIP(1-42) were added and luminescence was measured by the Berthold Technologies Mithras Multilabel Reader (Rluc8 at 485 ± 40 nm and YFP at 530 ± 25 nm).
Real-Time Internalization Assay
HEK293 parental cells transiently expressing the SNAP-tag GIPR or the variant, SNAP-tag-R190Q or-E288G were seeded in white 384-well plate after transfection, at a density of 20.000 cells per well. The following day, the medium was removed and fresh medium was added to all wells. The next day, the assay was carried out by labeling all SNAP-tagged cells with 100 nmol/L Taglite SNAP-Lumi4-Tb (donor) in OptiMEM for 60 min at 37 • C. Subsequently, the cells were washed 4 × with HBBS supplemented with 1 mM CaCl 2 , 1 mM MgCl 2 , 20 mM HEPES and 0.1% BSA (internalization buffer, pH 7.4). 50 µM pre-heated fluorescein-O'-acetic acid (acceptor) was added to all wells, except wells where only donor signal was measured. The 384-plates were incubated at 37 • C for 5-10 min prior to ligand addition. Then, the cells were stimulated with increasing doses of GIP(1-42), that was pre-heated at 37 • C, and donor signal and internalization rate were measured every 4 min for 90 min at 37 • C in PerkinElmer TM Envision 2014 multi-label Reader.
Analysis of Online High Quality Summary Statistics of R190Q and E288G
Frequencies of R190Q and E288G were from gnomAD v2.1.1 (Karczewski et al., 2020). We examined available summary data from published papers to determine the effect of GIPR R190Q and E288G on relevant phenotypes. Data on bone mineral density (BMD) and bone fracture risk have been contributed by Morris et al. (2019). The p-values P.NI and P.I were used, respectively, as recommended by the authors. The data was downloaded from http://www.gefos.org/?q=content/data-release-2018. The BMD and fracture risk summary data derive from analyses performed in UK Biobank (N BMD = 426,824; fracture risk = 53,184 cases and 373,611 controls). Summary statistical data on body composition, obesity risk, physical activity, and cardiovascular events were derived from GeneATLAS (UK Biobank, N = 452,264) (Canela-Xandri et al., 2018). These summary data were downloaded from http://geneatlas.roslin.ed.ac.uk/. Summary data on circulating leptin levels (N = 57,232) have been contributed by Yaghootkar et al. (2020) via the NHGRI-EBI GWAS Catalog. The NHGRI-EBI GWAS Catalog is funded by NHGRI Grant Number 2U41HG007823, and delivered by collaboration between the NHGRI, EMBL-EBI and NCBI. Summary statistics were downloaded from the NHGRI-EBI GWAS Catalog (Buniello et al., 2019) for study GCST90007307 and GCST90007319 (Yaghootkar et al., 2020) on 15/12/2020 and 16/12/2020, respectively. Risk of T2D was assessed by summary statistical data (48,286 cases and 250,671 controls) contributed by Mahajan et al. (2018), and the data were downloaded from http:// diagram-consortium.org/downloads.html. Results included two models either not including BMI as a covariate or adjusted for BMI (BMI adj.). The lipid levels association results were derived from summary data of an exomewide meta-analysis (N = ∼350,000) contributed by Lu et al. (2017), and we downloaded the data from http://csg.sph.umich.edu/willer/public/lipids2017EastAsian/. Blood pressure and hypertension were investigated based on summary data derived from a meta-analysis of rare variants associated with blood pressure measures in European individuals (N = 1,164,961) performed by Surendran et al. (2020). These summary data were downloaded from https://app.box.com/s/1ev9iakptips70k8t4cm8j347if0ef2u. Data on myocardial infarction include summary statistics (N = 42,335 cases and 78,240 controls) contributed by the CARDIoGRAMplusC4D Consortium (Myocardial Infarction Genetics and CARDIoGRAM Exome Consortia Investigators, Stitziel et al., 2016). Data on coronary artery disease/myocardial infarction were contributed by the Myocardial Infarction Genetics and CARDIoGRAM Exome investigators and were downloaded from www.CARDIOGRAMPLUSC4D.ORG. Summary statistical data on SOFT coronary artery disease [fatal or non-fatal myocardial infarction, percutaneous transluminal coronary angioplasty or coronary artery bypass grafting, chronic ischemic heart disease, and angina; N = 71,602 cases and 260,875 controls (53,135 cases and 215,611 controls for the exome markers)] are derived from a meta-analysis of three GWAS, namely UK Biobank (interim release), CARDIoGRAMplusC4D 1000 Genomes-based, and the Myocardial Infarction Genetics and CARDIoGRAM Exome (Nelson et al., 2017). Data on coronary artery disease/myocardial infarction have been contributed by the CARDIoGRAMplusC4D and UK Biobank CardioMetabolic Consortium CHD working group who used the UK Biobank Resource (application number 9922). Data have been downloaded from www.CARDIOGRAMPLUSC4D.ORG. 
Supplementary Table 1 provides further details about the different studies and cohorts. A p-value below 0.05 was considered as statistically significant in analyses of specific hypotheses, while a significance threshold of 10 −4 was applied on the phenome-wide scan in UK Biobank data.
RESULTS
The Glucose-Dependent Insulinotropic Polypeptide Receptor Variants, R190Q and E288G, Show Markedly Reduced G Protein-Mediated Signaling Despite Maintained Glucose-Dependent Insulinotropic Polypeptide Binding
The residue R190 is placed in the second transmembrane (TM2) domain in position 67 of the GIPR, hence denoted R190^2.67 (Wootten nomenclature in superscript; Wootten et al., 2013), near the first extracellular loop (ECL1), whereas the E288 residue is located in the second extracellular loop (ECL2) of the receptor (Figure 1A). It has previously been shown that the N-terminal part of GIP interacts with R190^2.67 by forming a hydrogen bond (Smit et al., 2021; Zhao et al., 2021).
As Gαs is the main signaling pathway for the GIPR, we assessed the impact of these two mutations either separately or in combination. This was done by measuring intracellular cAMP accumulation in transiently transfected HEK293 cells in response to increasing concentrations of GIP. Both variants displayed reduced signaling capacity compared to wild-type GIPR, with a markedly decreased (>250-fold) potency of GIP: EC50 values of 10 nM for R190Q and 3.6 nM for E288G, compared to an EC50 of 4.2 pM for the wild-type GIPR (Table 1). R190Q reached a maximal activation (Emax) of 75% of that of wild-type GIPR at 1 µM, whereas E288G reached 90%. The double mutant, R190Q-E288G, resulted in a complete loss of activation through Gαs (Figure 1B).
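For orientation, EC50 and Emax values such as these are typically obtained by fitting a sigmoidal dose-response model to the cAMP readouts; the generic sketch below is ours, not the authors' actual fitting pipeline, and the sample data are made up:

import numpy as np
from scipy.optimize import curve_fit

def dose_response(log_conc, bottom, top, log_ec50):
    # three-parameter logistic with a Hill slope of 1
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_ec50 - log_conc))

log_conc = np.array([-13, -12, -11, -10, -9, -8, -7, -6])   # log10 [GIP], M
response = np.array([2, 5, 20, 55, 85, 95, 99, 100])        # % of max cAMP (illustrative)

popt, _ = curve_fit(dose_response, log_conc, response, p0=[0, 100, -10])
bottom, top, log_ec50 = popt
print(f"Emax ~ {top:.0f}%, EC50 ~ {10**log_ec50:.2e} M")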
To determine whether the reduced cAMP formation was due to impaired agonist binding, we performed homologous competition binding, using 125I-GIP(1-42) as radioligand for the wild-type plus all three GIPR variants. Both single mutations displayed reduced binding capacity (Bmax), with 30% of the wild-type GIPR for R190Q and only 13% for E288G, while the double mutant exhibited minimal binding (<1%) (Figure 1D). The binding affinities (KD) of GIP were, however, not affected substantially: GIP bound with an affinity (KD) of 5.0 nM and 3.9 nM for R190Q and E288G, respectively, while it bound the wild-type GIPR with an affinity of 2.7 nM (Figure 1C).
The Glucose-Dependent Insulinotropic Polypeptide Receptor Variants Display Impaired β-Arrestin 2 Recruitment and Internalization
Due to the maintained binding affinity but lower number of receptors expressed, we next set out to investigate β-arrestin 2 recruitment, given its role in the desensitization and internalization of the GIPR (Gabe et al., 2018, 2020). All three variants displayed a reduced ability to recruit β-arrestin 2, with an Emax of 9.0% for R190Q, 8.6% for E288G, and 12% for the double mutant compared to wild-type GIPR. There was, however, no major difference with respect to the potencies of the receptors' ability to recruit β-arrestin 2: R190Q had an EC50 of 0.76 nM and E288G an EC50 of 0.23 nM, compared to wild-type GIPR with an EC50 of 0.88 nM; the double mutant, however, displayed an EC50 of 11 nM (Figure 1E). Thus, the overall maintained potency in β-arrestin 2 recruitment but lower Emax corresponded with the binding profiles of the variants.
We then performed real-time internalization experiments to determine whether the reduced β-arrestin recruitment influenced receptor internalization. Here, we used SNAP-tagged versions of the single mutant GIPR variants expressed transiently in HEK293 cells while the double mutant was omitted due to its low expression. Upon transfection with same amount of DNA of either wild-type SNAP-tagged GIPR or SNAP-tagged GIPR mutants, we observed a significantly lower receptor cell surface expression of 60% of wild-type GIPR for both single mutant variants (Figure 1G). This indicates that the reduced binding capacity of GIP to R190Q and E288G could partly be explained by the lower receptor cell surface expression. Since internalization measurements are dependent on receptor expression (Foster and Bräuner-Osborne, 2018), we next titrated receptor concentrations to obtain similar donor signal (i.e., similar cell surface expression) from the SNAP-tag in the different GIPR variants. For similar expression levels, we observed no internalization of either variant receptors ( Figure 1F).
Taken together, the molecular pharmacological phenotype of the GIPR variants comprised diminished signaling through Gα s , reduced β-arrestin 2 recruitment and impaired receptor internalization. The affinity of GIP was maintained for the GIPR variants but with lower binding capacity, which could be explained by the lower receptor cell surface expression.
The Glucose-Dependent Insulinotropic Polypeptide Receptor E288G Variant Reduces Bone Mineral Density
Since R190Q (rs139215588) and E288G (rs143430880) diminished receptor activation, we were interested in linking these functional consequences with phenotypes in humans. At first, we searched for the largest genetic studies to gather available results of the two GIPR variants. The present study therefore includes high quality data for R190Q and E288G from these genetic studies, in which we evaluated each GIPR variant separately.
We started our physiological investigation by examining bone mineral density (BMD) and fracture risk in carriers of R190Q and E288G using summary data from a study in UK Biobank with a total sample size of 426,824 individuals (Morris et al., 2019). Interestingly, E288G was associated with lower BMD (Beta -0.056 SD, p-value = 0.002) and R190Q showed similar effect size (-0.057 SD), but this was not statistically significant (Figure 2). None of the two GIPR variants seemed to be associated with an overall risk of bone fracture ( Table 2).
Both Body Mass Index-Lowering Glucose-Dependent Insulinotropic Polypeptide Receptor Variants Show Effects of Cardio-Metabolic Importance
Next, we examined the association with several traits of importance for cardio-metabolic health and disease. First, we evaluated the impact of R190Q and E288G on blood pressure in summary data from a newly published paper of rare genetic variations associating with blood pressure measures, which comprised > 800,000 individuals (Surendran et al., 2020). Both GIPR variants were associated with higher systolic blood pressure (R190Q: 0.045 SD; E288G: 0.049 SD), although the diastolic blood pressure was not significantly different between carriers and non-carriers (Figure 2). Furthermore, the E288G variant was associated with higher pulse pressure (Figure 2), while neither of the GIPR variants were associated with increased risk of hypertension ( Table 2).
We next examined the lipid profile to gain further insight into how R190Q and E288G, with impaired GIPR signaling, affected lipid homeostasis. Here we used summary statistics from an exome-chip based meta-analysis of ∼350,000 individuals (Lu et al., 2017). Carriers of R190Q did not have altered lipid levels compared to non-carriers, whereas carriers of E288G had lower high-density lipoprotein (HDL) cholesterol levels (beta = -0.10 SD, p-value = 0.02), yet with no changes in low-density lipoprotein (LDL), triglycerides or total cholesterol (Figure 2). Despite the impact on cardiovascular parameters, neither of the rare GIPR variants, R190Q and E288G, was associated in the present study with overall risk of cardiovascular events as a major cause of death (Supplementary Table 2).
Alterations in circulating leptin levels could be a putative mechanism of body weight regulation, and we therefore evaluated whether the two GIPR variants had altered levels in a genetic study of circulating leptin in early adiposity (N = 57,232) (Yaghootkar et al., 2020). Only R190Q was significantly associated with lowered leptin levels, although this association was lost when adjusting for BMI (Figure 2).
We also explored how the GIPR variants affect risk of T2D in summary data from a study of coding variants in T2D (48,286 cases and 250,671 controls) (Mahajan et al., 2018). In a model not adjusted for BMI, none of the rare GIPR variants were associated with risk of T2D. In contrast, a BMI-adjusted model showed that carriers of E288G had a decreased risk of T2D compared to non-carriers (OR 0.76, p-value = 0.04) ( Table 2).
Finally, we investigated UK Biobank data using a phenome-wide study. Here, all findings at p-value < 10^-4 for both GIPR variants were related to adiposity (Supplementary Tables 3, 4).
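The genetic look-ups above report effect sizes per variant together with an estimated number of heterozygous carriers derived from allele frequency and sample size (see the legend of Figure 2). A minimal sketch of such an estimate under Hardy-Weinberg equilibrium is given below; the allele frequencies used are purely illustrative and are not the exact frequencies of R190Q or E288G.

```python
# Estimate the expected number of heterozygous carriers of a rare variant
# from its minor allele frequency (MAF) and the total sample size, assuming
# Hardy-Weinberg equilibrium. The frequencies below are illustrative only.

def expected_het_carriers(maf: float, n_individuals: int) -> float:
    """Expected heterozygote count: 2 * p * (1 - p) * N under HWE."""
    return 2.0 * maf * (1.0 - maf) * n_individuals

if __name__ == "__main__":
    # Hypothetical allele frequencies for rare coding variants (not the
    # study's exact values) and the UK Biobank BMD sample size cited above.
    for label, maf in [("variant A", 1e-4), ("variant B", 5e-4)]:
        print(label, round(expected_het_carriers(maf, 426_824)))
```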
DISCUSSION
We show that two naturally occurring rare GIPR variants, R190Q and E288G (rs139215588 and rs143430880, respectively), result in impaired GIPR function at the molecular level which in turn seems to impact human physiology and pathophysiology regarding adiposity, bone health and the cardiovascular system (Figure 4).
The prevailing model for ligand-binding and receptor activation of class B1 receptors, including the GIPR, is that the extracellular domain (ECD) of the receptor recognizes the C-terminal part of the endogenous peptide hormone, which in turn allows the N-terminal part of the ligand to position itself into the transmembrane domain (TMD) (Schwartz and Frimurer, 2017). While several structural models exist for the closely related class B1 receptors GLP-1R and the glucagon receptor (Zhang et al., 2018), structural data on the full-length human GIPR are scarce, and only a few studies have been conducted to describe GIPR residues of importance for receptor activation (Yaqub et al., 2010;Cordomí et al., 2015). However, the importance of the R190 and E288 residues for GIP binding and GIPR activation was recently discussed in a study that combined MD simulations and mutagenesis experiments (Smit et al., 2021). Here, it was shown that R190 is an important residue for GIPR activation, as the N-terminal part of GIP was described to form a hydrogen bond with this residue.

FIGURE 2 | Association of GIPR R190Q and E288G variants with quantitative cardio-metabolic traits in GWAS. For each variant, beta, standard error (SE), the p-value (P), sample size (N), the estimated number of heterozygous variant carriers (N het), and the publication of the study from which the data were gathered are shown. The forest plot shows the beta in SD and the 95% confidence interval. Statistically significant results are shown in red. The number of heterozygous variant carriers (N het) was estimated from allele frequency and the total number of individuals (N). HDL cholesterol, high-density lipoprotein cholesterol; LDL cholesterol, low-density lipoprotein cholesterol.

A similar observation was made earlier by Yaqub et al. (2010), who showed a decrease in cAMP signaling upon agonist binding. Moreover, a recent cryo-EM structure by Zhao et al. (2021) of the human GIPR in complex with GIP and a Gs-heterotrimer confirmed the formation of a hydrogen bond between GIP and the R190 residue. The E288 residue appears to have a bigger impact on ligand binding (5.4-fold reduction in affinity and a Bmax of 32% compared to wild-type) than on activation when substituted with an alanine (Smit et al., 2021). This is in line with the results of the present study, as we also saw a limited maximum binding capacity of 13% for E288G, as we would expect a mutation to glycine (E288G) to remove all functionality in the same way that alanine does (E288A). In addition, we also observed a > 250-fold reduction in GIP potency in G protein signaling for E288G compared to wild-type GIPR, and supra-physiological GIP levels were needed for near-maximum receptor activation. Similar impairment in terms of cAMP production was also published very recently (Akbari et al., 2021). We, in addition, found that R190Q and E288G displayed diminished arrestin recruitment that in turn resulted in a lack of receptor internalization, consistent with the previously established arrestin dependency for GIPR internalization (Gabe et al., 2018). Altogether, the functional data indicate that both GIPR variants disrupt the conformational changes necessary for receptor activation and arrestin recruitment, and also reduce receptor cell surface expression, while still preserving the binding of GIP. Circulating GIP is a multi-functional incretin hormone that acts on several targets, among which bone metabolism has been the focus of several recent studies.
Rodents that lack GIPR have reduced bone size, reduced bone mass, altered bone microarchitecture and altered bone turnover (Xie et al., 2005;Gaudin-Audrain et al., 2013;Mieczkowska et al., 2013). Accordingly, GIP analogs have been shown to improve bone composition and strength in rodents (Mabilleau et al., 2014;Vyavahare et al., 2020), while a GIPR antagonist impairs bone remodeling in humans (Gasbjerg et al., 2020b;Helsted et al., 2020). In the present study, E288G carriers had a significantly lower BMD, yet neither of the two GIPR variants showed a significantly increased overall bone fracture risk, possibly due to low statistical power. The common GIPR variant, E354Q (rs1800437), showed similar effects of lowered BMD along with an increased risk of non-vertebral fractures (Torekov et al., 2014). However, E354Q shows either a similar or slightly enhanced signaling pattern compared with wild-type GIPR, together with an increased rate of receptor internalization, possibly due to a longer residence time of GIP at this mutant (Almind et al., 1998;Fortin et al., 2010;Mohammad et al., 2014;Gabe et al., 2019). As a result of decreased recycling of the receptor to the cell surface, this may ultimately result in functional impairment of the GIPR variant E354Q, thus exhibiting the same phenotypic trait as R190Q and E288G.
Previous studies have already established the importance of the GIP-GIPR axis in glucose regulation. For instance, GIPR-deficient mice showed lower glucose-stimulated insulin levels and higher levels of plasma glucose (Miyawaki et al., 1999), a risk factor for T2D (Garber, 2000). In the present study, we found that E288G associated with a 24% decreased risk of T2D, whereas Turcot et al. (2018) did not detect this protective effect, perhaps due to the lower sample size in the previous study [N ∼50,000 compared to ∼300,000 individuals (Table 2)]. Several GWAS have identified variants positioned in the GIPR locus, including the E354Q GIPR variant, that associate with increased 2-h glucose levels, decreased insulin secretion, insulin resistance and risk of T2D (Almind et al., 1998;Hu et al., 2010;Sauber et al., 2010;Saxena et al., 2010), further supporting the importance of the GIP-GIPR axis in glucose regulation.
Regarding the impact on the cardiovascular system, it was previously shown that GIP infusions decreased mean arterial blood pressure and increased resting heart rate (Wice et al., 2012). In fact, GIP infusions decreased diastolic blood pressure and increased heart rate during normoglycemia and hypoglycemia (Skov-Jeppesen et al., 2019;Heimbürger et al., 2020), whereas during hyperglycemia the systolic blood pressure was increased as well (Gasbjerg et al., 2021). In our study, carriers of either GIPR variant had a higher systolic blood pressure and pulse pressure. Since a previous study showed no association between the two GIPR variants and systolic blood pressure (Turcot et al., 2018), the higher statistical power of the current study (N ∼700,000; Figure 2) compared to the study by Turcot et al. (2018) (N ∼135,000) may explain this discrepancy. Taken together, our results establish that GIPR signaling is important for the regulation of blood pressure in a manner dependent on the glycemic state.
Dysregulation of circulating lipids is also a risk factor of cardiovascular diseases. High circulating levels of GIP have shown beneficial effects on the lipid profile in humans (Møller et al., 2016), and treatment with GIPR/GLP-1R co-agonists has shown improvement of the lipid profile in patients with T2D (Frias et al., 2017, 2018). We found that carriers of E288G had significantly decreased HDL cholesterol levels without effect on other parameters of the lipid profile, suggesting that reduced GIPR signaling is involved in part of the cholesterol and lipid metabolism. These results are consistent with a previous study (Turcot et al., 2018), and the GIPR E354Q variant also showed a trend toward decreased HDL levels (Nitz et al., 2007). Even though carriers of R190Q and E288G have higher blood pressure and decreased HDL levels, they are not at higher risk of a cardiovascular event, and E354Q only nominally associated with cardiovascular disease (Nitz et al., 2007). Thus, reduced GIPR signaling does not seem to have fatal effects on the cardiovascular system; however, it is more likely that this study lacks the statistical power to detect an effect on a clinical dichotomous phenotype even though an association with a quantitative risk factor is detected. Similarly, we observe an association with BMD, yet no association with risk of fractures. Our observation that carriers of either GIPR variant had lower body fat mass and lean body mass than non-carriers corresponds with a previous association with lower BMI (Turcot et al., 2018), and was confirmed recently by whole-exome sequencing (Akbari et al., 2021). These results suggest that GIPR signaling contributes to regulation of body weight and body composition, and that reduced GIPR signaling is a potentially beneficial strategy against obesity. In support, obese Gipr knockout mice show lower body weight gain compared to wild-type mice, which may be explained by a lower fat mass, lean tissue mass and food intake, and an increased physical activity in these mice (Boer et al., 2021;Zhang et al., 2021). In the present study, we did not see an increased self-reported physical activity among carriers of R190Q or E288G. Furthermore, no increase was observed for the GIPR variant carriers regarding circulating leptin levels. In a previous study, obese Gipr knockout mice maintained leptin sensitivity compared to obese wild-type mice, and their leptin-induced anorectic effect was not inhibited by GIP infusion (Kaneko et al., 2019). If the same scenario applies to humans, inadequate GIPR signaling, as for R190Q and E288G, may have beneficial effects in the treatment of obesity. Further investigation in humans is needed to understand how GIPR signaling affects leptin sensitivity and long-term appetite control.
Although our results together with several studies of anti-GIPR antibodies (Gault et al., 2005;Killion et al., 2018;Min et al., 2020;Svendsen et al., 2020;Chen et al., 2021) could indicate that GIPR antagonists could protect from diet-induced obesity and improve glycemic and insulinotropic effects, other studies have shown the same for GIPR agonists (Nørregaard et al., 2018;Mroz et al., 2019;Samms et al., 2021). It is therefore still uncertain whether an agonist or an antagonist would be superior for the treatment of obesity. It is also worth noting that the most prominent anti-obesity effect of GIPR agonists as well as antagonists is accomplished in combination with GLP-1R agonists (Killion et al., 2018, 2020;Nørregaard et al., 2018;Holst and Rosenkilde, 2020), indicating an important interplay between the two incretin hormones and their receptors.
CONCLUSION
In conclusion, our results suggest that reduced GIPR signaling can have both beneficial and disadvantageous effects on human physiology. Long-term use of GIPR antagonists may be of exceptional benefit in lowering adiposity for treatment of obesity and its comorbidities, such as T2D. In contrast, long-term use of a GIPR antagonist may, to some extent, negatively affect bone metabolism and the cardiovascular system, although the effects seem to be rather small. There are various additional GIPR missense variants detected in the human population, which could be explored for their potential impairment and/or altered signaling properties. This may provide a more complete picture of the physiological impact of GIPR signaling and how to best exploit its therapeutic potential.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
AUTHOR CONTRIBUTIONS
HK, KS, MR, and NG: study design, manuscript writing-original draft. HK, MR, AS-U, and CK: functional studies. KS and NG: human genetic studies. AH: structural modeling. HK, KS, LG, AH, NG, and MR: manuscript writing-reviewing and editing. All authors revised the manuscript and approved the final version. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by a scholarship to HK from the Danish Diabetes Academy, which was funded by the Novo Nordisk Foundation (Grant No. NNF17SA0031406) and a grant to MR from EFSD/Lilly European Diabetes Research Programme. KS and NG from The Novo Nordisk Foundation Center for Basic Metabolic Research were funded by an unrestricted donation to the University of Copenhagen by the Novo Nordisk Foundation (Grant No. NNF18CC0034900).
|
2021-10-25T13:12:10.603Z
|
2021-10-25T00:00:00.000
|
{
"year": 2021,
"sha1": "d27b5419fe30b8b84ec85f63164975a667c5a444",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2021.749607/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d27b5419fe30b8b84ec85f63164975a667c5a444",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
233208855
|
pes2o/s2orc
|
v3-fos-license
|
Biomedical Photoacoustic Imaging and Sensing Using Affordable Resources
The photoacoustic (PA) effect, also called the optoacoustic effect, was discovered in the 1880s by Alexander Graham Bell and has been utilized for biomedical imaging and sensing applications since the early 1990s [...].
Introduction
The photoacoustic (PA) effect, also called the optoacoustic effect, was discovered in the 1880s by Alexander Graham Bell and has been utilized for biomedical imaging and sensing applications since the early 1990s [1]. In biomedical photoacoustic imaging, nanosecond-pulsed or intensity-modulated light illuminates the tissue of interest and, when absorbed by intrinsic (such as hemoglobin and lipid) and extrinsic optical absorbers, results in the generation of ultrasound (US) signals via thermoelastic expansion. These optically generated US signals can be detected on the tissue surface using conventional ultrasonic probes to generate the tissue's optical absorption maps with high spatiotemporal resolution. PA imaging thus offers advantages of both US imaging (imaging depth, spatiotemporal resolution) and conventional optical imaging techniques (spectroscopic contrast), making it an ideal modality for structural, functional, and molecular characterization of tissue in situ. Moreover, since both PA and US imaging rely on acoustic detection, it is feasible to share the probe and data acquisition system to perform naturally co-registered dual-mode PA and US imaging with complementary contrast. Since US imaging is ubiquitous in clinics, such a dual-mode approach is expected to accelerate the clinical translation of the PA imaging technique. Owing to all these advantages, PA imaging has been explored for a myriad of preclinical and clinical applications and is undoubtedly one of the fastest-growing biomedical imaging modalities of recent times. Even though PA imaging has matured in lab settings, clinical translation of this promising technique is not happening at the expected pace. One of the important reasons behind this is the cost of pulsed light sources and acoustic detection hardware. Affordability is undoubtedly an important factor to be considered in the following years to help translate PA imaging to clinics around the globe. This first-ever Special Issue focused on biomedical PA imaging and sensing using affordable resources is thus timely, especially considering the fact that this technique is facing an exciting transition from benchtop to bedside.
The overarching goal of this Special Issue is to provide a current picture of the latest developments in the capabilities of PA imaging and sensing in an affordable setting, such as advances in the technology involving light sources, and delivery, acoustic detection, and image reconstruction and processing algorithms. This issue includes 13 papers (2 reviews, 2 letters, 1 communication and 8 full-length articles) which cover a comprehensive spectrum of research from technology developments and novel imaging methods to preclinical and clinical studies, predominantly in a cost-effective setting.
Review Articles
The Issue starts with a comprehensive review article on portable and affordable light-source-based PA tomography from Kuniyil Ajith Singh and Xia [2]. In this review, the authors focus on (1) the basics of PA imaging, (2) cost-effective pulsed light sources for PA imaging and (3) important preclinical and clinical applications reported to date using affordable-light-source-based PA imaging. Because of tremendous developments in solid-state device technology, high-power LEDs have been explored heavily in recent years as illumination sources in PA imaging. In another interesting paper, for the first time, Zhu et al. comprehensively reviewed the use of LEDs in biomedical PA imaging, covering the technical details and the preclinical and clinical applications reported to date [3].
Original Papers-Full-Length Articles, Communications, and Letters
The rest of the original papers in this issue are summarized in the following subsections based on the contents.
Instrumentation, Technology Developments
Conventional US probes are opaque and are not ideal for miniaturized PA imaging systems, especially for microscopic applications. In an exciting work from Chen et al., the authors developed an affordable transparent lithium niobate (LiNbO3) US transducer with a 13-MHz central frequency and a 60% reception bandwidth for optical-resolution PA microscopy [4]. The authors tested the system performance by imaging vasculature in chicken embryos and by melanoma depth profiling using tissue phantoms. The system proposed in this work is expected to have a promising future in wearable and high-throughput imaging applications.
LED-based PA imaging is gaining more popularity in recent years because of its portability, affordability, and ease of use. However, due to the large beam divergence of LEDs compared to traditional laser beams, it is of paramount importance to quantify the angular dependence of LED-based illumination and optimize its performance for imaging superficial or deep-seated lesions. Kuriakose et al. reported on the development of a custom 3D printed LED array holder to be used along with a commercial LED-based PA imaging system (Acoustic X, CYBERDYNE INC, Tsukuba, Japan) and demonstrated the importance of changing LED illumination angle when used for superficial and deep-tissue applications [5].
Considering the tradeoffs between portability, cost, optical energy, and frame rate, it is critical to compare the PA imaging performance of LED and laser illuminations to help select a suitable source for a given biomedical imaging application. Agrawal et al. reported on the development of a setup for a head-to-head comparison of LED- and laser-based PA imaging of vasculature [6]. With measurements on tissue-mimicking phantoms and human volunteers, the authors concluded that LED-based PA imaging performs comparably to, and sometimes even better than, laser-based systems, demonstrating its strong potential as a mobile health care technology for diagnosing vascular diseases such as peripheral arterial disease and stroke in point-of-care and resource-limited settings.
In applications like wound screening and laser surgery guidance, conventional PA imaging systems that usually require US probes in contact with the tissue are not an ideal option. In a work by Lengenfelder et al., it is demonstrated for the first time that remote PA sensing by speckle analysis can be performed in the MHz sampling range by tracking a single speckle using a four-quadrant photo-detector [7]. By demonstrating both PA sensing and endoscopic capabilities, this work showed that single-speckle sensing is an easy, robust, contact-free photoacoustic detection technique that holds the potential for economical and ultra-fast PA sensing.
Image Processing and Enhancement Techniques
Multispectral PA imaging can be used to non-invasively visualize and quantify tissue chromophores with high spatial resolution. Utilizing multiwavelength PA data, one can characterize the spectral absorption signature of prominent tissue chromophores, such as hemoglobin or lipid, by using spectral unmixing methods. Grasso et al. reported the feasibility of an unsupervised spectral unmixing algorithm to detect and extract tissue chromophores without any a priori knowledge of the optical absorption spectra of the tissue chromophores or exogenous contrast agents and without user interaction [8]. The authors validated this novel algorithm using simulations, phantom studies, and mouse in vivo experiments and demonstrated the feasibility of extracting and quantifying the tissue chromophores in a completely unsupervised manner.
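For readers unfamiliar with blind unmixing, one common generic approach to this kind of problem is non-negative matrix factorization (NMF), which factors a multiwavelength data stack into endmember spectra and abundance maps without prior spectral knowledge. The sketch below is only an illustrative stand-in on synthetic data and is not the specific algorithm proposed by Grasso et al.; the array sizes and the scikit-learn-based implementation are assumptions made for the example.

```python
# Illustrative blind spectral unmixing of multiwavelength PA data with
# non-negative matrix factorization (NMF); a generic stand-in, not the
# specific algorithm of the cited paper.
import numpy as np
from sklearn.decomposition import NMF

# Synthetic multispectral PA stack: n_wavelengths x n_pixels, non-negative.
rng = np.random.default_rng(0)
n_wavelengths, n_pixels, n_chromophores = 5, 64 * 64, 2
true_spectra = rng.random((n_wavelengths, n_chromophores))   # columns = endmember spectra
true_abundance = rng.random((n_chromophores, n_pixels))      # per-pixel concentrations
pa_stack = true_spectra @ true_abundance + 0.01 * rng.random((n_wavelengths, n_pixels))

# Factor the stack into spectra (W) and abundance maps (H) without priors.
model = NMF(n_components=n_chromophores, init="nndsvda", max_iter=500)
W = model.fit_transform(pa_stack)   # estimated spectra, n_wavelengths x n_chromophores
H = model.components_               # estimated abundances, n_chromophores x n_pixels
print(W.shape, H.shape)
```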
Conventionally, hand-held linear-array-based PA imaging systems operate in the reflection mode using a dark-field illumination scheme, where the illumination is on both sides of the elevation plane (short-axis) of the US probe. Uliana et al. reported and demonstrated a novel multiangle long-axis lateral illumination approach with several advantages [9]. Phantom, animal, and human in vivo results in this work demonstrate a remarkable improvement of the new illumination approach in light delivery for targets with a width smaller than the transducer's lateral dimension.
Portable and affordable light sources like pulsed laser diodes and LEDs have the potential to accelerate the clinical translation of PA imaging. However, the pulse energy offered by these sources is often low compared to solid-state lasers, resulting in a low signal-to-noise ratio (SNR), especially in deep-tissue applications. Improvement of the SNR is of paramount importance in these cases. Thomas et al. proposed a continuous-wave laser-based pre-illumination approach to increase the temperature of the imaging sample and thus the PA signal strength from it [10]. In this work, using tissue-mimicking phantoms and common contrast agents, the authors showed the feasibility of enhancing the PA signal strength significantly.
Preclinical and Clinical Applications
Small animals are widely used as disease models in medical research, especially in the pharmaceutical industry. Since PA imaging can offer functional and molecular information, it is an ideal modality for small animal imaging. Kalloor Joseph et al. reported a portable and affordable approach for performing fast tomographic PA and US imaging of small animals [11]. In this work, the authors used LED-based PA and US tomographic imaging and showed its potential in liver fibrosis research.
Conventional PA imaging systems utilize expensive and bulky solid-state lasers with low pulse repetition rates; as such, their availability for preclinical cancer research is hampered. In an interesting study from Xavierselvan et al., the authors validated the capability of an LED-based PA and US imaging system for monitoring heterogeneous microvasculature in tumors (up to 10 mm in depth) and quantitatively compared the PA images with gold standard histology images [12]. The results of this work give a direct confirmation that LED-based PA and US imaging hold the potential to be a valuable tool in preclinical cancer research.
Hypoxia and hyper-vascularization are hallmarks of cancer, and oxygen saturation imaging is arguably the most important application of PA imaging. By illuminating tissue using two wavelengths, it is feasible to probe the oxygen saturation of tissue with microvasculature-scale resolution. Bulsink et al. reported fluence-compensated oxygen saturation imaging using two-wavelength LED-based PA imaging [13]. In this work, the authors demonstrated real-time fluence-compensated oxygen saturation imaging in phantoms and small animals, as well as measurements on human volunteers (a minimal sketch of such a two-wavelength estimate is given at the end of this section).

Follicular unit extraction and follicular unit transplantation are used in most hair transplant procedures. In both cases, it is important for clinicians to characterize follicle density for treatment planning and evaluation. Hariri et al. utilized 2D and 3D LED-based PA imaging for measuring follicle density and angles across regions of varying density [14]. The authors validated the idea using experiments on small animals and also using measurements on healthy human volunteers.
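Returning to the two-wavelength oxygen saturation estimate mentioned above, the sketch below illustrates the standard linear two-chromophore model, in which fluence-compensated PA amplitudes at two wavelengths are inverted using molar extinction coefficients of oxy- and deoxyhemoglobin. The wavelengths, extinction values, and amplitudes are illustrative assumptions, not values from the cited study.

```python
# Estimate oxygen saturation (sO2) from fluence-compensated PA amplitudes at
# two wavelengths by solving the 2x2 linear absorption model
#   mu_a(lambda) = eps_HbO2(lambda)*[HbO2] + eps_Hb(lambda)*[Hb].
# The extinction values below are rough literature-style numbers used only
# for illustration; real processing would use tabulated spectra.
import numpy as np

def estimate_so2(pa_750, pa_850,
                 eps_hbo2=(518.0, 1058.0),   # (750 nm, 850 nm), illustrative
                 eps_hb=(1405.0, 691.0)):    # (750 nm, 850 nm), illustrative
    E = np.array([[eps_hbo2[0], eps_hb[0]],
                  [eps_hbo2[1], eps_hb[1]]])
    # Solve for the two chromophore concentrations, then normalize.
    c_hbo2, c_hb = np.linalg.solve(E, np.array([pa_750, pa_850]))
    return c_hbo2 / (c_hbo2 + c_hb)

print(round(estimate_so2(pa_750=1.0, pa_850=1.4), 2))
```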
Summary
PA imaging is growing at a tremendous pace and is expected to reach clinics soon. At this point, this promising technology is facing an exciting transition phase, and thus this Special Issue with a focus on affordability is timely. The thirteen excellent papers in this Special Issue from academia and industry represent a small sample that demonstrates the immense developments in the field. We hope that this Special Issue can provide the motivation and inspiration for further technological advances in this exciting field and accelerate the clinical translation of this promising biomedical imaging modality.
|
2021-04-12T17:22:21.364Z
|
2021-04-01T00:00:00.000
|
{
"year": 2021,
"sha1": "ea63000308362d48325c22d715c79b387ad292ac",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/21/7/2572/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea63000308362d48325c22d715c79b387ad292ac",
"s2fieldsofstudy": [
"Physics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
}
|
187401291
|
pes2o/s2orc
|
v3-fos-license
|
Research on Modeling and Simulation of the anti-submarine patrol aircraft taking call search task using the circular sonar buoy array
Based on the characteristics of the call search task, this paper analyzes a submarine position dispersion model and puts forward an evaluation model for the search probability of a circular sonar buoy array. The correctness of the model is verified by computer simulation, and the results show that the outcome of the search is closely related to the submarine position distribution; the method can provide a useful reference for the optimal use of sonar buoy arrays.
Introduction
Compared with the anti-submarine helicopter, the anti-submarine patrol aircraft has the advantages of higher speed, longer range, longer endurance, and a greater variety of anti-submarine combat equipment on board. It has therefore become an important aviation anti-submarine force.
Call anti-submarine warfare is one of the most basic and most commonly used methods. In this method, an anti-submarine patrol aircraft on standby at an airfield or in a designated airspace flies to the reported initial position of the submarine after information about the activities of an enemy submarine has been obtained, and then carries out combat operations of searching for, locating, tracking and attacking the submarine.
In order to accomplish this task, submarine detection equipment is needed, such as radar, electro-optical equipment, radio sonobuoys and magnetic anomaly detectors (MAD). Among them, the radio sonobuoy is an effective sensor that is little constrained by geography and convenient to deploy, so it is attached great importance by the navies of most countries.
A distributed model of submarine position for call search
2.1 The initial position of the submarine during the call search
Due to the uncertainty of the submarine position data obtained from other sources, and according to the central limit theorem, the initial position of the submarine can be considered to obey a two-dimensional normal distribution N(0, σ₀²) centered on the reported initial position (the coordinate origin); that is, the mathematical expectation is 0. The joint probability density function of the initial position (X, Y) is

f(x, y) = \frac{1}{2\pi\sigma_0^2}\exp\left(-\frac{x^2 + y^2}{2\sigma_0^2}\right)
Location dispersion of submarine after call search
During the time from receipt of the initial submarine position information until the anti-submarine force arrives at the search area, the possible position of the submarine continues to expand, centered on the initial position. The size of the dispersion area is related to the submarine's speed and to the elapsed (call) time. Therefore, the current position dispersion of the submarine contains two parts: the initial dispersion and the uncertainty caused by the submarine's motion. In most cases, the speed of the submarine is unknown. When the speed and course are both unknown, the motion-induced variance σ₁² is also unknown, and the speed distribution of the submarine is analyzed in order to determine σ₁². The position probability density function of the submarine caused by the uncertainty of speed and course can be expressed in polar coordinates, with the course assumed to obey a uniform distribution on the interval [0, 2π).
From formula (4), the probability density function of the submarine speed V can be obtained. The economic speed of the submarine, V_se, is taken as the mean of the speed distribution, and applying the definition of the mean gives the value of σ₁². In this way, the probability density function of the submarine position at the moment the anti-submarine patrol aircraft begins the search, after the elapsed time t₀, is obtained.
Deployment Circular sonar array to call search the submarine
When the anti-submarine patrol aircraft carries out a call search task with a known initial target location, a sonar buoy should be placed at the center of the dispersion area as soon as possible, in order to shorten the time until the submarine is first detected; additional sonar buoys are then deployed around the center in a circular array so as to cover as large an area as possible. Considering that there is little practical difference between the circular and the annular array (the main difference being that the annular array omits the center buoy), the search efficiency of call anti-submarine warfare is evaluated here for the circular array.
When determining the number of buoys to be used, the placement dispersion of each sonar buoy should be taken into account. In order to ensure reliable detection, the distance between two adjacent sonar buoys should be less than twice the buoy's detection range. The size of the sonar buoy array is related to several factors, such as the time elapsed since contact with the submarine was lost (the call time), the submarine speed, the number of sonar buoys carried by the aircraft, and so on. A typical circular (or annular) search array has 5, 7, 9 or 11 sonar buoys; the 5- and 7-buoy circular search arrays are shown in Figure 1 and Figure 2.
The flight path depends on both the spacing between sonar buoys and the aircraft's turning radius. The deployment order should optimize the flight route as far as possible; that is, the time needed to lay the array should be short, and steep-bank turns and other maneuvers should be kept to a minimum.
Taking a circular array of 5 passive omnidirectional sonar buoys as an example, the deployment process is as follows. When the detection range of a sonar buoy is 3.8 km and the interval between buoys is 7 km, the first buoy is dropped at a position 5 km from the submarine's initial datum; the second buoy is dropped after flying a further 5 km ahead, and the third after another 5 km. The aircraft then turns left with a 5 km radius and drops the fourth buoy when flying over its planned position; when the flight direction is at 90 degrees to the original direction and the aircraft is 5 km from the first buoy, the fifth buoy is dropped.
The aircraft flies almost a complete circle during the whole array-laying process (see Figure 1), mainly because of its turning limit; this is one of the differences in maneuvering performance between large and medium-sized fixed-wing anti-submarine patrol aircraft and anti-submarine helicopters. The corresponding turning radius differs according to the performance of each aircraft type. For a modern anti-submarine patrol aircraft, once the command and control system outputs the flight path of the array to the tactical navigation system, the flight control system can fly the array-laying pattern automatically.
Model validation
Assume that the initial position distribution of the target in the call search has a mean square error of 1 nautical mile and that the speed of the anti-submarine patrol aircraft is about 400 km/h. The detection range of a single passive sonar buoy is calculated according to the hydrological conditions of the search area. Because the detection range of a single sonar buoy is limited, in an actual aviation anti-submarine search a set of sonar buoys is usually laid to form a search array that matches the dispersion law of the submarine's position and covers the search area as much as possible. In this paper, the simulation verification is based on a circular array of 9 sonar buoys.
First, according to the submarine position distribution law given above and the characteristics of the search task, the aircraft lays a circular search array composed of multiple buoys. The specific steps for simulating the search process on a computer are as follows (a minimal simulation sketch is given after the discussion of Figure 3 below): (1) Input the initial conditions, including the submarine speed (0-24 kn) and the coordinates of each buoy composing the search array.
(2) Generate the initial position of the target for the call search task.

Figure 3. The search probability curve of the 9-buoy circular sonar array for an unknown submarine.

Figure 3 shows the results of the evaluation of the search probability for the circular array of 9 sonar buoys, obtained both with the search probability formula and with the computer simulation method. In Figure 3, the curve with cross marks is the result of the model formula, and the curve with diamond marks is the result of the computer simulation.
As shown in Figure 3, the result calculated by the model is consistent with the simulation results when the anti-submarine patrol aircraft performs the call search task.
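A minimal Monte Carlo sketch of the simulation steps outlined above is given below. It assumes a bivariate normal initial position, a uniformly distributed (unknown) course, a uniformly distributed speed up to 24 kn (a simplification of the speed distribution derived in Section 2), a fixed buoy detection radius, and an illustrative array geometry; all parameter values are assumptions for the example rather than the paper's.

```python
# Minimal Monte Carlo sketch of the call-search simulation outlined above:
# sample an initial submarine position, advance it on an unknown course and
# speed for the call time, and count the trials in which it ends up within
# detection range of any buoy of a circular array. All values illustrative.
import numpy as np

rng = np.random.default_rng(1)

def circular_array(n_buoys=9, radius_km=7.0):
    """Buoy positions: one at the centre plus (n_buoys - 1) on a circle."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_buoys - 1, endpoint=False)
    ring = np.column_stack([radius_km * np.cos(angles), radius_km * np.sin(angles)])
    return np.vstack([[0.0, 0.0], ring])

def search_probability(buoys, sigma0_km=1.852, call_time_h=0.5,
                       max_speed_kmh=44.5, detect_km=3.8, n_trials=100_000):
    # Initial position ~ bivariate normal centred on the datum.
    pos = rng.normal(0.0, sigma0_km, size=(n_trials, 2))
    # Unknown speed (uniform up to the maximum) and uniformly distributed course.
    speed = rng.uniform(0.0, max_speed_kmh, n_trials)
    course = rng.uniform(0.0, 2.0 * np.pi, n_trials)
    pos += np.column_stack([speed * np.cos(course), speed * np.sin(course)]) * call_time_h
    # Detected if the final position lies within range of any buoy.
    dists = np.linalg.norm(pos[:, None, :] - buoys[None, :, :], axis=2)
    return np.mean(dists.min(axis=1) <= detect_km)

print(search_probability(circular_array()))
```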
In order to bring the calculation results of the model closer to the simulation results, corresponding correction coefficients can be added to the model formula. The corrected results are shown in the corresponding figure.
Conclusion
Calculating the search efficiency of a sonar buoy array is a complicated problem. This paper established a calculation model for a circular sonar buoy array searching for a submarine, using the proposed submarine position distribution model. The search probabilities obtained from the assessment model and from the computer simulation method are consistent, which supports the correctness of the model. The results also show that the outcome of the search evaluation is closely related to the submarine position distribution, so the array should be chosen reasonably according to the situation. In practical use, a correction term can be added to the model, which brings the calculation results of the model closer to the simulation results.
|
2019-06-13T13:17:21.176Z
|
2018-01-01T00:00:00.000
|
{
"year": 2018,
"sha1": "f666631e9ffded5fb569576c1f0b1d79184d1583",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/32/matecconf_smima2018_01023.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7f471101b7713760d84ff657b392da735db80a2a",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
}
|
252499953
|
pes2o/s2orc
|
v3-fos-license
|
Aspects of Medieval Japanese Religion
: The focus of this Special Issue is on medieval Japanese religion. Although Kamakura “new” Buddhist schools are usually taken as unquestioned landmarks of the medieval religious landscape, it is necessary to add complexity to this static picture in order to grasp the dynamic and hybrid character of the religious practices and theories that were produced during this historical period. This Special Issue will shed light on the diversity of medieval Japanese religion by adopting a wide range of analytical approaches, encompassing various fields of knowledge such as history, philosophy, materiality, literature, medical studies, and body theories. Its purpose is to expand the interpretative boundaries of medieval Japanese religion beyond Buddhism by emphasizing the importance of mountain asceticism (Shugend¯o), Yin and Yang (Onmy¯od¯o) rituals, medical and so-teriological practices, combinatory paradigms between local gods and Buddhist deities (medieval Shint¯o), hagiographies, religious cartography, conflations between performative arts and medieval Shint¯o mythologies, and material culture. This issue will foster scholarly comprehension of medieval Japanese religion as a growing network of heterogeneous religious traditions in permanent dialogue and reciprocal transformation. While there is a moderate amount of works that address some of the aspects described above, there is yet no publication attempting to embrace all these interrelated elements within a single volume. The present issue will attempt to make up for this lack. At the same time, it will provide a crucial contribution to the broad field of premodern Japanese religions, demonstrating the inadequacy of a rigid interpretative approach based on sectarian divisions and doctrinal separation. Our project underlines the hermeneutical importance of developing a polyphonic vision of the multifarious reality that lies at the core of medieval Japanese religion.
Flexible Taxonomies and Marine Networks
The multifaceted assemblage of religious phenomena, which began with a reconceptualization of the rituals for worshipping the kami after the insurrection of Taira no Masakado 平将門 (?-940) and extended until the first half of the seventeenth century, can be imagined as the mare magnum of medieval Japanese religion. Within the fluidity of this cultural arena the fixity of the protagonists' names, i.e., Buddhism (bukkyō 仏教), Shintō 神道, Onmyōdō 陰陽道, and Shugendō 修験道, hides more than it reveals. A rigid taxonomical approach based on reassuring (although fictive) clear-cut divisions between different nomenclatures simply ends up wiping away all the "twilight zones", "interception points", and "hybrid terrains", where medieval religious traditions mingled, mutually fertilized, and sprouted (or withered) new rhizomes of their constitutive networks. As a whole, medieval Japanese religion can be imagined as an immense fractal, the innumerable facets of which are characterized by a diverse, fluid, and polysemic nature that prioritizes inclusion and hybridity over exclusion and sterilization (cf. Faure 2012, pp. 3-7; 2015a, pp. 10-16; 2015b, p. 7).
And yet, it may be worth asking if the only possible modus operandi of the porous network of Japanese medieval religion had to be invariably denoted by an inexorable fluidity and dissolution of one element into its opposite. The answer seems to be negative because the network, in order to survive, also needs moments of (apparent) crystallization. In these circumstances, certain gods, social actors, sectarian forms, and doctrinal discourses reach an equilibrium, which creates an illusion of stability. A case in point could be the diverse cultural forms associated with those gods which Yamamoto Hiroko defines as "strange" or "alien deities" (ijin 異神), such as Kōjin 荒神, Matarajin 摩多羅神, or Gozu Tennō 牛頭天王 (Yamamoto 2003). Sometimes the names of these gods, whose presence has always informed a plurality of religious taxonomies, give the impression of transforming into fixed stars, the attractive power of which freezes the possible combinations with other nodes of different networks in a sort of immobile perfection. Nevertheless, an active network of gods and human actors prospers only if a certain amount of noise freely circulates within the system. A perfectly stabilized religious network may survive for a few or many years, but stagnation inevitably condemns it to death if noise does not intervene to shake the stasis, provoking unpredictable expansions (or contractions) of the system itself.
For instance, the Onmy ōd ō text Hoki naiden 簠簋内伝 (late fourteenth century) played a crucial role in solidifying certain distinctive functions of Gozu Tenn ō, whose identity of translocal pestilence deity (ekijin 疫神) active in India, China, and Japan was recast as a benevolent calendrical god (rekijin 暦神).Wearing this new mask, Gozu Tenn ō started to be worshiped under the cumulative name of "God of the Heavenly Way" (Tend ōshin 天 道神), an astral and directional god who incorporated the essence of all the other celestial bodies and bestowed good fortune to all his devotees.The Onmyō ryakusho 陰陽略書 (late Heian period) already mentioned the importance of Tend ōshin in conjunction with Tentoku 天徳, another ambulatory deity (yugyōjin 遊行神) whose peregrinations through space determined auspicious directions.According to this text, the correct direction for the placenta disposal (ena osame 胞衣納め) always had to correspond to Tentoku's direction but, in case of accidental transgressions of the regulations against impurity (kinki 禁忌), it could also be placed in the spatial sector of Tend ōshin (Suzuki 2021, p. 165).The Onmyō ryakusho could have influenced the Hoki naiden reinterpretation of the myth of Gozu Tenn ō in which not only the Ox Head god is equated with Tend ōshin, but also Somin Sh ōrai 蘇民 将来, the humble peasant who gave hospitality to Gozu Tenn ō on his way to India to marry the nāga princess Harisaijo 波梨釆女, is identified as Tentoku (Faure 2021, p. 118).
This progressive neutering of the lethal nature of Gozu Tenn ō-who was transformed from a dangerous pestilence deity into an omnipotent astral god of the Onmy ōd ō pantheonreduced his protean character, providing the illusion of what we have previously defined as a moment of crystallization or stasis of the network.On the contrary, even in this period Gozu Tenn ō never stopped transcending taxonomies and continued to expand his divine complexity, being alternatively associated with the Healing Buddha Yakushi Nyorai 薬師 如来, the twelve yaks .a generals of Yakushi's retinue (alias the twelve months or the twelve years of the sexagesimal circle), the Cosmic Buddha Dainichi Nyorai 大日如来, or the terrific deity Jadokkeshin 蛇毒気神.It is important to keep in mind that in all these moments of crystallization there was an original aspect of the god that always remained intact.Since his epiphany, Gozu Tenn ō has always been considered an allochthonous deity (toraijin 渡 来神), which embodied a database of medico-religious knowledge about the etiology and cure of epidemics shared between Tenjiku 天竺 (India), Shintan 震旦 (China), and Japan (honchō 本朝).Even after becoming an Onmy ōd ō directional god, Gozu Tenn ō did not give up his connective function between the archipelago and continental Asia but simply recalibrated its focus on the common heritage of calendric and divinatory techniques for controlling time and space.In other words, this glocal theriomorphic god worked as a sort of Latourian mediator, which generated an epidemiological as well as calendrical fertile terrain where medieval Japan could negotiate its geographical marginality within the paradigm of the Three Countries (sangoku 三国) staying connected with the broad religious culture of mainland Asia (cf.Moerman 2021).
If we pinpoint fluidity as the key feature of medieval Japanese religions, the ultimate zone of contact where all the knots of the religious networks interwove was not the inland part of the archipelago but rather its marine sectors (cf.Simpson 2018).In her recent study on the diffusion of the cult of Shinra My ōjin 新羅明神 in medieval Japan, Sujung Kim describes the sea inlet that unites and at the same time separates Japan from China and Korea as the East Asian "Mediterranean", stressing its centrality as the watery conduit of religious ideas and practices (Kim 2020, p. 2).Likewise, Fabio Rambelli speculates about the marine influence on medieval Japanese religions, defining the sea as a "huge semiotic shifter", which did not simply transport meanings from one shore to the other but also turned upside down the significance of the entities it mobilized, transforming, for instance, purity into impurity and vice versa (Rambelli 2018, p. xvii).
It is not a coincidence that since the twelfth century the port of Hakata in northern Ky ūsh ū became one the most relevant international hub for religious as well as commercial exchanges between Japan and what Andrew Goble calls the sphere of "East Asian macroculture" (Goble 2011, p. xiv).Medieval Hakata hosted sparkling communities of Chinese merchants and monks who also worked as proxies in organizing trading/cultural missions on behalf of the Kamakura shogunate (the H ōj ō 北条, in particular) or powerful temples such as the T ōfukuji 東福寺 in Kyoto.A center of this international marine network was the Munakata Shrine 宗像神社.The male members of its priestly lineages were often married to Chinese women who descended from powerful trading families in Hakata.For instance, in 1235 the Rinzai monk Enni Ben'en 円爾弁円 (1202-1280) was able to realize his project to travel to Song China thanks to the logistical support of Xieguoming 謝國明, a very much respected expatriate from Huangzhou province married to a Japanese woman from Hakata (Goble 2011, p. 10).Seven years later, in 1241, Enni returned to Kyoto again via Hakata, bringing with him not only commentaries on Buddhist scriptures and ritual manuals but also a precious collection of Song medical texts, which constituted part of the T ōfukuji's library.In this temple, his disciple D ōsh ō 道生 (1262-1331) completed his first cycle of medical studies before going to China for perfecting his background as a monastic physiatrist.Later on, D ōsh ō provided an oral transmission (kuden 口伝) about medical treatments to Muj ū Ichien 無住一円 (1226-1312), author of the famous medieval collection of Buddhist tales Shasekish ū 沙石集 (1283).Muj ū in turn orally initiated his Dharma-brother Jissh ō 實照 (d.u.) who finally transmitted these teachings to the Ritsu monk Kajiwara Sh ōzen 梶原性全 (1266-1337).The latter composed two of the most relevant medical treatises of medieval Japan, the Ton'ishō 頓医抄 (1304) and the Man'anpō 万安方 (1327), which were not only intellectually influenced but also logistically connected with the exuberant transmarine culture that had penetrated Japan via Hakata during the Kamakura period (1185-1333) (cf.Macomber 2020).
The medieval Yellow Sea can be conceived as a heterotopic place, i.e., a physical site perennially connected with other real sites through "direct or inverted analogies" (Foucault 1986, p. 3).China ships (karabune 唐舟) were paradigmatic self-propelled heterotopic machines, which shuttled between the geographical nodes of this maritime network after being stuffed with human bodies and goods.In 1323 one of these China ships sank off the southwestern coast of Korea on its route toward Hakata, in what became known as the tragic Shin'an wreck.Among the gargantuan number of commercial items, which constituted the cargo of the karabune, there were more than eight million Chinese and Vietnamese coins weighing about twenty-eight tons.The Kamakura shogunate did not simply plan to use these coins as currency for trading but also as a metallurgic reserve for coating Buddhist statues (Kim 2016, p. 73). 4 Notably, some sections of the cargo were provided with packing notes in the guise of wooden tablets (mokkan 木簡) reporting the names of religious solicitors (kanjin hijiri 勧進聖) connected with the Hakozaki Shrine 筥崎神社 in Fukuoka and the T ōfukuji (Amino 2012, p. 90).This detail shows that medieval Buddhist monks were often at the center of such international business enterprises as exponents of a religious and cultural koinè, which overcame the geopolitical divisions and could be fully exploited for sustaining trading activities abroad as well as domestically.For instance, the kanjin hijiri who were supposed to be among the recipients of the lost cargo of this China ship planned on using some of the less luxurious imported Chinese goods as exotic merchandizing for attracting lay devotees during their itinerant preaching activities in fundraising campaigns on behalf of important temples (cf. Andrei 2018).These examples demonstrate that medieval social actors probably categorized as "religion" certain practices and logic that do not necessarily belong to that conceptual frame according to our contemporary mindscape.On the contrary, medieval actants went through apparently unrelated semantic fields of reality such as sailing or trading to negotiate their conceptualizations of religion.Trying to understand the complex networks of medieval religion requires us to deactivate, at least partially, our contingent forma mentis about what is and what is not a religion in order to include within the framework all those phenomena, which are usually left outside the picture (cf.Iyanaga 2021, pp. 389-92).
The international currents that rippled the water of the sea of medieval religion, were also represented by the substantial trilingualism that characterized most of the written or oral productions of this period.For instance, in the Myōgishingyōsh ū 明義進行集 Shinzui 信瑞 (?-1279), the disciple of the Pure Land master H ōnen 法然 (1133-1212), noted that the latter was accustomed to reciting the three sections of the Amidakyō 阿弥陀経 every day while alternating the Tang reading of the characters (to'on 唐音), the Wu reading (go'on 呉音), and the Japanese paraphrase of the Chinese sentences (kundoku 訓読).In addition to this linguistic base, which was made of classical Chinese (kanbun 漢文) with all its phonetic renderings, there was also Sanskrit (bongo 梵語), the "seed letters" (shuji 種 子) of which functioned as graphic and aural embodiments of buddhas and bodhisattvas as well as magic graphemes and sounds for mantras (shingon 真言) and dhāran .ī (darani 陀羅尼).These two Chinese and Indian scriptural codes were often interwoven within a third autochthonous one.This last written system, in turn, included a multiplicity of other hybrid writing styles, some of which were based on the combination of Chinese characters and phonetic signs (kanji kana majiri 漢字仮名混じり), a writing style often used in the collections of Buddhist tales (setsuwa 説話), while some others show a prevalence of Japanese words (wago 和語).For example, Japanese poems (waka 和歌) lexically prioritized Japanese terms were expressed through phonetic signs rather than Chinese characters.In various ritual contexts such as, for instance, the Ise consecration ceremonies (Ise kanjō 伊 勢灌頂), certain wago contained in waka anthologies were conceived as a sort of Japanese mantra especially suitable for summoning the kami that inhabited the body-mind (kokoro 心) of the humans and were particularly receptive when exposed to the sonic vibrations of specific Japanese words (cf.Uejima 2021, pp. 24-25;Rambelli and Porath 2022).Not only the logic and practices of medieval religion were intimately related to trans-Asian discourses, but also the language through which transmission and reception took place was intrinsically hybrid and multicultural (Dolce 2022, pp. 22-23).
Swinging Kami
The expression "amalgamation between kami and buddhas" (shinbutsu sh ūgō 神仏習 合) defines the combinatory paradigms, which informed the relationships between these two different groups of deities from the mid-sixth century to April 1868 when the Meiji government issued the kami-buddha clarification edicts (shinbutsu hanzenrei 神仏判然令) and forcefully put an end to this companionship.The term shinbutsu sh ūgō appeared for the first time in 1901 in the title of a book by Adachi Ritsuen 足立栗園 (d.u.) called Kinsei shinbutsu sh ūgō ben 近世神仏習合弁 (Suzuki 2020, pp. 39, 233).A few years later in the article "Honji suijaku setsu no kigen ni tsuite" 本地垂迹説の起源について (1907) the famous scholar of Japanese religion, Tsuji Zennosuke 辻善之助 , reused Adachi's expression shinbutsu sh ūgō as a heuristic term for describing the medieval theories concerning buddhas as "original grounds/forms" (honji butsu 本地仏) of the kami, which were, in turn, conceived as their local "traces" (suijaku jin 垂跡神).From that moment on the expression shinbutsu sh ūgō spread among scholars in Japan as well as abroad.It is interesting to note that scholars were able to develop a scientific terminology such as shinbutsu sh ūgō for describing the ubiquitous combinatory paradigms between kami and buddhas only in reaction to the official disentanglement of kami from buddhas (shinbutsu bunri 神仏分離) at the beginning of the Meiji period .In other words, the technical term "shinbutsu sh ūgō" can be taken as an intellectual creation, which was directly triggered by its opposite, i.e., the "shibutsu bunri" phase.
And yet, Adachi Ritsuen did not invent the compound shinbutsu sh ūgō by himself but borrowed it from an expression used by Yoshida Kanetomo in the Yuiitsu shintō myōhō yōsh ū 唯一神道妙法要集 (late fifteenth century).In this text, Kanetomo theorized three types of Shint ō: a lower Shint ō in which kami played the role of avatars of Buddhist deities (honjaku engi shintō 本迹縁起神道), a median Shint ō where the kami of Ise, namely Amaterasu Ōmikami 天照大神, was associated with the Dainichi Nyorai of the Womb and Diamond Realms (ryōbu sh ūgō shintō 両部習合神道), and the highest Shint ō, alias Yuiitsu Shint ō, where kami embodied the original beginning and ancestral source of everything (genpon sōgen shintō 元本宗源神道) (It ō 2021, p. 649).For Kanetomo both honjaku engi shintō and ryōbu sh ūgō shintō portrayed spurious kami whose perfection was oversimplified through the fictionalization of Buddhist narratives as in the first type, or clouded by the esoteric theories of Saich ō 最澄 (767-822), K ūkai 空海 (774-835), Ennin 円仁 (794-864), and Enchin 円珍 (814-891) as in the second type.Kanetomo considered the kami who mixed (sh ūgō) with buddhas and became their avatars as inferior to those local gods who behaved as independent matrixes (genpon) of everything, buddhas and bodhisattvas included.In other words, Kanetomo theorized a non-Buddhist Shint ō where the usual assembling paradigm was turned upside down and the kami functioned as origin while buddhas functioned as traces (shinpon butsujaku 神本仏迹).
It goes without saying that since the moment of its creation toward the end of the Muromachi period (1392-1573), the compound sh ūgō was associated with a critical nuance toward those kami who fell under this category.Moreover, from its rediscovery in the late Meiji period until the end of the Second World War many Japanese scholars kept applying the expression shinbutsu sh ūgō to stigmatize a sort of deterioration in the rituals and logic dedicated to the local gods, which in that historical moment had to be mandatorily conceived as independent autochthonous deities in opposition to the foreign Buddhist ones.After 1945, with the dismantling of the ideological structure of the so-called "state Shint ō" (kokka shintō 国家神道), the term shinbutsu sh ūgō progressively lost its negative nuance.Nevertheless, until very recently Japanese scholars had been using the term shinbutsu sh ūgō for referring to amalgamation paradigms between Buddhist deities and local gods as if this was a unique religious phenomenon limited to premodern, and especially medieval, Japan.On the contrary, Yoshida Kazuhiko has demonstrated how combinatory processes between kami and buddhas in premodern Japan were often synchronous with other similar socio-religious phenomena that happened in a vast geo-cultural sphere extending from Central to South-East Asia.For this reason, Yoshida proposes dismissing the problematic term sh ūgō and adopting the less deprecative, less Japan-centric, and more translocal term "merger" (y ūgō 融合) when referring to the mingling models between Buddhist and local deities in premodern Japan as elsewhere (Yoshida 2021, pp. 11-12).Especially for what concerns medieval Japan, it is possible to say that the hybrid networks between kami and buddhas can be properly appreciated only when local aspects are reconnected to their larger translocal and pan-asiatic dimension.
As mentioned above, the medieval encounter between kami and buddhas was a gradual phenomenon with numerous paradigm shifts taking place in different historical periods. Modifications in the architectural elements of sacred sites provide valuable examples for comprehending the nature of such phases. Already before the tenth century, the shrine-temple multiplexes (jingūji 神宮寺) showed a high level of proximity between buildings for the veneration of the kami and those for worshipping the buddhas. Nevertheless, the latter were built close to the main shrine (shatō 社頭) but still outside its external precinct.
Since the beginning of the tenth century, prefabricated Buddhist buildings started to be temporarily transported and assembled within the shrine's precinct on the occasion of special rituals, at the end of which they were dismantled and returned once again to their original external location. It was only in the eleventh century that Buddhist architecture succeeded in creating a stable hybridization of the worshiping spaces dedicated to the kami. For example, the Shōyūki 小右記 (1032) reports that in 1020 Fujiwara no Michinaga 藤原道長 (966-1027) offered a copy of the Human Kings Wisdom Sūtra (Nin'ō hannya kyō kuyō 人王般若経供養) for the kami of the Kamo Shrine 賀茂神社 in Kyoto. Michinaga also patronized a Dharma assembly (hōe 法会), which was performed by Buddhist monks in the central space between the lower hall for ritual dances (mai dono 舞殿) and the bridge building (hashi dono 橋殿) of the upper hall. This passage of the Shōyūki testifies to the permanent presence of Buddhist buildings such as, for instance, the sūtra repository (kyōzō 経蔵) inside the inner space of the shrine and to the fervent ritual activities performed by Buddhist monks within the same architectural frame (Uejima 2021, p. 583).
In the twelfth century, there was a further transformation in the nature of the Buddhist buildings included within the shrine's territory. Not only sūtra repositories but also stūpa (buttō 仏塔) were built in front of the main halls for the kami. Buttō belonged to an upper class of Buddhist buildings because their internal space enshrined the main statue of a buddha (honzon 本尊), which required Buddhist monks to perform constant rituals on its behalf. During these ritual activities, the architectonic and ceremonial center of the jingūji was materially represented by the buttō and the honzon within it. Because the buttō was built in front of the shatō, it ended up being surrounded by other secondary shrines in the background, the kami of which became architectonically and doctrinally reconfigured as Dharma-protectors (gohōjin 護法神). In other words, the stūpa was protected by a crown of shrines as much as the Buddha was guarded by the kami all around. Nevertheless, it should be underlined that medieval kami never completely gave up their terrifying pre-Buddhist nature as jealous and violent guardians of the land (chinju 鎮守). At the end of the day kami always remained the ultimate owners of the territory (jinushigami 地主神) on which the buttō was erected. Mingling with Buddhist deities, medieval kami simply added a hitherto unknown ethical dimension to their authority, channeling a portion of their herculean power into shielding the buddhas and the Dharma against external menaces. The relevance of such a reconceptualization of kami from chinju to gohōjin marked the turning point between the ancient and medieval periods. In fact, before the ninth century, Buddhist monks not only considered local gods to be unable to actively foster the Dharma, but also in need of being passively exposed to the Buddha's teachings in order to escape the suffering rooted in their status of unenlightened saṃsāric deities (shinjin ridatsu 神身離脱) (cf. Teeuwen and Rambelli 2003, pp. 9-13).
During the Dharma assemblies, there was a ritual moment called the "invitation of the Dharma-protector deities" (gohōjin kanjō 護法神勧請) during which Buddhist monks recited invocatory formulas in kundoku style for summoning the kami to the ceremonial arena. The guest gods were divided into two categories: the group of heavenly beings (tenshu 天衆) and the earthly crowd (jirui 地類). Tenshu presided over the upper realms (jōkai 上界), i.e., the form realm (shikikai 色界) and the formless realm (mushikikai 無色界), while jirui governed the lower realm (gekai 下界), alias the realm of desire (yokukai 欲界), which was also inhabited by the humans. Tenshu included powerful Indian gods such as Brahmā, Indra, the Four Heavenly Kings, nāga Kings, or influential Chinese astral deities such as "the star of fundamental destiny" (Honmyōshō 本令星), which was another name for Shukujin 宿神 or the "birth star" that watches over each person from birth to death. Within the tenshu, there were also Chinese astrological deities such as Zokushō 属星, which represents the annual star in charge of regulating the life (and death) of all the individuals born during that year. In opposition to this Indo-Chinese group of non-Buddhist deities, who had long provided their services for the sake of the Dharma, there was the jirui crowd to which Japanese gods (Daimyōjin 大明神) also belonged, together with a multitude of other chthonian and demonic deities. Unlike the tenshu, the jirui had been included in the Dharma-protectors' category only in recent times and were still perceived as potentially subversive. Once shielded by this curtain of ex-natural-born killers, now hired as bodyguards of the Dharma, monks could finally start expounding the teachings of the Buddha.
At this point, a fundamental question may concern the level of external diffusion of these Buddhist taxonomies about local deities in standard medieval society beyond monastic circles. A possible answer is provided by an oath wooden tablet (kishō mokkan 起請木簡), which was excavated in the Shiotsukō 塩津港 archeological site on the northern side of Biwa Lake in 2006. This oath wooden tablet was composed in 1137 by a business agent, a certain Kusabe no Yukimoto 草部行元, who was in charge of the fluvial transportation of commercial goods on behalf of the governors of the Ōmi domain (present-day Shiga prefecture). In the upper part of the tablet, Yukimoto wrote a list of names of powerful Indian deities, specifying their connection with the upper realms. In the middle part, the agent copied the names of those local deities who belonged to the lower realm, mentioning Hachiman Daibosatsu, Kamo shimo kami 賀茂下上, all the countless Daimyōjin, the Sannō shichisha 山王七社 who protected the Ōmi domain, and the four guardian deities of Shiotsukō: Gosho Daimyōjin 五所大明神, Inakakehafuriyama 稲懸祝山, Tsu Myōjin 津明神, and Wakamiya Sansho 若宮三所. Finally, in the lower part of the tablet, Yukimoto wrote that he had sworn in front of all the great and small kami (daishō no kami 大小神) of Japan not to lose even one single piece of baggage, or else "may the punishment of (all the above mentioned) kami penetrate the eighty-four thousand pores (of my body)" (goshinbachi wo hachiman shisen ke-kuchi-ana goto kaburubeku to mōsu 御神罰ヲ八万四千毛口穴如かふるへくと申) (Uejima 2021, pp. 602-3). This oath wooden tablet demonstrates that already in the first half of the twelfth century lay members of subaltern classes adopted Buddhist taxonomies for displaying hierarchical relationships between different classes of local deities. In the specific case of the Shiotsukō kishō mokkan, the usual list of Indian and Japanese eminent gods was further customized in order to also include the names of the four kami who ruled over the village. It is also interesting to consider that all these non-Buddhist translocal as well as local deities were not invoked by Yukimoto as benevolent protectors but rather as sharp-eyed, threatening guarantors of his honesty who were ready to immediately disintegrate his body in case of transgression. In short, the hieratic aura of medieval kami derived from the fact of being conceptualized, at the same time, as moralized guardians of the Dharma and, if needed, ruthless killers of humans.
The second half of the Insei period (1086-1192) was the moment when the original grounds (honji)/traces (suijaku) system stabilized and started laying the doctrinal foundations for creative interpenetrations between buddhas and kami.Nevertheless, it is important to keep in mind that these combinatory trends were already implemented by Buddhist monks since the early eleventh century.For instance, in the Kanjō goganki 灌頂御願記 (1039) Ningai 仁海 (951-1046) writes that when he was employed at the court as Shingon monk for the protection of the emperor's body (gojisō 護持僧) every night he stood in front of the twenty-one shrines dedicated to the emperor-protecting kami performing invitation ceremonies for these gods (Kanjō goganki 1978, p. 494).During this delicate operation, Ningai mobilized the kami by reciting the "magic formula of the original ground" (honji ju 本地呪), i.e., the mantra of the original buddha or bodhisattva associated with each individual kami venerated in the twenty-one shrines (Uejima 2021, p. 601).This means that by the early eleventh century Buddhist deities played a pivotal role in allowing human actors to handle extremely powerful kami who were willing to act as suijaku jin of a fixed array of honji butsu.Now, it is worth asking if the honji suijaku paradigm was just a simple binary system based on the dyadic union of buddhas and kami.Of course, it was not.Each combinatory pattern was similar to a sexual intercourse between two different deities who gave birth to a third god who was, at the same time, equal to and different from the archetypal parental copula.Once plunged into the honji suijaku system, both buddhas and kami experienced a partial loss of their original identity and a partial gain of a new personality.The hybrid god generated by the interweaving of a buddha with a kami should be considered an independent deity, namely neither a complete buddha nor a complete kami but a continuous mixage between the two original poles.Yet, are we sure that the generating gametes had always to be limited to two?To embrace each other, medieval buddhas and kami often needed external intermediaries such as rocks, ritual tools, paintings, clothes, human bodies, stars, equinoxes, animals, which did not limit themselves to passively fostering monogamous intercourses between Buddhist deities and local gods but actively interfered with the original dyad transforming a stable binary grid into a multi-nodal fluid network.Therefore, as we have already seen in the case of Gozu Tenn ō, medieval combinatory deities can be conceived as "interstitial gods" or "swing-geometry gods" (divinità a geometria variabile).These hybrid gods never perfectly overlapped with, but always attracted, a multitude of heterogeneous elements such as other gods, natural elements, or even human actors, which contributed to transforming these interstitial deities into something such as elusive gravitational centers continuously percolating toward the margins (cf.Rappo 2017).
Between the eleventh and twelfth centuries, another crucial phenomenon started taking place in the medieval religious landscape. Classical mythological narratives about the kami reported in the Nihon shoki 日本書紀 (720) went through an extremely creative process of exegesis. This is what Fabio Rambelli defines as a moment of "decontextualization" and "reconfiguration" of the kami of the ancient period, which were employed as moldable casts for forging new, highly hybridized medieval kami (cf. Rambelli 2017, p. 238). The technical term "medieval myth" (chūsei shinwa 中世神話) specifically points out the removal of certain kami from the historical context of the ancient period and their semantic re-qualification within the medieval legitimizing discourses, most of which were explicitly linked to new models of imperial authority. It goes without saying that Buddhist deities and, in certain cases, even Hindu gods such as Viṣṇu (known via Buddhist texts) played a pivotal role in re-shaping the semantic values of medieval kami (cf. Iyanaga 2009, pp. 264-68).
Toward the thirteenth century, further developments of medieval mythology led to the creation of teaching groups (kyōdan 教団) or doctrinal currents (ry ūha) between kami ritualists who started producing theories (shintō setsu 神道説) about specific kami and their cultic sites such as Amaterasu and Ise Shrine (Ise Shint ō 伊勢神道, Ry ōbu Shint ō 両部神道), the kami of Mt.Miwa (Miwa-ry ū Shint ō 三輪流神道) (cf.Andreeva 2017), or those of Mt.Hiei (Sann ō Shint ō 山王神道).These ry ūha reached their apogee in the fifteenth century when Yoshida Kanetomo created the Yuiitsu Shint ō, assembling together heterogeneous theories and practices borrowed from esoteric Buddhism (Mikky ō 密教), Daoism, Onmy ōd ō, and even Confucianism.A bird's eye view of all these devotional streams, which focused on extreme theoretical and ritual customizations of certain kami and their worshiping spaces, reveals that medieval Shint ō was fundamentally an acephalous structure.Namely, it lacked a centralized doctrinal headquarters regulating the cultural productions of all the other religious centers scattered on the archipelago.In spite of the contemporary highly politicized narratives about Amaterasu and Ise Shrine as guardians of an elusive Shint ō orthodoxy since the dim and distant past, the ultimate characteristic of medieval Shint ō is exactly its infinite fragmentation and decentralized proliferation of heterodoxies (in the etymological sense of eterodoxéo or "think different") about the kami.
If we want to stigmatize a trend in the thirteenth-century conceptualizations concerning the local deities, it is impossible to overlook the influence wielded by the theories on the Dharma-nature body (hosshōshin 法性身) of the buddhas. 5According to these discourses, the imperceptible physicality of the kami, Amaterasu in primis, was described as perfectly coincident with the Dharma-nature body, which constitutes the most elevated type among the three buddha's bodies.Such a convergence between local deities and the buddhas' Dharma-nature body triggered a process of universalization of the kami who expanded their classical role of undisputed guardians of localized areas, becoming matrixes and masters of the entire cosmos.In other words, kami ritualists (the majority of whom were Buddhist monks) exploited the archetypal aspect of the Dharma as transcendental origin of everything for upgrading the local dimension of the kami and elevating them to the rank of omnipresent cosmogonic forces.For example, the Nakatomi no harae kunge 中臣祓訓解 (late twelfth century), which is considered the first text of Ry ōbu Shint ō, reports that a kami (specifically Amaterasu) is the same as the Dharma-nature body and, therefore, should be referred to as "august kami of the great origin" (daigen sonshin 大元尊神) (It ō 2021, p. 649).
It is interesting to compare this statement of the Nakatomi no harae kunge with a passage of the Sōshobun 惣処分 (1280) in which Kujō Michiie 九条道家 (1193-1252) describes the preparatory procedures for making a human-size wooden statue of Dainichi Nyorai, which had to be enshrined in the main hall of the Kōmyōbuji 光明峯寺, his funerary temple, at Higashiyama in Kyoto.6 Michiie nominated Enni as religious supervisor of the project, while the sculptor Kōkei 康慶 (d.u.) was asked to carve the statue of Dainichi Nyorai, the wooden materiality (literally, "wooden garment" or misogi 御衣木) of which came from the "pivot post" (shin no mihashira 心御柱) hidden under the floor of the Inner Shrine (Naikū 内宮) at Ise.7 Once removed from its original chthonian dimension, the pivot post was sealed against external sight with protective curtains and placed in front of the Aramatsuri Shrine (Aramatsuri no miya 荒祭宮) to receive pacificatory rituals, which took place during the night at the Hells-valley (Jigokudani 地獄谷) (Teeuwen and Breen 2017, pp. 40-47). These precautions were necessary because the shin no mihashira embodied the powerful rough spirit (ara mitama 荒御霊) of Amaterasu (Ogawa 2014, pp. 189, 203-4). Creating a statue of Dainichi Nyorai out of the wooden pillar in which resided the ara mitama of Amaterasu did not simply produce a sense-able conflation between the Dharma-body (hosshin) of the honji butsu and the original enlightenment (hongaku) of the suijaku jin but also pushed this material symbiosis a step further. Being a kami of the lower realm (gekai), Amaterasu shared with humans a profound understanding of the three poisons (sandoku 三毒) that affected the existence of all the sentient beings in the six saṃsāric paths of rebirth.8 Becoming one with the violent aspect (ara mitama) of Amaterasu, the cosmic buddha Dainichi Nyorai could channel the aggressive power of the kami against those spiritual obstacles that prevented the human body-mind from reaching salvation. Michiie probably wanted to enshrine this highly customized statue of Dainichi Nyorai in his funerary temple for its ability to rely on a double soteriology. One salvific discourse was based on the universal and perfect enlightenment of a cosmic buddha, while the other one focused on the eschatological help provided by the wrathful spirit of a kami interpreted as a great expert of human things and, therefore, an ideal savior of the dead.
Between the fourteenth and fifteenth centuries, the Dharma assemblies declined and a different combinatory pattern appeared on the scene.According to this new model, buddhas and bodhisattvas could voluntarily "dim their radiance and become identical to the dust" (wakō dōjin 和光同塵), namely downgrading themselves to the level of kami, for saving those sentient beings who were born in the age of degeneration of the Dharma (mappō 末法).As a result of the wakō dōjin theory, kami faced an intense process of internalization.That is to say, kami were not simply conceived as universal entities due to their correspondence with the Dharma-body of the buddhas but also as internal rulers of the body-mind (kokoro) of the humans.Inhabiting the body-mind, kami were able to freely orient it toward positive (as well as negative) actions.For example, in the Noh play Miwa 三輪 (1465) Konparu Zenchiku 金春禅竹 (1405-1470) depicts Miwa My ōjin 三輪明神 not only as a spiritual guardian of human morality but also as a compassionate mediator who voluntarily chose to descend into a female body for taking over the five hindrances (goshō 五障) associated with this specific type of human rebirth (Uejima 2021, p. 621). 9Zenchiku's play marked the ultimate detachment of the medieval kami from the old shinjin ridatsu paradigm, which characterized the Buddhist approach toward local deities during the ancient period.This Noh play shows how by the fifteenth-century kami were not merely transformed into universal and ethically internalized entities but also became able to absorb within their bodies the psychophysical suffering of the humans for guiding them toward the final liberation.From the ancient to the medieval period, the body of the kami was radically reinterpreted from a site of sorrow into a site of enlightenment.
And yet, it is crucial to keep in mind that the linearity of the above-described transformation processes concerning medieval local gods is more fictive than real.Namely, there were myriad kami who furiously refused to be paired with buddhas or to undergo universalizing, internalizing, and moralizing metamorphosis.These die-hard "real gods" (jissha 実者) embodied the intrinsic resilience of the kami against the Buddhist combinatory paradigms of the honji suijaku.Jissha were not just a thorn in the flesh of the assembling networks between buddhas and kami but also constituted a precious "noise" thanks to which the system was perennially bound to reorganize its ultimate structure.The jissha category also included a lot of the so-called "moot" or "transversal deities", i.e., powerful entities inhabiting the area of undecidability between buddhas and kami, such as vengeful spirits (onryō 怨霊), possessing spirits (mono no ke 物の怪), animal spirits of foxes, snakes, crows, rats, scolopendras, long-nosed goblins (tengu 天狗), or water goblins (kappa 河童) (cf.Teeuwen and Rambelli 2003, pp. 24-25;Castiglioni 2021).These numinous actants too encroached sometimes peacefully, some other times pugnaciously, with kami and buddhas, demanding for themselves appropriate ritual procedures and devotional protocols.
If we keep focusing on the issues where the assemblage between buddhas and kami struggled to maintain its apparent unity, death turns out to be another extremely revealing terrain. For buddhas, the human corpse was a sort of organic stage on which to perform the ideal of compassion (jihi 慈悲); on the contrary, for the kami, the cadaver was a polluted element against which they manifested their aversion to impurities. Some kami never abandoned their repulsion for death defilement and demanded to be treated differently from the buddhas on such matters. Nevertheless, other kami were more prone to adopt a Buddhist approach even toward death. For instance, the Hosshinshū 発心集 (1215) reports a tale in which the kami of Hie 日吉 tells a monk, who is afraid to continue his pilgrimage because he had to bury the corpse of a woman along the way, not to fear its curse (tatari 祟り). The kami of Hie explained that the ritual protocols against impurity before visiting shrines were simply "provisional skillful means" (kari no hōben 仮の方便) to help humans understand the importance of purity, which ultimately coincided with sympathy (awaremi 憐れみ). In a later spurious version of the same text, the kami adds that the anti-pollution behavior of the gods actually works as a trigger for infusing into human beings the sentiment of "disgust for the contaminated world of the present and desire for the Pure Land" (enri edo gongu jōdo 厭離穢土欣求浄土). Another striking example of a kami who agreed to manifest himself in close proximity to death pollution is reported in the Byakugōji issaikyō engi 白毫寺一切経縁起 (1335). The Byakugōji was a Ritsu temple built on the old sacred site (chinza chi 鎮座地) dedicated to the cult of Kasuga Myōjin 春日明神 before the deity was ritually transferred to Kasuga Shrine 春日神社 after his ancient pavilion had been incinerated by a thunderstorm. The Byakugōji engi reports that upon visiting this site Kūkai expressed the intention of transforming this territory into a burial ground for making ritual offerings on behalf of the dead (bōkon kuyō 亡魂供養). At that moment Kasuga Myōjin not only granted Kūkai the privilege of turning his sacred place into a cemetery but also assured an active collaboration in guiding the spirits of the dead toward the final liberation. Once a year, during the section for inviting the deity (jinbun kanjō 神分勧請) to assist the monastic assembly for the Buddhist canon (issaikyōe 一切経会), the Byakugōji monks invoked Kasuga Myōjin. On this occasion, they re-staged the original gift of the land from the kami to Kūkai and again asked Kasuga Myōjin to keep interceding for the salvation of the spirits of the dead. This sacred representation implied that Kasuga Myōjin was ritually invited to descend directly onto Byakugōji's burial ground (shidarin 尸陀林) among the corpses (shikabane 尸), marking the semantic shift of a polluted site into a field of merit (shōchi 勝地) and the overcoming of death in favor of salvation (saido 済度) (Funada 2018, pp. 324-26, 331-34).
Symbiosis, Transversality, and Rivalry: Shugend ō and Onmy ōd ō
If we prioritize a holistic approach toward Japanese medieval religious traditions, it is impossible not to take into account Onmyōdō and Shugendō, which were among the privileged interlocutors of the exoteric Buddhist (kengyō 顕教) schools in general, and of the esoteric ones, i.e., Shingon and Tendai, in particular.
Onmy ōd ō encompasses a vast arrangement of techniques for divinatory (senjutsu 占 術), geomantic (sōchi 相地), incantatory (jujutsu 呪術), calendrical (reki 暦), and astrologic (tenmon 天文) purposes, which were transmitted from China via Korea to Japan since at least the seventh century.In the ninth century, the imperial court established the Yin-Yang Bureau (Onmy ōry ō 陰陽寮) where Yin-Yang masters (onmyōji 陰陽師) served as officials (kannin 官人), together with Buddhist monks who were allowed to suspend their monastic duties to temporarily hold this position.Akazawa Haruhiko defines this sort of ministerial Onmy ōd ō as "narrow Onmy ōd ō" because it was limited to the cultural environment of the imperial court.Nevertheless, since the tenth century, literary sources such as the Konjaku monogatari sh ū 今昔物語集 (completed in the twelfth century) highlight the presence of what could be defined as a "diffused Onmy ōd ō" (Akazawa 2021, p. 50).In this enlarged cultic environment, onmyōji cooperated with mountain ascetics (yamabushi 山伏), esoteric masters (ajari 阿闍梨), blind monks (mōsō 盲僧), holy men (hijiri 聖), and itinerant specialists of folk performing arts such as the so-called "auditors" (shōmonji 唱門師) for providing divinatory practices to extra-court groups such as the urban mid-low-rank aristocracy, provincial warrior aristocracy, and even members of subaltern classes.
During the Insei period, the symbiosis between Onmyōdō and esoteric Buddhism became so intense that it is almost impossible, and even counterproductive, to try to disentangle one from the other (cf. Kanagawa Kenritsu Kanazawa Bunko 2007; Kotyk 2018). Buddhist monks used to define Onmyōdō procedures as "external rituals" (gehō 外法), underlining the fact that the Buddhist canon did not include Onmyōdō protocols because they did not originally refer to Buddhist deities. Nevertheless, this was simply a constative judgment, not a qualitative one, and the fruitful interaction between these two traditions can be analyzed, for instance, in the esoteric rituals based on the divinatory board (shikiban 式盤). These types of rituals were collectively known as "board rituals" (banpō 盤法) and were particularly performed by the esoteric masters of the Ono lineage (Ono-ryū 小野流) of the Shingon school. During the banpō, the material structure of the divinatory board symbolized the universe as well as the human body. The spherical upper part of the shikiban was specifically associated with the vault of heaven (tenban 天盤) and the human head, while the square part beneath represented the earth (jiban 地盤) and the lower part of the body. Ajari applied these board rituals to deal with special classes of esoteric deities such as Kangiten 歓喜天, Godai Kokūzō 五大虚空蔵, Godai Myōō 五大明王, Nyoirin Kannon 如意輪観音, or Shinko-ō Bosatsu 辰狐王菩薩 (alias Dakiniten 荼枳尼天).
In the lower part of the shikiban, there were the twelve animals of the Chinese zodiac (j ūni shi 十二支), the twenty-eight celestial mansions (nij ūhasshuku 二十八宿), and the thirty-six birds (sanj ūroku kin 三十六禽) associated with the twelve animals, which preside over the diurnal and nocturnal sections of the day.For instance, a board ritual dedicated to Shinko-ō Bosatsu was known as the "ritual for the sudden obtainment of supernatural powers" (tonjōshicchi hō 頓成悉地法) and was performed for the immediate realization of any desire as well as for creating insurmountable obstacles against enemies.In other rituals, Shinko-ō Bosatsu/Dakiniten was supposed to transform into a jewel (hōju 宝珠) and, by extension, into a relic of the Buddha (busshari 仏舎利).The presence of the jewel/relic was materially evoked, enshrining in the central space of the shikiban a manufactured relic or a paper talisman (fu 付) showing the written characters for relic.The ritual manual Ban kenritsu saigoku hisohiso ch ūsho betsuden 盤建立最極秘々中書 別伝 (extant copy of the fourteenth century) specifies that since the wooden part of the shikiban represents the bones of the human body, the insertion of a relic within it corresponds to the installation of the "spiritual bone" (reikotsu 霊骨) within the body.In other words, the shikiban was conceptualized as a living object, the materiality of which overlapped with the human physicality and the living body (shōjin 生身) of the Buddha (Iyanaga 2018;Takahashi 2020, p. 193).
Moreover, the association between Dakiniten and the jewel/relic derived from the fact that Dakiniten could manifest herself as a scavenger devouring the hearts of corpses. The heart stores the "vital energy" (literally, the "human yellow", ninnō 人黄), by eating which Dakiniten is able to discern the good or bad karmic load of the dead and the consequent reward or punishment. The shape of this mysterious "human yellow" was said to resemble a jewel or a relic. These two objects, in turn, were interpreted as the constitutive elements of the human as well as the cosmic body (Trenson 2012, p. 113; Iyanaga 2016).
Nevertheless, the relationship between Onmy ōd ō and esoteric Buddhism was not merely characterized by a smooth symbiosis but also creative tension.For example, at the court Yin-Yang masters had to compete with "destiny-star masters" (sukuyōshi 宿曜師) who were a special type of Buddhist monk specializing in Indian divinatory techniques focused on the cult of the "birth-star deity" (Honmy ōsh ō) as well as other stars, the moon, and the spirits of eclipses.Such astrological knowledge was formally included in the teachings of the Buddhist canon.Aristocrats gave sukuyōshi great consideration because they were supposed to be able to predict and influence the destiny of a person.For these reasons sukuyōshi were often consulted before making important decisions and each noble often developed an intense relationship with his own destiny-star master.In the Insei period the competition between onmyōji and sukuyōshi reached its climax and in 1038 sukuyōshi were prevented from participating in calendrical activities, which became a prerogative of those onmyōji who belonged to the Kamo 賀茂 family.Although they lost this battle, sukuyōshi remained the undisputed authority for rituals concerning the Ursa Major or Northern Dipper (hokuto 北斗) and the elimination of inauspicious influences due to the changes in the transit of Jupiter (saisei ku 歳星供) and Saturn (chinsei ku 鎮星供).It is beyond doubt that many Onmy ōd ō rituals were directly influenced by the theories and practices of the Sukuy ōd ō 宿曜道.For the onmyōji, the possibility to import Sukuy ōd ō techniques into their ritual protocols also meant to be able to add a Buddhist ultra-mundane touch to a doctrinal system, which was, otherwise, prevalently focused on the mundane dimension.
During the thirteenth century, onmyōji started providing geomantic practices in conjunction with practical suggestions concerning agricultural techniques for maximizing the productivity of the fields.This aspect shows how in the medieval period religious and techno-scientific discourses invariably proceeded hand in hand without epistemic breaks.The thirteenth century also marked a substantial expansion of the onmyōji presence in every sector of society including the Kamakura bakufu where a spurious branch of the Abe 阿倍 family monopolized most of the divinatory activities besides the calendric ones, which were exclusively administered by the Kamo at the imperial court of Kyoto.In the fourteenth century, this situation pushed some popular onmyōji of the Kant ō region to start a cooperation with esoteric monks and Shugend ō practitioners (shugenja 修験者) for producing calendars independently from the court.These semi-clandestine Onmy ōd ō calendars of the Kant ō region were not only extremely accurate but also included information on lunar phases for the sowing period, the occurrence of bissextile months, and divinatory protocols for supporting warfare strategies (heihō 兵法) (Akazawa 2021, p. 56).The fact that the countryside onmyōji could bypass the court of Kyoto and even the Kamakura bakufu in putting autonomous calendars into circulation highlights their pervasiveness in the urban as well as rural society.
The Onmy ōd ō influence also had a deep impact on kami-related practices such as the ritual for the "great purification" (ōharae 大祓).In the medieval period, this ceremony took place close to the Shujakumon 朱雀門 Gate in the outskirts of Kyoto during the last day of the sixth and twelfth month.The ritualists of the Ministry for the Gods (Jingikan 神祇官) recited invocatory formulas, namely the Nakatomi no harae 中臣祓, on behalf of the kami for eliminating every sort of defilement and producing an ideal condition of permanent purity.After this initial phase of the ritual, onmyōji performed other incantatory and apotropaic formulas such as the "water-facing ritual" (karin no harae 河臨祓), which was subsequently included in the esoteric "six-syllables water-facing ritual" rokuji karinhō 六字河臨法 (cf.Lomi 2014).This conclusive part of the service became so relevant that the entire leadership of the ritual progressively shifted from the kami ritualists of the Jingikan to the onmyōji of the Onmy ōry ō, marking a transformation in the conceptualization of purity from the ancient to the medieval period.If we take into account the Nakatomi harae kunge it is clear that Onmy ōd ō and esoteric Buddhist deities played a pivotal role also in the associations of names for invoking the "four purification gods" (haredo shijin 祓戸四 神) of Ise.For instance, according to this text Seoritsuhime 瀬織律比咩 paired with Enma H ō ō 焰魔法王, Hayaakitsuhime 速開都咩神 was associated with God ōdaijin 五道大神, Ibukidonushi 気吹戸神 matched with Taizan Fukun 大山府君, and Hayasasurahime 速佐 須良比咩神 was accompanied by Shimei 司命 and Shiroku 司録.Being associated with five post-mortem deities (meijin 冥神), namely one Buddhist deity (Enma) and four Onmy ōd ō deities (God ōdaijin, Taizan Fukun, Shimei, and Shiroku), the four purification kami of Ise developed the capacity of saving the spirits of the dead.In other words, coming into contact with Onmy ōd ō and Buddhist gods, the medieval kami of Ise stopped being simply invoked for guaranteed ritual purity but were also requested to provide benefits for humans after death.This detail underlines a fundamental shift in the notion of harae from the ancient to the medieval period when purity did not only mean a stable absence of defilement but also an active soteriological support.Moreover, this transition shows how medieval Onmy ōd ō should not be narrowed down to mere religious taxonomies to interpret reality but should also be analyzed within the vast panorama of devotional practices concerning a rich pantheon of deities (It ō 2021, p. 352).
As we have seen in the previous sections, mountain ascetics were often ritual companions of onmyōji who operated outside the court. For instance, in the Sangōshiiki 三教指帰 (797) Kūkai writes that during his ascetic training as a young monk (shamon 沙門) on the mountains of Shikoku he performed the elaborate Kokūzō gumonjihō 虚空蔵求聞持法 ritual, at the end of which Venus (Myōjō 明星) appeared in the sky over the side altar dedicated to Myōjō Tenshi 明星天子. For Kūkai the manifestation of Myōjō marked his perfect achievement of the Dharma. This is an interesting aspect because the canonical sūtra concerning the ritual protocol of the gumonjihō, i.e., the Kokūzō Bosatsu nōman shogan saishōshin darani gumonjihō 虚空蔵菩薩能満諸願最勝心陀羅尼求聞持法, does not mention this celestial body. It is highly plausible that Kūkai followed a customized local version of the gumonjihō, which was influenced by the doctrinal discourses of local onmyōji as well as ascetic groups operating in the Shikoku mountains. This version of the gumonjihō must have been relatively popular among monastic circles because Myōan Yōsai 明菴栄西 (1141-1215) also mentions the same apparition of Myōjō and underlines the existence of a great number of oral transmissions about the correct procedures for performing this ritual. For instance, the Genshidan hishō 玄旨檀秘鈔 (ca. 1604), which also includes some medieval Tendai secret teachings about the gumonjihō, specifies that Myōjō represents the amalgamation between Nittenshi 日天子 as Yang element and Gattenshi 月天子 as Yin element. Moreover, Myōjō creates the union between principle (ri 理) and knowledge (chi 智), between delusion and enlightenment (meigo 迷悟) as well as supreme enlightenment (daigo 大悟), and can ultimately be considered a celestial transformation of Kokūzō (Ogawa 2021, pp. 301-3).
The first traces of these ascetic traditions date back to the Nara (710-794) and Heian (794-1185) periods, when some autochthonous practitioners (gyōja 行者) embraced new forms of asceticism included in the so-called "Buddhism of mountain forests" (sanrin bukkyō 山林仏教) while others remained focused on non-Buddhist practices. Toward the beginning of the fourteenth century, some of these miscellaneous ascetic groups gathered together within a sectarian movement known as Shugendō, which was integrated within and, at the same time, distinguished from the network of exo-esoteric Buddhist teachings and institutions.
The compound shugen does not appear in the Buddhist canon and emerged for the first time during the early tenth century for indicating the achievement of "marks" (shirushi 験) through specific "practices" (osameru 修).These "marks" referred to the miraculous powers that characterized the ascetic's body and could be generated by performing austerities principally on mountains but also at riversides and seashores.Fundamental concepts such as the notions of self-discipline (tsuta 斗藪), meditation (zenjō 禅定), and abstinence (jōgyō 浄行), which played a pivotal role in the sanrin bukkyō of the Nara period, wielded an important influence on the conceptualization of shugen during the Heian period.In fact, until the fourteenth century, the semantic aspect of the term shugen kept constantly swinging between Buddhist theories and non-Buddhist religious discourses as demonstrated, for instance, by the ascetic groups called "those of the hall" (dōshu 堂衆).Even if these ascetics actually resided at large temples such as K ōfukuji or T ōdaiji they were considered neither fully ordained monks nor laymen.Moreover, although performing ascetic practices on mountains, dōshu never defined themselves as shugenja.The major ritual activities of the dōshu concerned exorcisms as well as apotropaic rituals, which implied only a partial use of Buddhist ritual technologies mostly limited to the recitation of mantras.In order to accumulate ascetic power for performing these religious services for their patrons dōshu had to make two annual self-seclusion rituals on mountains (sanrō 山籠), the first of which was called tōgyō 当行 and the second "mountain-entry" ritual (ny ūbu 入峰).It is plausible that the point of contact between Buddhist and non-Buddhist/autochthonous ascetic practices was constituted by the shared goal of achieving an individual process of empowerment (genriki 験力) by going through a set of psychophysic exercises, which could have been inspired by Buddhism as well as other religious discourses (cf.Kawasaki 2021;Tokunaga 2021;Kikuchi 2020;Castiglioni et al. 2020).
Geza 験者 (literally, "persons with powers") were among those interstitial religious practitioners who can perhaps be defined as "proto-shugenja". Some geza were considered low-rank esoteric masters (ajari) who specialized in reciting mantras and incantatory formulas for subjugating malicious spirits such as the mono no ke, who possessed humans and provoked lethal diseases, or the gyakuhei 瘧病, who spread malarial fevers. Until the mid-Heian period, the enraged kami who originated epidemic outbreaks were not directly treated with Buddhist ritual technology such as the binding power of mantras (jubaku 呪縛) but were pacified through Onmyōdō divinatory practices. The rationale behind this ritual differentiation was to avoid irritating an already furious kami by limiting its agency through the constrictive power of Buddhist spells. After the Insei period this approach to the kami of pandemics changed, and geza started conjoining their vocal weapons with those of the onmyōji to subjugate every type of malevolent spirit or kami (Koyama 2021, pp. 399-400).
Tokunaga Seiko argues that, in the Heian period, some geza could also have been included in the category of the "Dharma-seal Yin-Yang masters" (hōshi onmyōji 法師陰 陽師), which brought together mixed religious professionals of Buddhist as well as non-Buddhist incantatory formulas (Tokunaga 2021, pp. 212-13).In other words, geza could be low-rank ajari or low-rank onmyōji and their distinctive mark (shirushi) was constituted by the voice, which became the somatic site where all the merit of the ascetic practices converged. 10Geza probably performed non-Buddhist (or non-exclusive Buddhist) forms of asceticism for obtaining extraordinary powers, which were partially channeled into Buddhist or Onmy ōd ō ritual technologies such as mantras, induced possessions (hyōi 憑依), or divinations for healing rituals and exorcisms (kitō 祈禱) (cf.Tinsley 2014).For instance, the Uji sh ūi monogatari 宇治拾遺物語 (thirteenth century) reports the story of a geza who subjugated the evil spirit (mono no ke) of a fox, which penetrated within the body of a person and caused a disease.In this case, the geza recited mantras with his empowered ascetic voice, forcing the mono no ke to transfer into the vessel-body of his youthful attendant (dōji 童子) for being interrogated and providing temporary relief for the exhausted patient. 11 Once the mono no ke was inside the dōji who acted like a human receptacle for spirits (mono tsuki 物つき), the geza was able to communicate with it and to apprehend that the curse was provoked by a hungry female fox living in a nearby mound (tsuka 塚), which was looking for food to feed her pups.The geza ordered the preparation of some rice cakes to be tied up to the waist of the dōji.As soon as the spirit of the fox became attracted by these food offerings the geza summoned his Dharma-protector spirit (gohō) for chasing the mono no ke of the fox and making sure that it did not come back to haunt the patient (Uji sh ūi monogatari 1996, pp. 146-47).
It is probable that in the early fourteenth century, under the influence of Buddhist reformation movements (bukkyō kakushin undō 仏教革新運動) and precepts restoration move-ments (kairitsu fukkō undō 戒律復興運動), such hybrid ascetics-including the geza who were expert in Buddhist as well as non-Buddhist ascetic practices and ritual technologies-were absorbed into more rigid Buddhist institutional and doctrinal frameworks transforming into shugenja (cf.Tokunaga 2021, pp. 212-13).
An illustration of the Tengu zōshi 天狗草子 (Onj ōji 園城寺 version, 1296) provides an excellent visual description of the inclusion of Shugend ō within what Kuroda Toshio 黒 田俊雄 defined as the medieval exo-esoteric Buddhist system (kenmitsu taisei 顕密体制).This image shows an assembly of ten tengu seated on a tatami ring, dressed in ritual clothes, all facing the central empty space of the floor enclosed within the green mats' braids.Each tengu embodies a sectarian path with its specific set of practices for realizing the buddhahood, which is represented by the empty center in the middle of the room's floor.Although tengu usually symbolize arrogance (kyōman 橋慢), which constitutes an obstacle toward enlightenment, in this painted scroll their negative valence is humorously turned upside down and the flying creatures are compared to diligent Buddhist practitioners.On the highest tatami, there is a tengu called "the lamp of the Dharma of all sects and the exo-esoteric pillar" (shosh ū hottō kennmitsu tōryō 諸宗法燈顕密棟梁) who represents the Shingon school and the fundamental unity of the exoteric as well as esoteric traditions.On his right, another tengu is indicated as the head of Tendai school (Tendai kanju 天台 貫首).Only these two tengu share the highest tatami.On the middle mat, there is a tengu called Atago Tar ōb ō 愛宕護太郎坊 who is considered the chief of all the tengu.On the lowest tatami, there are seven more tengu: the Kegon-master tengu (Kegon sōshō 華厳宗 匠), the enlightened Zen-master tengu (tokuhō zenshi 得法禅師) together with his young acolyte-tengu, the Sanron-scholar tengu (Sanron gakutō 三論学頭), the Hoss ō-eminent monk tengu (Hoss ō sekitoku 法相碩徳), the Shugend ō tengu who bears the title of "Superintendent of the Three [Kumano] Mountains" (Sanzan kengyō 三山検校), the Ritsu precept-master tengu (jikai risshi 持戒律師), and the holy man tengu of the recitation of the Buddha's name (nenbutsu shōnin 念仏上人) (Tengu zōshi 1993, p. 
140).The fact that the Shugend ō tengu was explicitly associated with the institutional role of superintendent of the three Kumano Shrines (Hong ū 本宮, Shing ū 新宮, Nachi 那智) indicates that toward the end of the thirteenth century, some shugenja were already operating as proxies of powerful Buddhist temples such as the Onj ōji for strengthening the links between religious institutions and renowned cultic sites in the peripheries.It is also interesting to take into account that the Shugend ō tengu is sitting exactly in front of the "lamp of the Dharma" tengu, denoting a fruitful symbiosis between the central authority represented by the exo-esoteric Buddhist system and the marginal authority embodied by Shugend ō.A written passage of the Tengu zōshi further underlines this intersection between Shugend ō and Buddhist institutions, specifying that Onj ōji offers to all its practitioners the possibility to follow a triple training in exoteric Buddhism, esoteric Buddhism, and the "single way of shugen" (shugen no ichidō 修験ノ一道).This constant dialogue between shugenja and fully ordained Buddhist monks also took place during the mountain-entry rituals, which were performed on sacred mountains such as Mt.Ōmine 大峰山 and Mt.Katsuragi 葛城山 in the Kii peninsula, or Mt.Hikosan 彦山 in northern Ky ūsh ū.For instance, the Shokoku ikken hijiri monogatari 諸国 一見聖物語 (fourteenth century) reports that on the occasion of the mountain-entry ritual at Katsuragawa 葛川 close to the Enryakuji 延暦寺, i.e., the Tendai headquarters on Mt.Hiei 比叡山, there were always groups of "scholar-monks" (gakuryo 学侶) who performed austerities together with the ascetic groups of "practitioners" (gyōnin 行人) (Hasegawa 2020, pp. 71-72).
As in the case of the rivalry between sukuyōshi and onmyōji, there were also numerous episodes in which Buddhist monks tried to limit the penetration of shugenja within monastic institutions or to criticize the validity of Shugendō teachings from the doctrinal point of view. This is one of the reasons why those shugenja who operated within Buddhist temples were often defined as "low-rank monks" (gesō 下僧) who were only partially ordained and could not have access to the highest echelons of the temple. For example, in the Daigoji sōgō daihōsshira mōshijōan 醍醐寺僧網大法師等申状案 (1273), it is written that the Daigoji clerics (shuto 衆徒) strongly opposed the nomination of a certain Dōchō 道朝 as abbot of the temple because he "has a taste for the 'single way of yamabushi' (yamabushi no ichidō 山伏の一道), but he has not learned the depths of esoteric Buddhism yet". Even more disdainful was the derogatory use of the term yamabushi in a passage of the Shasekishū. Here Mujū Ichien describes the shabby and sham appearance of an itinerant preacher, defining him as an ambiguous entity, neither a monk nor a layman, and "not even a heap of shit (kuso 屎), but simply a little shit (birikuso びり屎), namely a yamabushi" (Hasegawa 2020, p. 70). Despite these almost inevitable frictions, it is crucial to keep in mind that in the medieval period such clashes among different religious professionals, institutions, and doctrinal as well as ritual systems did not simply produce sterile conflicts but often worked as triggers for a creative tension between social actors, which ultimately exalted the hybridity and porosity of all the constitutive niches of the network.
Synopsis
Medieval religion, which has often been conveniently reduced to the confrontation or the binary coexistence of Buddhism and Shintō, or at best to a "syncretism", a fusion of buddhas and kami, is in many ways irreducible to such schemes. In the same way, medieval Buddhism, which has been assimilated to the emergence of a few new "reformist" schools (Zen, Pure Land, Nichiren), or even to a few strong personalities such as Dōgen 道元 (1200-1253), Shinran 親鸞 (1173-1262), or Nichiren 日蓮 (1222-1282), turns out to be a cross-fertilization of diverse practices, dominated by esoteric interpretations. These are some of the practices that the essays gathered here try to describe.
Certainly, the scholastic exercise that consists in finding a posteriori the logic underlying independently conceived essays that reflect the various temperaments and approaches of their authors is somewhat arbitrary, even if none of them is completely insensitive to the influences and spirit of the times. Thus, consciously or not, every researcher is indebted to the historical works of Kuroda Toshio and Amino Yoshihiko 網野善彦. In the field of Japanese Buddhism, we should also mention the seminal works of Yamamoto Hiroko and Abe Yasurō. In this regard, we should refer to recent Japanese research on the relationship between Zen and esoteric Buddhism, and in particular the publication of the works of representatives of this trend such as Myōan Yōsai and Enni Ben'en. Numerous excellent studies have analyzed the monastic institutions (the Five Mountains of Zen, the great monasteries of the Pure Land schools), the collective representations of the Pure Land and the Buddhist hells, the beliefs in the buddhas Shakamuni 釈迦牟尼, Yakushi, and Amida 阿弥陀 and the great bodhisattvas (Kannon 観音, Jizō 地蔵, Miroku 弥勒, Monju 文殊), and the role of meditation and monastic discipline. The articles presented here offer a complementary vision, building on the strengths of their predecessors, while examining new objects and developing new methods.
At the risk of oversimplifying and failing to do justice to the complex array of ideas and realities, one may distinguish several overlapping areas or themes: cultic sites and the importance of territory, fears of territorial and/or demonic aggressions, possession and exorcism, the resurgence of ancient gods and the emergence of new ones, the role of powerful objects and other forms of material culture (materia medica, talismans), the relations between Buddhism and medicine, and the ritual role of music and sexuality.
Cultic Sites and Rituals
One of the major trends in the study of Japanese religion in recent decades has been the recognition of the importance of local cults and sacred sites. This approach has produced monographs on sacred mountains and large temples (Allan Grapard on Kasuga Shrine 春日大社 and Mt. Hiko 英彦山, Heather Blair and Carina Roth on Kinpusen 金峯山, Andrea Castiglioni on Dewa Sanzan 出羽三山, Caleb Carter on Mt. Togakushi 戸隠山, Janine Sawada on Mt. Fuji 富士山, etc.). Several articles in this volume deal, directly or indirectly, with a particular cultic site: Gion 祇園 (Mark Teeuwen), Hieizan 比叡山 (Or Porath), Kōyasan 高野山 (Elizabeth Tinsley), Ise 伊勢 (Talia Andrei), Tōnomine 多武峰 (Benedetta Lomi). Mark Teeuwen's article is a continuation of his fruitful work on the evolution over the longue durée of famous cultic sites such as the Ise Shrines or Hie Shrine. In all his work, Teeuwen shows how, behind the apparent unity of the tradition, one can distinguish fault lines, which sometimes widen until they lead to open conflicts. The Gion shrine was for most of its history under Tendai obedience, but strong separatist tendencies have always existed there. The cult of its deity, Gozu Tennō, in particular, has been a site of rivalry between Buddhist, Shintō, and Onmyōdō tendencies. However, as Teeuwen shows, the Gion Festival soon acquired its own dynamic, and Gozu Tennō, replaced by Susanoo, was eventually forgotten. Starting from the current reconstructions of the Gion Festival, Teeuwen shows how it was constituted as a narrative of continuity that erases all the ruptures and innovations. In doing so, he strives to recover the voices of the forgotten actors who, over the centuries, have contributed to making this event a paradigm of Japanese festivals.
Territorial and Demonic Aggressions
Emily Simpson's article reminds us that Japan is an archipelago, a notion that many scholars have tended to forget in favor of the main island (Honshū) and its dual political center (Kyoto and Kamakura). Simpson shows the crucial role of Kyūshū in the relationship between the archipelago and the mainland, and how this importance is reflected even in the ideological productions of the center (the Hachiman gudōkun 八幡愚童訓, produced in circles close to Tsurugaoka Hachiman Shrine 鶴岡八幡宮 in Kamakura). The foreign threat (especially after the Mongol invasions in the thirteenth century) is echoed in collective representations by the imaginary Korean invasions that fed the myth of Jingū Kōgō 神功皇后. Foreign invasion was reinterpreted in terms of epidemic threats. It was already in response to demonic aggression by the foreign pestilence deity Gozu Tennō that Gion Shrine was established at the turn of the ninth century. Japanese xenophobia is only a reflection of a deeper, albeit unlocalized, fear of demonic aggressions. Simpson shows how the Korean aggression was described in one of the versions of the Hachiman gudōkun as a demonic attack by a red demon named Jinrin 塵輪 ("Dust Wheel"), and how this led to a positive retelling of the role of Chūai Tennō 仲哀天皇, who had been eclipsed by his consort Jingū in the classical mythology of the Kojiki 古事記 (712) and Nihon shoki.
Japan is much vaunted as the land of the gods (shinkoku 神国), yet it was also a land of demons.Much of Japanese religious discourse belongs to demonology.The constant fear of demonic attacks haunts Japanese religion.This omnipresence of evil is an aspect little studied until now by all those who seek to see in Kamakura Buddhism the emergence of a "pure Zen" and of Amidist devotion.Illness and other inauspicious events were usually explained by demonic possession.
Iyanaga Nobumi shows to what extent the danger of possession was rooted in medieval mentalities, in large part due to the influence of esoteric Buddhism, which also resorted to induced possession (Skt.aveśa, "entering").The term aveśa (Jp.abisha 阿尾捨) also refers to the identification between the practitioner and the deity, a practice known as "entering oneself and being entered" (ny ūga gany ū 入我我入).Focusing on a ritual text by the Daigoji monk Seigen 成賢 (1162-1231), Iyanaga examines the role played in Shingon exorcisms by "protectors" (gohō 護法) such as the acolytes of the wisdom-king Fud ō 不動明 王.The invisible world adjoins and overflows the human world on all sides, and its shifting borders must constantly be renegotiated.Communication was achieved through divination and (induced) possession, resulting in oracles.Protection against evil forces was achieved primarily through esoteric rituals and exorcisms, and also through the use of talismans.
Gods Old and New
The medieval period is marked by the emergence of new deities, or the rise of deities that had been relatively obscure until then. Gozu Tennō is a good example.
Elizabeth Tinsley discusses the rise to prominence of Niu Myōjin 丹生明神, an ancient goddess, probably of Korean origin, who was linked to the renewal rites associated with cinnabar (an ingredient in the Daoist elixir of immortality). In the Shingon tradition, she is associated with Kariba Myōjin 狩場明神, a hunter deity; together they are the protectors of Kūkai and the tutelary deities of Mount Kōya. Tinsley's article describes ritual debates performed by the monks of Kōyasan as offerings to the kami, whose taste for such intellectual games had never been suspected before. This example shows, once again, that the split between intellectual elites and popular devotion exists only in the minds of traditional scholars.
There is one particularly important divine category, that of adolescent gods such as Jūzenji 十禅師 and Uhō Dōji 雨宝童子. Jūzenji became the representative of Sannō Shintō, a religious discourse centered on the Hie Shrine at the foot of Mount Hiei. Uhō Dōji, who came to be seen as an avatar of the sun-goddess Amaterasu, represents in fact more of a Buddhist countercurrent to Ise Shintō, centered on the Shingon (and later Zen) temple Kongōshōji 金剛證寺 on Mount Asama 朝熊ヶ岳 and on the Keikōin nunnery in Ise. Uhō Dōji is also an astral deity, an emanation of the planetary deity Venus (Myōjō), which led him to play a central role in esoteric Buddhism as a symbol of non-duality and supreme realization (Skt. susiddhi). Using a recently discovered image of this deity, Talia Andrei attempts to reconstruct the history and religious practice of the Keikōin nuns, of whom little was known until now. Contrary to the widespread idea of an opposition between Buddhist and Shintō rites in Ise, she shows that the nuns of Keikōin practiced esoteric rites very similar to those of the priestesses (kora 子良) of the Ise Shrine. The very name kora, according to Yamamoto Hiroko, refers to the cult of the fox and Dakiniten, and we know that in certain medieval texts the solar goddess Amaterasu was identified with the fox (and Dakiniten). Times have changed, and Amaterasu, having regained a Shintō virginity in the Edo period, could become a symbol of Japanese imperialism.
Tinsley's and Andrei's essays illustrate how the most concrete institutional issues become intertwined with beliefs in local deities, once again negating the long-held motif of a Buddhism without gods, or the tired argument that the worship of the gods was merely a monastic concession to the beliefs of common people. As Tinsley and Andrei show, the gods are here the partners of the monks and nuns, and their interlocutors in highly doctrinal debates. This was undoubtedly also true of "Zen" monks such as Myōan Yōsai, Enni Ben'en, and Keizan Jōkin 瑩山紹瑾 (1268-1325).
Sujung Kim introduces the cult of the god known as Chintaku reifujin 鎮宅霊符神, which was heavily influenced by that of the Daoist god Zhenwu 真武 and its esoteric Buddhist version, the Bodhisattva Myōken 妙見菩薩, the deity of the Pole Star. She argues that this cult was not merely a Japanese version of the Daoist cult of the Northern Dipper, imported from Korea, but was directly imported from Ming China during the Muromachi period. Her article sheds light on a significant aspect of religious material culture, the role of talismans, whose study, initiated by Michel Strickmann (1942-1994), has become the focus of recent scholarship in China, Korea, and Japan.
Objects of Power
Objects of power (relics, icons) constitute another important element in medieval Japanese religion. Actually, they are no longer merely objects, but rather "subjects" endowed with agency. Adopting a microhistorical approach that focuses on the "crackings" of two wooden statues of Nakatomi Kamatari 中臣鎌足 (614-669), the founder of the Fujiwara lineage, Benedetta Lomi shows how these crackings, seen as ominous "responses" to current events, cast a long shadow: they were still mentioned in the Edo period in Amano Sadakage's 天野信景 (1663-1733) Shiojiri 塩尻 (1697-1733). Lomi examines the complex dialectic that braids together the links between sectarian and politicized strategies, the animation of icons and the metaphysics of presence, and the eminently concrete concerns of the preservation of material objects. She illustrates the way in which material culture and collective representations are intimately linked. As art historians know well, the metaphysical is always embedded in the physical. Material culture is never purely material, nor are religious conceptions purely metaphysical: they are always anchored in concrete reality. Sometimes, as in the present case, a perceived danger led monks to forget for a while the ritual logic that had led them to animate an icon by inserting into it another one, perceived as a kind of "soul".
The animation of a statue reminds us in some ways of the above-mentioned phenomenon of aveśa. The difference between deities and demons is a question of function, not nature. Depending on the case, the same supernatural power can be invoked as a deity or exorcised as a demon.
Buddhist Astrology
Scholars have long noted the importance in esoteric Buddhism of astral cults inherited from Indian and Chinese (as well as Western) astrological traditions, especially from the Daoist cult of the Northern Dipper. Yet it is only recently that Buddhist astrology has come to the forefront. Jeffrey Kotyk has already made a significant contribution to this emerging field. Here, he endeavors to show the selectivity with which medieval Japanese adopted and adapted Indian and Chinese systems which, in spite of agreement in principle, contradict each other on many points. In particular, the Daoist cult of the star of fundamental destiny (Honmyōshō), once inserted into the esoteric logic of non-dual bipartition, came to take on new embryological and metaphysical values (as a symbol of ultimate non-duality, as in the case of the planet Venus) that were relatively undeveloped in the Indian and Chinese sources. This evolution is particularly characteristic of the theological speculations of Tendai esotericism (Taimitsu 台密) and, in particular, of a trend known as the Genshi kimyōdan 玄旨帰命壇.
Buddhism and Medicine
Another rapidly emerging field is that of the relations between Buddhism and medicine. Its origins can be traced back to a seminal article by Paul Demiéville, originally published as an entry in the Hōbōgirin 法寶義林 encyclopedic dictionary and later translated into English in monographic form. Andrew Macomber, who co-edited with Pierce Salguero an important book on that theme, has already distinguished himself as one of the leading figures in that field.
The dual aspect of material culture is particularly clear in Macomber's study of the Kissa yōjōki 喫茶養生記, a text by the Shingon priest Yōsai (usually celebrated as the founder of the Rinzai school of Japanese Zen, despite his credentials in esoteric Buddhism). This text is often put forward as the precursor of the tea ceremony, and as such it has attracted the attention of various scholars interested in the material culture of Buddhism. Macomber shows, however, that it is also interesting for its emphasis on the medicinal value of mulberry, and he reveals the interweaving of the medicinal virtues and symbolic valences of this plant. The mulberry is thus placed at the center of a vast and rich semiotic network. In passing, Macomber corrects the traditional image of Yōsai as a "pure" Zen master. We see that Yōsai's teaching, like that of Enni Ben'en, a Zen monk credited with introducing noodles (soba) to Japan, owes much to the syncretic tradition of Zen-Mikkyō, which Japanese scholars have just begun to rediscover (better late than never).
By insisting not only on the material and properly "medical" aspects of the mulberry, but also on its apotropaic, ritual, and symbolic aspects, Macomber also follows in the wake of the research conducted by Strickmann in his groundbreaking work, Chinese Magical Medicine (Strickmann 2002). His essay also complements Iyanaga's work on possession and exorcism. Again, by showing the importance of esoteric Buddhism during the Kamakura period, it dovetails with Gaétan Rappo's description of the expansion of "countryside esotericism" during the Muromachi period.
Music and Sex
What is bred in the bone... Fabio Rambelli brings to light a usually neglected aspect of Buddhism and contributes to bridging an important gap in Buddhist scholarship, namely, the role of music. Despite its ubiquity in ritual practice, music is mostly invisible (or inaudible); or, when it does appear, it is promptly condemned. Certainly, on the fringes of monastic Buddhism we find a Bodhisattva of music and song, and it is known that Benzaiten 弁財天 is a goddess of music. The Noh tradition is said to trace its Buddhist origins to the sarugaku 猿楽 rites performed at the ushirodo 後戸 ("back door") of temple halls. But it is precisely through the "back door" that the "entertainment arts" (geinō 芸能) enter the sacred space, perhaps with a thrill of transgression. In contrast, bugaku 舞楽 and gagaku 雅楽, which Rambelli takes as his main focus, were from the outset recognized as official art forms, at the very center of monastic life and imperial liturgy. Rambelli, himself an accomplished musician, is thus able to provide us with an acoustic (and not merely visual) entry into a medieval ritual.
The repressed in Buddhist discourse also returns in the form of sex, or, more precisely, homosexuality. The figure of Jūzenji studied by Or Porath is a complex one: a youthful deity who possesses young novices and who embodies homosexual desire and, at the same time, the observance of the precepts, he is also a cosmic deity described as the warp and woof of Heaven and Earth, identified with the vital breath and the sixth consciousness of people, as well as with King Yama and the demonic god Kōjin in his form as "god of the placenta" (ena kōjin 胞衣荒神). Porath, however, is primarily interested here in showing how, through the cult of Jūzenji, homosexual practice (said to have been introduced in Japan by Kūkai!) moved to the heart of monastic life, before spreading to warrior circles (where it was no doubt not entirely unknown before).
Paradoxically enough, it is another form of sexuality, heterosexuality, which was deemed heterodox, or even branded as a "heresy" (jakyō 邪教). Such is the case of marginal sectarian movements such as the Tachikawa-ryū 立川流 in Shingon and the Genshi kimyōdan in Tendai. The heretical nature of the Tachikawa-ryū is said to derive from its syncretism of Shingon with Onmyōdō teachings, a syncretism typical of the adaptation of Shingon to peripheral regions. Yet Gaétan Rappo's essay, focusing on the life and work of the Shingon priest Shunkai 俊海 (ca. 1389-1454), shows how this adamant critic of the Tachikawa-ryū's embryological discourse also attempted to adapt Shingon to the local conditions he encountered in Shimotsuke Province, in northern Japan. Interestingly, Shunkai takes as his target the priest Monkan 文観 (1278-1357) and his Joint Ritual of the Three Worthies (sanzon gogyō-hō 三尊合行法), although the latter was based on perfectly orthodox sources. Scholars including Iyanaga Nobumi, Stephan Köck, and Abe Yasurō have begun to rehabilitate the Tachikawa-ryū. At any rate, Rappo shows the vitality and expansion of esoteric Buddhism, over against the "new schools" of Kamakura Buddhism, during the Muromachi period and in areas far from the centers of political power.
This shows, once again, that the exclusive concern with sectarian denominations, which has long dominated the study of Japanese religion, is misleading. Alternative approaches of the kind illustrated by the contributions to this issue seem to indicate that this period is over.

Notes

1. After the Tengyō no ran 天慶の乱 (939-940) the imperial court commissioned the compilation of lists regarding all the major and minor kami associated with the territories administered according to the Ritsuryō system. The aim was to enact new pacificatory rituals with special attention to potentially dangerous kami such as Hachiman Daibosatsu 八幡大菩薩, whose oracle had inspired Masakado's coup. Buddhist monks played a pivotal role within the religious facilities, which were directly involved in this reorganization of the kami cults (Uejima 2021, p. 577).
2. In this introduction the term Shintō is broadly adopted to indicate the polyhedric sphere of practices and logics concerning kami, which developed in symbiosis with Buddhism as well as in reaction to it. The more neutral expression "kami worship" better describes the pre-fifteenth-century, non-standardized forms of the rituals for the local gods (jingi saishi 神祇祭祀), which were progressively integrated within the realm of gods (jindō 神道) among the six paths of rebirth (rikudō 六道) from the second half of the seventh century. It was only in 1419, in the Nihon shoki kikigaki 日本書紀聞書, that the Hossō monk Ryōhen 良遍 (d.u.) specified for the first time that the compound 神道 should not be read jindō (emphasizing the dependency of kami on Buddhist deities) but shintō (underlining the independent power of kami vis-à-vis the buddhas) (Teeuwen 2002, pp. 242-43; Itō 2020, pp. 137-38). A few decades later, Yoshida Kanetomo 吉田兼倶 (1436-1511) expanded these theories on the primacy of the kami over the buddhas within a doctrinal current (ryūha 流波) known as Yuiitsu Shintō 唯一神道. It is only from this moment on that it is possible to speak of a non-Buddhist Shintō. It is significant to keep in mind that most of the creators of these new conceptualizations of kami were standard Buddhist monks such as Ryōhen, or shrine-monks (shasō 社僧) in charge of rituals for the kami. Even in the case of Yuiitsu Shintō, Kanetomo never conceived of kami as antagonists of the buddhas but as universal primordial matrixes from which buddhas and bodhisattvas also derived their power (Itō 2021, pp. 645, 655).
3. Among the names of the protagonists there is also Christianity, which had a significant impact on late medieval religious discourses from the sixteenth to the seventeenth century. Nevertheless, the present Special Issue is specifically focused on the interactions between Japanese and Central-East Asian religions.
4. In Japan, between the thirteenth and fourteenth centuries, coins started to circulate as a medium of exchange, but neither the shogunate nor the imperial court ever took into account the possibility of fostering an autonomous minting of coins. On the contrary, coins were perceived as an extremely dangerous technology, which could quickly lead to the accumulation of enormous wealth but also to abrupt death. In fact, the expression "coins of disease" (zeni no yamai 銭の病) came into use to underline the potentially lethal consequences of contact between the human body and currency. Coins were considered a permanent property of the kami, who temporarily lent them to humans. This divine poisonous gift had to be immediately returned, for instance by burying strings of coins under the earth (mainōsen 埋納銭) or by remelting their metal for coating Buddhist statues, in order to extinguish the initial debt with the gods and to assure the production of surplus in future exchanges (Amino 2012, pp. 145-50).
5. According to the Mahāyāna teachings, a buddha is said to have three different types of bodies (sanshin 三身): a physical saṃsāric body (keshin 化身), an enjoyment body for dealing with bodhisattvas and advanced Buddhist practitioners (hōjin 報身), and an absolute body overlapping with the Dharma itself (hosshin 法身). Toward the thirteenth century, these three bodies of the buddhas were associated with three categories of kami. Namely, the "real gods" (jitsumeishin 実迷神) embodied the keshin, the "gods of the acquired enlightenment" (shikaku no kami 始覚神) represented the hōjin, and the "gods of the original enlightenment" (hongaku no kami 本覚神) coincided with the hosshin.
6. Michiie composed the Sōshobun until 1250; after this year Enni replaced the regent in recording the events reported in this text.
7. Kōkei was the father of the renowned sculptor of Buddhist statues (busshi 仏師) Unkei 運慶 (?-1223).
8. The three poisons are desire, anger, and nescience. Itō Satoshi points out that the reptilian body (jatai 蛇体) of certain kami such as Amaterasu helped associate them with the Buddhist notion of the three poisons, as reported, for instance, in various passages of the Keiran shūyōshū 渓嵐拾葉集 (1318) by Kōshū 光宗 (1276-1350) (Itō 2021, p. 651).
9. The five hindrances, which afflict the female rebirth, are the inability to become a god of the Brahma heaven, a god of the Indra heaven, a Mara king, a wheel-turning king, and a buddha.
10. It is interesting to note that, in the Buddhist scriptures composed in classical Chinese, ascetics are most often defined as "those of the spells" (jijusha 持呪者) or "those of the mantras" (jimyōsha 持明者), underlining how the fruits of ascetic practice are concentrated in the numinous powers of the practitioner's voice. Iyanaga Nobumi points out that this synesthetic connection between voice, aural force, and asceticism is also reflected in the Sanskrit term vidyādhara (Jp. jimyō 持明), which denotes not only the practitioner but also the court of the Vidyārājas (Myōō 明王), who are the "masters of the mantras" and, by extension, the archetype of the ascetic (Iyanaga 2021, p. 394).
11. In medieval temples, lads and young acolytes (chigo 稚児) could also play pivotal roles in male-male sexual relationships with senior monks, which also had fundamental ritual implications; see (Porath 2015).
Ginseng in Hair Growth and Viability
The hair follicle is a unique organ that has the capacity to undergo cyclic transformations through periods of growth (anagen), regression (catagen), and rest (telogen), regenerating itself to restart the cycle. This dynamic capacity of hair to grow and rest enables mammals to control hair growth and length on different body sites and to change their coats. Unlike many animals, in which the pelage synchronously passes from one phase of the cycle to the next, in humans all stages of the growth cycle are found simultaneously: the growth pattern is a mosaic in which the cycle stage of one hair root is completely independent of its nearest neighbor, meaning that each follicular unit (FU) can contain follicles in different stages at any given time. A variety of factors, such as nutritional status, hormones, exposure to radiation, chemotherapy or radiotherapy, environmental pollution, or drugs, may affect hair growth and reduce the number of hairs. This progressive hair loss has a cosmetic and social impact that often significantly affects the social and psychological well-being of the patient. Although a number of therapies, such as finasteride and minoxidil, are approved medications, a wide variety of classes of phytochemicals and natural products, including those present in ginseng, are being tested. The purpose of this chapter is to examine the potential of ginseng and its metabolites in hair loss.
Hair structure
Hair is made of several proteins; the principal protein composing the fibrous structure of the hair is keratin, which has a high content of the amino acid cysteine. In addition to keratin, hair also contains water, lipids, minerals, and the pigment melanin.

The hair shaft (the visible fiber that grows above the skin) is a fiber whose color varies with the melanin content that pigments the keratin. The dermal element of the hair follicle is the dermal papilla, formed mainly by fibroblast cells; this dermal element controls the hair cycle.

The hair fiber, or hair shaft, grows from the hair follicle, a tubular structure that forms a bulb around the hair matrix, specialized dermal stem cells, and different types of keratinocytes. From this hair bulb, which encloses the dermal papilla, the hair shaft grows by division of proliferative cells; these cells then become differentiated, keratinized, and pigmented within the hair follicle to form the hair shaft in a cycling manner. The diameter of the hair shaft is directly related to the size of the papilla and allows miniaturized hairs to be distinguished from normal hairs.

The hair follicle is composed of concentric layers: the medulla at the center, surrounded by the cortex and, outward, the cuticle of the cortex; these are enclosed by the inner and outer root sheaths, and the whole mini-organ is surrounded by connective tissue.
Hair function
The function of hair is not only to protect against radiation, heat, cold, and other external agents, but also to contribute to appearance and personality. Hair loss contributes to psychological, social, and psychosocial problems, generating a cosmetic and social impact in our society.
Hair cycle
The hair follicle has the unique capacity to undergo periods of growth (anagen), regression (catagen), and rest (telogen and exogen) before regenerating itself to restart the cycle [1][2][3][4] (Figure 1). This dynamic cycling capacity enables mammals to change their coats and allows hair length to be controlled on different body sites [5].

Unlike many animals, in which the pelage synchronously passes from one phase of the cycle to the next, in humans all stages of the growth cycle are found simultaneously: the growth pattern is a mosaic in which the cycle stage of one hair root is completely independent of its nearest neighbor, meaning that each follicular unit (FU) can contain follicles in different stages at any given time. In healthy individuals, 80-90% of follicles are in the anagen phase, 1-2% in the catagen phase, and 10-15% in the telogen phase [6]. Hair grows about one centimeter per month, at a variable speed that is faster in summer than in winter. The growth phase, or anagen phase, lasts an average of 3-5 years. This normal hair-growth cycle can be modified by internal or external factors such as hormones, stress, sun exposure, disease, environmental pollution, drugs, and smoking. Such changes in the growth cycle and in hair quality can lead to hair loss through a shortening of the anagen phase, a premature entry into the catagen phase, a prolongation of the telogen phase, or a loss of hair-follicle function [6,7].

Figure 1. Hair cycle stages: growth (anagen), regression (catagen), and rest (telogen) before the follicle regenerates itself to restart the cycle.

Common hair loss is medically termed alopecia and can affect both men and women.
Hair loss
Research has shown that in hair loss the percentage of telogen follicles is increased, while the percentage of anagen and catagen follicles is reduced. A healthy individual loses approximately 100-150 hairs per day [6]. Cell-signaling pathways in hair follicular cells that induce apoptosis, alter the usual pattern of hair cycling, push the hair follicle into the regression or resting phase, or cause thinning or fracture of the hair shaft lead to progressive hair loss and alopecia [7].
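As a rough consistency check, the daily shedding figure quoted above can be reproduced from the hair-cycle numbers given earlier. The sketch below is only a back-of-the-envelope estimate; the total scalp follicle count (~100,000) and the telogen duration (~3 months) are typical textbook values assumed here, not figures taken from this chapter.

```python
# Back-of-the-envelope estimate of daily hair shedding from hair-cycle numbers.
# Assumptions (not from this chapter): ~100,000 scalp follicles and a telogen
# phase lasting roughly 3 months (~90 days). The telogen fraction (10-15%)
# comes from the text above.

TOTAL_SCALP_FOLLICLES = 100_000          # assumed typical value
TELOGEN_FRACTION_RANGE = (0.10, 0.15)    # from the text: 10-15% of follicles
TELOGEN_DURATION_DAYS = 90               # assumed ~3 months

for telogen_fraction in TELOGEN_FRACTION_RANGE:
    follicles_in_telogen = TOTAL_SCALP_FOLLICLES * telogen_fraction
    # If telogen hairs are released roughly uniformly over the telogen period,
    # the expected shedding rate is (follicles in telogen) / (telogen duration).
    shed_per_day = follicles_in_telogen / TELOGEN_DURATION_DAYS
    print(f"telogen fraction {telogen_fraction:.0%}: ~{shed_per_day:.0f} hairs/day")

# Output is roughly 110-170 hairs/day, the same order of magnitude as the
# 100-150 hairs/day figure quoted in the text.
```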
Hair loss is a universal problem affecting numerous people throughout the world; it is a disorder in which hair falls out from skin areas such as the scalp, body, and face. Multiple factors contribute to hair loss, including genetics, hormones, nutritional status, environmental exposure (radiation, environmental toxicants, etc.), and medications.

Androgenic alopecia can affect both women and men; androgen hormones are the most important causal factor, producing a hair-loss pattern characterized by miniaturization of the hair follicles that leads to hair loss in the frontal and parietal areas.

Other forms of hair loss are immunogenic, like alopecia areata, which is characterized by patches of hair loss across the scalp. The approved therapies finasteride and minoxidil are the traditional medications used for these hair-loss diseases; a few others are under development, including a wide variety of phytochemicals such as those present in ginseng, the ginsenosides, which have demonstrated hair-growth-promoting effects in a large number of preclinical studies [7].

Androgenic baldness (androgenic alopecia) and circular/spot baldness (alopecia areata) are the most common forms of hair loss. The first is characterized by high sensitivity of the hair follicles to DHT, while the second is induced by an autoimmune reaction [8,9]. Hair also possesses its own immune system, the failure of which can lead to spot baldness (alopecia areata).

Alopecia is widespread throughout the world, today affecting approximately 10 million patients. Considering the pathological background of alopecia and its impact on an individual's health and social value, there is now growing interest in the development of novel therapeutics for its medical management [7].
Conventional treatment for hair loss
Given the negative psychosocial impact of hair loss, patients follow different therapies, starting with conventional treatments such as the two medications approved by the United States Food and Drug Administration (US-FDA) for the treatment of alopecia: minoxidil and finasteride.

Finasteride, a non-steroidal compound with a potent anti-androgen effect, has been shown to prevent male and female hair loss through the inhibition of type II 5α-reductase. This enzyme affects androgen metabolism by converting free testosterone into 5α-dihydrotestosterone and plays an important role in the pathogenesis of androgenetic alopecia in men and women [10].

The hair-growth-stimulating effect of minoxidil has been known for decades, since it was introduced in the early 1970s as a treatment for hypertension, yet its basic mechanism of action on the hair follicle is still not clearly understood [11,12].

These drugs improve the quality of the hair follicles and reduce hair loss, but they exhibit certain adverse effects, such as allergic contact dermatitis, erythema, and itching. Moreover, stopping the recommended minoxidil regimen leads to recurrence of alopecia, and prolonged use of finasteride causes male sexual dysfunction and appears to be a major cause of infertility, and of teratogenicity in females.

Patients who do not see significant hair restoration with conventional therapies, or who suffer side effects, often turn from these conventional treatments to alternative medicine, trying new treatments from the vast resources of natural products in an attempt to find safe, natural, and efficacious therapies to restore the hair.
Natural products
Natural products, known in the market as "dietary supplements", include diverse subgroups such as vitamins, probiotics, minerals, herbs, extracts, and gels that do not require Food and Drug Administration (FDA) approval [13].
Ginseng
Ginseng is an ancient herbal remedy recorded in The Herbal Classic of the Divine Plowman, the oldest comprehensive materia medica, written approximately 2000 years ago [9].

Among the different species known as ginseng, Panax ginseng (Korean or Asian red ginseng) is the most frequently used. Ginseng is widely appreciated because it promotes health: it improves the immune response and the cardiovascular system, helps with sexual dysfunction, prevents cancer, and inhibits tumor cell proliferation, among other effects. In dermatologic conditions, including skin cancer, it is being investigated for its therapeutic effects in skin wound repair, in reducing the immune response in dermatitis, in reducing and preventing skin damage due to photoaging and cold hypersensitivity, and in improving hair growth and reducing hair loss in alopecia [14].

Nowadays ginseng has gained fame as one of the most popular herbs originating from Eastern countries, because contemporary science has revealed that it contains a wide variety of bioactive constituents, especially a group of saponin compounds collectively known as ginsenosides, which have been proposed to account for most of its diverse biological activities, including its hair-growth potential [9]. Ginsenosides can be classified, depending on the number of hydroxyl groups available for glycosylation via dehydration reactions, as protopanaxadiol (PPD) and protopanaxatriol (PPT) types. Common PPD-type ginsenosides include Rb1, Rb2, Rc, Rd, Rg3, F2, Rh2, compound K (cK), and PPD, whereas PPT-type ginsenosides include Re, Rf, Rg1, Rg2, F1, Rh1, and PPT [9], along with the malonyl ginsenosides mRb1, mRb2, and mRc [15]. Ginseng extract or its specific ginsenosides have been tested for their potential to promote hair growth.
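For quick reference, the PPD/PPT grouping listed above can be captured in a small lookup table; the sketch below is only an illustrative restatement of the classification given in the text, not an exhaustive catalogue of ginsenosides.

```python
# Minimal lookup of the ginsenoside classification described in the text:
# protopanaxadiol (PPD) vs. protopanaxatriol (PPT) type saponins.

GINSENOSIDE_TYPE = {
    # PPD-type ginsenosides (per the text)
    **{g: "PPD" for g in ["Rb1", "Rb2", "Rc", "Rd", "Rg3", "F2", "Rh2", "compound K", "PPD"]},
    # PPT-type ginsenosides (per the text)
    **{g: "PPT" for g in ["Re", "Rf", "Rg1", "Rg2", "F1", "Rh1", "PPT"]},
}

def classify(ginsenoside: str) -> str:
    """Return 'PPD', 'PPT', or 'unknown' for a ginsenoside name."""
    return GINSENOSIDE_TYPE.get(ginsenoside, "unknown")

print(classify("Rg3"))   # -> PPD
print(classify("Rg1"))   # -> PPT
```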
Ginseng biochemical effects on hair growth promotion
The major bioactive constituents of ginseng are ginsenosides, and there is evidence suggesting that they promote hair growth by enhancing the proliferation of dermal papilla cells and prevent hair loss via modulation of various cell-signaling pathways [9,16,17]. The role of the 5α-reductase enzyme in the hair-loss process has been well documented [18]; it affects androgen metabolism and is the pathway targeted by the drugs approved today.
Photoaging prevention
Photoaging is skin damage induced by radiation (sun) exposure, characterized by different inflammatory responses to ultraviolet radiation (UVR). Excessive UV irradiation is known to cause skin photodamage through the release of oxidative species, which leads to skin inflammation and keratinocyte cell death, producing photoaging and carcinogenesis.

There is evidence suggesting that UVR exposure causes imbalances in the hair-growth cycle, affecting keratinocyte and dermal papilla growth [19]: it not only damages the hair shaft as an extracellular tissue, as is clearly evident, but also alters growth at the molecular level [19].

The accumulation of reactive oxygen species (ROS) and the activation of matrix metalloproteinases (MMPs), tissue-degrading enzymes, produced by UV irradiation compromise dermal and epidermal structural integrity [9].

The inhibitory effect of ginsenosides on UVB-induced activation of MMP-2 suggests the potential of these ginseng saponins in hair-growth regulation [9]. Ginsenosides Rb2 [20] and 20(S)-PPD have been reported to reduce the formation of ROS and the secretion of MMP-2 in cultured human keratinocyte (HaCaT) cells after exposure to UVB radiation. Ginsenoside 20(S)-Rg3 reduced ROS generation in HaCaT cells and human dermal fibroblasts without affecting cell viability; 20(S)-Rg3 also attenuated UVB-induced MMP-2 levels in HaCaT cells [21]. Ginsenoside Rh2 reduced UVB-induced expression and activity of MMP-2 in HaCaT cells, but UVB-induced ROS formation was suppressed only by 20(S)-Rh2 [22].

Ginsenoside extracts from Ginseng Radix have been shown to attenuate radiation-induced cell death in the skin, improving hair growth. The number of Ki67-positive cells and the expression of Bcl-2, an antiapoptotic protein, are induced by total-root saponins and ginsenoside Rb1, diminishing apoptotic cells in UVB-exposed human keratinocytes [9,23]. Ginsenoside F1, an enzymatically modified derivative of ginsenoside Rg1, protects keratinocytes from radiation-induced apoptosis by maintaining a constant level of the antiapoptotic protein Bcl-2 in UVB-irradiated HaCaT cells [9,24].
Ginsenosides reduce skin aging
Skin aging is a multifactorial process consisting of two distinct and independent mechanisms: intrinsic and extrinsic aging.
Ginsenosides extracted from ginseng have been tested in several antiaging studies [25,26]. These antiaging effects of ginseng extract and ginsenosides are produced by maintaining skin structural integrity and regulating hair growth, through the stimulation of wound-healing cells, collagen, and hyaluronic acid.

Lee et al. incubated fibroblasts, which are key wound-healing cells, with Panax ginseng and found that P. ginseng stimulated human dermal fibroblast proliferation and collagen synthesis [27]. Human dermal fibroblasts have different functions and are classified as key wound-healing cells because their functions include the production of collagen, growth factors, and antioxidants, and the balancing of matrix-producing proteins and protease enzymes. In human fibroblasts, P. ginseng root extract activates the human collagen A2 promoter and induces type-1 pro-collagen via phosphorylation of Smad2 [28].

Wrinkle formation is regarded as a marker of dermal aging and is accompanied by a reduced level of hyaluronan in the dermis [29]. In HaCaT cells treated with the major ginseng metabolite compound K (20-O-beta-D-glucopyranosyl-20(S)-protopanaxadiol), the hyaluronan synthase 2 (HAS2) gene was reported to be one of the most significantly induced genes [30], and topical application of compound K on mouse skin was also shown to elevate the expression of hyaluronan synthase-2 [30]. Hyaluronan synthase-2 is an enzyme essential to hyaluronan synthesis; hyaluronan is a major component of most extracellular matrices, has a structural role in tissue architecture, and regulates cell adhesion, migration, and differentiation.
These antiaging effects of ginseng extracts through Src kinase-dependent activation of ERK and AKT/PKB kinases in the dermis and papillary dermis result in improved skin health, thereby ensuring hair-follicle health and a regular hair cycle [9,30].
Ginseng in androgenic alopecia
Exposure to androgens is the major trigger for hair loss, which in most cases is genetically predetermined in androgenic alopecia patients [9,31,32].

The androgen that mainly plays a role in altering hair cycling is 5α-dihydrotestosterone (DHT), a metabolite of testosterone. The conversion of testosterone to DHT is mediated by the 5α-reductase (5αR) enzyme in each follicle [33,34] (Figure 2). Treatment with 5α-reductase inhibitors, e.g., finasteride, prevents the development of alopecia and increases scalp-hair growth [9]. Topical application of ginseng extract or ginsenosides has been reported to enhance hair growth. Rhizomes of P. ginseng (red ginseng) contain a considerable amount of ginsenoside Ro, the predominant ginsenoside in the rhizome, and showed a greater dose-dependent inhibitory effect against testosterone 5α-reductase (5αR) [35]. Ginsenoside Rg3 (a ginsenoside unique to red ginseng) and Rd also exhibited similar inhibitory effects against 5αR [36]. The rhizome extract of another ginseng variety, Panax japonicus, which contains a larger quantity of ginsenoside Ro, also inhibited 5αR enzyme activity. Topical administration of red-ginseng rhizome extracts and ginsenoside Ro onto the shaved skin of C57BL/6 mice abrogated testosterone-mediated suppression of hair regrowth [36].

Major components of hair regenerative capacity, such as linoleic acid (LA) and β-sitosterol (SITOS), were significantly restored with red ginseng oil (RGO) after testosterone (TES)-induced delay of anagen entry in C57BL/6 mice. RGO and its major components also reduced the protein level of TGF-β and enhanced the expression of the anti-apoptotic protein Bcl-2, suggesting that RGO is a potent novel therapeutic natural product for the treatment of androgenic alopecia [37].

Red ginseng extract (RGE) and ginsenosides protect hair matrix keratinocyte proliferation against dihydrotestosterone (DHT)-induced suppression and affect the expression of the androgen receptor.

Moreover, RGE, ginsenoside Rb1, and ginsenoside Rg3, at levels lower than those shown to inhibit 5α-reductase [35], inhibit the DHT-induced suppression of hair matrix keratinocyte proliferation and the DHT-induced upregulation of androgen receptor mRNA expression in hDPCs [16]. DHT is the product of testosterone and does not require the activity of 5α-reductase to affect hair follicles, and the inhibitory effect of DHT on hair growth is mediated by the androgen receptor in DPCs [38]. These results suggest that red ginseng may promote hair growth in humans through the regulation of androgen receptor signaling [16].
Effects of ginsenosides on chemotherapy
Majeed et al. reviewed recent perspectives on ginseng phytochemicals as therapeutics in oncology and explained the chemotherapeutic effect of ginsenosides as a result of their apoptotic, anti-proliferative, anti-angiogenic, anti-inflammatory, and anti-oxidant properties [39]. The anticancer effect of ginseng has been demonstrated in various types of cancer: breast, lung, liver, colon, and skin cancer. It increases the mitochondrial accumulation of apoptosis proteins and downregulates the expression of anti-apoptotic proteins, reducing cancer development. It also aids in the reduction of alopecia, fatigue, and nausea, the known side effects of chemotherapeutic drugs [39].

Alopecia induced by chemotherapy is one of the most distressing side effects for patients undergoing treatment. One chemotherapy drug is cyclophosphamide (CP), also known as cytophosphane. The cyclophosphamide metabolite 4-hydroperoxycyclophosphamide (4-HC) inhibited human hair growth, induced premature catagen development, inhibited proliferation, and stimulated apoptosis of hair matrix keratinocytes, producing the side effect of alopecia. In a human hair follicle organ culture model, Dong In Keum et al. showed that pre-treatment with Korean red ginseng (KRG) before exposure to the cyclophosphamide metabolite suppressed the 4-HC-induced inhibition of matrix keratinocyte proliferation and the 4-HC-induced stimulation of matrix keratinocyte apoptosis, exerting a protective effect against 4-HC-induced hair-growth inhibition and premature catagen development. Moreover, KRG restored the p53 and Bax/Bcl-2 expression altered by 4-HC [17].
Activation of dermal papillary cell proliferation
Different intracellular signaling pathways are involved and play a critical role in stimulating hair growth by promoting dermal papillary cell proliferation.

Ginsenoside Rg3 promotes hair growth by upregulating vascular endothelial growth factor (VEGF) expression [36]. VEGF is a signaling protein released from the epithelium that increases angiogenesis of the hair follicle [9,[40][41][42]. Shin et al. also demonstrated that Rg3 increased the proliferation of human dermal papillary cells, associating this proliferation with an upregulation of VEGF mRNA expression; Rg3 also stimulated stem cells, upregulating CD34 and CD8 [36], and promoted hair growth even more than minoxidil in mice [43]. It was concluded that Rg3 might increase hair growth through stimulation of hair follicle stem cells [36].

RGE and ginsenoside Rb1 enhanced the proliferation of hair matrix keratinocytes and human hair-follicle dermal papillary cells (hDPCs). Hair treated with RGE or ginsenoside Rb1 exhibited substantial cell proliferation and the associated phosphorylation of ERK and AKT [16]. It was recently demonstrated that ERK activation plays an important role in the proliferation of hDPCs [42], and AKT mediates critical signals for cell survival and also regulates the survival of DPCs as an antiapoptotic molecule [9,16,44]; thus, the proliferation and prolonged survival of hDPCs induced by red ginseng may be mediated by the ERK and AKT signaling pathways [9,16].
Human DPC treatment with Gintonin-enriched fraction (GEF) stimulated vascular endothelial growth factor release. Topical application of GEF and minoxidil promoted hair growth in a dose-dependent manner. Histological analysis showed that GEF and minoxidil increased the number of hair follicles and hair weight [45].
The Bcl-2 family of proteins is notable for its regulation of the apoptosis machinery, a form of programmed cell death; members of this family act as either antiapoptotic or pro-apoptotic in nature. During the hair cycle, the dermal papillary cells (DPC) are the only region where Bcl-2 is expressed consistently, and they are considered to resist apoptosis [9,[46][47][48]. In mice, Fructus Panax ginseng extract (FPG) increases the expression of Bcl-2 and decreases the expression of Bax, a pro-apoptotic species, in cultured DPCs [49]. Park et al. concluded that FPG extract improves the cell proliferation of human DPCs through antiapoptotic activation, and that topical administration of FPG extract might have hair-regeneration activity for the treatment of hair loss [49].

Wingless-type integration-site (WNT) signaling plays a key role in hair-follicle development. Activation of WNT signaling is necessary for the initiation of follicular development; blockade of WNT signaling by overexpression of the WNT inhibitor Dickkopf homolog 1 (DKK1) prevents hair-follicle formation in mice [50] and inhibits hair growth [9,50].

β-catenin signaling is essential for epithelial stem-cell fate, since keratinocytes adopt an epidermal fate in the absence of β-catenin [51]; this signaling pathway is related to WNT [52] and affects hair-follicle placode formation: when β-catenin is mutated during embryogenesis, the formation of the placodes that generate hair follicles is blocked [53].
The role of TGF-β in hair loss has been documented by a study revealing that treatment with a TGF-β antagonist can promote hair growth by preventing catagen progression [57]. It has also been described that activation of TGF-β and brain-derived neurotrophic factor (BDNF) enhances the transition from the anagen to the catagen phase [58].

Since TGF-β1 induces catagen in hair follicles and is closely related to alopecia progression, it can be said to act as a pathogenic mediator of androgenic alopecia [57,59]; red ginseng extract can delay the catagen phase and holds the potential to promote hair growth through downregulation or inhibition of the TGF-β pathway.

In Young Go Kim's investigation of ultraviolet B (UVB)-irradiated skin aging in mice, it was concluded that oral administration of red ginseng extract protects against UVB-induced skin damage (increased skin thickness and pigmentation and reduced skin elasticity) and inhibits the increase in epidermis and corium thickness. Red ginseng extract exerts this protective action against UVB-induced skin aging by inhibiting the UVB-induced increase in skin TGF-β1 content [60].
Furthermore, in Zheng Li's work, the hair-growth-promoting effects of the protopanaxatriol-type ginsenoside Re were associated with the downregulation of TGF-β-pathway-related genes, which are involved in the control of signaling pathways related to hair-growth phase transitions [61]. Their study showed that topical administration of ginsenoside Re onto the back skin of nude mice for up to 45 days significantly increased hair-shaft length and hair persistence time, and stimulated hair-shaft elongation in ex vivo cultures of hair follicles isolated from C57BL/6 mice [61].

Hyperactivation of the c-Jun N-terminal kinase (JNK) pathway is associated with TGF-β-induced hair loss. Korean red ginseng has been reported to exert protective effects against TGF-β-induced hair loss through the inhibition of JNK in radiation-induced apoptosis of HaCaT cells [62].
By promoting the telogen-to-anagen transition of follicular cells and epidermal growth, Shh/Gli signaling regulates hair-follicle development, growth, and cycling [54,55]. Shh−/− mice develop abnormal hair follicular cells in the dermal papillae, and blocking Shh activity in mice diminished hair growth; these results indicate the importance of Shh signaling in hair-growth promotion [56].

Androgenetic alopecia is related to testosterone (TES)-induced delay of the anagen phase and hair loss. In C57BL/6 mice, red ginseng oil (RGO) reversed testosterone-induced suppression of hair regeneration by inducing early anagen entry through up-regulation of the expression of Shh/Gli and Wnt/β-catenin pathway-related proteins: Shh, Smoothened (Smo), β-catenin, Cyclin D1, Cyclin E, and Gli1. Additionally, RGO reduced the protein level of TGF-β but enhanced the expression of the anti-apoptotic protein Bcl-2 [37], suggesting that RGO is a potent therapeutic natural product for the treatment of androgenic alopecia, possibly through hair-regrowth activity [37].

The signaling pathway and anagen-induction effect of ginsenoside F2 were investigated and compared with finasteride with respect to hair-growth induction in the paper by Heon-Sub Shin et al. [43], where MTT assay results indicated that cell proliferation in human DPCs increased by 30% with ginsenoside F2 treatment compared to finasteride [43]. Studying the expression of β-catenin and its transcriptional coactivator Lef-1, ginsenoside F2, compared to the finasteride group, increased their expression while decreasing the expression of DKK-1. Tissue histological analysis showed that administration of ginsenoside F2 promoted hair growth compared to finasteride, with increases in the number of hair follicles, the thickness of the epidermis, and the number of follicles in the anagen phase [43]. Heon-Sub Shin concluded that ginsenoside F2 might be a potential new therapeutic compound for anagen induction and hair growth through the Wnt signaling pathway [43]. In another study, by Matsuda et al., ginsenosides Rg3 and Rb1 [63] extracted from red ginseng stimulated hair-growth activity in an organ culture of mouse vibrissa follicles. No detailed explanation of the mechanism of hair growth is given in this paper, but the results presented by Matsuda et al. [63] indicated that Ginseng Radix possesses hair-growth-promoting activity.
Panax ginseng (PG) has diverse pharmacological effects, such as anti-aging and anti-inflammatory activity, which it exerts by stimulating proliferation and inhibiting apoptosis [64]. PG extract treatment affected the expression of the apoptosis-related genes Bcl-2 and Bax in hair follicles (HFs); through this regulation it reversed the effect of DKK-1 in ex vivo human hair organ culture, antagonizing DKK-1-induced catagen-like changes [9,64].

Growth factors and cytokines have been shown to influence hair-follicle development and cycling [65]. Overexpression and/or secretion of cytokines, such as interleukins (ILs) and interferons (IFNs), causes skin inflammation; TGF-β1 partially inhibited hair growth, and EGF, TNF-α, and IL-1β completely abrogated it [66]. There is an aberrant expression pattern of cytokines in alopecia areata hair follicles.
The presence of CD8+ T cells and NKG2D+ cells around the peri-bulbar area of the affected hair follicles [67], together with the upregulation of several ILs, such as IL-2, IL-7, IL-15, and IL-21, and of IFN-γ, leads to immune activation in an area where natural killer (NK) cells normally remain suppressed [68] and which is defined as an immune-privileged area. Loss of immune tolerance [68] or immune activation [67] leads to hair-follicle dystrophy and acceleration of the catagen phase [9] through the activation of cytotoxic cluster-of-differentiation-8-positive (CD8+) and NK group 2D-positive (NKG2D+) T cells. In alopecia areata (AA), more CD57− CD16+ NK cells are found, and there is an association between NK cells and the collapse of hair-follicle immune privilege (HF-IP), whereas in normal human scalp skin there is no sign of an NK attack on normal anagen VI HFs [69].

Phosphorylated Stat3 in the Janus kinase (JAK)/signal transducer and activator of transcription 3 (STAT3) pathway regulates the activation of CD8+ and NKG2D+ CD8+ T cells [70]. Inhibition of the upstream JAK pathway therefore appears to be a plausible target for developing a therapy for hair loss [67]. In fact, a number of JAK inhibitors, such as tofacitinib, ruxolitinib, baricitinib, CTP-543, PF-06651600, and PF-06700841, are being developed as therapies for alopecia [71,72], most often for alopecia areata, a common form of non-scarring hair loss that usually starts abruptly and has a very high psychological impact [73]; it is a T-cell-mediated disease that produces circular patches of non-scarring hair loss and nail dystrophy [72].

Ginsenoside Rk1 inhibited the lipopolysaccharide-stimulated phosphorylation of JAK2 and STAT3 in murine macrophage cells [74], and ginsenoside 20(S)-Rh2 exerts anti-cancer activity by targeting IL-6-induced JAK2/STAT3 signaling [75]. Topical application of ginsenoside F2 ameliorated dermal skin inflammation by inhibiting the production of IL-17 and ROS [69]. An imbalance of the inflammatory cytokine IL-17 is believed to be involved in the pathogenesis of alopecia areata, and monoclonal antibodies against IL-17A have led to hair regrowth in human volunteers [76]. Treatment with Panax ginseng saponins diminished the proliferation and differentiation of Th17 cells and decreased IL-17 expression [77]. By regulating IL-17 secretion, ginsenosides may thus enhance hair growth in alopecia areata [69,77]. It would be interesting to investigate whether ginsenoside Rk1 or other ginsenosides can target JAK/STAT3 signaling in the dermal papilla and diminish the activation of inflammatory and immune cells.
Conclusion
Ginseng may be a multipurpose natural medicine with an extended history of medical application throughout the globe, particularly in Eastern countries.
The beneficial effects of ginseng cover a wide spectrum, from immune and cardiovascular conditions to cancer and sexual disorders. New scientific advances continue to elucidate new pharmacological activities of ginseng and its ginsenosides. There are studies on the use of ginseng in dermatology investigating its effects, from the molecular to the physiological level, in skin cancer, dermatitis, alopecia, and wound healing.
Observation of the Thermal Casimir Force is Open to Question
We discuss theoretical predictions for the thermal Casimir force and compare them with available experimental data. Special attention is paid to the recent claim of the observation of that effect, as predicted by the Drude model approach. We show that this claim is in contradiction with a number of experiments reported so far. We suggest that the experimental errors, as reported in support of the observation of the thermal Casimir force, are significantly underestimated. Furthermore, the experimental data at separations above $3\,\mu$m are shown to be in agreement not with the Drude model approach, as is claimed, but with the plasma model. The seeming agreement of the data with the Drude model at separations below $3\,\mu$m is explained by the use of an inadequate formulation of the proximity force approximation.
Introduction
During the last ten years much attention has been given to the Casimir force at nonzero temperature. This physical phenomenon is described by the Lifshitz theory 1 which presents the Casimir free energy and force between two parallel plates as a functional of the dielectric permittivity of plate materials calculated along the imaginary frequency axis, ε(iξ). The optical data for the complex index of refraction 2 extrapolated to low frequencies allow the calculation of ε(iξ) using the Kramers-Kronig relation. Surprisingly, for metal test bodies the use of the most natural extrapolation, by means of the Drude model, was shown to be in violation of the Nernst heat theorem 3,4 and in contradiction with experimental data. 5-8 On the other hand, the extrapolation of the optical data below the edge of the absorption bands by means of the plasma model, which disregards dissipation of conduction electrons, turned out to be in agreement with the Nernst theorem, and consistent with the experimental results. This created a serious problem because, in accordance with the classical Maxwell equations, the dielectric permittivity in the quasistatic regime is inversely proportional to the frequency, as in the Drude model, whereas the plasma model is an approximation applicable only at sufficiently high (infrared) frequencies. Currently the use of the Drude and plasma models in the Lifshitz formula is customarily called the Drude 9-11 and plasma 8,12-14 model approaches, respectively.
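To make the distinction between the two approaches concrete, the sketch below evaluates both permittivities along the imaginary frequency axis for gold. The plasma frequency and relaxation parameter used (9.0 eV and 0.035 eV) are values commonly quoted in the Casimir literature and are assumptions for this illustration, not data taken from the present paper.

```python
import math

# Dielectric permittivity along the imaginary frequency axis, xi > 0 (in eV),
# for the two extrapolations discussed in the text:
#   Drude:  eps(i xi) = 1 + wp^2 / (xi * (xi + gamma))   (includes relaxation)
#   plasma: eps(i xi) = 1 + wp^2 / xi^2                   (dissipationless)
WP_AU = 9.0       # assumed plasma frequency of Au, eV (common literature value)
GAMMA_AU = 0.035  # assumed relaxation parameter of Au, eV

def eps_drude(xi_ev: float) -> float:
    return 1.0 + WP_AU**2 / (xi_ev * (xi_ev + GAMMA_AU))

def eps_plasma(xi_ev: float) -> float:
    return 1.0 + WP_AU**2 / xi_ev**2

# First Matsubara frequency at room temperature: xi_1 = 2*pi*k_B*T/hbar.
KB_T_300K_EV = 0.02585                 # k_B * 300 K expressed in eV
xi_1 = 2.0 * math.pi * KB_T_300K_EV    # ~0.162 eV

for xi in (xi_1, 1.0, 5.0):
    print(f"xi = {xi:6.3f} eV  Drude: {eps_drude(xi):10.1f}  plasma: {eps_plasma(xi):10.1f}")

# The two models differ most at low xi (and in the treatment of the
# zero-frequency Matsubara term), which is the origin of their different
# predictions for the thermal Casimir force.
```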
In this paper we discuss the present experimental status of the Drude and plasma model approaches to the thermal Casimir force. In Sec. 2 the main characteristic features of measurements of the Casimir pressure between metal test bodies by means of a micromechanical oscillator are considered. These measurements exclude the Drude model approach but are consistent with the plasma model. Section 3 briefly reviews experiments with semiconductor 15,16 and dielectric 17,18 test bodies, where the Drude model approach was excluded as a description of the dc conductivity of dielectrics. In Sec. 4 experiments 19-21 using a torsion balance and spherical lenses with centimeter-size curvature radii, performed before 2010, are discussed. The first of them 19 was interpreted 22 as being in disagreement with the Drude model, but later this conclusion was cast in doubt. 23 Our main attention here is devoted to the recent experiment 24 claiming the observation of the thermal Casimir force, as predicted by the Drude model approach (see Sec. 5). We demonstrate that at separations below 3 µm the interpretation of this experiment is in fact uncertain because surface imperfections were ignored, and these are invariably present on surfaces of lenses of centimeter-size curvature radius. 25 At separations above 3 µm the experimental data are shown to agree with the plasma model approach, as opposed to what is claimed in Ref. 24. Section 6 contains our conclusions and discussion.
Experiments Between Metal Test Bodies Using a Micromachined Oscillator
These three experiments, conducted with increased precision, 5-8 are independent measurements of the gradient of the Casimir force acting between a sphere of 300 or 150 µm radius and a plate of a micromechanical torsional oscillator, both covered with Au. Using the proximity force approximation (PFA), which leads to negligibly small errors for perfectly spherical surfaces of sufficiently large curvature radii, the gradient of the Casimir force was reexpressed as the Casimir pressure between two parallel plates. Different voltages were applied between the sphere and the plate, and the electric force was measured and found in agreement with exact theoretical results in a sphere-plate geometry. This was used to perform electrostatic calibrations. Specifically, the residual potential difference between the sphere and the plate in the absence of applied voltages was determined and found to be independent of separation. (The details of these experiments are described in Refs. 26 and 27.) It should be stressed that the third, and most precise, experiment using a micromachined oscillator 7,8 is of metrological quality, because the random error of the measured Casimir pressures was made much smaller than the systematic (instrumental) error. The comparison of the measurement results with different theoretical approaches was performed taking into account all possible undesirable systematic effects. Thus, the role of patch potentials was investigated 6 and found to be negligibly small (here only small patches are possible, with a maximum size less than the thickness of the Au layer, i.e., less than 300 nm).
The experimental results exclude the predictions of the Drude model approach over the entire measurement range from 162 to 746 nm at a 95% confidence level. The same results were found to be consistent with the plasma model approach. For the purpose of comparison with the large separation experiment 24 in Sec. 5, in Fig. 1 we present the measurement data indicated as crosses in comparison with theoretical predictions in the region from 700 to 746 nm. In Fig. 1(a) and 1(b) the arms of the crosses indicate the total experimental errors determined at a 95% and 70% confidence levels, respectively. As can be seen in Fig. 1, the prediction of the plasma model approach (shown by the black line) is in excellent agreement with the data, whereas the prediction of the Drude model (the grey line) is experimentally excluded. It should be stressed that the experimental and theoretical results shown in Fig. 1 are independent, and the comparison between experiment and theory has been performed with no fitting parameters.
Experiments with Semiconductor and Dielectric Test Bodies
The Drude-type dielectric permittivity is also used to describe dc conductivity in dielectrics and semiconductors of dielectric type. It is traditional 1 to disregard dc conductivity when dealing with the van der Waals and Casimir forces between dielectric test bodies. This was justified by the presumed smallness of this effect. It was shown, 28 however, that the inclusion of dc conductivity into the Lifshitz theory leads to a violation of the Nernst heat theorem and significantly increases the magnitude of the Casimir force. This effect was tested in the experiment 15,16 measuring the Casimir force difference between an Au sphere of 100 µm radius and a Si plate illuminated with laser pulses performed using an atomic force microscope.
In the absence of laser light, the Si plate was in a dielectric state with a density of free charge carriers of 5 × 10¹⁴ cm⁻³. In the presence of light, the density of free charge carriers was increased by almost 5 orders of magnitude (a semiconductor of metallic type). The experimental results for the Casimir force difference F_C^diff (in the presence minus in the absence of light) exclude the dc conductivity described by the Drude model at a 95% confidence level, but are consistent with the Lifshitz theory with dc conductivity in the absence of light disregarded. In Fig. 2(a) the experimental data for the difference Casimir force are indicated as crosses. The theoretical results are shown by the black and grey lines with dc conductivity in the dark phase disregarded and included, respectively. In this experiment, the experimental and theoretical results were also obtained independently, and their comparison has been made with no fitting parameters (see Refs. 26 and 27 for details).

Another important experiment is the measurement of the thermal Casimir-Polder force between ground-state ⁸⁷Rb atoms, belonging to a Bose-Einstein condensate, and a fused silica plate at separations from about 7 to 10 µm. This experiment was repeated three times: in equilibrium, when the plate temperature was the same as that of the environment, and out of equilibrium, when the plate temperature was higher than that of the environment. In all cases the measurement data were in agreement with theory disregarding the dc conductivity of fused silica, 17 but were in contradiction with theory taking this conductivity into account. 18 As an example, Fig. 2(b) shows the measured fractional shift of the trap frequency γ_z (crosses) due to the Casimir-Polder force as a function of separation. The arms of the crosses are plotted at a 70% confidence level (the environment was at 310 K and the plate at 605 K). The black and grey lines show theoretical results computed disregarding and including the dc conductivity of fused silica, respectively. We emphasize that, as in the previous two cases, this experiment is an independent measurement and no fitting parameters have been used when comparing the data with theory.
Torsion Balance Experiments
Experiments measuring the Casimir force with a torsion pendulum 19-21,24 use the configuration of a spherical lens of greater than 10 cm radius of curvature R in close proximity to a plate. The first such experiment 19 was performed in 1997 with a lens of R = 12.5 cm and a plate both coated with Au, and was criticized 29,30 for overestimation of the level of agreement between the measurement data and theory. The results of this experiment at about 1 µm separation were used 22 to exclude the Drude model approach to the Casimir force. At d = 1 µm the latter predicts a -18.9% thermal correction to the force which was not observed. Recently the possibility of a systematic correction due to time-dependent fluctuations in the distance between the lens and the plate was discussed. 23 It was speculated 23 that such a correction, if it is relevant to the experiment of Ref. 19, might bring the data into agreement with the Drude model approach.
Another torsion balance experiment 20 used a lens of R = 20.7 cm curvature radius and a plate also coated with Au. The measured data over the separation region from 0.48 to 6.5 µm demonstrated a high level of agreement with the standard theory of the Casimir force and did not support the existence of large thermal corrections predicted by the Drude model. The comparison between the data and the standard theory of the Casimir force in Ref. 20 is characterized by χ² = 513 with the number of degrees of freedom equal to 558. From this it follows 31 that the probability of obtaining a larger value of the reduced χ² in the next individual measurement is as large as 91%, which indicates a high level of agreement.
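As a rough numerical illustration (not taken from Refs. 20 or 31), the 91% figure quoted above is simply the probability of exceeding χ² = 513 for a chi-square variable with 558 degrees of freedom, which can be checked as in the following sketch.

```python
# Rough illustration: probability of obtaining a larger chi^2 in a repeated measurement,
# for chi^2 = 513 with 558 degrees of freedom (values quoted above).
from scipy.stats import chi2

prob_larger = chi2.sf(513.0, 558)   # survival function P(chi^2 > 513 | dof = 558)
print(f"probability of a larger chi^2 in the next measurement: {prob_larger:.2f}")  # about 0.91
```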
One more torsion balance experiment 21 used a Ge lens and a Ge plate. The experimental precision of this experiment was not sufficient to discriminate between different theoretical approaches to the Casimir force.
The characteristic feature of the torsion pendulum experiments listed above is that all of them use fitting parameters, such as the force offset, the offset of the voltage describing a noncompensated electric force, etc. The values of these parameters are found from the best fit between the experimental data and different theories. This means that torsion balance experiments are not independent measurements, and the comparison of their results with theory is not as definitive as for the independent measurements considered in Secs. 2 and 3.
An advantage of torsion pendulum experiments is the use of lenses with centimeter-size radii of curvature, which significantly increases the magnitude of the Casimir force and allows measurements at separations of a few micrometers.
The use of large lenses, however, leads to a problem: the experimental results in Refs. 19-21 were compared with theory using the simplest version of the PFA for the Casimir force acting between a lens and a plate, 26

F_sp(d, T) = 2πR F(d, T),   (1)

where F(d, T) is the free energy (per unit area) at temperature T in the configuration of two parallel plates. Recently it was shown 32 that for lenses of centimeter-size curvature radius at separations below a few micrometers, Eq. (1) is not applicable, because the surface imperfections invariably present on such lenses must be taken into account. Calculations show 32 that the Casimir force between a perfectly spherical lens and a plate described by the Drude model at d < 3 µm can be made approximately equal to the force between a lens with some surface imperfections and a plate described by the plasma model, and vice versa. This makes measurements of the Casimir force by means of torsion pendulum experiments uncertain at separations below 3 µm.
Purported Observation of the Thermal Casimir Force
Recently, one more experiment measuring the thermal Casimir force has been performed 24 using the torsion pendulum technique. The attractive force between an Au-coated spherical lens of R = (15.6 ± 0.31) cm radius of curvature and an Au-coated plate was measured over a wide range of separations from 0.7 to 7.3 µm. As in the case of the earlier torsion pendulum experiments mentioned in Sec. 4, the experiment of Ref. 24 is not an independent measurement. It uses two phenomenological parameters determined from the best fit between the experimental data and different theoretical approaches (see below for details). Furthermore, similar to earlier experiments exploiting large spherical lenses, the experiment of Ref. 24 ignores surface imperfections that are invariably present on surfaces of real lenses, as discussed in Sec. 4, and uses Eq. (1) in computations notwithstanding the fact that it is applicable only for perfectly spherical surfaces. It should be emphasized that the experiment of Ref. 24 measures not the thermal Casimir force in itself, but the total attractive force between a lens and a plate, which is up to an order of magnitude larger. The total force is assumed to be the sum of the Casimir force and the electrostatic force from the large patches. As the authors themselves recognize, 24 "an independent measurement of this electrostatic force with the required accuracy is currently not feasible". That is why it is hypothesized that there are large patches on Au-coated surfaces due to absorbed impurities or oxides whose size λ satisfies the condition given in Eq. (3). Note that small patches due to spatial changes in surface crystalline structure, also mentioned in Ref. 24, satisfy a condition λ ≪ d because the thickness of the Au films used is only 70 nm. The electric force due to such small patches is exponentially small. 33 Keeping in mind the parameters of the configuration used in Ref. 24, for the size of large patches satisfying Eq. (3) one obtains λ ≈ 50 µm. The electric force due to large patches was modelled by the term 24

F_p(d) = −πε₀RV²_rms/d,   (4)

where ε₀ is the permittivity of free space, and the parameter V_rms describes the magnitude of the voltage fluctuations across the test bodies. The value of this parameter was not measured and it was used as one of the fitting parameters (we assume that an attractive force is negative).
In the presence of a voltage V applied between the lens and the plate, the measured force was represented in the form 24

F(d) = −πε₀R(V − V_m)²/d + F_p(d) + F_C(d),   (5)

where V_m ≈ 20 mV is the residual potential difference between the lens and the plate and F_C(d) is the Casimir force at laboratory temperature T = 300 K. It was found 24 that V_m is nearly independent of the separation where it was determined (the variation was 0.2 mV between 0.7 and 7 µm). The measurement data for the total force at different separations were taken with an applied potential equal to V_m in order to cancel the force from the residual potential difference. The data were corrected for the presence of fluctuations in separation and for a long-term drift of a vibration-isolation slab. The mean experimental data for the total measured force (obtained from 383 sweeps) multiplied by separations are shown on a logarithmic scale in Fig. 2 of Ref. 24 together with their errors as a function of separation. We reproduce these data in our Fig. 3 plotted on a linear scale. As can be seen in Fig. 3, the errors are unexpectedly small. For example, at the largest separation, d = 7.29 µm, the measured total force is equal to F = (19.54 ± 0.28) pN, leading to a relative error of 1.4%. For the remaining 20 data points the relative errors vary from 0.86% to 2.2% (note that according to Ref. 24 the total force "is measured at 30 logarithmically spaced plate separations," but for some unexplained reason the data at only 21 separations are shown 24 in Figs. 2 and 3). According to the caption to Fig. 2 in Ref. 24, "the vertical error bars include contributions from the statistical scatter of the points as well as from uncertainties in the applied corrections." Such an important contribution to the total experimental error as the systematic (instrumental) error, which is understood as an error of a calibrated device used in force measurements (i.e., the smallest fractional division of the scale of the device), is not mentioned. The omission of a systematic error may explain the claimed smallness of all errors in Figs. 2 and 3 of Ref. 24. The point is that the absolute instrumental errors are typically constant, and this leads to a quick increase of the total relative error with increasing separation distance. No such increase is seen in the error bars of Fig. 2 in Ref. 24, as would be expected if they represented the total experimental errors. One should also mention that the confidence level at which the errors are found is not indicated in Ref. 24. We assume that it is on the level of one sigma. The corrected mean data were fitted to the theoretical expression of the form 24

F(d) = a + F_p(d) + F_C(d),   (6)

where, in comparison with (5), one more fitting parameter a was introduced as a constant force offset due to voltage offsets in the measurement electronics. 24 Here, F_C(d) is the theoretical thermal Casimir force which can be computed in the framework of the Lifshitz theory 1 using either the Drude or the plasma model approach.
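To make the structure of such a fit concrete, the following minimal sketch fits a force of the form of Eq. (6) with the two fitting parameters a and V_rms. It is not the analysis of Ref. 24: the Casimir term is a toy large-separation model (the plasma asymptote discussed below), and the "data" are synthetic, generated only for illustration.

```python
# Minimal sketch (not the analysis of Ref. 24): fitting a total lens-plate force of the
# form F(d) = a + F_p(d) + F_C(d) with two fitting parameters a and V_rms.
# F_C(d) is a toy large-separation Casimir model; the data are synthetic.
import numpy as np
from scipy.constants import epsilon_0, pi
from scipy.optimize import curve_fit

R = 0.156                                  # lens radius of curvature (m), value quoted above
kT_zeta3 = 1.381e-23 * 300.0 * 1.202       # k_B * T * zeta(3) at T = 300 K

def casimir_toy(d):
    # Toy Casimir force at large separations (N); plasma-model asymptote.
    return -kT_zeta3 * R / (4.0 * d ** 2)

def total_force(d, a_pN, V_rms):
    # a is a constant force offset in pN; V_rms is the rms patch voltage in volts.
    patch = -pi * epsilon_0 * R * V_rms ** 2 / d
    return a_pN * 1e-12 + patch + casimir_toy(d)

# Synthetic "measurements" at a few separations above 3 um (meters).
rng = np.random.default_rng(3)
d_data = np.array([3.5e-6, 4.5e-6, 5.5e-6, 6.5e-6, 7.3e-6])
F_data = total_force(d_data, 3.6, 4.5e-3) + rng.normal(0.0, 2e-13, d_data.size)

(a_fit, Vrms_fit), _ = curve_fit(total_force, d_data, F_data, p0=(0.0, 5e-3))
print(f"fitted offset a = {a_fit:.2f} pN, fitted V_rms = {abs(Vrms_fit)*1e3:.2f} mV")
```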
From the fitting of the mean experimental data for the total force to the theoretical force in Eq. (6) with two fitting parameters V_rms and a, it was concluded in Ref. 24 that the data are in excellent agreement with the thermal Casimir force calculated using the Drude model approach. The plasma model was excluded in the measured separation range. Below we demonstrate that these conclusions are in fact not supported.
Computations of the thermal Casimir force within the Drude model approach were performed in Ref. 24 using the tabulated 34 optical data for the complex index of refraction of Au extrapolated to low frequencies by means of the Drude model. The claimed excellent agreement of these computations with the data manifested itself as a value of the reduced χ² = 1.04 with the fitting parameters a = −3.0 pN and V_rms = 5.4 mV. It is easily seen, however, that in the experiment under consideration (the number of degrees of freedom is equal to 21 − 2 = 19) this value of the reduced χ² implies that the probability of obtaining a larger value of the reduced χ² in the next individual measurement is equal 31 to 41%. For the results of an individual measurement fitted to some model, such a χ²-probability could be considered as being in favor of this model. If, however, the mean measured force over a large number of repetitions is used for the fit, as in Ref. 24, the χ²-probability should be larger than 50% in order for the measured data to be considered as supporting the theoretical model.
The next important fact is that at separations d > 3 µm the experimental data of Ref. 24 are not in agreement with the Drude model approach, as is claimed, but with the plasma model. At such large separations the theoretical predictions of the Drude and plasma model approaches to a large extent do not depend on the values of the plasma frequency and relaxation parameter. Furthermore, at d > 5 µm the Casimir forces calculated using the Drude and plasma model approaches are approximately given by 26,27

F_Drude(d) ≈ −ζ(3)k_B T R/(8d²),  F_plasma(d) ≈ −ζ(3)k_B T R/(4d²),

where ζ(z) is the Riemann zeta function and k_B is the Boltzmann constant, i.e., the predictions of both approaches differ by a factor of two.
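The factor-of-two difference can be checked numerically, as in the following sketch; the temperature and radius of curvature are the values quoted above, and the separations are illustrative.

```python
# Illustrative check of the large-separation asymptotes quoted above: the Drude and
# plasma model approaches give sphere-plate Casimir forces differing by a factor of two.
import numpy as np
from scipy.constants import k as k_B
from scipy.special import zeta

T, R = 300.0, 0.156                         # temperature (K) and radius of curvature (m)
d = np.array([5.0e-6, 6.0e-6, 7.3e-6])      # separations (m)

F_drude = -zeta(3) * k_B * T * R / (8.0 * d ** 2)
F_plasma = -zeta(3) * k_B * T * R / (4.0 * d ** 2)

for di, fd, fp in zip(d, F_drude, F_plasma):
    print(f"d = {di*1e6:.1f} um: Drude {fd*1e12:.2f} pN, plasma {fp*1e12:.2f} pN, ratio {fp/fd:.1f}")
```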
To compare the experimental data for the total force with the Drude and plasma model approaches at d > 3 µm (six experimental points in Fig. 2 of Ref. 24 and in our Fig. 3), we have repeated the fitting procedure to Eq. (6) with all the same corrections as were introduced by the authors. Keeping in mind that Eq. (6) contains two fitting parameters, the number of degrees of freedom is equal to four. When the Drude model approach at T = 300 K is used in the fit, the best agreement with Eq. (6) is achieved at a = −0.29 pN and V_rms = 5.45 mV. The respective reduced χ² = 1.65 leads 31 to a probability of obtaining a larger reduced χ² in the next measurement equal to 16%. This signifies a poor agreement of the data with the predictions of the Drude model.
To check the predictions of the plasma model approach at d > 3 µm, we used the generalized plasma-like model 8,14,26,27 to calculate the thermal Casimir force. In this case at T = 300 K the best agreement between Eq. (6) and the data is achieved with a = 3.6 pN, V_rms = 4.5 mV and the reduced χ² = 0.67. Thus, in the next measurement a larger reduced χ² will be obtained 31 with a probability of 61%. This confirms that the plasma model is in good agreement with the large-separation data of Ref. 24. In Fig. 4 the magnitudes of the total theoretical forces (electric plus Casimir) multiplied by separations are shown by the grey and black lines. They are obtained from the best fit to the experimental data of Ref. 24, indicated as crosses, using the Drude and plasma model approaches, respectively. It is seen that the force data at d > 3 µm are in good agreement with the plasma model approach.
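The χ²-probabilities quoted in this section follow from the same rule as before; the sketch below reproduces them from the reduced χ² values and numbers of degrees of freedom stated in the text.

```python
# Illustrative sketch of the chi^2-probabilities quoted in this section: the full-range
# fit of Ref. 24 (reduced chi^2 = 1.04, 19 dof) and the restricted refits above 3 um (4 dof).
from scipy.stats import chi2

cases = [
    ("full range, Drude ", 1.04, 19),
    ("d > 3 um, Drude   ", 1.65, 4),
    ("d > 3 um, plasma  ", 0.67, 4),
]
for name, red_chi2, dof in cases:
    p = chi2.sf(red_chi2 * dof, dof)      # probability of a larger chi^2 in a repetition
    print(f"{name}: reduced chi^2 = {red_chi2:.2f}, dof = {dof}, P = {100*p:.0f}%")
```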
A seeming agreement of the fit performed in Ref. 24 with the Drude model approach at separations d < 3 µm may be explained by an unjustified use of the PFA in the simplest form of Eq. (1). As explained in Sec. 4, this formulation of the PFA is not applicable to the configuration of a large lens and a plate spaced a few micrometers apart, because the surface imperfections invariably present on such lenses cannot be neglected in this range.
Conclusions and Discussion
In the foregoing, we have discussed the comparison between experiment and theory in connection with the claimed observation of the thermal Casimir force in Ref. 24. Our first conclusion, which follows from Sec. 5, is that the reduced χ² obtained in the fit of Ref. 24 corresponds to a χ²-probability of only 41%, which does not support the claimed excellent agreement between the mean measured data and the Drude model approach. Our second conclusion is that the experiment 24 is not an independent measurement of the thermal Casimir force, but a fit using two fitting parameters and a phenomenological expression for the electric force, presumably arising from large surface patches. According to the authors, this electric force cannot be measured with sufficient precision, although it is up to an order of magnitude greater than the thermal Casimir force. Keeping in mind that the experiments 5-8,15-17 are independent measurements of the Casimir and Casimir-Polder forces or the Casimir pressure, one may cast doubt on the results of the experiment of Ref. 24. Note also that experiments 19,20 use a fitting procedure similar to that of Ref. 24, but arrive at the opposite conclusion that the Drude model is not supported.
According to our third conclusion, the experimental errors in the mean total force in Ref. 24 are significantly underestimated. This is seen from the fact that the systematic (instrumental) errors, which are typically separation-independent, were not addressed and taken into account in the balance of errors. This resulted in surprisingly small relative experimental errors in the mean measured forces (equal to, e.g., 1.4% at the separation 7.29 µm).
The fourth conclusion is that at separations above 3 µm the measurement data 24 are in agreement not with the Drude model approach to the thermal Casimir force, as is claimed, 24 but with the plasma model. This is readily demonstrated by the application of the fitting procedure 24 with all the same corrections, as made by the authors of Ref. 24, to the last six experimental points measured at separations above 3 µm.
The last, fifth, conclusion is that a seeming agreement of the experimental data for the mean total force 24 with the Drude model at separations below 3 µm can be explained by the use 24 of an inadequate formulation of the PFA. The formulation that was employed is applicable only to perfectly shaped spherical surfaces, whereas lenses of centimeter-size radius of curvature (such as exploited in Ref. 24) invariably contain surface imperfections that must be taken into account in computations.
To conclude, the results of Ref. 24 cannot be considered as a reliable confirmation of the predictions of the Lifshitz theory combined with the Drude model. Therefore, the problem of thermal Casimir force remains to be solved.
The P-T Probability Framework for Semantic Communication, Falsification, Confirmation, and Bayesian Reasoning
Many researchers want to unify probability and logic by defining logical probability or probabilistic logic reasonably. This paper tries to unify statistics and logic so that we can use both statistical probability and logical probability at the same time. For this purpose, this paper proposes the P-T probability framework, which is assembled with Shannon's statistical probability framework for communication, Kolmogorov's probability axioms for logical probability, and Zadeh's membership functions used as truth functions. Two kinds of probabilities are connected by an extended Bayes' theorem, with which we can convert a likelihood function and a truth function from one to another. Hence, we can train truth functions (in logic) by sampling distributions (in statistics). This probability framework was developed in the author's long-term studies on semantic information, statistical learning, and color vision. This paper first proposes the P-T probability framework and explains different probabilities in it by its applications to semantic information theory. Then, this framework and the semantic information methods are applied to statistical learning, statistical mechanics, hypothesis evaluation (including falsification), confirmation, and Bayesian reasoning. Theoretical applications illustrate the reasonability and practicability of this framework. This framework is helpful for interpretable AI. To interpret neural networks, we need further study.
Example 1. There are five labels: y1 = "child", y2 = "youth", y3 = "middle aged", y4 = "elder", and y5 = "adult". Notice that some youths and all middle-aged people and elders are also adults. Suppose that ten thousand people go through a door. For everyone denoted by x, entrance guards judge if x is adult, or if "x is adult" is true. If 7000 people are judged to be adults, then the logical probability of y5 = "x is adult" is 7000/10,000 = 0.7. If the task of entrance guards is to select one from the five labels for every person, there may be only 1000 people who are labeled "adult". The statistical probability of "adult" should be 1000/10,000 = 0.1.
Why is the selected probability of y5 less than its logical probability? The reason is that the other 6000 adults are labeled "youth", "middle aged", or "elder". In other words, a person can increase the selected probability of only one of the five labels by 1/10,000, whereas a person can increase the logical probabilities of two or more labels by 1/10,000. For example, a 20-year-old man increases the logical probabilities of both "youth" and "adult" by 1/10,000.
An extreme example is that the logical probability of a tautology, such as "x is adult or not adult", is 1. In contrast, its statistical probability is almost 0 in general, because a tautology is rarely selected.
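As a toy numerical sketch of Example 1 (with a synthetic population and hypothetical age thresholds and selection rule, not the figures used above), the following code contrasts the two kinds of probability.

```python
# Toy sketch of Example 1 (synthetic data; the age thresholds and the guards'
# selection rule are illustrative assumptions): the logical probability of a label
# counts every person for whom the label is true, while the statistical (selected)
# probability counts only those for whom the label was the single reported label.
import numpy as np

rng = np.random.default_rng(0)
ages = rng.integers(1, 90, size=10_000)            # ten thousand people

def is_true(label, age):
    # Hypothetical crisp extensions, only for illustration.
    return {"child": age < 15, "youth": 15 <= age < 30, "middle aged": 30 <= age < 60,
            "elder": age >= 60, "adult": age >= 18}[label]

def select_one(age):
    # The guard reports exactly one label; an adult is labeled "adult" only occasionally,
    # otherwise the more specific label is used.
    if age >= 18 and rng.random() < 0.1:
        return "adult"
    for label in ("child", "youth", "middle aged", "elder"):
        if is_true(label, age):
            return label
    return "adult"

labels = ["child", "youth", "middle aged", "elder", "adult"]
logical = {lb: float(np.mean([is_true(lb, a) for a in ages])) for lb in labels}
counts = {lb: 0 for lb in labels}
for a in ages:
    counts[select_one(a)] += 1
statistical = {lb: counts[lb] / ages.size for lb in labels}

print("logical probabilities:    ", {k: round(v, 3) for k, v in logical.items()})
print("statistical probabilities:", {k: round(v, 3) for k, v in statistical.items()})
# The logical probabilities need not sum to 1; the statistical probabilities do.
```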
From this example, we can find:
1. A hypothesis or label yj has two probabilities: a logical probability and a statistical (selected) probability. If we use P(yj) to represent its statistical probability, we cannot use P(yj) for its logical probability.
2. Statistical probabilities are normalized (the sum is 1), whereas logical probabilities are not. The logical probability of a hypothesis is, in general, larger than its statistical probability.
3. For a given age x, such as x = 30, the sum of the truth values of the five labels may be larger than 1.
4. The logical probability of "adult" is related to the population age distribution P(x), which is a statistical probability distribution. Clearly, the logical probabilities of "adult" obtained at the door of a school and at the door of a hospital must be different.
The logical probability of yj calculated above accords with Reichenbach's frequentist definition of logical probability [9]. However, Reichenbach did not distinguish logical probability from statistical probability.
In the popular probability systems, such as the axiomatic system defined by Kolmogorov [10], all probabilities are represented by "P" so that we cannot use two kinds of different probabilities at the same time. Nor can we distinguish whether "P" represents a statistical probability or a logical probability, such as in popular confirmation measures [11].
Popper complained about Kolmogorov's probability system, saying: "he assumes that, in 'P(a, b)' …a and b are sets; thereby excluding, among others, the logical interpretation according to which a and b are statements (or 'propositions', if you like)." ( [7], pp. 330-331, where P(a, b) is a conditional probability and should be written as P(a|b) now. ) Popper hence proposed his own axiomatic system in New Appendices of Conjectures and Refutations ( [7], pp. 329-355). In Popper's probability system, a and b in P(a|b) may be sets, predicates, propositions, and so on. However, Popper also only uses "P" to denote probability, and hence, his axiomatic system does not use statistical and logical probabilities at the same time. For this reason, Popper's probability system is not more practical than Kolmogorov's.
To distinguish statistical probability and logical probability and use both at the same time, I [12] propose the use of two different symbols "P" and "T" to identify statistical probabilities (including likelihood functions) and logical probabilities (including truth function). In my earlier papers [13][14][15], I used "P" for objective probabilities and "Q" for both subjective probability and logical probability. As subjective probability is also normalized and defined with "=" instead of "ϵ (belong to)" (see Section 2.1), I still use "P" for subjective probability now.
A Problem with the Extension of Logic: Can a Probability Function Be a Truth Function?
Jaynes ([16], pp. 651-660) made an effort similar to Popper's to improve Kolmogorov's probability system. He is a famous Bayesian, but he admits that we also need frequentist probability, such as the probability for sample distributions. He has proved that there is an equivalence between Boltzmann's entropy and Shannon's entropy, in which the probability is frequentist or objectively statistical. He summarizes that probability theory is the extension of logic. In his book [16], he uses many examples to show that much scientific reasoning is probabilistic. However, his probability system lacks multivalued or fuzzy truth functions as reasoning tools.
A proposition has a truth value; all truth values of propositions with the same predicate and different subjects form a truth function. In classical (binary) logic, the characteristic function of a set is also the truth function of a hypothesis. Suppose that there is a predicate (or propositional function) yj(x) = "x has property aj", and set Aj includes all x that have property aj. Then, the characteristic function of Aj is the truth function of yj(x). For example, x represents an age or an x-year-old person, and people with age ≥ 18 are defined as adults. Then, the set that includes all x with the property age ≥ 18 is Aj = [18, ∞). Its characteristic function is also the truth function of the predicate "x is adult" (see Figure 1).

Figure 1. Solving the truth functions of "adult" and "elder" using prior and posterior probability distributions. The human brain can guess T("adult"|x) (cyan dashed line) or the extension of "adult" from P(x|"adult") (dark green line) and P(x) (dark blue dash-dotted line), and estimate T("elder"|x) (purple thin dashed line) or the extension of "elder" from P(x|"elder") (grey thin line) and P(x). Equations (12) and (22) are new formulas for calculating T("adult"|x) and T("elder"|x).
If we extend binary logic to continuous value logic, the truth function should change between 0 and 1. Suppose that the number of a group of 60-year-old people is N60, and N60* people among the N60 people are judged to be elderly. Then, the truth value of proposition "a 60-year-old person is elderly" is N60*/N60. This truth value is about 0.8. If x = 80, the truth value should be 1.
According to the logical interpretation of probability, a probability should be a continuous logical value, and a conditional probability function P(yj|x) with variable x as the condition should be a truth function (e.g., fuzzy truth function). Shannon calls P(yj|x) the Transition Probability Function (TPF) ( [3], p. 11). In the following, we use an example in natural language to explain why TPFs are not truth functions and how the human brain thinks using conceptual extensions (or denotations), which can be represented by truth functions.
Example 2. (refer to Figure 1). Suppose we know the population age prior distribution P(x) and the posterior distributions P(x|"adult") and P(x|"elder"), where the extension of "adult" is crisp and that of "elder" is fuzzy. Please solve the truth functions or the extensions of labels "adult" and "elder".
According to the existing probability theories, including Jaynes' probability theory, we cannot obtain the truth function from P(x) and P(x|yj). We cannot get the TPF P(yj|x) either. Since P(yj|x) = P(x|yj)P(yj)/P(x) according to Bayes' theorem, we cannot obtain P(yj|x) without P(yj). However, the human brain can estimate the extension of yj from P(x) and P(x|yj) without P(yj) (see Figure 1). With the extension of a label, even if P(x) is changed, the human brain can still predict the posterior distribution and classify people with different labels.
This example shows:
The existing probability theories lack the methods used by the human brain (1) to find the extension of a label (an inductive method) and (2) to use the extension as the condition for reasoning or prediction (a deductive method).
The TPF and the truth function are different, because a TPF is related to how many labels are used, whereas a truth function is not.
Distinguishing (Fuzzy) Truth Values and Logical Probabilities-Using Zadeh's Fuzzy Set Theory
Dubois and Prade [17] pointed out that there was a confusing tradition: researchers often confused the degree of belief (logical probability) in a hypothesis with the truth value of a proposition; we can only do logical operations with the truth values of propositions, not with logical probabilities. I agree with them. Section 7.2 will discuss why we cannot directly do logical operations with logical probabilities. Now, we further distinguish the logical probability and the (continuous or fuzzy) truth value.
The sentence "x is elderly" (where x is one of a group of people) is a propositional function; "this man is elderly" is a proposition. The former has a logical probability, whereas the latter has a truth value. As a propositional function includes a predicate "x is elderly" and the universe X, we also call a propositional function as a predicate. Therefore, we need to distinguish the logical probability of a predicate and the truth value of a proposition.
However, in the probability systems mentioned above, either we cannot distinguish the logical probability of a predicate and the truth value of a proposition (such as in Kolmogorov's probability system), or the truth value is binary (such as in the probability systems of Carnap [18] and Reichenbach [9]).
In Carnap's probability system [18], the logical probability of a predicate is irrelevant to the prior probability distribution P(x). From Example 1, we can find that it is relevant.
Reichenbach [9] uses the statistical result of binary logic to define the logical probability of the logical expression of two proposition sequences (e.g., two predicates) without using continuous truth functions. Additionally, logical probability and statistical probability are not distinguished.
Therefore, the probability systems mentioned above are not satisfactory for the extension of logic. Fortunately, Zadeh's fuzzy set theory [19,20] provides more that we need than the abovementioned probability systems. As the membership grade of an instance xi in fuzzy set θj is the truth value of proposition yj(xi) = "xi is in θj", the membership function of a fuzzy set is equivalent to the truth function of a predicate yj(x). The probability of a fuzzy event proposed by Zadeh [20] is just the logical probability of the predicate "x is in θj". Zadeh [21] thinks that the fuzzy set theory and the probability theory are complementary rather than competitive; fuzzy sets can enrich our concepts of probability.
However, it is still difficult for a machine to obtain truth functions or membership functions.
Can We Use Sampling Distributions to Optimize Truth Functions or Membership Functions?
Although the fuzzy set theory has made significant achievements in knowledge representation, fuzzy reasoning, and fuzzy control, it is not easy to use the fuzzy set theory for statistical learning, because membership functions are usually defined by experts rather than obtained from statistics.
A sample includes some examples. For instance, (x: age, y: label) is an example; many examples, such as (5, "child"), (20, "youth"), (70, "elder") …, form a sample. The posterior distribution P(x|yj) represents a sampling distribution. The core method of statistical learning is to optimize a likelihood function P(x|θj) (θj is a model or a set of parameters) with a sampling distribution P(x|yj). We also want to use sampling distributions to optimize truth functions or membership functions so that we can connect statistics and logic.
A significant advance in this direction is that Wang [22,23] developed the Random Set Falling Shadow Theory. According to this theory, a fuzzy set is produced by a random set; a set value taken by a random set is a range, such as ages 15-30 or 18-28 with label "youth". With many different ranges, we can calculate the proportion in which x belongs to the random set. The limit of this proportion is the membership grade of x in the corresponding fuzzy set.
However, it is still difficult to obtain membership functions or truth functions from the statistics of the random set. The cause is that there are a significant number of labeled examples with a single instance from real life, such as (70, "elder"), (60, "elder"), and (25, "youth"), but there are only very few examples with labeled sets or ranges, such as (15-30, "youth"). Therefore, the statistical method of random sets is not practical.
Due to the above reasons, fuzzy mathematics can perform well in expert systems, but not in statistical learning; many statisticians refuse to use fuzzy sets.
Purpose, Methods, and Structure of This Paper
Many researchers want to unify probability and logic. The natural idea is to define a logical probability system, as Keynes [24], Reichenbach [9], Popper [7], and Carnap [18] did. Some researchers use probabilities as logical variables to set up probability logic [25], such as two types of probability logic proposed by Reichenbach [9] and Adams [26]. Additionally, many researchers want to unify probabilities and fuzzy sets [27][28][29] following Zadeh. Many valuable efforts for the probabilification of the classical logic can be found in [25].
My effort is a little different. I also follow Zadeh to probabilify (e.g., fuzzify) the classical logic, but I want to use both statistical probabilities and logical probabilities at the same time and make them compatible, which means that the consequences of logical reasoning are equal to those of statistical reasoning when samples are enormous. In short, I want to unify statistics and logic, not only probability and logic.
A probability framework is a mathematical model that is constructed by the probabilities of some random variables taking values from some universes. For example, Shannon's communication model is a probability framework. The P-T probability framework includes statistical probabilities (represented by "P") and logical probability (represented by "T"). We may call Shannon's mathematical model for communication the P probability framework.
The purposes of this paper are to propose the P-T probability framework and test this framework through its uses in semantic communication, statistical learning, statistical mechanics, hypothesis evaluation (including falsification), confirmation, and Bayesian reasoning.
The main methods are:
1. using the statistical probability framework (i.e., the P probability framework) adopted by Shannon for electrocommunication as the foundation, and adding Kolmogorov's axioms (for logical probability) and Zadeh's membership functions (as truth functions) to the framework;
2. setting up the relationship between truth functions and likelihood functions by a new Bayes theorem, called Bayes' Theorem III, so that we can optimize truth functions (in logic) with sampling distributions (in statistics); and
3. using the P-T probability framework and the semantic information formulas (1) to express verisimilitude and testing severity and (2) to derive two practical confirmation measures and several new formulas to enrich Bayesian reasoning (including fuzzy syllogisms).
The P-T probability framework was gradually developed in my previous studies for the semantic information G theory (or say the G theory; "G" means the generalization of Shannon's information theory) [12,14,15] and statistical learning [12]. It is the first time in this paper that I name it the P-T probability framework and provide it with philosophical backgrounds and applications.
The rest of this paper is organized as follows. Section 2 defines the P-T probability framework. Section 3 explains this framework by its uses in semantic communication, statistical learning, and statistical mechanics. Section 4 shows how this framework and the semantic information measure support Popper's thought about scientific progress and hypothesis evaluation. Section 5 introduces two practical confirmation measures related to the medical test and the Raven Paradox, respectively. Section 6 summarizes various formulas for Bayesian reasoning and introduces a fuzzy logic that is compatible with Boolean Algebra. Section 7 discusses some questions, including how the theoretical applications exhibit the reasonability and practicality of the P-T probability framework. Section 8 ends with conclusions.
The Probability Framework Adopted by Shannon for Electrocommunication
Shannon uses the frequentist probability system [30] for the P probability framework, which includes two random variables and two universes.

Definition 1 (for the P probability framework adopted by Shannon). X is a discrete random variable taking a value x ϵ X, where X is the universe {x1, x2, …, xm}; P(xi) = P(X = xi) is the limit of the relative frequency of event X = xi. In the following applications, x represents an instance or a sample point. Y is a discrete random variable taking a value y ϵ Y = {y1, y2, …, yn}; P(yj) = P(Y = yj). In the following applications, y represents a label, hypothesis, or predicate. P(yj|x) = P(Y = yj|X = x) is a Transition Probability Function (TPF) (named by Shannon [3]).
Shannon names P(X) the source, P(Y) the destination, and P(Y|X) the channel (see Figure 2a). A Shannon channel is a transition probability matrix or a group of TPFs:

P(Y|X) ⇔
| P(y1|x1) P(y1|x2) … P(y1|xm) |
| P(y2|x1) P(y2|x2) … P(y2|xm) |
| …                            |
| P(yn|x1) P(yn|x2) … P(yn|xm) |
⇔ [P(yj|x)], j = 1, 2, …, n,

where ⇔ denotes equivalence. A Shannon channel or a TPF can be treated as a predictive model to produce the posterior distributions of x: P(x|yj), j = 1, 2, …, n (see Equation (6)).
Figure 2. The Shannon channel and the semantic channel. (a) The Shannon channel described by the P probability framework; (b) the semantic channel described by the P-T probability framework. A membership function ascertains the semantic meaning of yj. A fuzzy set θj may overlap with or be included in another.
The P-T Probability Framework for Semantic Communication
Definition 2. (for the P-T probability framework):
Here, yj is a label or a hypothesis, yj(xi) is a proposition, and yj(x) is a propositional function. We also call yj or yj(x) a predicate. θj is a fuzzy subset of the universe X, which is used to explain the semantic meaning of the propositional function yj(x) = "x ϵ θj" = "x belongs to θj" = "x is in θj". θj is also treated as a model or a set of model parameters. A probability that is defined with "=", such as P(yj) = P(Y = yj), is a statistical probability. A probability that is defined with "ϵ", such as P(X ϵ θj), is a logical probability. To distinguish P(Y = yj) and P(X ϵ θj), we define T(yj) = T(θj) = P(X ϵ θj) as the logical probability of yj. T(yj|x) = T(θj|x) = P(x ϵ θj) = P(X ϵ θj|X = x) is the truth function of yj and the membership function of θj. It changes between 0 and 1, and its maximum is 1.
According to Tarski's truth theory [31], P(X ϵ θj) is equivalent to P("X ϵ θj" is true) = P(yj is true). According to Davidson's truth condition semantics [32], the truth function of yj ascertains the (formally) semantic meaning of yj. A group of truth functions, T(θj|x), j = 1, 2, …, n, forms a semantic channel:

T(Θ|X) ⇔ [T(θj|x)], j = 1, 2, …, n.

The Shannon channel and the semantic channel are illustrated in Figure 2. Now, we explain the relationship between the above logical probability and Kolmogorov's probability. In Kolmogorov's probability system, a random variable takes a set as its value. Let 2^X be the power set (Borel set), including all possible subsets of X, and let Θ denote the random variable taking a value θ ϵ {θ1, θ2, …, θ_|2^X|}. Then, Kolmogorov's probability P(θj) is the probability of the event Θ = θj. As Θ = θj is equivalent to X ϵ θj, we have P(Θ = θj) = P(X ϵ θj) = T(θj). If all sets in 2^X are crisp, then T(θj) becomes Kolmogorov's probability.
According to the above definition, we have the logical probability of yj:

T(θj) = Σi P(xi) T(θj|xi).

The logical probability is the average truth value, and the truth value is the posterior logical probability. Notice that, in general, a proposition only has its truth value, without a logical probability. For example, "every x is white", where x is a swan, has a logical probability, whereas "this swan is white" only has a truth value. Statistical probability P(yj) is equal to logical probability T(θj) for every j only when the following two conditions are tenable:
1. The universe of θ only contains some subsets of 2^X that form a partition of X, which means that any two subsets in the universe are disjoint.
2. The label yj is always correctly selected.
For example, the universe of Y is Y = {"Non-adult", "Adult"} or {"Child", "Youth", "Middle age", "Elder"}; Y ascertains a partition of X. Further, if these labels are always correctly used, then P(yj) = T(θj) for every j. Many researchers do not distinguish statistical probability and logical probability, because they suppose the above two conditions are always tenable. However, in real life, the two conditions are not tenable in general. When we use Kolmogorov's probability system for many applications without distinguishing statistical and logical probabilities, the two conditions are necessary. However, the latter condition is rarely mentioned.
The logical probability defined by Carnap and Bar-Hillel [33] is different from T(θj). Suppose there are three atomic propositions a, b, and c. The logical probability of a minimum term, such as a∧b∧¬c (a and b and not c), is 1/8; the logical probability of b∧c = (a∧b∧c) ∨ (¬a∧b∧c) is 1/4. This logical probability defined by Carnap and Bar-Hillel is irrelevant to the prior probability distribution P(x). However, the logical probability T(θj) is related to P(x). A smaller logical probability needs not only a smaller extension, but also rarer instances. For example, "x is 20 years old" has a smaller logical probability than "x is young", but "x is over 100 years old", with a larger extension, has a still smaller logical probability than "x is 20 years old", because people over 100 are rare.
In mathematical logic, the truth value of a universal predicate is equal to the logical multiplication of the truth values of all propositions with the predicate. Hence, Popper and Carnap conclude that the logical probability of a universal predicate is zero. This conclusion is bewildering. However, T(θj) is the logical probability of the predicate yj(x) itself, without the quantifier (such as ∀x). It is the average truth value. Why do we define the logical probability in this way? One reason is that this mathematical definition conforms to the literal definition that logical probability is the probability in which a hypothesis is judged to be true. Another reason is that this definition is useful for extending Bayes' Theorem.
Bayes' Theorem I. It was proposed by Bayes and can be expressed by Kolmogorov's probabilities:

T(θ1|θ2) = T(θ2|θ1)T(θ1)/T(θ2),

where θ1 and θ2 are two sets on the same universe. Notice that there are one random variable, one universe, and two logical probabilities.
Bayes' Theorem II. It is expressed by frequentist probabilities and used by Shannon. There are two symmetrical formulas:

P(yj|x) = P(x|yj)P(yj)/P(x),  P(x|yj) = P(yj|x)P(x)/P(yj).   (7)

Notice that there are two random variables, two universes, and two statistical probabilities.
Bayes' Theorem III. It is used in the P-T probability framework. There are two asymmetrical formulas:

P(x|θj) = P(x)T(θj|x)/T(θj),  T(θj) = Σi P(xi)T(θj|xi),   (8)

T(θj|x) = T(θj)P(x|θj)/P(x),  T(θj) = 1/max[P(x|θj)/P(x)].   (9)

The two formulas are asymmetrical, because there is a statistical probability on the left side of one and a logical probability on the left side of the other. Their proofs can be found in Appendix A.
T(θj) in Equation (8) is the horizontally normalizing constant (which makes the sum of P(x|θj) be 1), whereas T(θj) in Equation (9) is the longitudinally normalizing constant (which makes the maximum of T(θj|x) be 1).
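A minimal numerical sketch of Equations (8) and (9) is given below, assuming a toy uniform prior over ages and a crisp truth function for "adult"; it checks that the two normalizing constants behave as described.

```python
# Minimal numerical sketch of Bayes' Theorem III (Equations (8) and (9)), using an
# assumed uniform prior P(x) over ages and an assumed crisp truth function for "adult".
import numpy as np

x = np.arange(0, 100)                         # ages 0..99
P_x = np.full(x.size, 1.0 / x.size)           # toy prior P(x)
T_adult_x = (x >= 18).astype(float)           # truth function T(theta_adult|x)

# Equation (8): semantic Bayes prediction with the horizontally normalizing T(theta_j).
T_adult = np.sum(P_x * T_adult_x)             # logical probability = average truth value
P_x_theta = P_x * T_adult_x / T_adult         # likelihood P(x|theta_j); sums to 1
assert np.isclose(P_x_theta.sum(), 1.0)

# Equation (9): recover the truth function with the longitudinally normalizing T(theta_j).
ratio = P_x_theta / P_x
T_back = ratio / ratio.max()                  # maximum equals 1
assert np.allclose(T_back, T_adult_x)

print("T(theta_adult) =", round(float(T_adult), 2))   # 0.82 for this toy prior
```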
The Matching Relation between Statistical Probabilities and Logical Probabilities
How can we understand a group of labels' extensions? We can learn from a sample. Let D be a sample {(x(t), y(t))|t = 1 to N; x(t) ϵ X; y(t) ϵ Y}, where (x(t), y(t)) is an example, and the use of each label is almost reasonable. All examples with label yj in D form a sub-sample denoted by Dj.
Replacing P(x|θj) in Equation (9) with the sampling distribution P(x|yj) obtained from Dj, we obtain the optimized truth function

T*(θj|x) = [P(x|yj)/P(x)]/max[P(x|yj)/P(x)], j = 1, 2, …, n.   (12)

Using Bayes' Theorem II, we further obtain

T*(θj|x) = P(yj|x)/max(P(yj|x)), j = 1, 2, …, n,   (13)

which means that the semantic channel matches the Shannon channel. A TPF P(yj|x) indicates the using rule of a label yj, and hence the above Formulas (12) and (13) let us obtain labels' extensions (truth functions) from their using rules. In my mind, "-" in "P-T" means Bayes' Theorem III and the above two formulas, which serve as a bridge between statistical probabilities and logical probabilities and hence as a bridge between statistics and logic. Now, we can resolve the problem in Example 2 (see Figure 1). According to Equation (12), we have two optimized truth functions T*("adult"|x) = [P(x|"adult")/P(x)]/max(P(x|"adult")/P(x)) and T*("elder"|x) = [P(x|"elder")/P(x)]/max(P(x|"elder")/P(x)), no matter whether the extensions are fuzzy or not. After we obtain T*(θj|x), we can make new probability predictions P(x|θj) using Bayes' Theorem III when P(x) is changed.
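The following sketch illustrates Formulas (12) and (13) on a synthetic sample; the population and the fuzzy labelling rule for "elder" are assumptions made only to show that the two routes yield the same optimized truth function.

```python
# Sketch of Formulas (12) and (13) on a synthetic sample: the age distribution and the
# fuzzy labelling rule for "elder" below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
ages = rng.integers(0, 100, size=200_000)

# Hypothetical using rule: the chance of labelling an x-year-old person "elder"
# rises smoothly around 60 years.
p_label = 1.0 / (1.0 + np.exp(-(ages - 60) / 4.0))
is_elder_label = rng.random(ages.size) < p_label

bins = np.arange(0, 101)
n_x, _ = np.histogram(ages, bins=bins)                       # counts per age
n_x_elder, _ = np.histogram(ages[is_elder_label], bins=bins) # counts labelled "elder"

P_x = n_x / ages.size                                        # prior P(x)
P_x_given_elder = n_x_elder / is_elder_label.sum()           # sampling distribution P(x|y_j)
P_elder_given_x = n_x_elder / np.maximum(n_x, 1)             # TPF P(y_j|x)

# Formula (12): from P(x|y_j) and P(x).
ratio = np.where(P_x > 0, P_x_given_elder / np.maximum(P_x, 1e-12), 0.0)
T12 = ratio / ratio.max()
# Formula (13): from the TPF P(y_j|x).
T13 = P_elder_given_x / P_elder_given_x.max()

print("max |T12 - T13|:", float(np.abs(T12 - T13).max()))    # essentially zero
```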
Next, we prove that Equation (13) is compatible with the statistical method of random sets for membership functions proposed by Wang [23] (refer to Figure 3). Suppose that in Dj there are examples (x, yj) with different x, that xj* is the x at which P(yj|x) reaches its maximum, and that the number of examples (xj*, yj) is Nj*. Then, we divide all examples with yj into Nj* rows. Every row can be treated as a set Sk. It is easy to prove [37] that the truth function obtained from Equation (13) is the same as the membership function obtained from the statistics of a random set.
Therefore, the so obtained truth functions accord with Reichenbach's frequency interpretation of logical probability [9]. However, we should distinguish two kinds of probabilities that come from two different statistical methods. Popper was aware that the two kinds of probability were different, and there should be some relation between them ( [7], pp. 252-258). Equation (13) should be what Popper wanted.
However, the above conclusions provided in Equations (12) and (13) need Dj to be big enough. Otherwise, P(yj|x) is not smooth, and hence P(x|θj) is meaningless. In these cases, we need to use the maximum likelihood criterion or the maximum semantic information criterion to optimize truth functions (see Section 3.2).
The Logical Probability and the Truth Function of a GPS Pointer or a Color Sense
The semantic information is conveyed not only by natural languages, but also by clocks, scales, thermometers, GPS pointers, stock indexes, and so on, as pointed out by Floridi [38]. We can use a Gaussian truth function (a Gaussian function without the normalizing coefficient),

T(θj|x) = exp[−(x − xj)²/(2σ²)],

as the truth function of yj = "x is about xj", where xj is a reading, x is the actual value, and σ is the standard deviation. For a GPS device, xj is the position (a vector) pointed to by yj, x is the actual position, and σ is the Root Mean Square (RMS) error, which denotes the accuracy of the GPS device.
Example 3.
A GPS device is used in a train, and P(x) is uniformly distributed on a line (see Figure 4). The GPS pointer has a deviation. Try to find the most possible position of the GPS device, according to yj. Using the statistical probability method, one might think that the GPS pointer and the circle for the RMS tell us a likelihood function P(x|θj) or a TPF P(yj|x). It is not a likelihood function, because the most possible position is not xj pointed by yj. It is also not TPF P(yj|x), because we cannot know its maximum. It is reasonable to think that the GPS pointer provides a truth function.
Using the truth function, we can use Equation (8) to make a semantic Bayes prediction P(x|θj), according to which the position with the star is the most possible position. Most people can make the same prediction without using any mathematical formula. It seems that human brains automatically use a similar method: predicting according to the fuzzy extension of yj and the prior knowledge P(x).
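A small sketch of Example 3 under stated assumptions follows: the prior is taken uniform along a straight railway line, the truth function is the Gaussian above centred at the pointed position, and the semantic Bayes prediction peaks at the point of the line closest to the pointer. All coordinates and the RMS value are illustrative.

```python
# Sketch of Example 3 under stated assumptions: uniform prior P(x) along a straight
# line, Gaussian truth function centred at the pointed position x_j (off the line),
# and semantic Bayes prediction P(x|theta_j) = P(x)T(theta_j|x)/T(theta_j).
import numpy as np

s = np.linspace(0.0, 1000.0, 2001)                 # positions along the line (m)
line_points = np.stack([s, np.zeros_like(s)], axis=1)
P_x = np.full(s.size, 1.0 / s.size)                # uniform prior on the line

x_j = np.array([420.0, 30.0])                      # pointed position, 30 m off the line
sigma = 25.0                                       # assumed GPS RMS accuracy (m)

d2 = np.sum((line_points - x_j) ** 2, axis=1)
T = np.exp(-d2 / (2.0 * sigma ** 2))               # Gaussian truth function
T_theta = np.sum(P_x * T)                          # logical probability
P_post = P_x * T / T_theta                         # semantic Bayes prediction

best = line_points[np.argmax(P_post)]
print("most possible position on the line:", best)  # close to (420, 0)
```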
A color sense can also be treated as a reading of a device, which conveys semantic information [15]. In this case, the truth function of a color sense yj is also the similarity function or the confusion probability function between x and xj; logical probability T(θj) is the confusion probability of other colors that are confused with xj by our eyes.
We can also explain the logical probability of a hypothesis yj with the confusion probability. Suppose that there exists a Plato's idea xj for every fuzzy set θj. Then, the membership function of θj or the truth function of yj is also the confusion probability function between x and xj [14].
Like the semantic uncertainty of a GPS pointer, the semantic uncertainty of a microphysical quantity may also be expressed by a truth function instead of a probability function.
From Shannon's Information Measure to the Semantic Information Measure
Shannon's mutual information is defined as

I(X; Y) = Σj P(yj) Σi P(xi|yj) log[P(xi|yj)/P(xi)],   (15)

where the base of the log is 2. If Y = yj, I(X; Y) becomes the Kullback-Leibler (KL) divergence

I(X; yj) = Σi P(xi|yj) log[P(xi|yj)/P(xi)].   (16)

If X = xi, I(X; yj) becomes

I(xi; yj) = log[P(xi|yj)/P(xi)].   (17)

Using the likelihood function P(x|θj) to replace the posterior distribution P(x|yj), we have (the amount of) semantic information conveyed by yj about xi:

I(xi; θj) = log[P(xi|θj)/P(xi)] = log[T(θj|xi)/T(θj)].   (18)

The above formula makes use of Bayes' Theorem III. Its philosophical meaning will be explained in Section 4. If the truth value of proposition yj(xi) is always 1, then the above formula becomes Carnap and Bar-Hillel's semantic information formula [33]. The above semantic information formula can be used to measure not only the semantic information of natural languages, but also the semantic information of a GPS pointer, a thermometer, or a color sense. In the latter cases, the truth function also means the similarity function or the confusion probability function.
To average I(xi; θj) over different xi, we have the average semantic information

I(X; θj) = Σi P(xi|yj) log[T(θj|xi)/T(θj)] = Σi P(xi|yj) log[P(xi|θj)/P(xi)],   (19)

where P(xi|yj) (i = 1, 2, …) is the sampling distribution. This formula can be used to optimize truth functions.
To average I(X; θj) over different yj, we have the semantic mutual information

I(X; Θ) = Σj P(yj) Σi P(xi|yj) log[T(θj|xi)/T(θj)] = Σj Σi P(xi, yj) log[P(xi|θj)/P(xi)].   (20)

Both I(X; θj) and I(X; Θ) can be used as the criterion of classifications. We also call I(X; θj) the generalized Kullback-Leibler (KL) information and I(X; Θ) the generalized mutual information.
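As a purely illustrative sketch, the following code evaluates the point semantic information and the generalized KL information defined above, using an assumed uniform prior over ages, an assumed fuzzy truth function for "elder", and an assumed sampling distribution P(x|yj).

```python
# Numerical sketch of the point semantic information and the generalized KL information
# defined above; the prior, truth function, and sampling distribution are assumptions.
import numpy as np

x = np.arange(0, 100)
P_x = np.full(x.size, 1.0 / x.size)                       # toy prior

T_elder = 1.0 / (1.0 + np.exp(-(x - 60) / 4.0))           # fuzzy truth function
T_lp = np.sum(P_x * T_elder)                              # logical probability

# Assumed sampling distribution: people actually called "elder" are mostly over 62.
P_x_given_y = np.where(x >= 62, 1.0, 0.0)
P_x_given_y /= P_x_given_y.sum()

I_point = np.log2(T_elder[70] / T_lp)                     # information of y_j about x = 70
I_avg = np.sum(P_x_given_y * np.log2(T_elder / T_lp))     # generalized KL information

print(f"I(x=70; theta_elder) = {I_point:.2f} bits")
print(f"I(X; theta_elder)    = {I_avg:.2f} bits")
```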
If the truth function is a Gaussian truth function, I(X; Θ) is equal to the generalized entropy minus the mean relative squared error:

I(X; Θ) = −Σj P(yj) log T(θj) − (log₂e) Σj Σi P(xi, yj)(xi − xj)²/(2σj²).   (21)

Therefore, the maximum semantic mutual information criterion is similar to the Regularized Least Square (RLS) criterion that is becoming popular in machine learning. Shannon proposed the fidelity evaluation function for data compression in his famous paper [3]. A specific fidelity evaluation function is the rate-distortion function R(D) [39], where D is the upper limit of the average of the distortion d(xi, yj) between xi and yj, and R is the minimum mutual information for the given D. R(D) will be further introduced in relation to the control of random events in Section 3.3.
Replacing d(xi, yj) with I(xi; θj), I developed another fidelity evaluation function R(G) [12,15], where G is the lower limit of semantic mutual information I(X; Θ), and R(G) is the minimum Shannon mutual information for given G. G/R(G) indicates the communication efficiency, whose upper limit is 1 when P(x|θj) = P(x|yj) for all j. R(G) is called the rate-verisimilitude function (this verisimilitude will be further discussed in Section 4.3). The rate-verisimilitude function is useful for data compression according to visual discrimination [15] and the convergence proofs of mixture models and maximum mutual information classifications [12].
Optimizing Truth Functions and Classifications for Natural Language
An essential task of statistical learning is to optimize predictive models (such as likelihood functions and logistic functions) and classifications. Since, by using a truth function T(θj|x) and the prior probability distribution P(x), we can produce a likelihood function, a truth function can also be treated as a predictive model. Additionally, a truth function used as a predictive model has the advantage that it still works when P(x) is changed. Now, we consider Example 2 again. When sample Dj is not big enough, and hence P(x|yj) is unsmooth, we cannot use Equations (12) or (13) to obtain a smooth truth function. In this case, we can use the generalized KL formula to obtain an optimized continuous truth function. For example, the optimized truth function is

T*(θelder|x) = arg max over θelder of Σi P(xi|"elder") log[T(θelder|xi)/T(θelder)],   (22)

where "arg max ∑" means to maximize the sum by selecting the parameters θelder of the truth function.
In this case, we can assume that T(θelder|x) is a logistic function:

T(θelder|x) = 1/[1 + exp(−u(x − v))],

where u and v are two parameters to be optimized. If we know P(yj|x) without knowing P(x), we may assume that P(x) is constant to obtain the sampling distribution P(x|yj) and the logical probability T(θelder) [12]. It is called Logical Bayesian Inference [12] to (1) obtain the optimized truth function T*(θj|x) from the sampling distributions P(x|yj) and P(x) and (2) make the semantic Bayes prediction P(x|θj) using T*(θj|x) and P(x). Logical Bayesian Inference is different from Bayesian Inference [5]. The former uses the prior P(x), whereas the latter uses the prior P(θ).
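The sketch below shows one way such an optimization of Equation (22) could be carried out numerically for a logistic truth function; the sample and the optimizer settings are illustrative assumptions, not the author's data or procedure.

```python
# Sketch of Equation (22): choosing the parameters (u, v) of a logistic truth function
# for "elder" by maximizing the generalized KL information, given a sampling
# distribution P(x|"elder") and a prior P(x). Synthetic sample, illustrative settings.
import numpy as np
from scipy.optimize import minimize

x = np.arange(0, 100).astype(float)
P_x = np.full(x.size, 1.0 / x.size)                        # toy prior over ages

rng = np.random.default_rng(2)
elder_ages = np.clip(rng.normal(70, 8, size=5_000).round(), 0, 99).astype(int)
P_x_given_elder = np.bincount(elder_ages, minlength=100) / elder_ages.size

def neg_generalized_kl(params):
    u, v = params
    T = 1.0 / (1.0 + np.exp(np.clip(-u * (x - v), -500, 500)))  # logistic truth function
    T_lp = np.sum(P_x * T)                                      # logical probability
    return -np.sum(P_x_given_elder * np.log2(T / T_lp + 1e-12))

res = minimize(neg_generalized_kl, x0=np.array([0.5, 60.0]), method="Nelder-Mead")
u_opt, v_opt = res.x
print(f"optimized u = {u_opt:.2f}, division point v = {v_opt:.1f} years")
```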
In statistical learning, it is called label learning to obtain smooth truth functions or membership functions with parameters. From the philosophical perspective, it is the induction of labels' extensions.
In popular statistical learning methods, the label learning of two complementary labels, such as "elder" and "non-elder", is easy, because we can use a pair of logistic functions as two TPFs P(θ1|x) and P(θ0|x) or two truth functions T(θ1|x) and T(θ0|x) with parameters. However, multi-label learning is difficult [40], because it is impossible to design n TPFs with parameters. Nevertheless, using the P-T probability framework, multi-label learning is also easy, because every label's learning is independent [12].
With optimized truth functions, we can classify different instances into different classes using a classifier yj = f(x). For instance, we can classify people with different ages into classes with labels "child", "youth", "adult", "middle aged", and "elder". Using the maximum semantic information criterion, the classifier is

yj = f(x) = arg max over yj of log[T(θj|x)/T(θj)] = arg max over yj of I(x; θj).

This classifier changes with the population age distribution P(x). With the prolongation of the human life span, v in the truth function T*(θelder|x) and the division point of "elder" will automatically increase [12].
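A two-label sketch of this classifier is given below; the truth functions and the two alternative priors are assumptions, and the point is only that the division point of "elder" moves when P(x) changes.

```python
# Sketch of the maximum semantic information classifier f(x) = argmax_j log[T(theta_j|x)/T(theta_j)]
# with two toy labels, "elder" and "non-elder"; truth functions and priors are assumed.
import numpy as np

x = np.arange(0, 100).astype(float)
T_elder = 1.0 / (1.0 + np.exp(-(x - 60) / 3.0))
T_funcs = {"non-elder": 1.0 - T_elder, "elder": T_elder}

def division_point(P_x):
    # Classify every age and return the smallest age classified as "elder".
    T_lp = {lb: np.sum(P_x * T_funcs[lb]) for lb in T_funcs}
    info = np.stack([np.log2(T_funcs[lb] / T_lp[lb] + 1e-12) for lb in ("non-elder", "elder")])
    is_elder = np.argmax(info, axis=0) == 1
    return float(x[is_elder].min())

P_young = np.exp(-x / 30.0); P_young /= P_young.sum()                      # younger population
P_old = np.exp(-(x - 50.0) ** 2 / (2 * 20.0 ** 2)); P_old /= P_old.sum()   # older population

print("division point of 'elder' (young population):", division_point(P_young))
print("division point of 'elder' (older population):", division_point(P_old))
```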
The maximum semantic information criterion is compatible with the maximum likelihood criterion and is different from the maximum correctness criterion. It encourages us to reduce the underreporting of small-probability events. For example, if we classify people over 60 into the elder class according to the maximum correctness criterion, then we may classify people over 58 into the elder class according to the maximum semantic information criterion, because elderly people are less numerous than non-elderly or middle-aged people. To predict earthquakes, if one uses the maximum correctness criterion, he may always predict "no earthquake will happen next week". Such predictions are meaningless. If one uses the maximum semantic information criterion, he will predict "an earthquake is possible next week" in some cases, even if he makes more mistakes.
The P-T probability framework and the G theory can also be used for improving maximum mutual information classifications and mixture models [12].
Truth Functions Used as Distribution Constraint Functions for Random Events' Control
A truth function is also a membership function and can be treated as the constraint function of the probability distribution of a random event or the density distribution of random particles. We simply call this function the Distribution Constraint Function (DCF).
The KL divergence and the Shannon mutual information can also be used to measure the control amount of controlling a random event. Let X be a random event, and P(x) = P(X = x) be the prior probability distribution. X may be the energy of a gas molecule, the size of a machined part, the age of one of the people in a country, and so on. P(x) may also be a density function.
If P(x|yj) is the posterior distribution after a control action yj, then the KL divergence I(X; yj) is the control amount (in bits), which reflects the complexity of the control. If the ideal posterior distribution is P(x|θj), then the effective control amount is

Ic(X; θj) = Σi P(xi|θj) log[P(xi|yj)/P(xi)].

Notice that P(xi|θj) is on the left of "log" instead of the right. For the generalized KL information I(X; θj), when the prediction P(x|θj) approaches the fact P(x|yj), I(X; θj) approaches its maximum. In contrast, for the effective control amount Ic(X; θj), as the fact P(x|yj) approaches the ideality P(x|θj), Ic(X; θj) approaches its maximum. For an inadequate P(x|yj), Ic(X; θj) may be negative. P(x|yj) may also have parameters.
A truth function T(θj|x) used as a DCF constrains the posterior distribution P(x|yj). If θj is a crisp set, this constraint means that x cannot be outside of θj. If θj is fuzzy, it means that the probability of x outside of θj should be limited. There are many distributions P(x|yj) that meet this condition, but only one needs the minimum KL information I(X; yj). For example, assuming that xj makes T(θj|xj) = 1, if P(x|yj) = 1 for x = xj and P(x|yj) = 0 for x ≠ xj, then P(x|yj) meets the condition. However, this P(x|yj) needs an information amount I(X; yj) that is not the minimum. I studied the rate-tolerance function R(Θ) [41], which is the extension of the complexity-distortion function R(C) [42]. Unlike the rate-distortion function, the constraint condition of the complexity-distortion function is that every distortion d(xi, yj) = (xi − yj)² is less than a given value C, which means the constraint sets possess the same magnitude. Unlike the constraint condition of R(C), the constraint condition of R(Θ) is that the constraint sets are fuzzy and possess different magnitudes. I have concluded [14,15]:
1. For given DCFs T(θj|x) (j = 1, 2, …, n) and P(x), when P(x|yj) = P(x|θj) = P(x)T(θj|x)/T(θj), the KL divergence I(X; yj) and Shannon's mutual information I(X; Y) reach their minima, and the effective control amount Ic(X; yj) reaches its maximum. If every set θj is crisp, I(X; yj) = −log T(θj) and I(X; Y) = −∑j P(yj) log T(θj).
2. A rate-distortion function R(D) is equivalent to a rate-tolerance function R(Θ), and a semantic mutual information formula can express it with truth functions or DCFs (see Appendix B for details). However, an R(Θ) function may not be equivalent to an R(D) function, and hence R(D) is a special case of R(Θ).
Let dij = d(xi, yj) be the distortion or the loss when we use yj to represent xi. D is the upper limit of the average distortion. For given P(x), we can obtain the minimum Shannon mutual information I(X; Y), i.e., R(D). The parameterization of R(D) ([43], p. 32) includes two formulas:

D(s) = Σi Σj P(xi) P(yj|xi) dij,  R(s) = sD(s) − Σi P(xi) log λi,

where the parameter s = dR/dD ≤ 0 reflects the slope of R(D). The posterior distribution P(y|xi) is

P(yj|xi) = P(yj) exp(s dij)/λi,  λi = Σj P(yj) exp(s dij),   (27)

which makes I(X; Y) reach its minimum. As s ≤ 0, the maximum of exp(sdij) is 1 (reached when sdij = 0). An often-used distortion function is d(xi, yj) = (yj − xi)². For this distortion function, exp(sdij) is a Gaussian function (without the coefficient). Therefore, exp(sdij) can be treated as a truth function or a DCF T(θxi|y), where θxi is a fuzzy set on Y instead of X; λi can be treated as the logical probability T(θxi) of xi. Now, we can find that Equation (27) is actually a semantic Bayes formula (in Bayes' Theorem III). An R(D) function can be expressed by the semantic mutual information formula with a truth function, and is then equal to an R(Θ) function (see Appendix B for details).
In statistical mechanics, a similar distribution to the above P(y|xi) is the Boltzmann distribution [44]:
P(xi|T) = exp[−ei/(kT)]/Z,  Z = ∑i exp[−ei/(kT)],
where P(xi|T) is the probability of a particle in the ith state with energy ei, or the density of particles in the ith state with energy ei; T is the absolute temperature, k is the Boltzmann constant, and Z is the partition function. Suppose that ei is the ith energy, Gi is the number of states with ei, and G is the total number of all states. Then, P(xi) = Gi/G is the prior distribution. Hence, the above formula becomes
P(xi|T) = P(xi) exp[−ei/(kT)]/Z′,  Z′ = ∑i P(xi) exp[−ei/(kT)].    (29)
Now, we can find that exp[−ei/(kT)] can be treated as a truth function or a DCF, Z′ as a logical probability, and Equation (29) as a semantic Bayes formula in Bayes' Theorem III.
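A small numerical sketch (Python; the energy levels, degeneracies, and temperature are invented values) illustrates how the grouped form above can be read as a semantic Bayes formula, with exp[−ei/(kT)] in the role of a truth function or DCF and Z′ in the role of a logical probability.

import numpy as np

k = 1.380649e-23          # Boltzmann constant (J/K)
T = 300.0                 # assumed absolute temperature (K)

# Hypothetical energy levels e_i (J) and their degeneracies G_i (assumptions).
e = np.array([0.0, 1.0e-21, 2.0e-21, 4.0e-21])
G = np.array([1, 3, 5, 7])

P_prior = G / G.sum()                      # P(x_i) = G_i / G: the prior distribution
truth = np.exp(-e / (k * T))               # exp[-e_i/(kT)]: treated as a truth function / DCF
Z_prime = np.sum(P_prior * truth)          # Z': treated as a logical probability

# Semantic Bayes form of Equation (29): posterior over energy levels.
P_level = P_prior * truth / Z_prime

# The same distribution obtained by summing the state-wise Boltzmann distribution over each level.
Z_states = np.sum(G * truth)
P_level_check = G * truth / Z_states

print(np.allclose(P_level, P_level_check))  # True: the two routes agree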
For local equilibrium systems, the different areas of a system have different temperatures. I ([14], pp. 102-103) derived the relationship between the minimum Shannon mutual information R(Θ) and the thermodynamic entropy S with the help of the truth function or the DCF (see Appendix C for details). This relationship indicates that the maximum entropy principle is equivalent to the minimum mutual information principle.
How Popper's Thought about Scientific Progress is Supported by the Semantic Information Measure
As early as 1935, in The Logic of Scientific Discovery [7], Popper put forward that the less the logical probability of a hypothesis, the easier it is to falsify, and hence the more empirical information it conveys. He says: "The amount of empirical information conveyed by a theory, or its empirical content, increases with its degree of falsifiability." (p. 96) "The logical probability of a statement is complementary to its degree of falsifiability: it increases with decreasing degree of falsifiability." (p. 102) However, Popper had not provided a proper semantic information formula. In 1948, Shannon proposed the famous information theory with statistical probability [3]. In 1950, Carnap and Bar-Hillel proposed a semantic information measure with logical probability:
I(p) = log[1/mp] = −log mp,
where p is a proposition, and mp is its logical probability. However, this formula is irrelevant to the instance that may or may not make p true. Therefore, it can only indicate how severe the test is, not how well p survives the test.
In 1963, Popper published his book Conjectures and Refutations [45]. In this book, he affirms more clearly that the significance of scientific theories is to convey information. He says: "It characterizes as preferable the theory which tells us more; that is to say, the theory which contains the greater amount of experimental information or content; which is logically stronger; which has greater explanatory and predictive power; and which can therefore be more severely tested by comparing predicted facts with observations. In short, we prefer an interesting, daring, and highly informative theory to a trivial one." ( [45], p. 294).
In this book, Popper proposes using ( [45], p. 526) P(e, hb)/P(e, b) or log[P(e, hb)/P(e, b)] to represent how well a theory survives a severe test, where e is interpreted as the supporting evidence of the theory h, and b is background knowledge. P(e, hb) and P(e, b) in [45] are conditional probabilities, whose modern forms are P(e|h, b) and P(e|b). Different from Carnap and Bar-Hillel's semantic information formula, the above formula can reflect how well evidence e supports theory h. However, P(e, hb) is the conditional probability of e. It is not easy to explain P(e, hb) as a logical probability. Hence, we cannot say that log[P(e, hb)/P(e, b)] represents the amount of semantic information.
There have been several semantic or generalized information measures [38,46,47]. My semantic information measure supports Popper's theory about hypothesis evaluation more than others. We use an example to show its properties.
Suppose that yj is a GPS pointer or a hypothesis "x is about xj" with a Gaussian truth function T(θj|x) = exp[−(x − xj)^2/(2σ^2)] (see Figure 5). Then, we have the amount of semantic information:
I(xi; θj) = log[P(xi|θj)/P(xi)] = log[T(θj|xi)/T(θj)].    (32)
Figure 5 shows that the truth function and the semantic information change with x. Small P(xi) means that xi is unexpected; large P(xi|θj) means that the prediction is correct; log[P(xi|θj)/P(xi)] indicates how severely and how well yj is tested by xi. Large T(θj|xi) means that yj is true or close to the truth; small T(θj) means that yj is precise. Hence, log[T(θj|xi)/T(θj)] indicates the verisimilitude of xj reflecting xi. Unexpectedness, correctness, testability, truth, precision, verisimilitude, and deviation are all reconciled in the formula. Figure 5 indicates that the less the logical probability is, the more information there is; the larger the deviation is, the less information there is; a wrong hypothesis will convey negative information. According to this measure, the information conveyed by a tautology or a contradiction is 0. These conclusions accord with Popper's thought.
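The following sketch (Python; the uniform prior, the pointed position xj, and the standard deviation σ are illustrative assumptions) evaluates I(xi; θj) = log[T(θj|xi)/T(θj)] for such a GPS-like hypothesis, showing that a more precise (less logically probable) hypothesis conveys more information when it is right and negative information when the deviation is large.

import numpy as np

x = np.linspace(-10, 10, 2001)           # discretized positions
P_x = np.ones_like(x) / len(x)           # assumed uniform prior over positions

def semantic_info(x_actual, x_pointed, sigma):
    """I(x_i; theta_j) = log2[T(theta_j|x_i) / T(theta_j)] for a Gaussian truth function."""
    T_given_x = np.exp(-(x - x_pointed) ** 2 / (2 * sigma ** 2))
    T_logical = np.sum(P_x * T_given_x)                  # logical probability T(theta_j)
    T_at_actual = np.exp(-(x_actual - x_pointed) ** 2 / (2 * sigma ** 2))
    return np.log2(T_at_actual / T_logical)

# A precise pointer (small sigma) conveys more information when it is right ...
print(semantic_info(x_actual=0.0, x_pointed=0.0, sigma=0.5))
print(semantic_info(x_actual=0.0, x_pointed=0.0, sigma=2.0))
# ... and negative information when the deviation is large (a wrong hypothesis).
print(semantic_info(x_actual=5.0, x_pointed=0.0, sigma=0.5))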
How the Semantic Information Measure Supports Popper's Falsification Thought
Popper affirms that a hypothesis with less logical probability is more easily falsified. If it survives tests, it can convey more information. The semantic information formula for I(x; θj) reflects this thought. Popper affirms that a counterexample can falsify a universal hypothesis. The generalized KL information (Equation (19)) supports this point of view. The truth function of a universal hypothesis only takes value 0 or 1. If there is an instance xi that makes T(θj|xi) = 0, then I(xi; θj) is −∞. The average information I(X; θj) is also −∞, which falsifies the universal hypothesis.
Lakatos partly accepted Kuhn's thought about falsification. He pointed out [48] that in scientific practices, scientists do not readily give up a theory (or a hypothesis) when minor observed facts falsify it. They often add some auxiliary hypotheses to the theory or its hard core so that the theory with the auxiliary hypotheses accords with more observed facts. Lakatos hence describes his improved falsification as sophisticated falsification.
Lakatos is correct. For example, according to the hard core of science, distances from a GPS device to three satellites ascertain a GPS device's position. However, to provide accurate positioning, the manufacturers also need to consider additional factors, including satellite geometry, signal blockage, atmospheric conditions, and so on. However, Popper is not necessarily wrong. We can plead for Popper with the following reasons:
Popper claims that scientific knowledge grows by repeating conjectures and refutations. Repeating conjectures should include adding auxiliary hypotheses. Falsification is not the aim. Falsifiability is only the demarcation criterion of scientific and nonscientific theories. The aim of science is to predict empirical facts with more information.
Scientists hold a scientific theory depending on whether it can convey more information than other theories. Therefore, being falsified does not mean being given up.
However, there is still a problem. That is, how can a falsified hypothesis convey more information than others? According to the average semantic information formula or the Generalized KL formula, we can raise the average semantic information by increasing the fuzziness or decreasing the predictive precision of the hypothesis to a proper level and reducing the degree of belief in a rule or a major premise.
We may change a universal hypothesis into a universal hypothesis that is not strict. For example, we may change "all swans are white" into "almost all swans are white," "all swans may be white," or "swans are white; the degree of confirmation is 0.9".
A GPS device is a quantitative example. A GPS device uses RMS or the like to indicate its accuracy (correctness and precision). A larger circle around the GPS pointer on a GPS map (see Figure 4) means a larger RMS and lower accuracy. If a GPS device shows a few more actual positions beyond the circle, but not too far, we may not give up the GPS device. We can assume that it has a slightly larger RMS and continue to use it. We choose the best hypothesis, as we want a GPS device with the highest accuracy. If a GPS device may give wrong directions, we may reduce the degree of belief in it to decrease the average information loss [12].
For Verisimilitude: To Reconcile the Content Approach and the Likeness Approach
In Popper's earlier works, he emphasizes the importance of hypotheses with less logical probabilities, without stressing hypotheses' truth. In contrast, in Popper's later book Realism and the Aim of Science [49], he explains that science aims at better explanatory theories, which accord with facts and hence are true, even if we do not know or cannot prove that they are true. Lakatos [50], therefore, points out that Popper's game of science and his aim of science are contradictive. The game of science consists of bold conjectures and severe refutations; the conjectures need not be true. However, the aim of science consists in developing true or truth-like theories about the mind-independent world. Lakatos' criticism is related to Popper's concept of verisimilitude. Popper defined the verisimilitude Vs(a) of a proposition a in terms of its information content 1 − P(a). This definition means that the less the logical probability, the more the information content there is, and hence, the higher the verisimilitude is. Popper wishes that Vs(a) changes from −1 to 1. However, according to this formula, Vs(a) cannot appear between −1 and 0. The more serious problem is that he only makes use of logical probability P(a) without using the truth value of proposition a or without using the consequence that may be closer to truth than another [51], so that it cannot express likeness between the prediction and the consequence. Popper later admitted this mistake and emphasized that we need true explanatory theories that accord with the real world [49].
Now, researchers use three approaches to interpret verisimilitude [52]: the content approach, the consequence approach, and the likeness approach. As the latter two are relevant and different from the content approach, theories of verisimilitude have routinely been classified into two rival camps: the content approach and the likeness approach [52]. The content approach emphasizes tests' severity, unlike the likeness approach, which emphasizes hypotheses' truth or closeness to truth. Some researchers think that the content approach and the likeness approach are irreconcilable [53], just as Lakatos thinks that Popper's game of science and his aim of science are contradictive. There are also researchers, such as Oddie [52], who try to combine the content approach and the likeness approach. However, they admit that it is not easy to combine them.
This paper continues Oddie's effort. Using the semantic information measure as verisimilitude or truth-likeness, we can combine the content approach and the likeness approach easily.
In Equation (32) and Figure 5, the truth function T(θj|x) is also the confusion probability function; it reflects likeness between x and xj. The xi (or X = xi) is the consequence, and the distance between xi and xj in the feature space reflects the likeness. The log[1/T(θj)] represents the testing severity and potential information content. Using Equation (32), we can easily explain an often-mentioned example: why "the sun has 9 satellites" (8 is true) has higher verisimilitude than "the sun has 100 satellites" [52].
Another often-mentioned example is how to measure the verisimilitude of weather predictions [52]. Using the semantic information method, we can assess more practical weather predictions. We use a vector x = (h, r, w) to denote weather, where h is temperature, r is rainfall, and w is wind speed. Let xj = (hj, rj, wj) be the predicted weather and xi = (hi, ri, wi) be the actual weather (consequence). The prediction is yj = "x is about xj". For simplicity, we assume that h, r, and w are independent. The Gaussian truth function may be
T(θj|x) = exp{−[(h − hj)^2/(2σh^2) + (r − rj)^2/(2σr^2) + (w − wj)^2/(2σw^2)]}.
This truth function accords with the core of the likeness approach [52]: that the likeness of a proposition depends on the distance between two instances (x and xj). If the consequence is xi, then the truth value T(θj|xi) of proposition yj(xi) is the likeness. Additionally, the information I(xi; θj) = log[T(θj|xi)/T(θj)] is the verisimilitude, which has almost all desirable properties for which the three approaches are used. This verisimilitude I(xi; θj) is also related to prior probability distribution P(x). The correct prediction of unusual weather has much higher verisimilitude than that of common weather if both predictions are right. Now, we can explain why the Regularized Least Squared Error criterion is becoming popular: it is similar to the maximum average verisimilitude criterion (refer to Equation (21)).
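A short sketch (Python; the weather sample used as the prior, the tolerances σh, σr, σw, and the predicted and actual weather vectors are all invented) computes the likeness T(θj|xi) and the verisimilitude I(xi; θj) = log[T(θj|xi)/T(θj)] for vector-valued weather, and shows that a correct prediction of unusual weather scores higher than a correct prediction of common weather.

import numpy as np

rng = np.random.default_rng(0)

# Assumed historical weather sample (temperature in C, rainfall in mm, wind in m/s),
# used as the prior distribution P(x); common weather dominates.
sample = np.column_stack([
    rng.normal(20, 5, 10000),    # h
    rng.gamma(1.0, 5.0, 10000),  # r
    rng.gamma(2.0, 2.0, 10000),  # w
])

sigma = np.array([2.0, 5.0, 2.0])   # assumed tolerances (sigma_h, sigma_r, sigma_w)

def truth(x, x_pred):
    d = (x - x_pred) / sigma
    return np.exp(-0.5 * np.sum(d * d, axis=-1))

def verisimilitude(x_actual, x_pred):
    T_actual = truth(x_actual, x_pred)            # likeness of consequence to prediction
    T_logical = truth(sample, x_pred).mean()      # logical probability T(theta_j) under the prior
    return np.log2(T_actual / T_logical)

common  = np.array([20.0, 5.0, 4.0])    # common weather
unusual = np.array([38.0, 80.0, 20.0])  # unusual (extreme) weather

# Both predictions are exactly right; the unusual one has higher verisimilitude.
print(verisimilitude(common,  common))
print(verisimilitude(unusual, unusual))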
The Purpose of Confirmation: Optimizing the Degrees of Belief in Major Premises for Uncertain Syllogisms
Confirmation is often explained as assessing the evidential impact on hypotheses (incremental confirmation) [11,54] or the evidential support for hypotheses (absolute confirmation) [55,56]. The degree of confirmation is the degree of belief that is supported by evidence or data [57]. Researchers only use confirmation measures to assess hypotheses. There have been various confirmation measures [11]. I also proposed two different confirmation measures [58]. In the following, I briefly introduce my study on confirmation related to scientific reasoning.
In my study on confirmation, the task, purpose, and method of confirmation are a little different from those in the popular confirmation theories.
The task of confirmation: Only major premises, such as "if the medical test is positive, then the tested person is infected" and "if x is a raven, then x is black", need confirmation. The degrees of confirmation are between −1 and 1. A proposition, such as "Tom is elderly", or a predicate, such as "x is elderly" (x is one of the given people), needs no confirmation. The truth function of the predicate reflects the semantic meaning of "elderly" and is determined by the definition or the idiomatic usage of "elderly". The degree of belief in a proposition is a truth value, and that in a predicate is a logical probability. The truth value and the logical probability are between 0 and 1 instead of −1 and 1.
The purpose of confirmation: The purpose of confirmation is not only for assessing hypotheses (major premises), but also for probability predictions or uncertain syllogisms. A syllogism needs a major premise. However, as pointed out by Hume and Popper, it is impossible to obtain an absolutely right major premise for an infinite universe by induction. However, it is possible to optimize the degree of belief in the major premise by the proportions of positive examples and counterexamples. The optimized degree of belief is the degree of confirmation. Using a degree of confirmation, we can make an uncertain or fuzzy syllogism. Therefore, confirmation is an important link in scientific reasoning according to experience.
The method of confirmation: I do not directly define a confirmation measure, as most researchers do. I derive the confirmation measures by optimizing the degree of belief in a major premise with the maximum semantic information criterion or the maximum likelihood criterion. This method is also the method of statistical learning, where the evidence is a sample.
From the perspective of statistical learning, a major premise comes from a classifier or a predictive rule. We use the medical test as an example to explain the relationship between classification and confirmation (see Figure 6). The binary signal detection and the classification with labels "elder" and "non-elder" are similar. Classifications for watermelons and junk emails are also similar.
Figure 6. Illustrating the medical test and the binary classification for explaining confirmation. If x is in E1, we use e1 as prediction "h is h1"; if x is in E0, we use e0 as prediction "h = h0".
In Figure 6, h1 denotes an infected specimen (or person), h0 denotes an uninfected specimen, e1 is positive, and e0 is negative. We can treat e1 as a prediction "h is infected" and e0 as a prediction "h is uninfected". The x is the observed feature of h; E1 and E0 are two sub-sets of the universe of x. If x is in E1, we select e1; if x is in E0, we select e0. For the binary signal detection, we use "0" or "1" in the destination to predict 0 or 1 in the source according to the received analog signal x.
The two major premises to be confirmed are "if e1 then h1", denoted by e1→h1, and "if e0 then h0", denoted by e0→h0. A confirmation measure is denoted by c(e→h).
The x' ascertains a classification. For a given classification or predictive rule, we can obtain a sample including four kinds of examples: (e1, h1), (e0, h1), (e1, h0), and (e0, h0). Then, we can use the numbers a, b, c, and d of these four kinds of examples (see Table 1) to construct confirmation measures. Information measures and confirmation measures are used for different tasks. To compare two hypotheses and choose the better one, we use an information measure. Using the maximum semantic information criterion, we can find the best x' for maximum mutual information classifications [12]. To optimize the degree of belief in a major premise, we need a confirmation measure. An information measure is used for classifications, whereas a confirmation measure is used after classifications.
Channel Confirmation Measure b* for Assessing a Classification as a Channel
A binary classification ascertains a Shannon channel, which includes four conditional probabilities, as shown in Table 2. We regard predicate e1(h) as the combination of believable and unbelievable parts (see Figure 7). The truth function of the believable part of e1 is T(E1|h) ϵ {0,1}. There are T(E1|h1) = T(E0|h0) = 1 and T(E1|h0) = T(E0|h1) = 0. The unbelievable part is a tautology, whose truth function is always 1. Then, we obtain the truth functions of predicates e1(h) and e0(h) by mixing the two parts. Model parameter b1 is the proportion of the believable part, and b1′ = 1 − |b1| is the proportion of the unbelievable part and also the truth value of e1(h0), where h0 is a counter-instance; hence T(θe1|h1) = 1 and T(θe1|h0) = b1′. The b1′ may be regarded as the degree of disbelief in the major premise e1→h1. The four truth values form a semantic channel, as shown in Table 3.
Table 3. The semantic channel ascertained by two degrees of disbelief b1′ and b0′.
In the medical test, the likelihood ratio measure LR is used for assessing how reliable a testing result (positive or negative) is. Measure F proposed by Kemeny and Oppenheim [55] is
F(e1→h1) = [P(e1|h1) − P(e1|h0)]/[P(e1|h1) + P(e1|h0)] = (LR1 − 1)/(LR1 + 1),
where LR1 = P(e1|h1)/P(e1|h0). Like measure F, measure b* is also a function of LR, and hence both can be used for assessing the medical test. Compared with LR, b* and F can better indicate the distance between a test (any b*) and the best test (b* = 1) or the worst test (b* = −1). Compared with F, b* is better for probability predictions. For example, from b1* > 0 and P(h), we have
P(h1|θe1) = P(h1)/[P(h1) + (1 − b1*)P(h0)].
This formula is simpler than the classical Bayes formula (see Equation (6)). If b1* = 0, then P(h1|θe1) = P(h1). If b1* < 0, then we can make use of Consequence Symmetry to make the probability prediction [58]. So far, it is still problematic to use b*, F, or another measure to assess how good a probability prediction is or to clarify the Raven Paradox.
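A minimal sketch (Python) of this kind of probability prediction: the sensitivity, specificity, and prevalences are invented, and the relation b1* = 1 − 1/LR1 (for LR1 ≥ 1) is assumed here as the optimized degree of belief obtained when the semantic channel matches the Shannon channel; under that assumption, the prediction with b1* reproduces the classical Bayes posterior for any prior.

sens = 0.95          # P(e1|h1): assumed test sensitivity
spec = 0.97          # P(e0|h0): assumed test specificity
LR1 = sens / (1 - spec)            # likelihood ratio of a positive result

F_measure = (LR1 - 1) / (LR1 + 1)  # Kemeny-Oppenheim measure F as a function of LR
b1_star = 1 - 1 / LR1              # assumed optimized channel confirmation b1*

def predict_semantic(prior_h1):
    """P(h1|theta_e1) = P(h1) / [P(h1) + (1 - b1*) P(h0)]."""
    return prior_h1 / (prior_h1 + (1 - b1_star) * (1 - prior_h1))

def predict_bayes(prior_h1):
    """Classical Bayes posterior P(h1|e1)."""
    return sens * prior_h1 / (sens * prior_h1 + (1 - spec) * (1 - prior_h1))

for prior in (0.001, 0.01, 0.1):   # the same b1* serves different priors P(h1)
    print(prior, predict_semantic(prior), predict_bayes(prior))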
Prediction Confirmation Measure c* for Clarifying the Raven Paradox
Statistics not only uses the likelihood ratio to indicate how reliable a testing method (as a channel) is, but also uses the correct rate to indicate how possible the predicted event is. Measures F and b*, like LR, cannot indicate the quality of a probability prediction. For example, b1* > 0 does not mean P(h1|θe1) > P(h0|θe1). Most other confirmation measures have similar problems [58].
We now treat probability prediction P(h|θe1) as the combination of a believable part with proportion c1 and an unbelievable part with proportion c1′, as shown in Figure 8. We call c1 the degree of belief in rule e1→h1 as a prediction. When the prediction accords with the fact, i.e., P(h|θe1) = P(h|e1), c1 becomes c1*. Then, we derive the prediction confirmation measure
c*(e1→h1) = c1* = [P(h1|e1) − P(h0|e1)]/max(P(h1|e1), P(h0|e1)) = (a − c)/max(a, c).    (45)
In like manner, we obtain
c*(e0→h0) = c0* = [P(h0|e0) − P(h1|e0)]/max(P(h0|e0), P(h1|e0)) = (d − b)/max(d, b).
It is easy to prove that c*(e1→h1) also possesses Consequence Symmetry. Making use of this symmetry, we can obtain c*(e1→h0) = −c*(e1→h1) and c*(e0→h1) = −c*(e0→h0).
When c1* > 0, according to Equation (45), we have the correct rate of rule e1→h1:
P(h1|e1) = 1/(2 − c1*).
When c*(e1→h1) < 0, we may make use of Consequence Symmetry to make the probability prediction. However, when P(h) is changed, we should still use b* with P(h) for probability predictions.
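Continuing with invented counts a, b, c, and d, the following lines compute c*(e1→h1) and c*(e0→h0) and check that 1/(2 − c1*) recovers the empirical correct rate.

# Hypothetical counts from a classification: a = n(e1,h1), b = n(e0,h1), c = n(e1,h0), d = n(e0,h0).
a, b, c, d = 80, 20, 10, 890

c1_star = (a - c) / max(a, c)          # c*(e1 -> h1): prediction confirmation of "if e1 then h1"
c0_star = (d - b) / max(d, b)          # c*(e0 -> h0)

correct_rate_e1 = a / (a + c)          # empirical correct rate P(h1|e1)
print(c1_star, 1 / (2 - c1_star), correct_rate_e1)   # 1/(2 - c1*) recovers the correct rate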
For the medical test, we need both confirmation measures. Measure b* tells the reliability of a test as a means or a channel in comparison with other tests, whereas measure c* tells the possibility that a person is infected. For scientific predictions, such as earthquake predictions, b* and c* have similar meanings. Now, we can use measure c* to clarify the Raven Paradox. Hempel [59] proposed the confirmation paradox or the Raven Paradox. According to the Equivalence Condition in classical logic, "if x is a raven, then x is black" (Rule I) is equivalent to "if x is not black, then x is not a raven" (Rule II). A piece of white chalk supports Rule II; hence it also supports Rule I. However, according to the Nicod Criterion [60], a black raven supports Rule I, a non-black raven undermines Rule I, and a non-raven thing, such as a black cat or a piece of white chalk, is irrelevant to Rule I. Hence, there exists a paradox between the Equivalence Condition and the Nicod Criterion.
To clarify the Raven Paradox, some researchers, including Hempel [59], affirm the Equivalence Condition and deny the Nicod Criterion; some researchers, such as Scheffler and Goodman [61], affirm the Nicod Criterion and deny the Equivalence Condition. Some researchers do not fully affirm or deny the Equivalence Condition or the Nicod Criterion. Figure 9 shows a sample for the major premise "if x is a raven, then x is black". First, we consider measure F to see if we can use it to eliminate the Raven Paradox. The difference between F(e1→h1) and F(h0→e0) is that their counterexamples are the same (c = 1), yet their positive examples are different. When d increases to d + Δd, F(e1→h1) = (ad − bc)/(ad + bc + 2ac) and F(h0→e0) = (ad − bc)/(ad + bc + 2dc) unequally increase. Therefore, though measure F denies the Equivalence Condition, it still affirms that Δd affects both F(e1→h1) and F(h0→e0), and hence, measure F does not accord with the Nicod Criterion.
Among all confirmation measures, only measure c* can ensure that f(a, b, c, d) is only affected by a and c. There is c*(e1→h1) = (6 − 1)/6 = 5/6. However, many researchers still think that the Nicod Criterion is incorrect and that it accords with our intuition only because a confirmation measure c(e1→h1) can evidently increase with a and slightly increase with d. For example, Fitelson and Hawthorne [62] believe that measure LR may be used to explain that a black raven can confirm "ravens are black" more strongly than a non-black non-raven thing. Is it true?
For the above example, LR, F, and b* do ensure that the increment Δf caused by Δa = 1 is bigger than the Δf caused by Δd = 1. However, when a = d = 20 and b = c = 10, except for c*, no measure can ensure Δf/Δa > Δf/Δd (see Table 13 in [58] for details), which means that none of the popular confirmation measures can be used to explain that a black raven confirms "ravens are black" more strongly than a piece of white chalk does.
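A quick numerical check (Python; the counts a = d = 20 and b = c = 10 follow the example in the text) compares how one more black raven (Δa = 1) and one more non-black non-raven thing (Δd = 1) change F, LR, and c*.

def F(a, b, c, d):          # Kemeny-Oppenheim measure for e1 -> h1
    return (a * d - b * c) / (a * d + b * c + 2 * a * c)

def LR(a, b, c, d):         # likelihood ratio for e1 -> h1
    return (a / (a + b)) / (c / (c + d))

def c_star(a, b, c, d):     # prediction confirmation measure for e1 -> h1
    return (a - c) / max(a, c)

a, b, c, d = 20, 10, 10, 20
for name, f in (("F", F), ("LR", LR), ("c*", c_star)):
    base = f(a, b, c, d)
    delta_a = f(a + 1, b, c, d) - base   # effect of one more black raven
    delta_d = f(a, b, c, d + 1) - base   # effect of one more non-black non-raven thing
    print(name, delta_a, delta_d, delta_a > delta_d)
# Only c* gives delta_a > delta_d here, i.e., only c* accords with the Nicod Criterion.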
As c*(e1→h1) = (a − c)/max(a, c) and c*(h0→e0) = (d − c)/max(d, c), the Equivalence Condition does not hold, and measure c* accords with the Nicod Criterion very well. Therefore, the Raven Paradox does not exist anymore according to measure c*.
How Confirmation Measures F, b*, and c* are Compatible with Popper's Falsification Thought
Popper affirms that a counterexample can falsify a universal hypothesis or a major premise. However, for an uncertain major premise, how do counterexamples affect its degree of confirmation? Confirmation measures F, b*, and c* can reflect that the existence of fewer counterexamples is more important than the existence of more positive examples.
Popper affirms that a counterexample can falsify a universal hypothesis, according to which we can explain that, for the falsification of a strict universal hypothesis, it is important to have no counterexample. Now, for the confirmation of a major premise that is not a strict universal hypothesis, we can explain that it is important to have fewer counterexamples. Therefore, confirmation measures F, b*, and c* are compatible with Popper's falsification thought.
Scheffler and Goodman [61] proposed selective confirmation based on Popper's falsification thought. They believe that black ravens (whose number is a) support "ravens are black", because black ravens undermine "ravens are not black". Their reason why non-black ravens (whose number is c) support "ravens are not black" is that non-black ravens undermine the opposite hypothesis "ravens are black". Their explanation is significant. However, they did not provide the corresponding confirmation measure. Measure c*(e1→h1) is what they need. Now, we can find that it is confirmation that allows us not to give up a falsified major premise and to keep it with an optimized degree of belief for obtaining more information.
More discussions about confirmation can be found in [58].
Viewing Induction from the New Perspective
From the perspective of statistical learning, induction includes:
(1) induction for probability predictions: to optimize likelihood functions with sampling distributions;
(2) induction for the semantic meanings or the extensions of labels: to optimize truth functions with sampling distributions; and
(3) induction for the degrees of confirmation of major premises: to optimize the degrees of belief in major premises with the proportions of positive examples and counterexamples after classifications.
From the above perspective, induction includes confirmation, and confirmation is part of induction. Only rules or if-then statements with two predicates need confirmation.
The above inductive methods also include the selections or conjectures of predictive models (likelihood functions and truth functions) with the semantic information criterion. The iteration algorithms for mixture models and maximum mutual information classifications [12] reflect repeating conjectures and refutations. Therefore, these inductive methods are compatible with Popper's thought that scientific progress comes from conjectures and refutations.
The Different Forms of Bayesian Reasoning as Syllogisms
According to the above analyses, Bayesian reasoning has different forms, as shown in Table 4. The interpretations with blue words indicate newly added contents because of the use of the P-T probability framework.
Table 4. Different forms of Bayesian reasoning, covering reasoning between two sets (such as Bayes' Theorem I, where θ′ is the complement of θ) and reasoning between an instance and a set or model (such as Bayes' Theorem II, i.e., Bayes' prediction, whose major premise is the conditional statistical probability P(yj|xi) and whose consequence for given yj and P(x) is P(x|yj) = P(x)P(yj|x)/P(yj) with P(yj) = ∑i P(yj|xi)P(xi)).
It is important that the reasoning with truth functions or degrees of confirmation is compatible with statistical reasoning (using Bayes' Theorem II). When T*(θj|x) ∝ P(yj|x) or T(θej|h) ∝ P(ej|h), the consequences are the same as those from classical statistics.
On the other hand, the fuzzy syllogisms are compatible with the classical syllogism, because when b1* = 1 or c1* = 1, for given e1, the consequence is P(h1) = 1 and P(h0) = 0. However, the above fuzzy syllogisms have their limitations. The syllogism with b*(e1→h1) is the generalization of a classical syllogism. The fuzzy syllogism is:
Major premise: if e1 then h1, with the degree of confirmation b1* = b*(e1→h1);
Minor premise: x is in E1 (i.e., e = e1), and the prior distribution is P(h);
Consequence: P(h1|θe1) = P(h1)/[P(h1) + (1 − b1*)P(h0)].
When b1* = 1, if the minor premise becomes "x is in E2, and E2 is included in E1", then this syllogism becomes Barbara (AAA-1) [63], and the consequence is P(h1|θe1) = 1. Hence, we can use the above fuzzy syllogism as the fuzzy Barbara.
As the Equivalence Condition does not hold for both channels' confirmation and predictions' confirmation, if the minor premise is h = h0, we can only use a converse confirmation measure b*(h0→ e0) or c*(h0→e0) as the major premise to obtain the consequence (see [58] for details).
Fuzzy Logic: Expectations and Problems
Truth functions are also membership functions. Fuzzy logic can simplify statistics for membership functions and truth functions.
Suppose that the membership functions of three fuzzy sets A, B, and C are a(x), b(x), and c(x), respectively. Let the logical expression of A, B, and C be F(A, B, C) with three operators ∩, ∪, and ᶜ, and the logical expression of the three truth functions be f(a(x), b(x), c(x)) with three operators ∧, ∨, and ¬. The operators ∩ and ∧ can be omitted. There are 2^8 = 256 different expressions with A, B, and C. To simplify statistics, we expect that the membership function of F(A, B, C) satisfies
mF(A,B,C)(x) = f(a(x), b(x), c(x)).
If the above equation is tenable, we only need to optimize three truth functions a(x), b(x), and c(x) and then calculate the truth functions of various expressions F(A, B, C). Further, we expect that the fuzzy logic in f(...) is compatible with Boolean algebra and Kolmogorov's probability system. However, in general, the logical operations of membership functions do not follow Boolean algebra. Zadeh's fuzzy logic is defined with [19]:
a(x)∧b(x) = min(a(x), b(x)), a(x)∨b(x) = max(a(x), b(x)), ¬a(x) = 1 − a(x).
According to this definition, the law of complementarity does not hold (e.g., A∩Aᶜ ≠ ϕ and A∪Aᶜ ≠ X), since min(a(x), 1 − a(x)) is not always 0 and max(a(x), 1 − a(x)) is not always 1. There are also other definitions. To consider the correlation between two predicates "x is in A" and "x is in B", Wang et al. [64] define:
a(x)∧b(x) = min(a(x), b(x)) and a(x)∨b(x) = max(a(x), b(x)) if the two predicates are positively correlated;
a(x)∧b(x) = max(0, a(x) + b(x) − 1) and a(x)∨b(x) = min(1, a(x) + b(x)) if they are negatively correlated.
If two predicates are always positively correlated, then the above operations become Zadeh's operations. According to the above definition, since A and Aᶜ are negatively correlated, there are
a(x)∧¬a(x) = max(0, a(x) + 1 − a(x) − 1) ≡ 0, a(x)∨¬a(x) = min(1, a(x) + 1 − a(x)) ≡ 1.
Thus, the law of complementarity is tenable.
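A small sketch (Python; the grid of membership values is arbitrary) contrasts Zadeh's operators with the operators for negatively correlated predicates, checking the law of complementarity numerically.

import numpy as np

a = np.linspace(0.0, 1.0, 11)      # arbitrary membership values a(x)
not_a = 1.0 - a                    # complement membership

# Zadeh's operators: complementarity fails.
zadeh_and = np.minimum(a, not_a)   # should be identically 0 for a Boolean-compatible logic
zadeh_or = np.maximum(a, not_a)    # should be identically 1

# Operators for negatively correlated predicates (a and its complement are negatively correlated).
neg_and = np.maximum(0.0, a + not_a - 1.0)
neg_or = np.minimum(1.0, a + not_a)

print(np.allclose(zadeh_and, 0), np.allclose(zadeh_or, 1))   # False False
print(np.allclose(neg_and, 0), np.allclose(neg_or, 1))       # True True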
To build a symmetrical model of color vision in the 1980s, I [65,66] proposed the fuzzy quasi-Boolean algebra on X, which is defined with the Boolean algebra on the cross-set [0,1]×X. This model of color vision is explained as a fuzzy 3-8 decoder, and hence is called the decoding model (see Appendix E). Its practicability can be proved by the fact that the International Commission on Illumination (CIE) recommended a symmetrical model, which is almost the same as mine, for color transforms in 2006 [67]. Using the decoding model, we can easily explain color blindness and color evolution by splitting or merging three sensitivity curves [68].
We can use the fuzzy quasi-Boolean algebra to simplify statistics for the truth functions of natural languages. In these cases, we need to distinguish atomic labels and compound labels and assume that the random sets for the extensions of these atomic labels become wider or narrower at the same time.
However, the correlation between different labels or statements is often complicated. When we choose a set of operators of fuzzy logic, we need to balance reasonability and feasibility.
How the P-T Probability Framework has Been Tested by Its Applications to Theories
The P-T Probability Framework includes statistical probabilities and logical probabilities. Logical probabilities include truth functions, which are also membership functions and can reflect the extensions of hypotheses or labels. Using truth function T(θj|x) and prior distribution P(x), we can produce likelihood function P(x|θj) and train T(θj|x) and P(x|θj) with sampling distributions P(x|yj). Then, we can let a machine reason like the human brain with the extensions of concepts.
By semantic information methods with this framework, we can use logical probabilities to express semantic information: I(xi; θj) = log[T(θj|xi)/T(θj)]. We can also use the likelihood function to express predictive information, because I(xi; θj) = log[T(θj|xi)/T(θj)] = log[P(xi|θj)/P(xi)]. When we calculate the average semantic information I(X; θj) and I(X; Θ), we also need statistical probabilities, such as P(x|yj), to express sampling distributions.
In the popular methods of statistical learning, only statistical probabilities, including subjective probabilities, are used. For binary classification, we can use a pair of logistic functions. However, for multi-label learning and classification, there is no simple method [40]. With (fuzzy) truth functions now, we can easily solve the extensions of multi-labels for classifications by Logical Bayesian Inference [12]. Using Shannon's channels with statistical probabilities and semantic channels with logical probabilities, we can let two channels mutually match to achieve maximum mutual information classifications and to speed up the convergence of mixture models with better convergence proofs [12]. Section 3.3 shows that the semantic Bayes formula with the truth function and the logical probability has been used in rate-distortion theory and statistical mechanics. Therefore, the P-T probability framework can be used not only for communication or epistemology but also for control or ontology.
For the evaluation of scientific theories and hypotheses, with logical probabilities, we can use semantic information I(x; θj) as verisimilitude and testing severity, both of which can be mutually converted. With statistical probabilities, we can express how sampling distributions test hypotheses.
For confirmation, using truth functions, we can conveniently express the degrees of belief in major premises. With statistical probabilities, we can derive the degrees of confirmation, e.g., the degrees of belief optimized by sampling distributions.
For Bayesian reasoning, with the P-T probability framework, we can have more forms of reasoning (see Table 4), such as: two types of reasoning with Bayes' Theorem III, Logical Bayesian Inference from sampling distributions to optimized truth functions, and fuzzy syllogisms with the degrees of confirmation of major premises.
In short, with the P-T probability framework, we can resolve many problems more conveniently.
How to Extend Logic to the Probability Theory?
Jaynes [16] concludes that probability theory is the extension of logic. This conclusion is a little different from the claim that probability itself is the extension of logic; the former includes the latter. The former emphasizes that probability is an important tool for scientific reasoning, which includes Bayesian reasoning, the maximum likelihood estimation, the maximum entropy method, and so on. Although Jaynes interprets probabilities with average truth values ([16], p. 52), this interpretation is not the extension. When an urn contains N balls with only two colors, white and red, we may use truth value Ri to represent a red ball on the ith draw and its negation ¬Ri = Wi to represent a white ball on the ith draw. However, if the balls have more colors, this interpretation will not sound so good. In this case, the frequentist interpretation is simpler.
Zadeh's fuzzy set theory is an essential extension of the classical logic to probability for logical probability. Using (fuzzy) truth functions or membership functions, we can let a machine learn human brains' reasoning with fuzzy extensions. This reasoning can be an essential supplement to the probabilistic reasoning that Jaynes summarized.
This task is difficult, because it is hard to avoid the contradiction between the probability logic and statistical reasoning. According to Kolmogorov's axiomatic system, we have
T(A∪B) = T(A) + T(B) − T(A∩B).
However, we cannot obtain T(A∩B) from T(A) and T(B), because T(A∩B) is related to P(x) and the correlation between the two predicates "x is in A" and "x is in B". The operators "∧" and "∨" can only be used for the operations of truth values instead of logical probabilities. For example, A is a set {youths}, and B is a set {adults}. Suppose that when people include high school students and their parents, the prior distribution is P1(x); when people include high school students and soldiers, the prior distribution is P2(x). T1(A) is obtained from P1(x) and T(A|x), T2(A) from P2(x) and T(A|x), and so on. T1(A∩B) should be much smaller than T2(A∩B), even if T1(A) = T2(A) and T1(B) = T2(B). A better method for T(A∩B) than Equation (52) is first to obtain the truth function, such as T(A∩B|x) = T(A|x)∧T(B|x). Then we have
T(A∩B) = ∑i P(xi) T(A∩B|xi).
Similarly, to obtain T(F(A, B, …)), we should first obtain the truth function f(T(A|x), T(B|x), …) using fuzzy logic operations.
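The dependence of T(A∩B) on the prior can be made concrete with a sketch (Python; the age-based membership functions for "youths" and "adults" and the two population priors are invented), in which T(A∩B) = ∑i P(xi)T(A∩B|xi) comes out much smaller for the students-and-parents population than for the students-and-soldiers population.

import numpy as np

ages = np.arange(10, 61)

# Invented membership functions: "youth" peaks around ages 15-25, "adult" rises after 18.
T_youth = np.exp(-((ages - 20.0) / 5.0) ** 2)
T_adult = 1.0 / (1.0 + np.exp(-(ages - 18.0)))

T_both = np.minimum(T_youth, T_adult)          # truth function of A ∩ B via the ∧ operator

def logical_prob(prior, truth):
    return np.sum(prior * truth)               # T(.) = sum_i P(x_i) T(.|x_i)

# Prior 1: high school students and their parents (two age clusters).
P1 = np.exp(-((ages - 17.0) / 1.5) ** 2) + np.exp(-((ages - 45.0) / 4.0) ** 2)
P1 /= P1.sum()
# Prior 2: high school students and soldiers (ages concentrated between 16 and 25).
P2 = np.exp(-((ages - 17.0) / 1.5) ** 2) + np.exp(-((ages - 22.0) / 2.5) ** 2)
P2 /= P2.sum()

print(logical_prob(P1, T_both), logical_prob(P2, T_both))   # T1(A∩B) << T2(A∩B)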
Various forms of Bayesian reasoning in Table 4 for the extension of the classical logic are compatible with statistical reasoning. For example, Bayes' Theorem III is compatible with Bayes' Theorem II. If we use a probability logic for a fuzzy or uncertain syllogism, we had better check if the reasoning is compatible with statistical reasoning.
In the two types of probability logic proposed by Reichenbach [9] and Adams [26], there is P(p => q) = 1 − P(p) + P(pq), as the extension of p => q (p implies q) in mathematical logic, where p and q are two proposition sequences or predicates. My extension is different. I used confirmation measures b*(e1→h1) and c*(e1→h1), which may be negative, to represent the extended major premises. Most of the extended syllogisms in Table 4 are related to Bayes' formulas. The measure c*(p→q) is a function of P(q|p) (see Equation (44)); they are compatible. It should also be reasonable to use P(q|p) (p and q are two predicates) as the measure for assessing a fuzzy major premise. However, P(q|p) and P(p => q) are different. We can prove P(q|p) ≤ P(p => q) = 1 − P(p) + P(pq).
Equation (55) is used because, in mathematical logic, p => q is equivalent to ¬p∨(p∧q), which gives P(p => q) = 1 − P(p) + P(pq). However, no matter how many examples support p => q, the degree of belief in "if p then q" cannot be 1, as pointed out by Hume and Popper [7]. When counterexamples exist, p => q becomes P(p => q) = 1 − P(p) + P(pq), which is also not suitable as the measure for assessing fuzzy major premises.
In addition, it is much simpler to obtain P(q) or P(pq) from P(q|p) and P(p) = 1 than from P(p => q) and P(p) = 1 [9,26]. If P(p) < 1, according to statistical reasoning, we also need P(q|¬p) to obtain P(q) = P(p)P(q|p) + [1 − P(p)]P(q|¬p). Using a probability logic, we should be careful about whether the results are compatible with statistical reasoning.
Comparing the Truth Function with Fisher's Inverse Probability Function
In the P-T probability framework, the truth function plays an important role. It is called Likelihood Inference to use the likelihood function P(x|θj) as the inference tool (where θj is a constant). It is called Bayesian Inference to use Bayesian posterior P(θ|D) as the inference tool (where P(θ|D) means parameters' posterior distribution for given data or a sample). It is called the Logical Bayesian Inference to use the truth function T(θj|x) as the inference tool [12].
The inverse probability function (IPF) P(θj|x) can make use of the prior knowledge P(x) well. When P(x) is changed, P(θj|x) can still be used for probability predictions. However, why did Fisher and other researchers give up P(θj|x) as the inference tool?
When n = 2, we can easily construct P(θj|x), j = 1,2, with parameters. For instance, we can use a pair of logistic functions as the IPFs. Unfortunately, when n > 2, it is hard to construct P(θj|x), j = 1,2, …, n, because there is normalization limitation ∑j P(θj|x) = 1 for every x. That is why a multi-class or multi-label classification is often converted into several binary classifications [40]. P(θj|x) and P(yj|x) as predictive models also have a serious disadvantage. In many cases, we can only know P(x) and P(x|yj) without knowing P(θj) or P(yj) so that we cannot obtain P(yj|x) or P(θj|x). Nevertheless, we can get truth function T*(θj|x) in these cases. There is no normalization limitation, and hence it is easy to construct a group of truth functions and train them with P(x) and P(x|yj), j = 1,2, …, n, without P(yj) or P(θj).
We summarize that the truth function has the following advantages:
(1) We can use an optimized truth function T*(θj|x) to make probability predictions for different P(x), just as we can use P(yj|x) or P(θj|x).
(2) We can train a truth function with parameters by a sample with a small size, just as we can train a likelihood function.
(3) With the truth function, we can indicate the semantic meaning of a hypothesis or the extension of a label. It is also the membership function, which is suitable for classification.
(4) To train a truth function T(θj|x), we only need P(x) and P(x|yj), without needing P(yj) or P(θj).
(5) Letting T*(θj|x) ∝ P(yj|x), we can bridge statistics and logic.
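A minimal sketch (Python; the discrete prior and the sampling distribution are invented) in the spirit of Logical Bayesian Inference: the optimized truth function is taken as T*(θj|x) ∝ P(x|yj)/P(x), normalized so that its maximum is 1, and is then reused with a different prior to make a probability prediction.

import numpy as np

x = np.arange(10)                            # discrete universe of instances

P_x = np.array([0.02, 0.05, 0.1, 0.18, 0.25, 0.18, 0.1, 0.06, 0.04, 0.02])  # assumed prior
# Assumed sampling distribution P(x|y_j) for a label y_j (e.g., "large x").
P_x_given_y = np.array([0.0, 0.0, 0.0, 0.02, 0.08, 0.15, 0.2, 0.22, 0.18, 0.15])

# Optimized truth function: proportional to P(x|y_j)/P(x), with maximum value 1.
ratio = P_x_given_y / P_x
T_star = ratio / ratio.max()

def predict(prior):
    """Semantic Bayes prediction P(x|theta_j) = P(x)T*(theta_j|x)/T(theta_j) under a given prior."""
    T_logical = np.sum(prior * T_star)
    return prior * T_star / T_logical

# With the training prior, the prediction reproduces the sampling distribution P(x|y_j).
print(np.allclose(predict(P_x), P_x_given_y))
# With a shifted prior, the same truth function still yields a valid prediction.
new_prior = np.roll(P_x, 1)
print(predict(new_prior))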
Answers to Some Questions
If probability theory is the logic of science, may any scientific statement be expressed by a probabilistic or uncertain statement? The P-T probability framework supports Jaynes' point of view. The answer is yes. The Gaussian truth function can be used to express a physical quantity with a small uncertainty, just as it is used to express the semantic meaning of a GPS pointer. The semantic information measure can be used to evaluate the result predicted by a physical formula. Guo [69] studied the operations of fuzzy numbers and concluded that the expectation of the function of fuzzy numbers is equal to the function of the expectations of the fuzzy numbers. For example, (fuzzy 3)*(fuzzy 5) = (fuzzy 15) = (fuzzy 3*5). According to this conclusion, the probabilistic or uncertain statements may be compatible with certain statements about physical laws.
Regarding the question of whether the existing probability theory, the P-T probability framework, and the G theory are scientific theories that can be tested by empirical facts: my answer is that they are not scientific theories tested by empirical facts but tools for scientific theories and applications. They can be tested by their abilities to resolve problems with scientific theories and applications.
Regarding the question of whether there are identifiable logical constraints for assessing scientific theories: I believe that a scientific theory needs not only self-consistency, but also compatibility with other accepted scientific theories. Similarly, a unified theory of probability and logic as a scientific tool should also be compatible with other accepted mathematical theories, such as statistics. For this reason, in the P-T probability framework, an optimized truth function (or a semantic channel) is equivalent to a TPF (or a Shannon channel) when we use them for probability predictions. Additionally, the semantic mutual information has its upper limit: Shannon's mutual information; the fuzzy logic had better be compatible with Boolean algebra.
Some Issues that Need Further Studies
Many researchers are talking about interpretable AI, but the problem of meaning is still very much with us today [70]. The P-T probability framework should be helpful for interpretable AI. The reasons are:
(1) The human brain thinks using the extensions (or denotations) of concepts more than interdependencies. A truth function indicates the (fuzzy) extension of a label and reflects the semantic meaning of the label; Bayes' Theorem III expresses the reasoning with the extension.
(2) The new confirmation methods and the fuzzy syllogisms can express the induction and the reasoning with degrees of belief that the human brain uses, and the reasoning is compatible with statistical reasoning.
(3) The Boltzmann distribution has been applied to the Boltzmann machine [71] for machine learning. With the help of the semantic Bayes formula and the semantic information methods, we can better understand this distribution and the Regularized Least Square criterion related to information.
However, it is still not easy to interpret neural networks with semantic information for machine learning. Can we interpret a neural network as a semantic channel that consists of a set of truth functions? Can we apply the Channels' Matching algorithm [12] to neural networks for maximum mutual information classifications? These issues need further studies.
This paper provides a bridge between statistics and logic. It should be helpful for the semantic dictionary based on statistics that AI needs. However, there is much work to do, such as to design and optimize the truth functions of terms in natural languages, such as "elder", "heavy rain", "normal body temperature", and so on. The difficulty is that the extensions of these terms change from area to area. For example, the extension of "elder" depends on the life span of people in the area; the extensions on rainfalls of "heavy rain" in coastal areas and in desert areas are different. These extensions are related to prior distribution P(x). For unifying logical methods and statistical methods for AI, the efforts of more people are needed.
When samples are not big enough, the degrees of confirmation we obtain are not reliable. In these cases, we need to combine the hypothesis-testing theory to replace the degrees of confirmation with the degree intervals of confirmation. We need further study. This paper only extends a primary syllogism with the major premise "if e = e1 then h = h1" to some effective fuzzy syllogisms. It will be complicated to extend more syllogisms [63]. We need further study for the extension.
We may use truth functions as DCFs and the generalized information/entropy measures for random events' control and statistical mechanics (see Section 3.3). We need further studies for practical results.
Conclusions
As pointed out by Popper, a hypothesis has both statistical probability and logical probability. This paper has proposed the P-T probability framework that uses "P" to denote statistical probability and "T" to denote logical probability. In this framework, the truth function of a hypothesis is equivalent to the membership function of a fuzzy set. Using the new Bayes theorem (Bayes' Theorem III), we can convert a likelihood function and a truth function into one another so that we can use sampling distributions to train truth functions. The maximum semantic information criterion used is equivalent to the maximum likelihood criterion. Statistics and logic are hence connected.
I have introduced how the P-T probability framework is used for the semantic information G theory or the G theory. The G theory is a natural generalization of Shannon's information theory. It can be used to improve statistical learning and explain the relationship between information and thermodynamic entropy, in which the minimum mutual information distribution is equivalent to the maximum entropy distribution.
I have shown how the P-T probability framework and the G theory support Popper's thought about scientific progress, hypothesis evaluation, and falsification. The semantic information measure can reflect Popper's testing severity and verisimilitude. The semantic information approach about verisimilitude can reconcile the content approach and the likeness approach [52].
I have shown how to use the semantic information measure to derive channel confirmation measure b* and prediction confirmation measure c*. Measure b* is compatible with the likelihood ratio and changes between -1 and 1. It can be used to assess medical tests, signal detections, and classifications. Measure c* can be used to assess probability predictions and clarify the Raven Paradox. Two confirmation measures are compatible with Popper's falsification thought.
I have provided several different forms of Bayesian reasoning, including fuzzy syllogisms with confirmation measures b* and c*. I have introduced a fuzzy logic that was used to set up a symmetrical model of color vision. This fuzzy logic is compatible with Boolean algebra, and hence compatible with the classical logic.
The above theoretical applications of the P-T probability framework illustrate its reasonability and practicability. We should be able to find the wider applications of the P-T probability framework. However, to combine the logical and statistical methods for AI, there is still much work to do. In order to apply the P-T probability framework and the G theory to deep learning with neural networks and to random events' control for practical results, we need further studies.
Funding: This research received no external funding.
Acknowledgments: I thank the editors of Philosophies and its special issue Science and Logic for giving me a chance to introduce this study systematically. I also thank the two reviewers for their comments, which vastly improved this paper. I should also thank the people who influenced my destiny so that I have the technical and theoretical research experience for this article.
Appendix C. The Relationship between the Minimum Mutual Information R(Θ) and the Thermodynamic Entropy S
We consider the minimum mutual information R(Θ) for given distribution constraint functions T(θj|x) = exp[−ei/(kTj)] (j = 1, 2, …), where Tj is the temperature of the jth area (yj), Nj is the number of particles in the jth area, and N is the total number of particles in the local equilibrium system. The logical probability of yj is T(θj) = Zj/G, and the statistical probability is P(yj) = Nj/N. From Appendix B and the equation for the thermodynamic entropy S, we derive the relationship between R(Θ) and S, where ej = Ej/Nj is the average energy of a particle in the jth area.
Proof that P(q|p) ≤ P(p => q): because P(p) = P(pq) + P(p¬q) ≤ 1, we have P(pq) ≤ 1 − P(p¬q), and thus P(q|p) = P(pq)/P(p) = P(pq)/[P(pq) + P(p¬q)] ≤ 1 − P(p¬q) = 1 − P(p) + P(pq) = P(p => q).
Appendix E. Illustrating the Fuzzy Logic in the Decoding Model of Color Vision
From cones' responses b, g, and r, we can use the fuzzy decoding to produce the eight outputs as mental color signals, as shown in Figure A11.
Describing the Chinese HIV Surveillance System and the Influences of Political Structures and Social Stigma
China's public health surveillance system for HIV was established in the late 1980s and has evolved significantly during the past three decades. With the mode of HIV transmission gradually changing from the sharing of intravenous injecting equipment to sexual exposure, and with the rapid spread of HIV infection among Chinese homosexual men in recent years, an efficient and comprehensive population-level surveillance system for describing epidemic trends and risk behaviours associated with HIV acquisition is essential for effective public health interventions for HIV. The current review describes the overall strength of the Chinese HIV surveillance system and its structural weaknesses from a political and social perspective. The HIV surveillance system in China has undergone substantial revamping, leading to a comprehensive, timely and efficient reporting system. However, large data gaps and a lack of quality control and sharing of information obstruct the full performance of the system. This is largely due to fragmented authoritarianism brought about by the underlying political structure. Social stigma and discrimination in health institutes are also key barriers to further improvements of HIV diagnosis and surveillance in China.
BACKGROUND
By the end of 2009 it was estimated that 740,000 (0.56–0.92 million) people in China were living with HIV [1]. This estimate was obtained by using the Workbook Method recommended by UNAIDS, based on the numbers of newly diagnosed HIV cases and applying related mortality rates reported by the current HIV sentinel surveillance system. The overall national HIV prevalence is estimated to be 0.057% (0.042-0.071%) among the Chinese population [2,3]. However, data from the national sentinel surveillance for HIV/AIDS indicated that magnitudes and trends in HIV prevalence vary substantially across different at-risk populations: e.g. 0.6% HIV prevalence among female sex workers (FSW) in 2009 [2], increases from 1.4% in 2001 to 5.3% in 2009 among men who have sex with men (MSM) [4,5], and increases from 5.9% in 2002 to 9.3% in 2009 among injecting drug users (IDU) [2,6]. The profile of HIV epidemics in China has also been gradually changing in mode of transmission. In the early 1990s, the main driver of HIV transmission was needle sharing among IDU [6][7][8][9]; however, sexual transmission has become the primary mode of transmission in recent years [2,[10][11][12]. The proportion of newly diagnosed HIV cases due to sexual transmission increased from 49.8% in 2005 to 74.7% in 2009, and newly diagnosed cases attributed to homosexual contact have substantially increased from 12.2% in 2007 to 32.5% in 2009 [1,10].
Effective public health interventions for HIV rely on accurate and timely information on the extent and patterns of the spread of infection. As such, an efficient and comprehensive population-level surveillance system for describing epidemic trends and risk behaviours associated with HIV acquisition is essential [13]. Previous studies have provided reviews of the current HIV surveillance system in China, mainly from a functional and epidemiological perspective [14][15][16]. In contrast, the current review aims to describe the overall strength of the Chinese HIV surveillance system and its structural weaknesses from a political and social perspective.
Development of HIV Surveillance Systems in China
Prior to 2004, newly diagnosed HIV/AIDS cases were reported through the National Disease Reporting System (NDRS) (see Table 1 for a list of reportable items in China). The system consists of multiple levels of reporting. Villages, being the lowest administrative level, collected HIV epidemiological data in their allocated areas and reported to prevention units in township hospitals. From the prevention units, HIV data were further sent through county health and epidemic-prevention stations (EPS) to provincial centers and forwarded to the Chinese Academy of Preventive Medicine. With support from the Ministry of Health and the cooperation of provincial centers of health and epidemic prevention, a computerized system had worked effectively to network the monitoring of HIV epidemics at various levels in China, and information transfer experienced no substantial changes for about 25 years. However, all communicable disease surveillance systems in China underwent a significant upgrade in 2004.
The severe acute respiratory syndrome (SARS) outbreak in 2003-2004 exposed the great deficiencies in both timeliness and completeness of infectious disease reporting in NDRS [17]. This substantially changed the landscape of infectious disease surveillance in China. Just 6 months after the SARS epidemic subsided, the China Center for Disease Control (China CDC) launched the China Information System for Disease Control and Prevention (CISDCP) [18]. The system is an information platform constructed mainly with an infectious diseases orientation, aiming to improve the timeliness of case reports, the completeness and accuracy of surveillance data, and the ability for early detection of outbreaks, and to provide direct information sharing among various levels of CDCs and health authorities, replacing the previous NDRS. The upgrade of the overall infectious disease surveillance systems in China directly led to the construction of the web-based HIV/AIDS case reporting system. Under this reporting system, grassroots hospitals remain the primary source of HIV epidemiological information. All diagnosed and confirmed HIV-infected cases admitted to hospitals must be reported through the web-based reporting system to China CDC.
Improving on the previous step-wise hierarchical reporting framework, the new web-based reporting system provided the ability for direct reporting of diagnosed HIV cases from grassroots hospitals to the centralised database in the Chinese MOH through CISDCP (Fig. 1). That is, once disease cases are entered into the system, all levels of CDC can acquire the data 'live' according to their assigned access privileges [19]. This substantially reduced the average reporting period from 7-8 days to less than 2 days. In addition to their assumed responsibility for authenticating the HIV disease information in their administrative regions, municipal and provincial CDCs are required to report to their corresponding Departments of Health (DOHs) and Bureaus of Health (BOHs) and to form networks with local research bodies, universities and other health organisations. At the top of the hierarchy, the National Center for HIV/AIDS Prevention and Control (NCAIDS) of China CDC is the overseeing organisation for assembling and analysing all HIV disease information and then presenting a final report to China MOH. China MOH remains the only legitimate office to disseminate HIV/AIDS-related information to the public, but provincial and municipal CDCs are also authorised to release information under the supervision and approval of China MOH [20,21]. China MOH enacts health policies and designs prevention guidelines and intervention strategies based on the surveillance reports provided by NCAIDS; once a consensus agreement is established between MOH and the Chinese government, feedback recommendations and policies are issued to all lower levels of MOH for execution. Since the feedback information does not go through the web-based information pool, hospitals at the grassroots level may have to wait for a considerable period of time to receive recommendations and guidelines from China MOH.
By 2009, China had 8563 HIV testing laboratories (8273 screening laboratories, 254 confirmatory laboratories, 35 confirmatory central laboratories and 1 National AIDS Reference Laboratory), and 84 laboratories and hospitals were capable of carrying out CD4 and HIV viral load testing [22]. Confirmed HIV diagnoses were reported back to the referring clinical sites and then further integrated into the central database through CISDCP. Hospital-based surveillance in China had cumulatively diagnosed 326,000 people living with HIV/AIDS, and a total of 65,481 people were receiving ART in 2009 [23], which accounted for 62.4% of the AIDS cases [2].
Sentinel Surveillance of HIV in China
The hospital-based surveillance relies on PLHIV attending hospitals to be diagnosed, with cases subsequently reported. Complementing this passive reporting, active surveillance for HIV was adopted in 1995 and further expanded in 2003 to include second-generation behavioural surveillance in China according to WHO recommendations [24,25]. While the existing hospital-based diagnosis system provides HIV diagnosis and treatment data, sentinel surveillance collects information on HIV prevalence, incidence and the risk behaviours of at-risk populations. In China, HIV sentinel surveillance is collaboratively supervised by the Chinese MOH and WHO. In 2000, surveillance pilot sites targeting IDU, FSW, male clients and the general female population were established in Fujian, Xinjiang, Guangxi, Shanxi, Yunnan and Sichuan. The approach utilised 21 surveillance points in these provinces, and at every surveillance point 400 people were sampled twice each year to estimate the prevalence, incidence rate and risk behaviours of the targeted population [26]. The sampling and reporting for each round is completed within 4 weeks. In 2002, China issued the "Standards for HIV Surveillance and HIV/AIDS" and the "STD Comprehensive Surveillance Guideline" to standardise the practice of HIV surveillance. In 2009, the number of sentinel sites in China increased to 1318, covering 14 at-risk population groups [2,14,27]. Since the establishment of the web-based HIV reporting system in 2004, sentinel surveillance data have been integrated with the disease information reported from hospitals. This approach established a comprehensive database for data collection, analysis and sharing and enables high-level comparison and integration of epidemiological and behavioural surveillance data [14]. This further allows evaluation and forecasting activities to guide the formulation of public health intervention strategies.
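To illustrate the kind of point estimate a single sentinel round yields, the sketch below computes a simple prevalence estimate with a normal-approximation confidence interval for one hypothetical surveillance point of 400 participants. The sample size of 400 follows the protocol described above; the positive count and any pooling across rounds are illustrative assumptions, not values taken from the cited surveys.

```python
import math

def prevalence_estimate(n_positive, n_sampled, z=1.96):
    """Point prevalence and normal-approximation 95% CI for one sentinel round."""
    p = n_positive / n_sampled
    se = math.sqrt(p * (1.0 - p) / n_sampled)          # binomial standard error
    lower, upper = max(0.0, p - z * se), min(1.0, p + z * se)
    return p, (lower, upper)

# Hypothetical example: 18 positives among the 400 people sampled at one point.
p, ci = prevalence_estimate(18, 400)
print(f"prevalence = {p:.1%}, 95% CI = ({ci[0]:.1%}, {ci[1]:.1%})")
```

With n = 400, the 95 % half-width of such an interval is roughly two percentage points at prevalences of a few per cent, which is one reason estimates from multiple rounds and sites are aggregated before trends are reported.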
HIV sentinel surveillance has detected the changing trends and patterns of the HIV epidemics in China, from transmission related to injecting practices to sexual transmission, especially homosexual transmission. Further, surveillance data have revealed that the proportion of IDU among all drug users increased from 42% in 1994 [28,29] to 53% in 2001 [29,30] and 69% in 2009 [31]. In addition, during 1997-2009, the proportion of IDU who reported sharing needles increased from 25% to 38% [32]. Among FSW, consistent condom use in commercial sexual contacts in the last month increased from 10% in 1995 to 62% in 2009, and the proportion who reported never using condoms decreased from 70.6% to 2.0% during the same period [14,33].
Fragmented Authoritarianism and HIV Surveillance in China
The current HIV surveillance system in China appears to have improved in its timeliness and effectiveness in providing useful HIV disease information and regular behavioural monitoring of at-risk populations. However, under-utilisation and a lack of transparency of information outside certain authorised channels are possibly the greatest hindrances to optimal use of the current system. Since access to health data in China is largely limited to a small group of authorised personnel at the top of the administrative hierarchy, it often requires extensive bargaining and bureaucratic procedures to obtain epidemiological and behavioural indicators that are commonly accessible in other countries. Hence, a large amount of surveillance data remains in CISDCP and cannot be utilised widely for HIV public health research, broader program evaluation and the planning of control strategies. An HIV surveillance system that is transparent across all stakeholders, including policy makers, health officials, community health organisation leaders and HIV researchers, would benefit the entire HIV sector in the evaluation and implementation of HIV prevention and intervention strategies [34]. To understand the barriers to optimal utilisation of the available systems it is essential to first understand the political structure in China and its influence on surveillance. Although HIV surveillance is considered a purely public health issue in many countries, issues related to HIV in China are highly sensitive and heavily politicised [13,35,36].
Authoritarianism describes any political structure in which overall authority is concentrated in the hands of a single leader or a small group of elites. Prior to the economic reform in China, ultimate political power was held by the chairman of state, who was simultaneously the president of the party and chief commander of the army. Political decisions and national policies were implemented strictly "from top to bottom" through a pyramidal administrative structure (Fig. 2a). During the process of political decentralisation initiated by the economic reform in China, provincial governments and ministerial institutions were given high degrees of autonomy to stimulate economic growth. Although the central government remains at the peak of the power hierarchy, opportunities for competition and bargaining between governments and institutions became possible. This phenomenon was coined "fragmented authoritarianism" by Lampton in 1987 [37]. As a result of fragmented authoritarianism, government authority below the very peak of the Beijing central government is fragmented and disjointed [37,38]. As the highest administrative body, the central government does not generally dictate the execution of policies by provincial governments. Between equal-level governments or governmental institutions, and even between local and central governments, there exists a large space for competition and cooperation, which leads to extensive bargaining concerning policy execution and the development of projects of common interest (Fig. 2b). Fragmented authoritarianism in China can be viewed as the mixed product of traditional authoritarianism and the demands for increased political freedom triggered by the economic reform.

Fig. (1). Schematic diagram of the flow of information for HIV surveillance in China.
The fragmented authoritarian political structure directly affects the HIV surveillance system in China. First, fragmented authoritarianism leads to an absence of accountability in the vertical administration of CDCs and a lack of cooperation among CDCs at the same level. In this structure, the central China MOH cannot exercise direct supervision over the provincial MOHs, which retain authority over personnel appointments, organisational structure and budget distribution. Conversely, policy immobility in inferior governments can only be overcome by the intervention of a superior government with the authority to resolve conflicting interests. In other words, lower-level governments tend to shift their policy overload to the upper levels in order to avoid assuming accountability [38]. Consequently, a large number of responsibilities and agenda items accumulate at the upper end of the political hierarchy. Therefore, China CDC, as a subordinate organisation of China MOH, is caught in an uneasy position whereby it is unable to exercise its designated authority and is simultaneously overloaded with responsibilities that it is not designated to bear. This is reflected by the large amount of collected but unanalysed HIV data accumulated in NCAIDS in central China CDC. As pointed out by Sun et al., the amount of HIV surveillance data from diagnoses, treatment, laboratory tests and behavioural surveillance exceeds the organisational capacity of NCAIDS [14]. Apart from key indicators that are required to be distributed to MOH and the general public, most of the surveillance data are under-utilised [14]. Although the web-based HIV reporting system provides an advanced technological base for data reporting and collection, the dissemination of related summaries and policies to direct HIV prevention and care strategies remains slow and inefficient. In addition, such a fragmented system makes it difficult to authenticate the quality of the data, as intermediate levels of CDCs do not assume accountability for the accuracy of reported data. An evaluation of HIV surveillance systems by Loo et al. indicated that although the Chinese HIV surveillance system obtained high scores for flexibility and timeliness across its surveillance activities, the representativeness and completeness of data are poor in its case reporting [34]. Discrepancies are found between the HIV laboratory testing algorithm stated in protocols and the actual procedures implemented, which results in gaps in the data and reductions in quality [39].
Second, a fragmented authoritarian political structure inevitably leads to a fragmented information system with little openness and few systematic mechanisms for synthesis. As previously analysed, lower-level CDCs can only access HIV information within their own jurisdictions, which represents only a subset of the information pool of the higher-level CDCs. As there is little information sharing, even between neighbouring CDCs at the same level, these information subsets become isolated information islands which can only be integrated by their superiors. In this way, the central China CDC at the top of the administrative pyramid has the ultimate authority over data collection, assembly, analysis and distribution. Hence, information openness depends strongly and solely on the decision of central China CDC, and there exist few monitoring mechanisms to ensure the accuracy of both the reported and the published information. In the current political structure, all levels of government are accountable only to their upper-level administrations, not to the general public. The absence of genuine engagement of civil society groups, including the media and the scientific communities, together with the expectations and pressures from higher-level authorities, often results in results-oriented policy implementation, such that local officials tend to report non-scientific and arbitrary results to satisfy their superiors perfunctorily [40]. In this view, ensuring the accuracy of reported disease information should be a high priority in the future development of HIV surveillance in China.

Fig. (2). Change of political relationships between institutions at the same and different administrative levels in China prior to, and post, economic reform.
Social Stigma and HIV Surveillance in China
Social stigma against PLHIV is common in China. Misconceptions about HIV transmission and prejudice towards PLHIV are widespread among the general population. HIV-positive individuals are often denied employment, education and necessary medical services [41] and are thus more likely not to disclose their HIV status [42,43]. A study by Zhou in 2008 showed that the healthcare expenses of HIV-positive individuals were greatly elevated by social discrimination, and their available social resources were substantially reduced in the presence of social stigma [44]. The study further indicated that PLHIV often perceive healthcare institutions as an indispensable source of social support and one of the few places where they may be willing to disclose their HIV-positive status [44]. However, a separate survey among health workers in hospitals demonstrated that HIV stigmatisation remains the major health-seeking barrier for PLHIV in China [45]. According to a UNAIDS study in 2009, 15% of surveyed participants indicated that their HIV status had been disclosed by health-care professionals to others without their consent [46]. According to the latest available statistics, there had been 326,000 cumulative HIV/AIDS cases in China, including 54,000 deaths, by the end of 2009 [23]. This accounts for only 44% of the estimated number of PLHIV at that time. The percentage of undiagnosed cases cannot be improved if the barrier of stigma against PLHIV in health institutions is not removed. In addition, PLHIV who belong to at-risk groups, especially MSM, are subjected to greater social stigma due to their identity and sexual orientation. Recent studies indicated that only 14-45% of Chinese MSM underwent HIV testing in the past 12 months during 2007-2009 [2,47-53]. Also, it is estimated that only 12-15% of HIV-positive MSM are diagnosed, based on an estimate among MSM in Yunnan province in 2009 [54]. Due to the hidden nature of MSM, sentinel surveillance targeting MSM also requires substantial scale-up. Although the number of sentinel surveillance sites targeting MSM increased from 4 in 2006 to 25 in 2009 [1,14], both epidemic and behavioural surveillance for MSM remain insufficient and limited in many parts of China. Reducing social stigma and discrimination is necessary for improving HIV surveillance in China.
CONCLUSION
In summary, the HIV surveillance system in China has undergone substantial revamping, leading to a comprehensive, timely and efficient reporting system. However, large data gaps and a lack of quality control and information sharing obstruct the full performance of the system. This is largely due to the fragmented authoritarianism brought about by the underlying political structure. Social stigma and discrimination in health institutions are also key barriers to further improvements in HIV diagnosis and surveillance in China. The insufficient biological and behavioural surveillance of Chinese MSM may result in underestimation of the actual prevalence and disease burden in this population.
ACKNOWLEDGEMENT
The study was funded by a Vice Chancellor Fellowship from the University of New South Wales, 2010-2012.
Hierarchy of one-dimensional models in nonlinear elasticity
By using formal asymptotic expansions, we build one-dimensional models for slender hyperelastic cylinders submitted to conservative loads. According to the order of magnitude of the applied loads, we obtain a hierarchy of models going from the linear theory of flexible bars to the nonlinear theory of extensible strings. Résumé. Using formal asymptotic expansions, we construct one-dimensional models of slender hyperelastic cylinders subjected to conservative forces. According to the order of magnitude of the applied forces, we obtain a hierarchy of models ranging from the theory of flexible beams to the theory of elastic strings.
Introduction
Engineers use various one-dimensional models to describe the elastostatic response of slender three-dimensional structures subjected to conservative loads. The most used are the theory of elastic strings, the theory of inextensible strings, the nonlinear theory of elastic straight beams and the linear theory of elastic straight beams, see (Love, 1927), (Timoshenko, 1983) and (Antman, 1972). The way of choosing the model that seems most appropriate to a given situation does not proceed from a thorough analysis and remains very intuitive, because a hierarchy between these various models is not clearly established. Furthermore, the question of whether this list of models is comprehensive, or whether intermediate models exist, remains unanswered. The only way to answer it would be to follow a deductive process in order to construct one-dimensional models starting from three-dimensional elasticity.
Unfortunately, the first concern of researchers has been to separately justify each existing model by introducing appropriate hypotheses, in general explicitly or implicitly connected to specific choices of the scaling of the applied forces. These justifications, which start from the three-dimensional theory, are made by using either formal asymptotic methods, such as the method of asymptotic expansions, or rigorous mathematical arguments, such as Γ-convergence techniques. For further information, the reader can refer to (Trabucho and Viaño, 1996) for a broad review of the bibliography (up to 1996). Let us give a brief overview of the most important justifications. Starting with the work of (Rigolot, 1972) for beams and the works of (Ciarlet and Destuynder, 1979) for plates, a first complete asymptotic analysis of linearly elastic straight beams based on formal asymptotic expansions can be found in (Bermúdez and Viaño, 1984). (Geymonat et al., 1987a) give a rigorous proof of convergence (based on classical tools of functional analysis) in the context of partially anisotropic, heterogeneous and linearly elastic straight beams. That work was first extended by (Geymonat et al., 1987b), then by (Sanchez-Hubert and Sanchez-Palencia, 1991) to fully anisotropic and heterogeneous beams. In the framework of nonlinear elasticity, (Cimetière et al., 1988) provide the first attempt to justify nonlinear models of elastic straight beams by formal asymptotic methods. A derivation of a nonlinear bending-torsion model for inextensible rods by Γ-convergence was proposed by (Mora and Müller, 2002). The models of extensible or inextensible strings have been justified by rigorous mathematical arguments by (Acerbi et al., 1991) and (Mora and Müller, 2004). There is also a great number of works devoted to the justification of various models of membranes, plates or shells associated with thin structures, see for instance (Ciarlet, 1988) and (Fox et al., 1993).
Very few works have been devoted to a hierarchical organization of these models in order to help engineers make the right choice. To our knowledge, the work by (Marigo et al., 1998) was the first in which such a hierarchy of rod models appeared. Unfortunately, the details of the proofs were never published. Since that publication, a few works have been completed with the same objective of hierarchization. Let us particularly quote (Friesecke et al., 2005), who obtain by Γ-convergence a hierarchy of plate models. There are also the works by (Meunier, 2003) and (Meunier, 2004) in which, following the work by (Pantz, 2001), a hierarchy of rod models completely similar to the one introduced by (Marigo et al., 1998) is also obtained formally. The approach in (Meunier, 2003) and (Meunier, 2004) is based on the resolution of a sequence of recursive minimization problems and presents the advantage of admitting a larger set of admissible deformations, thus removing an unnecessary hypothesis of (Fox et al., 1993) and obtaining an expression for the string energy that agrees with that found via Γ-convergence. The goal of the present paper is to detail the results announced by (Marigo et al., 1998).
The main assumptions of our analysis are as follows: (i) the body is homogeneous, elastic and isotropic; (ii) its natural reference configuration is a cylinder; (iii) the applied forces are conservative (but not necessarily dead loads); (iv) the displacements are not necessarily infinitesimal and the elastic potential is non-linear. In this context, as it has been announced by (Marigo et al., 1998), we obtain a hierarchy of four asymptotic one-dimensional models depending on the order of magnitude of the applied loads.
To do so, we first introduce two dimensionless parameters, namely ε and η. The parameter ε is the traditional slenderness parameter, i.e. the ratio between the cross-section diameter and the length of the cylinder. The parameter η is the ratio between a parameter characteristic of the intensity of the applied forces and a parameter representative of the cylinder rigidity (in practice the product of the Young modulus of the material by the area of the cross-section). The loading parameter η is then compared with the small parameter ε. Finally, denoting by n the order of magnitude of the dimensionless load parameter η compared to the slenderness parameter ε, i.e. η ∼ ε^n, we obtain the following models:

1. When n ≥ 3, the slender elastic cylinder behaves like a linear inextensible, flexible beam.
The displacements are infinitesimal (of order n − 2), and the strain energy and the potential of the external forces are infinitesimal, both of order 2n − 2. The leading term of the strain energy corresponds to the bending energy of inextensible infinitesimal Navier-Bernoulli displacements;

2. When n = 2, the displacements are finite, and the strain energy and the potential of the external forces are both of order 2. The leading term of the strain energy corresponds to the bending energy (possibly coupled with a torsion energy) of inextensible finite Navier-Bernoulli displacements;

3. When n = 1, the response of the slender elastic cylinder is similar to that of an inextensible string. The displacements are finite and inextensional. The potential of the external forces is of order 0 while the strain energy is negligible;

4. Finally, when n = 0, we have to find the stable equilibrium of an elastic string. The displacements are finite, and the potential of the external forces and the strain energy are of order 0. The leading term of the strain energy corresponds to an extensional energy; the bending energy is negligible.
Thus, we see that these models are in conformity with those used by engineers. Moreover, they reveal two great families of models: elastically flexible beams on the one hand, and perfectly flexible strings on the other hand. However, the originality of the present work, compared to what one can find in the "classical" literature, is that these models are associated with levels of intensity of the loading. Contrary to everyday experience, the concept of a string or of a beam is not an intrinsic quality of a given slender body, but a "type of behavior" induced by the intensity of the loads. The same object (made of the same material and with the same geometry) will behave as a flexible beam if the intensity of the loading is sufficiently weak and as an extensible string if the loading is rather strong. In everyday life, what misleads our senses is that the objects we use are always subjected to the same type of loading, and thus the same type of behavior is always observed. All these results are obtained by using techniques of formal asymptotic expansions and consequently they could appear to be less interesting than those obtained by Γ-convergence. However, apart from the postulate that the solution can be expanded in powers of ε, the whole procedure is rigorous and deductive. Compared to the techniques of Γ-convergence, this method provides a construction process, in the sense that the orders of magnitude of the different energies and the shape of the optimal displacement fields are built "gradually" during the estimation process. Moreover, the analysis is strongly based on Hypothesis 3.3 relating to the type of applied forces. Roughly speaking, this condition requires that the applied forces produce work in inextensional displacements of the Navier-Bernoulli type. This assumption guarantees that each model obtained is not degenerate, in other words that the leading term of the energy is genuinely non-vanishing. Here, one can still regret that all the works of justification based on asymptotic methods overshadow in a quasi-systematic way this question, which however always arises. Should the condition not be satisfied, the model thus obtained does not apply any more and the analysis should be refined. This question recalls the issue of the completeness of the list of models. We will not treat it in this article and shall reserve it for future publications. The reader interested in this question should consult the work of (Marigo and Madani, 2004) to have an idea of the extent of the task. What clearly appears is that the number of asymptotic models can vary "ad infinitum" if one exploits the various parameters relating to the loading, the geometry or the behavior. What one can hope for is that the method used here is rather flexible and general enough to be adapted to any situation.
Specifically, the paper is organized as follows: In Section 2, we fix the notations and the context of the work. In Section 3, we introduce the ingredients of the asymptotic analysis. Section 4 is devoted to the construction of the different types of displacements and to the classification of the energies which will appear in the asymptotic models. In Section 5 we obtain the right order of magnitude of the displacements and of the energies in relation with the order of magnitude of the loading. Section 6 is devoted to the presentation of the hierarchy of one-dimensional models so obtained. The long proof of Theorem 4.1 is deferred to an appendix. In general, the intrinsic notation is preferred to the component notation. However, when the component notation is chosen, we use the summation convention on repeated indices. Latin indices i, j, k, ... take their values in the set {1, 2, 3} while Greek indices α, β, γ, ... (except ε) take theirs in the set {1, 2}. Moreover, we denote by u,i the derivative of u with respect to x_i, and the components of the vector u are denoted u_i. The group of rotations is denoted SO(3) and the linear space of skew-symmetric 3 × 3 matrices is denoted Sk(3).
The three-dimensional problem
Let ω̂ be an open, bounded and connected subset of R², with diameter R. Given L > 0, we denote by Ω̂ the cylinder Ω̂ = ω̂ × (0, L), the generic point of which is denoted by x̂ = (x̂₁, x̂₂, x̂₃). The volume element of Ω̂ is denoted dx̂ = dx̂₁ dx̂₂ dx̂₃ and the gradient with respect to x̂ is denoted by ∇̂. We assume that the section Γ̂₀ = ω̂ × {0} is clamped - the role of this condition is simply to rule out any rigid displacement -, while everywhere else the body is submitted to a system of body or surface conservative forces which are supposed to derive from the potential L̂. Thus, the set Ĉ of admissible displacement fields reads as

Ĉ = { v̂ : Ω̂ → R³ ; v̂ = 0 on Γ̂₀, det F̂ > 0 in Ω̂ }.   (1)

In (1) the condition det F̂ > 0, with F̂ = I + ∇̂v̂, ensures that the deformation preserves the orientation, and it is unnecessary to specify the regularity of the fields in our formal approach. The Green-Lagrange strain tensor Ê is given by 2Ê = F̂ᵀF̂ − I. Let us note that, contrary to the usual assumptions made by (Fox et al., 1993), (Pantz, 2001) or (Meunier, 2003), we do not suppose that the loads are dead loads. Thus, the potential of the external forces L̂ is not necessarily a linear form on Ĉ, but enjoys the following properties:

1. L̂ is a smooth function defined on Ĉ, which vanishes when the body is in its reference configuration, L̂(0) = 0.
2. The derivative of L̂ at 0, L̂′(0), is a non-vanishing continuous linear form on the set of displacements of H¹(Ω̂; R³) satisfying the clamping condition. Its norm F = ‖L̂′(0)‖ will be used to define the order of magnitude of the loading.
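For later reference, it may help to recall the equivalent expression of the Green-Lagrange tensor introduced above in terms of the displacement gradient; this is the standard identity obtained by inserting F̂ = I + ∇̂v̂ into 2Ê = F̂ᵀF̂ − I, and is written here only as a reminder:

```latex
\[
  \hat{E}(\hat{v}) \;=\; \frac{1}{2}\Big(\hat{\nabla}\hat{v}
        + \hat{\nabla}\hat{v}^{T}
        + \hat{\nabla}\hat{v}^{T}\,\hat{\nabla}\hat{v}\Big),
\]
% the strain reduces to the linearised tensor (1/2)(grad v + grad v^T) only when
% the quadratic term is negligible, i.e. for infinitesimal displacements.
```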
The domain Ω̂ is the natural reference configuration of an elastic, isotropic, homogeneous body. The elastic potential Ŵ enjoys the following properties:

1. Ŵ is a smooth, isotropic, non-negative function that only depends on Ê.
2. Ŵ vanishes only at Ê = 0, and near Ê = 0 it admits a quadratic development in which the Lamé coefficients λ and µ are related to Young's modulus E and Poisson's ratio ν via the classical formulae; these elastic coefficients satisfy the usual inequalities.

3. Ŵ satisfies a growth condition near infinity.

4. Ŵ satisfies the "right" poly- or quasi-convexity together with the coercivity conditions, see (Ball, 1976) or (Morrey, 1952).
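Since the displayed formulae did not survive extraction, the standard relations they refer to are recalled below; these are the classical expressions for an isotropic material and are stated as a reminder rather than as a verbatim reproduction of the original equations:

```latex
% Quadratic development of an isotropic potential near E = 0 (classical form):
\[
  \hat{W}(\hat{E}) \;=\; \frac{\lambda}{2}\,(\mathrm{tr}\,\hat{E})^{2}
                      \;+\; \mu\,\hat{E}\cdot\hat{E} \;+\; o(\|\hat{E}\|^{2}),
\]
% classical relations between the Lame coefficients and (E, nu):
\[
  \lambda \;=\; \frac{E\,\nu}{(1+\nu)(1-2\nu)}, \qquad
  \mu \;=\; \frac{E}{2(1+\nu)},
\]
% and the usual inequalities, equivalent to E > 0 and -1 < nu < 1/2:
\[
  \mu \;>\; 0, \qquad 3\lambda + 2\mu \;>\; 0 .
\]
```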
The search for stable equilibrium configurations of this body leads to the minimization of the potential energy, which is the difference between the strain energy and the potential of the external loads; the problem therefore consists in minimizing this difference over Ĉ.

REMARK 2.1. We assume that all the previous conditions are such that this minimization problem admits (at least) one solution, see (Ball, 1976), (Ball and Murat, 1984), (Dacorogna and Marcellini, 1995) and (Morrey, 1952) for more precise statements.
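The displayed statement of the problem was lost in extraction; the sketch below writes out the structure that the preceding sentence describes (strain energy minus load potential, minimized over the admissible set) and should be read as a schematic restatement rather than as the paper's exact equation:

```latex
\[
  \inf_{\hat{v}\,\in\,\hat{C}}\; \hat{P}(\hat{v}),
  \qquad
  \hat{P}(\hat{v}) \;=\;
  \int_{\hat{\Omega}} \hat{W}\big(\hat{E}(\hat{v})\big)\, d\hat{x}
  \;-\; \hat{L}(\hat{v}).
\]
```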
Setting of the asymptotic procedure
We will assume that the body is slender in the sense that its natural length L can be considered large in comparison with R. Therefore, we introduce the slenderness parameter ε = R/L and consider that it is small with respect to 1. We intend to study the behavior of the sequence of the displacements of the body at equilibrium as ε goes to zero by the asymptotic procedure summarized below. The key steps in the analysis are: (i) a rescaling which transports the problem to a domain Ω that does not depend on ε; (ii) an asymptotic expansion of the rescaled loads and equilibrium displacements in powers of the small parameter ε; (iii) the computation of the asymptotic expansion of the Green-Lagrange tensor and of the energies. Each of these steps is described in detail below. We consider the dimensionless ratio η = F/(ER²) as the parameter characterizing the relative intensity of the loading. The loading parameter η must be compared to the slenderness parameter ε and we assume that

HYPOTHESIS 3.1. There exist η̄ > 0 and n ∈ N such that η = η̄ ε^n.
THE THREE-DIMENSIONAL RESCALED PROBLEMS
For each ε > 0, the displacement v̂^ε of the body in the equilibrium configuration is defined on an open set Ω̂ which depends on ε. In order to perform an asymptotic analysis, it is useful to consider an open set which is independent of ε. We adopt a straightforward change of coordinates, in the fashion introduced by (Ciarlet and Destuynder, 1979), and associate to any displacement field defined on Ω̂ a dimensionless displacement field v defined on the fixed domain Ω; the clamped section becomes Γ₀.

REMARK 3.1. In the sequel, dimensionless quantities associated with a given physical quantity are denoted without a hat.
The search for the stable equilibrium configuration of the body in the new system of coordinates leads to the minimization problem (6), parameterized by ε, for given n and η̄. In (6), W(·) = Ŵ(·)/E and L^ε(v) = L̂(v̂)/(FL) are the dimensionless energies. The Green-Lagrange strain tensor E^ε(v) associated with v involves the dimensionless deformation gradient F^ε(v) and the rescaled differential operator ∇^ε. The set of admissible displacement fields associated with Ω is denoted C^ε, and the total strain energy stored in the body is denoted W^ε(v). Near E = 0, the dimensionless strain potential admits a quadratic development in which the fourth-order dimensionless rigidity tensor C appears.
ASYMPTOTIC EXPANSION
The parameter ε is assumed to be small with respect to 1 and will be used to define asymptotic expansions of the fields and the energies. Let us begin this section with some conventions. Let f^ε be a function of ε which can be expanded in powers of ε. We denote by O(f^ε) the order of magnitude (also called the order) of f^ε, namely the power of the first non-vanishing term in its expansion. By convention, the order is infinite when f^ε = 0. In the three-dimensional rescaled problems, the solution u^ε ∈ C^ε and the load potential L^ε depend parametrically on ε. The next step of the asymptotic technique consists in assuming that the data of the problem admit an asymptotic expansion.
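To make the convention explicit, the following restatement (not a reproduction of the original displayed formula) spells out what the order O(f^ε) means for a non-zero expandable function:

```latex
\[
  f^{\varepsilon} \;=\; \sum_{p \ge p_{0}} \varepsilon^{p} f^{p}
  \quad\text{with } f^{p_{0}} \neq 0
  \;\;\Longrightarrow\;\;
  O(f^{\varepsilon}) = p_{0},
  \qquad
  f^{\varepsilon} \;=\; \varepsilon^{O(f^{\varepsilon})}
  \big(f^{O(f^{\varepsilon})} + o(1)\big)\ \text{as } \varepsilon \to 0 .
\]
```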
HYPOTHESIS 3.2. The potential of the applied forces L^ε admits an expansion in powers of ε. Moreover, the growth of L^ε near infinity is at most linear.
Finally, we introduce a hypothesis that plays a fundamental role in obtaining the right energy estimates. It will be called the non-degenerate loading condition.

HYPOTHESIS 3.3. Let C_fl be the set of admissible infinitesimal inextensional Navier-Bernoulli displacements. The applied forces produce work for at least one such displacement field, i.e. the restriction of L⁰ to C_fl is not 0.
We now make a fundamental Ansatz concerning the solution of the problem P^ε. In terms of the displacement field it says:

ANSATZ 3.1. The displacement u^ε of the rescaled body in the equilibrium configuration can be expanded in powers of ε.

REMARK 3.3. Since F^ε(u^ε) = I + (1/ε)(u^ε,1 | u^ε,2 | 0) + (0 | 0 | u^ε,3), F^ε(u^ε) admits an expansion in powers of ε, and similarly E^ε(u^ε) = ½ (F^ε(u^ε)ᵀ F^ε(u^ε) − I) does.
REMARK 3.4. Since u^ε is the displacement of the rescaled body in the equilibrium configuration, it minimizes P^ε; on the other hand, thanks to the clamping condition, 0 ∈ C^ε. Therefore P^ε(u^ε) ≤ P^ε(0) = 0 and, using the hypothesis made on L^ε together with the non-negative character of W, it follows that the strain energy of u^ε is controlled by the load potential.

PROPOSITION 3.1. The order of magnitude of the displacement is greater than or equal to zero: O(u^ε) ≥ 0.

Proof. (Notation: Throughout the proof one uses the simplified notations F = F^ε(u^ε) and E = E^ε(u^ε).) Let us assume that O(u^ε) = q < 0. Using the fact that u^ε ∈ C^ε together with the clamping condition, we obtain that (u^q,1 | u^q,2 | u^q,3) ≠ (0|0|0). Recalling the expansion of F, the first term of the expansion of E is E^{2q−2}_{αβ} = ½ u^q,α · u^q,β. If u^q,α = 0, thanks to the clamping condition we deduce that u^q,3 ≠ 0; we then observe that E^{2q−1}_{ij} = 0 and E^{2q}_{33} = ½ |u^q,3|² ≠ 0. So, in any case, O(E) ≤ 2 O(u^ε) < 0. Consequently, thanks to (3) and (16), the strain energy would dominate the load potential at a negative order, which is not possible.
Next, we give the expansion of the strain components. The following lemma is obtained by expanding the displacement u^ε and performing a direct computation.
LEMMA 3.2. The Green-Lagrange tensor E^ε(u^ε) admits an asymptotic expansion in powers of ε. More precisely, the components E^m_{ij} have an explicit expression in terms of the terms u^q of the expansion of the displacement, with the convention that u^q = 0 if q < 0.
Classification of the displacements and of the energies
In this section, we study the relation between the different orders of magnitude of a displacement v^ε of C^ε and of its corresponding Green-Lagrange strain tensor. We distinguish the following types of displacement fields, whose names are justified by Theorem 4.1:

DEFINITION 4.1 (Displacements of order 0 without strain at negative order).

DEFINITION 4.2 (Displacements of order 1 without strain at order 0).
DEFINITION 4.4 (Displacements of order 0 with strain at order 1).

DEFINITION 4.5 (Displacements of order q, q ≥ 1, with strain at order q + 1).
The following theorem is the main result of this section; it gives the orders of magnitude of the strain tensor as a function of the order of the displacement field. For clarity, we report its proof in the appendix. We easily deduce from Theorem 4.1 useful estimates of the strain energy.
Proof. As soon as O(E^ε(v^ε)) > 0, it is enough to use the development (12) of W near E = 0 to obtain the desired estimate. The remainder of the proof is a direct consequence of Theorem 4.1, since it was shown there that O(E^ε(v^ε)) ≤ O(v^ε) + 1.
The right orders of magnitude
Let us go back to the full nonlinear problem (6): the displacement field at equilibrium u^ε ∈ C^ε must satisfy it. We have to find the orders of magnitude of u^ε, E^ε(u^ε), W^ε(u^ε), L^ε(u^ε) and P^ε(u^ε) for a given order of magnitude n of the exterior load. First, we construct admissible displacements whose energies have the "right" order of magnitude. The following Lemma is based on the non-degenerate loading condition, i.e. Hypothesis 3.3.
LEMMA 5.1. For any n ∈ N, there exists an admissible displacement v^ε ∈ C^ε whose energies have the required orders of magnitude.
Proof. ( Notation: Throughout the proof, when
The proof requires considering separately the various values of n. In all cases V is chosen in C_fl in such a manner that L⁰(V) ≠ 0, which is possible thanks to Hypothesis 3.3.
Let us first consider the cases where 0 ≤ n ≤ 2. For h = 0, let us note that R_0 = I, hence V_0 = 0 and v^ε_0 = 0. Otherwise, when h ≠ 0, v^ε_h is of Class 1, its associated Green-Lagrange strain field is of order 1, and its leading term is E¹_h. Integrating the relation linking the rotation field R_h and V_h′ with respect to x₃, one checks that there exists h such that L⁰(V_h) ≠ 0. If n = 0 or n = 1, thanks to Theorem 4.2, it follows that we can choose h ≠ 0 such that the energies have the announced orders. If n = 2, because of Theorem 4.2, we now get O(W^ε(v^ε_h)) = 2 ≥ O(ε² L^ε(v^ε_h)) for any h ≠ 0. To conclude, it should be shown that one can choose h so that the two energies are of the same order and do not compensate each other. Let W²_h, L²_h and P²_h be the terms of order 2 of, respectively, W^ε(v^ε_h), ε² L^ε(v^ε_h) and P^ε(v^ε_h). Let us now consider the case where n ≥ 3. With the analogous construction, v^ε_0 = 0, and v^ε_h is of order n − 2 and of Class 3 for any h ≠ 0. Developing L^ε(v^ε) shows that, for any h ≠ 0, ε^n L^ε(v^ε_h) is of order 2n − 2 and its leading term is L^{2n−2}_h = h L⁰(V). By virtue of Theorem 4.1, E^ε(v^ε_h) is of order n − 1 and its leading term E^{n−1}_h is given, when n > 3, by E^{n−1}_h = −h V″·x e₃ ⊗ e₃.
Consequently W^ε(v^ε_h) is of order 2n − 2, with leading term W^{2n−2}_h. The following theorem is the main result of this section. It gives the order of magnitude of the displacement field and of the energies associated with a stable equilibrium configuration, according to the order of magnitude of the loading.
REMARK 5.1. We note that the displacements are of order 0 as soon as the order of the load is less than or equal to 2. Moreover we see that the order of the elastic energy is always equal to the order of the potential energy except when n = 1, for which it is negligible.
We also note that U₃ is equal to zero as soon as n ≥ 3. Moreover, U corresponds to an inextensional displacement when n = 1 or n = 2.
Hierarchy of one-dimensional models
In order to find the limit models associated with the different orders of the exterior load, we proceed as follows: 1. We fix n and choose admissible displacement fields v^ε of the same class and of the same order as those obtained for u^ε in Theorem 5.2.
2. We express the leading term of the potential energy P^ε(v^ε), say P^m with m = n + max{0, n − 2}, in terms of the first two or three terms of the expansion of v^ε, say v^q(x) = V(x₃), v^{q+1} and possibly v^{q+2}, with q = max{0, n − 2}.
3. If necessary, we minimize P^m with respect to v^{q+2}, at given V and v^{q+1}; then, we minimize P^m - which is then considered as a functional of V and v^{q+1} - with respect to v^{q+1} at given V.
4. We then obtain P^m as a functional of the leading term V alone. Its minimization constitutes the desired one-dimensional model.
Throughout this section, we denote by |ω| and I_{αβ}, respectively, the area and the geometrical moments of the dimensionless cross-section ω, and we assume that the origin of the coordinates corresponds to the centroid of ω.
We start by considering the weakest loadings and we finish by the loadings of order 0. Thus the reader will be able to see the evolution of the behavior of the body by imagining that the loading is increased gradually.
Step 1. Here m = 2n − 2 and q = n − 2. Since u^ε is of Class 3 and of order n − 2, its first two leading terms take a particular form, and we choose v^ε with the same form.
The leading terms V and U belong to C_fl.
Step 2. The leading term of the Green-Lagrange strain field, say E^{n−1}, involves the first three terms of the expansion of v^ε. The leading term of the Piola-Kirchhoff stress field is Σ^{n−1} = C E^{n−1}. As E^{n−1} depends on the first three terms of the expansion of v^ε, so does the leading term of the potential energy; this dependence is quadratic. Since the remainder of the procedure is classical, only the broad outline is given; see (Marigo and Madani, 2004) for details.
Step 3. Minimizing P^{2n−2} with respect to v^n leads to the famous Saint-Venant problems of extension, bending and torsion. Since the cross-section is homogeneous and since the material is isotropic, these problems are uncoupled. Moreover, the transversal components Σ^{n−1}_{αβ} of the Piola-Kirchhoff stress tensor vanish. The solution of the extension and bending problems can be obtained in closed form, while this is possible for the torsion problem only for very particular shapes of the cross-section. Finally, we obtain v^n up to "rigid displacements", in terms of the function w denoting the solution of the torsion problem. After inserting these results into P^{2n−2}, the leading term of the potential energy takes the form (40), where J denotes the torsion rigidity modulus of the cross-section. In (40), the strain energy is the sum of three terms: the first corresponds to the extension energy, the second to the torsion energy and the third to the bending energy. But, since the displacement fields V₃ and θ_v are not involved in the leading term L⁰ of the loading potential, the minimization of P^{2n−2} with respect to V₃ and θ_v, at given U, shows that the extension and torsion energies are negligible in comparison with the bending energy.
Step 4. Finally, the leading term U of the displacement field and the leading term of the potential energy are given by solving the following minimization problem: find U ∈ C_fl minimizing the bending energy minus the leading load potential.

REMARK 6.1. Here one recognizes the usual model of elastic bending of the linear theory of beams. It is the reign of linearity: the displacements and the strains are small, the forces act only like dead loads, and the theorems of uniqueness prevail.
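The displayed functional did not survive extraction; as an illustration only, the classical linear bending functional has the following structure, written with the geometrical moments I_{αβ} introduced above. The precise normalisation, constants and integration interval in the paper's equation (43) may differ from this sketch:

```latex
\[
  U \;\in\; C_{fl}
  \;\longmapsto\;
  \frac{1}{2}\int_{0}^{L} I_{\alpha\beta}\,U_{\alpha}''(x_{3})\,U_{\beta}''(x_{3})\,dx_{3}
  \;-\; L^{0}(U).
\]
```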
Step 1. Here m = 4 and q = 1. Since u^ε is of Class 3 and of order 1, its first two leading terms take a particular form, and we choose v^ε with the same form.
The leading terms V and U still belong to C_fl.
Step 2. The leading term E² of the Green-Lagrange strain field is given in terms of the expansion of v^ε, the leading term of the Piola-Kirchhoff stresses is Σ² = C E², and the leading term of the potential energy follows. This functional is no longer quadratic, because of the appearance of nonlinear terms such as V′_α V′_β. However, the results differ little from those of the preceding case.
Step 3. Minimizing P⁴ with respect to v³ leads to the Saint-Venant problems again. The transversal components Σ²_{αβ} of the leading term of the Piola-Kirchhoff stress tensor still vanish. The torsion problem remains unchanged, and the leading term of the potential energy follows; only the extension energy is different. But, since the displacement fields V₃ and θ_v are still not involved in the leading term of the loading potential, the minimization of P⁴ with respect to V₃ and θ_v, at given U, shows that the extension and torsion energies are still negligible in comparison with the bending energy.
Step 4. Finally, the leading term U of the displacement field is still given by solving (43).
REMARK 6.2. This model corresponds to the model obtained by Γ-convergence by (Mora and Müller, 2004).
Step 1. Here m = 2 and q = 0; the displacements are finite, but the strains are still infinitesimal. Since u^ε is of Class 1 and of order 0, its first two leading terms take a particular form, and we choose v^ε with the same form.
Step 2. The leading term E¹ of the Green-Lagrange strain field is given in terms of the expansion of v^ε, the leading term of the Piola-Kirchhoff stresses is Σ¹ = C E¹, and the leading term of the potential energy follows. This functional is quadratic in terms of v² and V, for a given Q.
Step 3. Minimizing P² with respect to v² involves the Saint-Venant problems once more. The transversal components Σ¹_{αβ} of the leading term of the Piola-Kirchhoff stress tensor still vanish. Thanks to the linearity, v² is obtained explicitly. After insertion into the potential energy, the minimization of P² with respect to V at fixed Q (for Q = R) shows that the extension energy is once more negligible.
Step 4. To determine the leading term U of the displacement, in practice it is more convenient to consider the skew-symmetric matrix field S = RᵀR′ as the principal unknown. Indeed, to each admissible skew-symmetric field T, a unique field Q_T such that Q_T(0) = I and Q_Tᵀ Q_T′ = T is associated; thus Q_T is an admissible rotation field. Moreover, let V be the functional which associates to T the displacement field V = V(T) defined by V′ = (Q_T − I)e₃ and V(0) = 0. Then, the leading term of the potential energy can be considered as a functional of T, and its minimization constitutes the desired one-dimensional model. Once the minimizer S is found, we obtain U = V(S) (Mora and Müller, 2002).
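The fact that RᵀR′ is skew-symmetric, which underlies the change of unknown above, follows from differentiating the orthogonality relation; the short computation is recalled here for convenience:

```latex
\[
  R^{T}R = I
  \;\Longrightarrow\;
  (R^{T}R)' = R'^{\,T}R + R^{T}R' = 0
  \;\Longrightarrow\;
  \big(R^{T}R'\big)^{T} = R'^{\,T}R = -\,R^{T}R',
\]
% hence S = R^T R' belongs to Sk(3) at each x_3, which is why S can serve as the
% principal unknown in place of the rotation field R itself.
```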
CASE n = 1
Here m = 1 and q = 0; the displacements are finite, but the strains are still infinitesimal. Moreover, the strain energy is negligible in comparison with the loading potential. The displacement solution u^ε is of Class 1. In view of (3), the leading term of the potential energy P¹ reduces to −η̄ L⁰(V). It involves V only, which, because of its link with the rotation field Q, must satisfy the inextensibility constraint ‖e₃ + V′‖ = 1. Finally, the one-dimensional problem reads as the minimization of this potential under the inextensibility constraint.

6.5. CASE n = 0

Steps 1-2. In this case m = q = 0. Choosing v^ε of Class 0 and of order 0, we get v⁰(x) = V(x₃) with V(0) = 0. The leading term of the Green-Lagrange strain field depends only on V′ and v¹, and (V, v¹) must satisfy the condition det F⁰ > 0. It is more convenient for the sequel to consider the strain energy density as a function W̃ of F⁰ and to express the leading term P⁰ of the potential energy accordingly.

Step 3. Let us first minimize P⁰ with respect to v¹ at given V. This is equivalent to minimizing the strain energy with respect to v¹ at given V. But, since v¹ is only involved through its derivatives with respect to (x₁, x₂), the coordinate x₃ plays the role of a parameter and we can localize the minimization on each cross-section. Thus the generic problem becomes

inf { ∫_ω W̃(e₁ + v,1, e₂ + v,2, c) dx₁ dx₂ : v : ω → R³, det(e₁ + v,1 | e₂ + v,2 | c) > 0 }

for a given c ∈ R³. But, since the cross-section is homogeneous, this problem can be localized again and becomes a minimization problem on R⁶. Denoting by W₁(c) the value of this localized problem, we obtain the one-dimensional elastic potential W₁, which enjoys the following properties: (i) since W̃ is an isotropic function, so is W₁, which thus depends on c only through ‖c‖; (ii) since W̃(F) is non-negative and vanishes if and only if F is a rotation, W₁ is non-negative and vanishes if and only if ‖c‖ = 1; (iii) consequently, W₁ is not convex; (iv) its convex envelope, say c ↦ W**(c), vanishes if and only if ‖c‖ ≤ 1, i.e. when the string is in compression; (v) in the case of the Saint-Venant Kirchhoff potential, i.e. when W(E) = ½ CE·E, one obtains an explicit expression of W₁ involving the positive part, see (Meunier, 2003).
Step 4. Finally, the problem reads as (65). This is the model of nonlinear elastic strings.
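Since the displayed problem (65) was lost in extraction, the following sketch indicates the structure that the surrounding text implies: minimization of the integrated one-dimensional potential W₁, evaluated on the stretched tangent e₃ + U′, minus the load potential. The exact normalisation of the loading term and of the integration interval in the original equation may differ:

```latex
\[
  \inf_{\,U(0)=0}\;
  \int_{0}^{L} W_{1}\big(e_{3} + U'(x_{3})\big)\, dx_{3}
  \;-\; \bar{\eta}\, L^{0}(U).
\]
```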
REMARK 6.5. It is the only model in which the nonlinear behavior of the material plays a role. The strain energy is, at most, of the order of the potential energy of the external forces. But, according to the type of loading, it could happen that the body behaves like an inextensible string. In other words, it can happen that the solution of (65) satisfies ‖e₃ + U′‖ ≤ 1 everywhere. In such a case, the extension energy is negligible. This explains why the order of the strains is not perfectly determined in Theorem 4.1 when n = 0.
REMARK 6.6. Since W₁ is not convex, the problem (65) must be relaxed. In the present context of a one-dimensional problem, this simply consists in replacing W₁ by its convex envelope W**, see (Dacorogna, 1989). The relaxed model was first deduced, by Γ-convergence, from the three-dimensional theory of elasticity by (Acerbi et al., 1991).
Proof of Theorem 4.1
For clarity, we divide the proof of Theorem 4.1 into several lemmas.

Therefore, E⁰ = 0 if and only if the matrix (e₁ + v¹,1(x) | e₂ + v¹,2(x) | e₃ + V′(x₃)) is an orthogonal matrix, say R(x). Since v^ε ∈ C^ε, R(x) ∈ SO(3). We can then use Lemma A.1 and conclude that v^ε is of Class 1.
Conversely, let us consider v^ε of Class 1. Then E^p = 0 for p ≤ 0. To complete the proof, it remains to establish that E¹ = 0. Its component E¹_{33} can be computed explicitly. For it to vanish, it is necessary that R′e_α · Re₃ = 0. But since R(x₃) ∈ SO(3) we also have R′e_α · Re₃ + R′e₃ · Re_α = 0 and R′e₃ · Re₃ = 0.
Let us now assume that v^ε is of Class 3 with O(v^ε) = 1. Then one first verifies that E^p = 0 when p ≤ 1. It remains to prove that E² ≠ 0. Let us consider E²_{33}: a direct calculation gives its expression. If E²_{33} = 0, then V′ = 0 and, owing to the boundary condition, V = 0. But, since O(v^ε) = 1, this is not possible, so E²_{33} ≠ 0.
Proof.
Let v^ε ∈ C^ε be such that O(v^ε) = q ≥ 2. Then E^p = 0 for p ≤ q − 2 and E^{q−1} has an explicit expression. From now on, one considers only v^ε of Class 4. To prove (ii) we must distinguish the case q = 2 from the case q > 2.
Only E^q_{αβ} differs, the term w² having disappeared. Since E^q_{α3} and E^q_{33} remain unchanged, we can proceed as in the case q = 2. The proof of the Lemma is complete.
A dedicated flask sampling strategy developed for ICOS stations based on CO2 and CO measurements and STILT footprint modelling
In situ CO2 and CO measurements from five atmospheric ICOS (Integrated Carbon Observation System) stations have been analysed together with footprint model runs from the regional transport model STILT, to develop a dedicated strategy for flask sampling with an automated sampler. Flask sampling in ICOS has three different purposes: 1) Provide an independent quality control for in situ observations, 2) provide representative information on atmospheric components currently not monitored in situ at the stations, 3) collect samples for CO2 analysis that are significantly influenced by fossil 25 fuel CO2 (ffCO2) emission areas. Based on the existing data and experimental results obtained at the Heidelberg pilot station with a prototype flask sampler, we suggest that single flask samples should be collected regularly every third day around noon/afternoon from the highest level of a tower station. Air samples shall be collected over one hour with equal temporal weighting to obtain a true hourly mean. At all stations studied, more than 50 % of flasks to be collected around mid-day will likely be sampled during low ambient variability (<0.5 ppm standard deviation of one-minute values). Based on a first 30 application at the Hohenpeißenberg ICOS site, such flask data are principally suitable to detect CO2 concentration biases larger than 0.1 ppm with a one-sigma confidence level between flask and in situ observations from only 5 flask comparisons. In order to have a maximum chance to also sample ffCO2 emission areas, additional flasks need to be collected on all other days in the afternoon. Using the continuous in situ CO observations, the CO deviation from an estimated background value must be determined the day after each flask sampling and, depending on this offset, an automated decision must be made if a flask 35 shall be retained for CO2 analysis. It turned out that, based on existing data, ffCO2 events of more than 4-5 ppm will be very rare at all stations studied, particularly in summer. During the other seasons, events could be collected more frequently. The strategy developed in this project is currently being implemented at the ICOS stations.
Introduction
Since the pioneering work of Charles David Keeling, who already in the 1950s started monitoring continuous atmospheric carbon dioxide concentrations with in situ instrumentation at the South Pole and Mauna Loa (Brown and Keeling, 1965), the global coverage of continuous greenhouse gas (GHG) observations has considerably improved (https://ds.data.jma.go.jp/gmd/wdcgg/). However, there still exist large observational gaps in remote regions of the globe, which have partly been filled by regular flask sampling with subsequent GHG analysis in central laboratories. In the marine realm, if conducted frequently and under certain conditions, data from flask sampling are often representative for monitoring the large-scale distribution of GHGs in the atmosphere and, respectively, for estimating large-scale flux distributions by inverse modelling.
In the last decades, observational networks have been extended to the continents in order to closely monitor GHG concentrations and quantify terrestrial GHG fluxes. These, however, are more heterogeneous, temporally variable and often less well represented by models than is the case with modelled ocean fluxes (Friedlingstein et al., 2019). Terrestrial biospheric fluxes are prone to (regional) climatic variability and changes, and only continental observations provide the gateway for process understanding. Besides monitoring the terrestrial biosphere, measurements over continents are also conducted to observe man-made emissions, in particular from fossil fuel burning and agriculture. Due to their proximity to these highly variable sources and sinks, measurements over continents are best conducted continuously with in situ instrumentation at high temporal resolution, in order to cover the variability and to fully represent the entire footprint of a station (e.g. Andrews et al., 2014). However, not all atmospheric trace components to be included in continental top-down GHG budgeting can yet be precisely measured in situ at remote stations. The most prominent example is radiocarbon (14C) in atmospheric CO2, a quantitative tracer to separate the fossil from the biospheric component in recently emitted CO2 from continental sources (e.g. Levin et al., 2003). Note that in the industrialised and highly populated areas of the mid-latitudes of the Northern Hemisphere, i.e. in North America, Eastern Asia and Europe, atmospheric signals from the biosphere and from fossil fuel sources are of the same order (see Sect. 4.3.1). To correctly interpret absolute CO2 concentration variations in terms of source/sink attribution, separation of the fossil from the biogenic CO2 signal is therefore mandatory. Precise 14CO2 measurements are, however, currently only possible in dedicated laboratories and on discrete samples.
In Europe the Integrated Carbon Observation System Research Infrastructure (ICOS RI) (https://www.icos-ri.eu/icos-researchinfrastructure) has been established to monitor GHG concentrations and fluxes in the atmosphere, in various ecosystems and over the neighbouring ocean basins. ICOS atmosphere has set up a pan-European network of preferentially tall tower stations located at least 50 km away from industrialised and highly populated areas. The primary purpose is to monitor biogenic sources and sinks in Europe and their behaviour under changing climatic conditions. In addition to continuous CO2, CH4 and CO observations, a subset of stations (Class 1 stations) perform two-week integrated sampling of CO2 for 14C analysis. Class 1 stations are additionally equipped with an automated flask sampler, dedicated to three major objectives. Firstly, the collected flasks shall provide an independent quality control (QC) for the continuous in situ measurements of CO2, CH4, CO and further species mole fractions. Secondly, flasks shall be collected for analysis of additional trace components not measured in situ at the stations, and finally flasks with a potentially elevated fossil fuel CO2 component originating from anthropogenic sources in the footprint of the stations shall be analysed for 14CO2.
Dedicated sampling strategies had to be developed for ICOS, which best meet these three objectives, and which can be accomplished in the framework of the infrastructure and its available capabilities and resources. This includes technical constraints at the stations but also analysis capacity at the ICOS Central Analytical Laboratories, which are analysing all flask samples in ICOS. The ICOS flask sampling strategy might change in future, e.g. when real-time GHG or footprint prediction tools become available.
In the current paper, we first give an introduction to the current ICOS atmospheric station network, and then present a strategy for collecting the flask samples for ICOS in a simple and cost-effective way. The sampling strategies have been developed based on footprint model simulations with the regional transport model STILT (Lin et al., 2003), which has been implemented at the ICOS Carbon Portal (https://www.icos-cp.eu/about-stilt) for ICOS station PIs and data users. First tests to develop a strategy for the quality control objective were performed at the ICOS pilot station in Heidelberg, where ICOS instrumentation and a prototype of the ICOS flask sampler have been installed, as well as at the Hohenpeißenberg station. The strategy was further tested for its feasibility based on the first years of continuous ICOS CO2 and CO observations available at the ICOS Carbon Portal (ICOS RI, 2019).
The atmospheric station network and its Central Facilities
The ICOS atmospheric station network currently consists of 23 official stations (with 14 stations more to come), located in 12 countries and covering Europe from Scandinavia to Italy and from Great Britain to the Czech Republic (see Fig. 1). The preferred station type is the tall tower site, allowing vertical profile sampling at a minimum of three height levels up to at least 100 m a.g.l.
Tall tower stations cover footprints extending several tens to hundreds of kilometres from the sites (Gloor et al., 2001; Gerbig et al., 2006).
A number of mountain and coastal stations are also part of the ICOS network due to their often long history of GHG measurements, although their representation in state-of-the-art regional atmospheric transport models is more difficult than for tower observations. However, the flask sampling strategy developed here was designed specifically for the standard ICOS tall tower stations.
All ICOS atmosphere stations are equipped with commercially available instruments continuously measuring CO2, CH4 and CO at high temporal resolution. Instruments are tested at the Atmospheric Thematic Centre (ATC), an ICOS Central Facility hosted by LSCE in Gif-sur-Yvette, France, before they are installed at the sites (Yver Kwok et al., 2015). The calibration gases for the in situ measurements are prepared and calibrated at the Flask and Calibration Laboratory (FCL), which has been established at the Max Planck Institute for Biogeochemistry in Jena, Germany, as part of the ICOS Central Analytical Laboratories (CAL). This procedure guarantees the best possible compatibility of observations within the ICOS atmospheric network and maintains the link to the internationally accepted WMO calibration scales. In addition, the FCL analyses the flasks with a focus on QC and additional species. Precise 14CO2 analysis of integrated samples and selected flasks is conducted in the second part of the ICOS CAL at Heidelberg University, Institute of Environmental Physics, at the Karl Otto Münnich Central Radiocarbon Laboratory (CRL).
All raw data (level-0) are automatically transferred, on a daily basis, from the measurement sites to the ATC, where they are converted to calibrated (level-1) concentration values (Hazan et al., 2016), based on regular on-site calibrations and FCL-assigned calibration values. The ATC has developed automated procedures for ongoing data quality assurance of all measurements. Further software tools are made available by the ATC for the mandatory validation of all raw data by the station PIs. These quality-assessed data form the basis of the hourly mean concentrations, which are finally released as level-2 data and made available to the user community at the ICOS Carbon Portal, hosted by Lund University, Sweden. For the latest data release see ICOS RI (2019).

Two station types are currently implemented in the ICOS atmospheric station network, Class 1 and Class 2. Class 1 stations are equipped with the complete instrumentation including integrated 14CO2 and flask sampling. Class 2 stations perform only in situ continuous measurements of CO2, CH4 and CO (currently not mandatory), but with the same instrumentation and demands on data quality as for Class 1 stations. A detailed description of the specifications of the instrumentation is given in the ICOS Atmospheric Station Specification document (https://icos-atc.lsce.ipsl.fr/filebrowser/download/69422), which is regularly updated. To become an official part of the ICOS atmospheric station network, stations have to undergo a two-step labelling process, which shall warrant their conformance with the ICOS station specifications, including smooth data transfer to the ATC as well as meeting ICOS data quality requirements.
Description of selected ICOS stations
For developing and testing our flask sampling strategy we selected five ICOS Class 1 tall tower stations in four different countries. A short description of these stations is given in the following.
Hyltemossa (HTM) is located a few kilometres south of Perstorp, in north-western Skåne, Sweden (56.098° N, 13.418° E, 115 m a.s.l.). It hosts a combined atmospheric and ecosystem station, labelled as a Class 1 and a Class 2 site in the respective networks. The site was established in 2014 in a 30-year-old managed Norway spruce forest. Beyond 600 m from the tower there is a mosaic of forests, clear cuts and farm fields. Within a radius of 100 km the elevation varies between 0 and 200 m a.s.l., while in the near vicinity of the tower the elevation changes gently by only 35 m. In the larger footprint, the site is surrounded by cities, i.e. to the north Halmstad (70 km, 58 000 inhabitants), to the east Kristianstad (45 km, 36 000 inhabitants), to the south-west Lund (45 km, 111 000 inhabitants), Malmö (60 km, 318 000 inhabitants) and Copenhagen, DK (70 km, 1 990 000 inhabitants), and to the west Helsingborg (45 km, 124 000 inhabitants) and Helsingør, DK.

The instruments are installed in a container next to the tower. Air is sampled for 5 min from each level, where data for the first minute after switching are discarded. All inlet lines are continuously flushed with approx. 5 L min-1. Meteorological sensors for air temperature, relative humidity, wind speed and direction are installed at every sampling height. For historical reasons, Gartow modelling was conducted for 344 m a.g.l. (not for the highest sampling level, at 341 m); this difference between measured and modelled level is, however, not relevant for the comparisons presented in the context of this study.

The tower stands next to an observatory of the Czech Hydrometeorological Institute with 30 years of practice in meteorology as well as air quality monitoring; today, these two stations form the National Atmospheric Observatory. Since the site is designed as a background station, the area is not significantly influenced by human activity. The tower is surrounded by fields and, at a greater distance, by forests and small villages (the closest at 1 km distance). There is a highway running north-east of the tower at approx. 6 km distance; however, the wind frequencies from north and east are only 9 and 5 %, respectively. The closest towns, Pelhřimov, Vlašim and Humpolec, with 10 to 17 thousand inhabitants, are located approx. 20 km away from the station. As for industrial activity, a small wood-processing company is located 20 km to the west (which is the prevailing wind direction). Meteorological sensors (air temperature, relative humidity, wind speed and direction) are installed at every sampling height.
Atmospheric transport modelling for ICOS stations
A footprint simulation tool based on the regional atmospheric transport model STILT (Stochastic Time Inverted Lagrangian Transport; Lin et al., 2003; Gerbig et al., 2006) was implemented at the ICOS Carbon Portal (https://www.icos-cp.eu/about-stilt) as a service for ICOS station PIs and data users. The STILT model simulates atmospheric transport by following a particle ensemble released at the measurement site backward in time and calculates footprints that represent the sensitivity of tracer concentrations at this site to surface fluxes upstream. The footprints are mapped on a 1/12° latitude x 1/8° longitude grid and coupled to the EDGAR v4.3 emission inventory (Janssens-Maenhout et al., 2019) and the biosphere model VPRM (Mahadevan et al., 2008) to simulate atmospheric CO2 and CO concentrations. These regional concentration components represent the influence from surface fluxes inside the model domain (covering the greater part of Europe). For CO2 the contributions from global fluxes are accounted for by using initial and lateral boundary conditions from the Jena CarboScope global analysed CO2 concentration fields (http://www.bgc-jena.mpg.de/CarboScope/s/s04oc_v4.3.3D.html), while for CO only regional contributions are evaluated in our study.
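To illustrate the footprint-flux coupling described above, the following minimal sketch (in Python, with purely illustrative grid sizes, flux values and variable names, not the actual Carbon Portal implementation) shows how a footprint sensitivity field is combined with emission and biosphere flux fields to yield regional concentration contributions at a receptor.

```python
import numpy as np

# Minimal sketch of convolving a STILT-type footprint with surface fluxes to
# obtain regional concentration contributions at the receptor. Grid dimensions
# and flux values are purely illustrative placeholders.
nlat, nlon = 120, 200                          # hypothetical regional grid
footprint = np.random.rand(nlat, nlon)         # sensitivity, ppm / (umol m-2 s-1)
fossil_flux = np.random.rand(nlat, nlon)       # e.g. EDGAR-type emissions, umol m-2 s-1
bio_flux = -0.5 * np.random.rand(nlat, nlon)   # e.g. VPRM net ecosystem exchange

# The element-wise product summed over the domain gives the contribution of
# each flux category inside the model domain to the simulated concentration.
ffco2_contribution = np.sum(footprint * fossil_flux)   # ppm
bio_contribution = np.sum(footprint * bio_flux)        # ppm

# The total simulated CO2 adds an externally provided background value
# (representing the initial and lateral boundary conditions).
background = 410.0                                      # ppm, illustrative
co2_simulated = background + ffco2_contribution + bio_contribution
print(f"simulated CO2: {co2_simulated:.2f} ppm (ffCO2 {ffco2_contribution:.2f} ppm)")
```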
The automated ICOS flask sampler
The automated ICOS flask sampler was designed and constructed at the Max Planck Institute for Biogeochemistry (MPI-BGC), Jena, Germany, by the Flask and Calibration Laboratory (FCL) of the CAL to allow automated air sampling under highly standardized conditions. The sampler can hold up to 24 individual glass flasks (four drawers with six flasks each) for separate air sampling events (Fig. 2, upper panel). The glass flasks can be individually replaced and sent to the CAL for analysis. The glass flasks used within ICOS (three litre volume, product code ICOS3000 by Pfaudler Normag Systems GmbH, Germany) were developed according to ICOS' specific requirements based on well-proven designs (Sturm et al., 2004). Each flask has two valves, one at each end, that allow air exchange by flushing sample air through the flask. The flasks are attached with ½" clamp-ring connectors to the flask sampler. The flask valves, with PCTFE-sealed end-caps, can be opened and closed by motor drives.

A sample is taken by flushing air through a flask at a constant over-pressure of 1.6 bar (absolute). Sampling at over-pressure increases the amount of available sample air for analysis and allows detecting flasks with leak problems. Flasks are pre-filled at the CAL with 1.6 bar of dry ambient air of well-known composition to avoid concentration changes due to wall adsorption effects. The schematic sampler layout is depicted in Fig. 2 (lower panel). Incoming air is dried to a dew point of approx. -40 °C by passing through a cooled glass vessel where the excess humidity is frozen out. The glass vessel is placed in a silicon oil heat bath that is cooled for drying and heated for flushing out the collected water to regenerate the trap. The drying unit is automated and consists of two independent, inter-switchable drying branches that complement each other and allow nearly interruption-free drying. The dryer design is inspired by an already existing system from Neubert et al. (2004). The incoming sample air is compressed with a pump (Air Dimensions J161-AF-HJ0). A mass flow controller (MFC, Bronkhorst F-201CV) between compressor and flasks allows sampling at pre-set flow rates, i.e. with a decreasing flow rate over time so that the sample represents a real average, e.g. over one hour (Turnbull et al., 2012). The flask pressure during sampling (1.6 bar) is kept constant through a pressure regulator at the outlet of the flasks. An over-pressure valve set at 2.0 bar behind the pump assures a constant flow rate through the intake line, independent of the flow rate through the mass flow controller.
In the case of ICOS we strive to sample real one-hour mean concentrations in 3-litre flasks. For this specific case, the 1/t filling approach requires a theoretical dynamic flow rate between 80 mL min-1 and infinity. In reality, the maximum flow rate of the selected flow controller is limited to 2 L min-1. Therefore, the in situ measurement is averaged with the weighting function resulting from the real flow through the sampled flask. To overcome the flow limitations in the first minutes, the flask is purged for 30 minutes prior to sampling, assuring a complete air exchange. Average concentrations with the targeted uncertainty can only be reached under sufficiently stable concentration conditions during sampling. For a hypothetical ambient CO2 variability of 1 ppm, the upper limit of the associated CO2 flask sample concentration uncertainty was estimated to be on the order of 0.1 ppm.
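The sketch below illustrates the 1/t filling principle and the weighting that results when the flow controller caps the ideal profile; the numerical values (effective flask volume, sampling duration, maximum flow) are indicative only, and the weighting formula is our simplified representation of a well-mixed flask, not the exact sampler firmware.

```python
import numpy as np

# Sketch of 1/t flask filling (after Turnbull et al., 2012). The ideal flow for a
# uniform time weighting decreases as V/t; the MFC caps it during the first
# minutes, which is why the flask is purged before the actual sampling hour.
V_eff = 4.8      # litres of ambient air in a 3 L flask at 1.6 bar absolute (approx.)
T = 60.0         # sampling duration in minutes
F_max = 2.0      # maximum MFC flow rate, L min-1

dt = 0.1
t = np.arange(dt, T + dt, dt)
flow = np.minimum(F_max, V_eff / t)      # ideal 1/t profile, capped by the MFC

# Weight of air sampled at time t that remains in the flask at the end:
# inflow rate times the survival fraction against later flushing (well-mixed flask,
# pre-sampling purge assumed so the initial content is already ambient air).
cum_flush = np.cumsum(flow[::-1])[::-1] * dt / V_eff
weights = flow * np.exp(-(cum_flush - flow * dt / V_eff))
weights /= weights.sum()

# Apply the weights to a synthetic in situ CO2 record to emulate the flask value.
co2_in_situ = 415.0 + 0.5 * np.sin(2 * np.pi * t / T)   # ppm, illustrative
flask_value = np.sum(weights * co2_in_situ)
print(f"flask-equivalent mean: {flask_value:.3f} ppm, "
      f"plain hourly mean: {co2_in_situ.mean():.3f} ppm")
```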
With the current design of the flask sampler, technical restrictions do not allow parallel sampling of flask duplicates or triplicates as a means for quality control, e.g. based on flask pair agreement. The technical effort to allow exact parallel hourly averaged sampling is very high. Therefore, the ICOS Atmosphere Monitoring Station Assembly (MSA) decided to sample only single flasks. This seems appropriate because in the ICOS network the flask sampler is always collecting flasks in parallel to continuous measurements, and erroneously collected flasks or errors due to flask leakages can be detected when comparing results with the continuous data. Therefore, in contrast to the general practice of duplicate flask sampling, in our network single flask sampling seems to be sufficient to meet the ICOS objectives. This has the additional advantage that single flask sampling allows more frequent sampling and thus a more representative coverage of the footprint of the stations. If true duplicate samples are required in the future, the flask sampler is designed to accommodate an additional mass flow controller to fulfil this task.
Aims and technical constraints of ICOS flask sampling
As briefly outlined above, there are three main aims for regular flask sampling at ICOS stations: 1. Flask results are used for comparison with in situ observations (i.e. CO2, CH4, CO, (N2O)). This comparison provides an ongoing quality control of the in situ measurement system, including the intake lines.
2. Flasks are analysed for components not measured continuously at the station, such as SF6 or H2, but also stable isotopes of CO2 or the O2/N2 ratio. The aim here is to monitor large-scale representative concentration levels of these components, which allows estimating their continental fluxes with the help of inverse modelling.
3. A subset of flasks is analysed for 14C in CO2 to allow determining the atmospheric fossil fuel CO2 component (ffCO2) and, with the help of these data and inverse modelling, to estimate the continental fossil fuel CO2 source strength of the sampled areas.
To meet aims 1 and 2, flask sampling during well-mixed meteorological conditions is required, and the sampled footprints should not be dominated by particular hot spot source areas. Particularly for aim 2, we further strive to cover the entire daytime footprint of the station. In contrast, aim 3, due to the generally small fossil fuel signals at ICOS stations, requires targeted sampling of "hot spot emission areas" in the footprint to maximize the fossil fuel CO2 signal in the samples. Note that the detection limit (or measurement uncertainty) of the fossil fuel CO2 (ffCO2) component derived from 14CO2 measurements is of the order of 1-1.5 ppm (e.g. Levin et al., 2011).

There are a number of technical/logistic constraints concerning flask sampling, shipment and analysis in ICOS, which need to be taken into account when designing an operational sampling strategy that best meets the three aims listed above. The most important limitations are listed in the following:
Timing: In order for all flask sample results to be useful for flux estimates with current regional inversion models, flasks should be collected during mid-day or early afternoon at the standard ICOS tall tower stations. During this time of the day, atmospheric mixing is strong and model transport errors are smaller than during night (Geels et al., 2007). For all samplings, wind speeds should be larger than about 2 m s-1, so that the sampled footprint is well defined. The strategy outlined below has been developed for tall tower sites that are not located directly at the coast, i.e. that are of predominantly continental character.

Intake height: There is only one intake line from the highest level of the tower running to the flask sampler; therefore, only the continuous observations from this height can be quality-controlled with parallel sampled flasks (aim 1). As modellers prefer using data (aim 2) from the highest level of the tower (largest footprint, most representative, etc.), all flasks will be sampled from that highest level (as specified in the ICOS Atmospheric Station Specification Document, https://icos-atc.lsce.ipsl.fr/filebrowser/download/69422).
Integration period: Flasks should be sampled as integrals, i.e. the collected sample should represent a real mean of ambient air (e.g., a 1-hour mean, comparable to current model resolution). Also, synchronizing in situ continuous observations and integrated flask sampling is important for the quality control aim (aim 1). This latter requirement is easier to achieve with longer integration times in flask sampling. This means, however, that for comparison reasons the continuous in situ observations must be kept at the flask sampling height during the entire flask sampling period (i.e. no calibration gas measurement, no switching of in situ intake heights during flask sampling, no profile information available). This also means that flow rates, delay volumes and residence times in the tubing, as well as the timing of both the flask and the in situ sampling systems, must be properly monitored.
Flask handling: Flasks need to be installed and removed manually from the sampler. Remote stations are regularly visited about once per month by a technician. The flasks sampled to meet aim 1 should be shipped to the FCL within one month after sampling, so that a potential bias between in situ and flask analyses is detected without major delay. 14CO2 analysis of flasks in the CRL is less urgent; therefore, a few months' delay in shipment of flasks collected for aim 3 is acceptable. Consequently, all flasks will be shipped from the station to the FCL, and after analysis a subset will be shipped for further analysis to the CRL. After all analyses have been finished, all flasks, including those which were analysed at the CRL, are leak-tested and conditioned at the FCL before dispatch to the stations.
Solutions and testing to meet aim 1: Ongoing quality control
The ICOS atmospheric station network, supported by the ICOS Central Facilities (ATC and CAL), has been designed and implemented to achieve the highest possible accuracy, precision and compatibility of atmospheric GHG measurements. ICOS aims to meet the compatibility goals agreed on by the international WMO/GAW measurement community (WMO, 2018) for all its measured components. These compatibility goals were chosen by the community to detect small inter-station gradients and to be used to estimate flux distributions by means of inverse models. For ICOS CO2 observations, a compatibility goal of 0.1 ppm or better is compulsory. Similarly, ICOS needs to meet the WMO compatibility goals for CH4 and CO, which are 2 ppb for both gases (WMO, 2018). First evaluations of ICOS CO2 measurements indeed yield monthly mean afternoon differences between stations in the free troposphere above 100 m of typically very few ppm (Ramonet et al., 2020), underlining the importance of excellent precision and compatibility of these observations.

With a regular and frequent comparison of flask and in situ measurements, ICOS aims to independently monitor their compatibility and to provide respective alerts if, e.g., the average difference of CO2 exceeds 0.1 ppm over a few weeks of comparisons. Using flasks sampled from a dedicated intake line to cross-check the in situ measurements is an important part of the ICOS quality management. It allows an independent end-to-end QC of the entire in situ measurement system consisting of inlet system, drier, analyser and calibration. As mentioned above, for logistical reasons, about once per month or every five weeks a box with 12 flasks is scheduled to be shipped from a remote station to the FCL. After analysis, the flask results covering about one month will be compared with the corresponding in situ data. In the following paragraph we elaborate on the minimum number of comparison flasks and the corresponding time delay needed to detect a significant CO2 bias between flask and in situ measurements larger than 0.1 ppm. To this end, we tested the envisaged flask sampling procedure experimentally at the ICOS pilot station in Heidelberg, and we present here its first application at an ICOS field station.
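As a rough illustration of this detectability argument, the following sketch estimates how likely a 0.1 ppm bias would be flagged from roughly ten usable comparisons per month, assuming the single-comparison scatter of about 0.06 ppm found in Heidelberg (see below); the statistical test itself is our choice for illustration, not necessarily the operational ATC procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative check of how quickly a 0.1 ppm flask/in situ bias can be detected,
# assuming a scatter of 0.06 ppm per comparison and n usable low-variability
# comparisons accumulated over roughly one month.
sigma = 0.06      # ppm, standard deviation of individual flask minus in situ differences
bias = 0.10       # ppm, hypothetical instrument bias to be detected
n = 10            # usable comparisons per month

detections = 0
trials = 2000
for _ in range(trials):
    diffs = rng.normal(bias, sigma, n)
    t_stat, p_value = stats.ttest_1samp(diffs, popmean=0.0)
    if p_value < 0.05:
        detections += 1

print(f"probability of flagging a {bias} ppm bias with n={n}: {detections / trials:.2f}")
```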
Flask - in situ CO2 comparisons in Heidelberg
Similar to the official ICOS atmosphere stations, Heidelberg is equipped with an ICOS-conform CRDS instrument continuously measuring CO2, CH4 and CO in ambient air. The Heidelberg instrument is also calibrated with standard gases provided by the FCL, and its continuous data are automatically evaluated at the ATC. All flasks have been analysed at the FCL.
However, since the site does not have a high tower and is located in an urbanized environment, the variability of the signal can complicate the flask - in situ comparison.
In order to collect a real hourly integrated air sample in the flask, the flow rate through the flask has to be adjusted during the filling process (Turnbull et al., 2012, see Sect. 2.4). First tests with a decreasing (1/t) flow rate through the flasks were conducted in Heidelberg during the period of September 2018 to February 2019, and with a better suited flow controller for the 1/t decreasing flow rate from May to October 2019. Ambient air for continuous measurements as well as for flask sampling was collected via a by-pass from a permanently flushed intake line from the roof of the institute's building, about 30 m above local ground. These flasks were collected not only at low ambient air variability during afternoon hours, but also during other times of the day, when within-hour concentration variations for CO2 at this urban site were higher than 10 ppm. The results of the concentration differences between in situ and flask measurements for CO2 are displayed in Fig. 3 (left panel).
During the first experimental period we obtained three outliers, where flask CO2 results were up to more than 3 ppm higher than the in situ measurements. CH4 and CO in the flasks (not shown) did, however, compare very well, within a few ppb, with the continuous in situ data. Although one of the mass flow controllers had some problems to exactly regulate the flow over the large range of flow rates, we did not find obvious reasons for a malfunction of the sampling system. The only explanation for the outliers may, thus, be contamination of these flasks with room air, which is elevated in CO2 but not in CH4 or CO compared to outside air.

If we disregard the three outliers in the first testing period (one at a low-variability situation, see Fig. 3, right panel) and consider only observations with ambient air CO2 variability < 0.5 ppm, the limited results from the (polluted) Heidelberg site give confidence that flask samples collected over one hour at low ambient CO2 variability are well suited to meet our aim 1 of ongoing quality control at Class 1 stations. It is important, though, that the different air residence times in the intake systems of the flask sampler and the in situ instrument are properly adjusted; they may differ significantly, e.g. if a mixing volume system is installed in the intake lines (as at Hyltemossa). The mean difference between in situ and flask measurements for CO2 in Heidelberg was 0.02 ppm at an ambient CO2 variability of less than 0.5 ppm, with a standard deviation of ±0.06 ppm (n=18) (see also Fig. 3, right panel, which shows that only one out of the 18 low-variability comparisons lies outside the ±0.1 ppm compatibility range indicated by the dashed red lines). For CH4 we observed, for ambient variability smaller than 10 ppb, a mean difference of 0.18 ppb with a standard deviation of 0.74 ppb (n = 111). CO comparison data have not been evaluated here as the CRDS in situ data were not finally calibrated and thus not fully compatible with the flask results.
The test measurements in Heidelberg clearly showed that meaningful quality control results can best be obtained during situations of low ambient concentration variability. Individual concentration differences increase with increasing ambient variability within the one-hour comparison period. The reason for this increase may be uncertainties in the synchronization of the measurements (note that a shift of a few minutes in the timing of the integration already introduces a significant bias) or incorrect flow rates through the flasks in the 1/t sampling scheme. For the QC aim, flask samples should preferentially be collected during low-variability situations. We therefore evaluated how frequently afternoon events with less than 0.5 ppm variability occur at typical ICOS stations. In the years 2016 to 2019, except for a few stations and a few summer months, we find at all five stations at least 10 hours per month at mid-day (13 h local time (LT)) with hourly CO2 standard deviations smaller than 0.5 ppm. On average over the year, more than half of all midday hours had CO2 standard deviations below 0.5 ppm. Based on this evaluation, we decided that we will not need to pre-select sampling days with low ambient variability but can pursue a very simple sampling scheme, e.g. sampling every three or four days, to be able to detect a mean bias larger than 0.1 ppm between flask and continuous measurements within a period of 4-5 weeks. On average we can expect that every second flask we sample is suitable for precise intercomparison with in situ measurements. This simple methodology will also help us meet aim 2 (see below).
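A simple way to perform such a frequency check is sketched below; the assumed data layout (a continuous CO2 record with a local-time index at roughly minutely resolution) and the column name are hypothetical.

```python
import pandas as pd

# Sketch of counting, per month, the 13:00 LT hours whose within-hour CO2
# standard deviation is below 0.5 ppm. A DataFrame with a DatetimeIndex in
# local time and a 'co2' column is assumed for illustration.
def low_variability_midday_hours(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    midday = df.between_time("13:00", "13:59")          # keep only the 13 h LT hour
    hourly_std = midday["co2"].resample("1H").std().dropna()
    low_var = hourly_std[hourly_std < threshold]         # hours with std below threshold
    return low_var.resample("1M").count()                # number of such hours per month

# usage (with a hypothetical minute-resolution record):
# counts_per_month = low_variability_midday_hours(station_df)
# print(counts_per_month)
```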
Flask - in situ CO2 comparisons at the ICOS station Hohenpeißenberg
A very first field test of our flask sampling scheme for QC was conducted at the ICOS station Hohenpeißenberg (HPB). From the highest level of the tall tower (131 m), ambient air for continuous measurements as well as for flask sampling was collected via two separate lines. Collecting flasks at HPB started in July 2019. The flasks were always sampled with a decreasing 1/t flow rate and between 12:30 and 14:00 UTC, as we aim for conditions with low ambient variability, which typically occur during the well-mixed afternoon hours. Up to now, 48 flasks have been collected, which could be used for QC of this ICOS Class 1 station. The overall results of the concentration differences for CO2 for the complete test period are shown in Fig. 4 (left panel).
Our first results of the comparison between continuous measurements and flasks were available in October 2019 and showed larger differences between in situ and flask measurements than expected. A mean difference of 0.33 ppm with a standard deviation of ±0.13 ppm (n=4) was determined for situations with an ambient variability of less than 0.5 ppm. Based on these results, the intake system and the entire CO2 instrumentation were carefully checked. Whilst the last regular leak test in April had not revealed any problems, a leak was subsequently found and eliminated. For the period after the leak elimination, the calculated differences between in situ and flask measurements for an ambient variability of less than 0.5 ppm all lie within the compatibility goal for CO2 (0.1 ppm), see blue dots in Fig. 4 (right panel).
The mean difference between flasks and in situ measurements is 0.01 ppm with a standard deviation of ±0.06 ppm (n=5). These results of the first field test of the flask sampling scheme for QC are promising, e.g. enabling detection of potential leaks at the stations. Once the flask QC procedures have been set up operationally, potential system malfunctions can be detected within a month, complementing the half-yearly compulsory ICOS leak tests.
Solutions and testing to meet aim 2: Representative flask sampling
In the preceding section we showed that low ambient variability situations would be best suited to meet aim 1. Moreover, a potential bias between flask and in situ measurements could be detected with better confidence with an increased number of comparisons. However, for meeting aim 2, a scheme collecting flasks only during low-variability situations may cause a significant bias in the sampled footprint. We tested whether such a sampling bias would be visible in the European ICOS network and calculated with STILT all afternoon (13 h LT) footprints of the five selected stations for the year 2017. Figure 5 shows the respective aggregated footprints for the month of October 2017. The left panels in each of the five rows show the aggregations if every afternoon hour (13 h LT) was sampled, the middle panels the aggregated footprints for every third day, and the right panels show the 10 footprints with the lowest variability during this month. As expected, we can see that regional coverage of the entire station footprint is generally better when sampling randomly every third day than when sampling the 10 days with the lowest variability.
In addition to the footprint analysis, which gives a visual qualitative idea of the effect of different flask sampling schemes, we evaluated the first three years of continuous CO2 measurements from the five ICOS stations to quantify the effect of random sampling every three days versus sampling only low-variability situations. Figure 6 shows, in the upper panels for each station, all available hourly atmospheric CO2 data as grey dots, while the blue lines, each shifted by one day, connect the 13 h LT data every three days. The red dots in the upper panels highlight the 10 lowest-variability afternoon values in each month. As expected, all summer afternoon concentrations generally fall into the lower concentration range of the bulk of data. At all stations, the variability changes from a diurnal shape during the summer months to a more synoptic variability in the winter half-year (for more details see also Figs. 8 and 9). This synoptic variability is also represented in the afternoon sampling. In the five middle panels of Figure 6 we have plotted as black dots the monthly means calculated from all afternoon hours between 11 h and 15 h LT as well as their standard deviations. The blue dots show the monthly mean values obtained from sampling every third day (the three different 3-day patterns are shown as individual shifted blue lines), while the red dots represent the monthly means calculated from the 10 samples with the lowest variability (the coloured dots were shifted by one day each for better visibility). It is obvious that regular sampling provides much more representative monthly means, deviating from the all-afternoon CO2 means by more than 2 ppm in only a few cases (Fig. 6, lowest panels). If samples were collected at low variability only, they would often underestimate monthly mean values, in some cases by more than 4 ppm (red lines in Fig. 6, lowest panels). Although regular sampling every third day also introduces some variable deviations from the correct afternoon means, sampling only at low variability may introduce rather large biases, mainly towards lower CO2 concentrations. Note that inversion models also select measured data for their inversion runs only by time of the day, and not for low variability, when estimating fluxes (Rödenbeck, 2005).
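The comparison of sampling schemes can be reproduced along the following lines; the column names and the hourly data layout (hourly mean CO2 and within-hour standard deviation with a DatetimeIndex) are assumptions for illustration only.

```python
import pandas as pd

# Sketch of the comparison behind Fig. 6: monthly means from (a) all afternoon
# hours, (b) one sample every third day at 13:00 LT, and (c) only the 10
# lowest-variability midday hours per month. Columns 'co2' (hourly means) and
# 'co2_std' (within-hour standard deviations) are assumed.
def compare_schemes(hourly: pd.DataFrame) -> pd.DataFrame:
    afternoon = hourly.between_time("11:00", "15:00")
    all_mean = afternoon["co2"].resample("1M").mean()

    midday = hourly.between_time("13:00", "13:59")
    every_third = midday[::3]["co2"].resample("1M").mean()   # every third midday value

    low_var = (midday.groupby(pd.Grouper(freq="1M"))
                     .apply(lambda m: m.nsmallest(10, "co2_std")["co2"].mean()))

    return pd.DataFrame({"all_afternoon": all_mean,
                         "every_third_day": every_third,
                         "ten_lowest_variability": low_var})

# usage:
# summary = compare_schemes(station_hourly)
# print((summary["ten_lowest_variability"] - summary["all_afternoon"]).describe())
```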
We have investigated here only potential sampling effects on CO2 concentrations; however, other tracer concentrations are expected to be affected in a similar way. For the ICOS atmosphere network we therefore choose the simpler sampling scheme of one flask every third day. This sampling scheme is expected to serve aims 1 and 2: those flasks with low within-hour variability (on average one flask per week, see Sect. 4.1) can be used for the quality control aim, while all flask samples deliver data that are as representative as possible for all additional trace components analysed solely on flasks in the FCL.
Solutions and testing to meet aim 3: Catching potentially high fossil fuel CO2 events
First 14C analyses of integrated CO2 samples at ICOS stations showed rather low average fossil fuel CO2 (ffCO2) concentrations, thereby confirming that ICOS stations primarily monitor terrestrial biospheric signals. Figure 7 (upper panels of the graphs for the individual stations) shows our first 14CO2 results from the two-week integrated CO2 sampling at Hyltemossa, Křešín, Observatoire Pérenne de l'Environnement and Hohenpeißenberg. Particularly during summer, the monthly mean regional fossil fuel CO2 offsets, when compared to a background level calculated from the composite of two-week integrated 14CO2 measurements at Jungfraujoch in the Swiss Alps and Mace Head at the Irish coast, are often lower than a few ppm (Fig. 7, lower panels). Only during winter can regional ffCO2 offsets reach two-week mean concentrations of more than 5 ppm. These signals, although providing good mean ffCO2 results for the average footprints of the stations, are often too small to provide a solid top-down constraint on regional fossil fuel CO2 emission inventories and their changes when evaluated in regional model inversions (Levin and Rödenbeck, 2008; Wang et al., 2018). One of the aims of flask sampling in ICOS is, therefore, to explicitly sample air which has passed over fossil fuel CO2 emission areas. Ideally, we would like to obtain such signals and analyse flasks for 14CO2 only in cases when the expected fossil fuel CO2 component is larger than 4-5 ppm. This would allow obtaining an uncertainty of the estimated ffCO2 component below 30 % (Levin et al., 2003; Turnbull et al., 2006). Further, as sample preparation for 14C analysis is very laborious and the capacity of the CRL is limited to about 25 flask samples per station per year, one should know beforehand if a sample potentially contains a significant regional fossil fuel CO2 component.
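For reference, the ffCO2 offsets discussed here follow, to first order, from the depletion of Delta14C relative to the background; the sketch below shows this common approximation (neglecting correction terms, e.g. for nuclear facility or heterotrophic respiration influences), with purely illustrative numbers.

```python
def ffco2_from_d14c(co2_obs_ppm: float, d14c_obs: float, d14c_bg: float) -> float:
    """First-order approximation of the fossil fuel CO2 component from 14CO2
    measurements (cf. Levin et al., 2003): fossil CO2 is 14C-free
    (Delta14C = -1000 permil); correction terms are neglected in this sketch."""
    return co2_obs_ppm * (d14c_bg - d14c_obs) / (d14c_bg + 1000.0)

# usage with illustrative numbers: a 5 permil depletion relative to background
# at ~410 ppm corresponds to roughly 2 ppm of fossil fuel CO2.
print(f"{ffco2_from_d14c(410.0, d14c_obs=-5.0, d14c_bg=0.0):.2f} ppm")
```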
Whether a sample is likely to contain such a component could be determined either with near-real-time transport model simulations or directly from the in situ observations at the station.
A good indicator for the potential regional fossil fuel CO2 concentration at a station is the ambient CO concentration (Levin and Karstens, 2007), a trace gas that is monitored continuously at all ICOS Class 1 sites. Estimating the expected ffCO2 concentration from measured CO then depends on the average CO/ffCO2 ratio of fossil fuel emissions in the footprint of the station. Mean CO/ffCO2 emission ratios can be very different in different countries; they mainly depend on the energy production processes and on domestic heating systems. In this respect, the share of biofuel use may also be relevant. In our study we first analysed our selected ICOS stations for regional fossil fuel CO2 signals larger than 4 ppm and determined the frequencies of those events. Note that, in order for the flask results to be used in transport model investigations, 14CO2 flasks, like all other flask samples, should be collected during early afternoon when atmospheric mixing can be modelled with good confidence. During these situations, however, any ffCO2 signals will be strongly diluted. Similar to the approach in the previous section, we investigated the potential ffCO2 levels for the five stations Hyltemossa, Gartow, Křešín, Observatoire Pérenne de l'Environnement and Hohenpeißenberg, first theoretically with STILT model simulations transporting EDGAR v4.3 emissions to the five measurement sites. As a second step, we evaluated real continuous CO2 and CO observations from 2017 and 2018 (see Table 1). At Gartow in July 2017, for example, STILT-simulated ffCO2 exceeded 4 ppm during only three afternoon situations (Fig. 8). At the same time, the modelled CO offset was elevated but did not reach 0.04 ppm (second panel). CO offsets were estimated relative to the minimum modelled CO concentration of the last three days (grey line in second panel). In October 2017, the modelled (Fig. 9, upper panel) and measured CO (Fig. 9, lowest panel) offsets do, however, rather frequently exceed 0.04 ppm. The generally good correlation between simulated ffCO2 and CO offset can therefore be used as a criterion for ffCO2 in collected flasks, and 0.04 ppm may be a good threshold at Gartow to predict a ffCO2 signal of more than 4 ppm in sampled ambient air. This is supported by real observations displayed in the two lowermost panels of Figs. 8 and 9, where observed CO offsets > 0.04 ppm (marked by magenta crosses) coincide with high total CO2, and also with STILT-simulated ffCO2 (see for example the synoptic event on October 18-19, 2017).
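A minimal sketch of this CO-based screening is given below; the background definition follows the three-day minimum used above, while the assumed CO/ffCO2 ratio, the data layout and the function names are illustrative only.

```python
import pandas as pd

# Sketch of the CO-based event detection: the CO offset is taken relative to the
# minimum CO of the preceding three days, and hours exceeding a threshold
# (40 ppb, i.e. 0.04 ppm) are flagged as candidate ffCO2 events. The assumed mean
# CO/ffCO2 emission ratio (ppb CO per ppm ffCO2) is only a placeholder; it varies
# strongly between regions and fuel mixes.
def flag_ffco2_events(co_ppb: pd.Series,
                      threshold_ppb: float = 40.0,
                      co_per_ffco2: float = 10.0) -> pd.DataFrame:
    background = co_ppb.rolling("3D").min()          # 3-day running minimum as background
    offset = co_ppb - background                     # CO offset in ppb
    ffco2_estimate = offset / co_per_ffco2           # rough ffCO2 estimate in ppm
    return pd.DataFrame({"co_offset_ppb": offset,
                         "ffco2_ppm_estimate": ffco2_estimate,
                         "event": offset > threshold_ppb})

# usage (an hourly CO record in ppb with a DatetimeIndex is assumed):
# events = flag_ffco2_events(station_co)
# print(events[events["event"]].head())
```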
Investigation of afternoon fossil fuel CO2 events in 2017 at Gartow
The aggregated footprints of the three afternoon situations with STILT-simulated ffCO2 > 4 ppm in July 2017 are displayed in Fig. 10 (upper panels). They show south-westerly trajectories and a dominating surface influence from the highly populated German Ruhr area, but also some influence from large emitters (e.g., power plants) in north-western Germany and at the Netherlands' North Sea coast. The main influence areas with high ffCO2 emissions in October 2017 (Fig. 10, lower panels) also include Berlin as a significant emitter and some "hot spots" close to the German-Polish border in the south-east.
Investigation of afternoon fossil fuel CO2 events in 2017 and 2018 at Hyltemossa, Křešín, Observatoire Pérenne de l'Environnement and Hohenpeißenberg
Overlapping measurements and STILT model runs are also available for the other four ICOS stations. The general picture is similar to that at Gartow, but the number of elevated ffCO2 events is often even smaller at these stations than at Gartow. For example, in July 2018 we find no ffCO2 events at HTM, GAT, KRE and HPB, and only three at OPE (Table 1). Simultaneously observed CO elevations relative to background are often only small in summer and do not reach the (preliminary) threshold of 0.04 ppm. Starting in October or November, ffCO2 elevations become more frequent, coupled to the more synoptic variability of GHGs in the winter half-year (cf. Fig. 6, upper panels). The numbers of modelled fossil fuel CO2 events larger than 4 ppm for all months in 2017 and 2018, as well as of events based on observed CO offsets larger than 0.04 ppm (using the same estimate for the CO background as for the model results displayed in Figs. 8 and 9), are listed in Table 1. Only in the winter half-year can we potentially sample well-measurable fossil fuel CO2 signals. Lower CO thresholds could be used for summer, then accepting larger uncertainties of the ffCO2 component. A better alternative would probably be to restrict 14C analysis to flasks collected in autumn, winter and spring, with the additional advantage that the variability of biospheric signals is smaller during these seasons (cf. Fig. 9).
To give some indication of the main ffCO2 emission areas influencing the four stations, Fig. 11 shows aggregated footprints as well as the respective surface influence areas contributing to modelled ffCO2 concentrations larger than 4 ppm in October 2017. At all four stations, and also at Gartow (Fig. 10), the areas potentially contributing significantly to the fossil fuel signals are located rather far away, and many of them are associated with large coal-fired power plants or other point sources. A few big cities, such as Prague in the case of Křešín, also contribute occasionally.
Implementation of the flask sampling scheme at ICOS stations
Sampling one flask every third day, independent of ambient CO2 variability, can easily be implemented at ICOS stations since sampling of all 24 flasks in the sampler can be individually programmed in advance. Assuming that flasks can be exchanged about once per month, during this time span 12 flasks would have been collected and could then be shipped in one box to the FCL for analysis. The remaining 12 flasks in the sampler would be reserved for ffCO2 event sampling. In order to have a realistic chance of catching all possible events at a station, the sampler would be set to fill one of these flasks on each day between the regular every-third-day sampling. As continuous trace gas measurement data are transferred from the station to the ATC every night, level-1 CO data are available on the morning after flask sampling. These data will then be automatically evaluated at the ATC for potentially elevated CO to decide if the flask collected the day before potentially has an elevated ffCO2 concentration and should be retained for 14CO2 analysis. If yes, the flask sampler will receive a respective message from the ATC. If not, the flask can be re-sampled. Based on our analysis of modelled ffCO2 for the years 2017 and 2018, the likelihood is small that more than 12 ffCO2 events are sampled within one month. Also, some of the events may already have been sampled in one of the "regular" every-third-day flasks. If this has been the case, these flasks will be marked so that they are passed on to the CRL after analysis of all other components in the FCL. In the future, the flask sampling strategy for ffCO2 events in particular might change once real-time GHG prediction systems or prognostic footprint products are available, which would allow more accurate targeting of certain emission areas. First tests using prognostic trajectories to automatically trigger 14CO2 flask sampling are being made at the ICOS CRL pilot station and at selected ICOS Class 1 stations, but these are not yet mature enough to be implemented in the entire ICOS network. It is, however, also worth mentioning that sampling flasks during night time as well could largely increase the significance of 14C-based ffCO2 estimates. Currently we optimize our sampling strategy to accommodate the inability of transport models to reliably digest night-time data. This situation is unfortunate and must urgently be improved in order to increase our ability to monitor, in a top-down way, long-term changes of the envisaged ffCO2 emissions in Europe.
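The decision logic described above can be summarised as in the sketch below; the function and variable names are illustrative and do not represent the actual ATC or station software interface.

```python
# Sketch of the next-morning decision: after level-1 CO data for the previous
# afternoon become available at the ATC, the event flask is either retained for
# 14CO2 analysis or released for re-sampling.
def decide_flask_fate(co_offset_ppb: float,
                      retained_event_flasks: int,
                      threshold_ppb: float = 40.0,
                      max_event_flasks: int = 12) -> str:
    # Retain only if the CO offset points to a significant ffCO2 signal and
    # event-flask capacity for the current exchange period is not exhausted.
    if co_offset_ppb > threshold_ppb and retained_event_flasks < max_event_flasks:
        return "retain for 14CO2 analysis"
    return "release flask for re-sampling"

# usage:
# print(decide_flask_fate(co_offset_ppb=55.0, retained_event_flasks=3))
```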
Conclusions
Developing a flask sampling strategy for a network like ICOS is a new approach which, to our knowledge, has not yet been taken in any other sampling network. It may contribute to optimizing efforts at the (remote) ICOS stations as well as the analytical capacities and capabilities of the ICOS Central Analytical Laboratories. Our strategy was designed to meet, on the one hand, the requirements for quality control, making sure by comparison of flask results with the parallel in situ measurements that ICOS data are of highest precision and accuracy. Our first results showed that this strategy of independent quality control is working successfully. At the same time, our sampling scheme will provide flask results that can be optimally used in current inverse modelling tasks to estimate continental fluxes, not only of core ICOS components such as CO2 and CH4, but also of trace substances which are not yet measured continuously. Trying to monitor fossil fuel CO2 emission hot spots at ICOS stations during well-mixed afternoon hours will be a particular challenge, because the ffCO2 influence at that time of the day is often very small. There is thus an urgent need for transport model improvement so that night-time data can also be used for the inversion of fluxes. Experience in the coming years will show whether our current strategy is successful in meeting all aims or needs further adaptation.

Author contributions: IL and UK designed the study, UK developed the Jupyter notebook and conducted the STILT model runs, ME built the flask sampler and developed its software, FM and SA conducted the flask sampling and evaluated the comparison data, DR was responsible for flask analysis, SH for 14CO2 analysis, MR was responsible for ICOS data evaluation, and GV, SC, MH, DK and ML were responsible for the measurements at the ICOS stations. IL and UK prepared the manuscript with contributions from all other co-authors.
Competing interests:
The authors declare that they have no conflict of interest.
Combining TMEM Doorway Score and MenaCalc Score Improves the Prediction of Distant Recurrence Risk in HR+/HER2− Breast Cancer Patients
Simple Summary: 90% of breast cancer mortality is caused by distant metastasis, a process that involves both dissemination of cancer cells to distant sites as well as their proliferation after arrival. However, prognostic assays currently used in the clinic are based on proliferation and do not measure tumor cell dissemination potential. We previously reported that the density of Tumor Microenvironment of Metastasis (TMEM) doorways (portals for cancer cell intravasation and dissemination) is a prognostic biomarker for the development of distant metastasis in hormone receptor positive/human epidermal growth factor receptor 2 negative (HR+/HER2−) patients. We have shown further that MenaCalc, a mechanistically linked (but independent) biomarker for distant metastasis, is prognostic in some cohorts of triple-negative patients. Here, we develop and compare several digital pathology-based machine vision algorithms to investigate if a combined TMEM-MenaCalc biomarker could provide improved prognostic information over and above that of either biomarker alone.

Abstract. Purpose: to develop several digital pathology-based machine vision algorithms for combining TMEM and MenaCalc scores and determine if a combination of these biomarkers improves the ability to predict development of distant metastasis over and above that of either biomarker alone. Methods: This retrospective study included a subset of 130 patients (65 patients with no recurrence and 65 patients with a recurrence at 5 years) from the Calgary Tamoxifen cohort of breast cancer patients. Patients had confirmed invasive breast cancer and received adjuvant tamoxifen therapy. Of the 130 patients, 86 cases were suitable for analysis in this study. Sequential sections of formalin-fixed paraffin-embedded patient samples were stained for TMEM doorways (immunohistochemistry triple staining) and MenaCalc (immunofluorescence staining). Stained sections were imaged, aligned, and then scored for TMEM doorways and MenaCalc. Different ways of combining TMEM doorway and MenaCalc scores were evaluated and compared to identify the best performing combined marker by using the restricted mean survival time (RMST) difference method. Results: the best performing combined marker gave an RMST difference of 5.27 years (95% CI: 1.71–8.37), compared to 3.56 years (95% CI: 0.95–6.1) for the associated standalone TMEM doorway analysis and 2.94 years (95% CI: 0.25–5.87) for the associated standalone MenaCalc analysis. Conclusions: combining TMEM doorway and MenaCalc scores as a new biomarker improves prognostication over that observed with TMEM doorway or MenaCalc Score alone in this cohort of 86 patients.
Previously, we reported the discovery of the "Tumor Microenvironment of Metastasis" (TMEM) doorway, a portal into the blood vasculature composed of a tumor cell overexpressing the actin regulatory protein Mena, a perivascular macrophage, and an endothelial cell, all in direct contact ( Figure S1A-D left,E). TMEM doorways function as vascular openings through which tumor cells intravasate and disseminate hematogenously [6,7]. We previously showed that a triple immunohistochemical stain for the three constituent cell types that make up TMEM doorways can be used as a biomarker (called TMEM Score) for prognosticating the development of distant metastasis [8]. We further showed that TMEM Score prognosticates the risk of distant metastasis in HR+/HER2− breast cancer patients better than the IHC4 immunohistochemical assay score [9] and independently of classical clinicopathologic features [4]. Finally, we analytically validated a quantification of the TMEM Score using an automated, high-throughput assay implemented within a Clinical Laboratory Improvement Amendments (CLIA) certified clinical diagnostic laboratory and showed that TMEM Score is significantly associated with early distant recurrence (within 5-years of diagnosis) [10].
While these studies clinically validated the use of TMEM Score for prognosticating metastatic outcome in HR+/HER2− patients (the largest subgroup of breast cancer patients), a statistically significant association between TMEM Score and distant recurrence outcome was not observed in the triple-negative or HER2+ breast cancer subpopulations, perhaps due to the smaller number of these subjects available for analysis. While there is currently no evidence of a connection between the HER2 receptor status and TMEM doorways or Mena Calc , we cannot rule out its existence.
To identify those cancer cells within the tumor that are capable of intravasation, we developed the in vivo invasion assay, a technique capable of isolating the motile fraction of cancer cells from the rest of the immobile bulk of the primary tumor [11][12][13]. Using this assay in mouse models of breast cancer, we were able to determine that a subset of tumor cells and macrophages communicate with each other via a paracrine loop that enables them to co-migrate together along collagen fibers at 10-100 times the speed of the rest of the tumor cells within the bulk tumor. This type of coordinated cellular motion is known as "fast streaming migration" [13]. We further determined that endothelial-cell-secreted Hepatocyte Growth Factor (HGF) gradients provide a directional chemoattractant signal which attracts fast-migrating cells that are less than 500 µm away from blood vessels [14] ( Figure 1A). Expression profiling of these cells showed that Mena, a key actin polymerization regulatory protein, plays an important role in potentiating tumor cell motility as well as tumor cell intravasation near TMEM doorways [15][16][17]. Mena consists of several splice-variant isoforms which confer distinct phenotypes to tumor cells [18]. Of these isoforms, Mena11a, an anti-metastatic isoform that is strongly associated with an epithelial phenotype, is down-regulated during epithelial-to-mesenchymal transition (EMT) and in invasive tumor cells [18]. Several other isoforms, including Mena INV , have been shown to confer a pro-metastatic motile phenotype and are found to be expressed exclusively in invasive and disseminating tumor cells [19]. We have found that tumor cells that have high levels of overall Mena expression, and also contain a Mena INV-Hi and Mena11a Low isoform expression pattern, are involved in invasion, fast streaming migration, and intravasation [14,16,20]. Based upon these observations, we developed a quantitative immunofluorescence (IF)-based biomarker designed to quantify the relative amounts of pro-metastatic and anti-metastatic Mena isoforms. This metric, termed Mena Calc , is computed by quantifying the abundance of the Mena11a isoform ( Figure S1A-D right) and subtracting the normalized value of this isoform from the normalized amount of PanMena (Figure S1A-D middle), i.e., all Mena isoforms present.
In initial retrospective studies, Mena Calc was shown to be prognostic of distant metastasis in the ER− and in the node-positive subsets of a cohort of patients [21]. A second study in a different cohort [22] showed that Mena Calc is prognostic in a node-negative subset of patients.
Given the difference in performance of TMEM doorway and Mena Calc scores in patients with diverse breast cancer subtypes, we asked how we might be able to improve the prognostic ability of these tests. We reasoned that no intravasation would be possible within the tumors of patients that contain TMEM doorways, but which lack Mena Calc-Hi tumor cells capable of intravasating through the TMEM doorways ( Figure 1B). Similarly, no intravasation would be possible within tumors that contain fast streaming and highly invasive Mena Calc-Hi tumor cells, but which lack TMEM doorways ( Figure 1C). Successful intravasation of tumor cells would require both motile Mena Calc-Hi tumor cells and TMEM doorways ( Figure 1D). Thus, it is logical to suggest that patients with both high TMEM Score and high Mena Calc Score would have higher risk of distant metastasis and a worse prognosis.
However, it is unclear from Mena Calc alone which subgroup would most benefit from a combined TMEM-Mena Calc biomarker. Since ER+/HER2− is the most common subtype of breast cancer with the longest time to recurrence, it is a high priority to determine if we can improve prognostication in this subtype.
Thus, our primary goal in this proof-of-principle initial study was to determine if a combined TMEM-Mena Calc biomarker is able to improve upon the prognostication ability of either marker alone, all within a small cohort of patients with HR+/HER2− breast cancer (see Section 4 Materials and Methods for cohort description). Since HR+/HER2− is the most common subtype of breast cancer and has the longest time to recurrence, there is an urgent need to find better prognosticators of metastatic outcome for this subtype. Furthermore, multivariate analysis (including tumor size, grade, and nodal status) showed that TMEM doorway density is prognostic for distant recurrence in patients with ER+ breast cancer [10], independent of these clinical factors. Thus, it is of particular interest to determine if combining TMEM Score with Mena Calc can improve TMEM Score performance in mixed-patient populations of the type studied previously. To accomplish this, we have evaluated several different ways of combining TMEM and Mena Calc scores to create a multi-parameter quantitative analysis with much improved prognostic value for distant metastasis in breast cancer patients.
Methods of Biomarker Combination
All analyses performed in this study used our previously published automated TMEM doorway identification and quantification algorithm [5,10] with variation in ROI type and tissue coverage, defined as follows: First, this study varied the range of tissue analyzed, i.e., analyses either spanned a region of interest (ROI) that contained the entire tumor tissue (Whole Tumor Tissue ROI, Figure 2A) or was limited to a sub-region of the most representative tumor tissue predetermined by the pathologists (Path ROIs, Figure 2B). Second, TMEM doorway density (doorways per unit area) was measured either within the entire ROIs ("Entire Area", Figure 2C,D) or in the 10 high-power fields of view containing the highest TMEM doorway density measured within the ROIs ("Top 10 Fields", Figure 2E,F). Thus, in total, four TMEM doorway quantification methods were tested. An example field of view showing identified TMEM doorways is presented in Figure 2G.
The first method (TMEM1) scored TMEM doorway density across the entire tissue area within the Whole Tumor Tissue ROI (Figure 2C). The second method (TMEM2) scored the TMEM doorway density within the Path ROIs (Figure 2D). The third (TMEM3) and fourth (TMEM4) methods quantified TMEM doorway density, as described previously [10], as the sum of TMEM doorways within a given area: namely, the 10 high power fields of view (40× magnification, 330 × 440 µm²) that contained the most TMEM doorways in either the Whole Tumor Tissue ROI (Figure 2E) or the Path ROIs (Figure 2F), respectively. These different methods are summarized in Table 1. We evaluated Mena Calc using the same approaches described above. In addition, in the original Mena Calc publications [21,22], quantification of Mena signals was limited to a binary "tumor mask" representing only epithelial cells and excluding stromal features. To determine if this signal was indeed important for the performance of the marker, we additionally varied whether the Mena Calc was measured within the entire image (Figure 3A, pink area), or just within the area determined by a thresholded cytokeratin signal mask (Figure 3B, pink area). As a result, eight different variations of Mena Calc were evaluated, named MC1 through MC8 (Table 1). While this creates many variations for the Combined Marker (8 × 4 = 32 combinations), we only considered combinations of TMEM and Mena Calc scores that utilized the same ROI type and tissue coverage for both markers. This left only the eight combinations indicated in Table 2.

Table 2. Valid combined marker test pairs. While the variation of ROI type, tissue coverage, and application of a cytokeratin mask creates 32 potential combinations of TMEM and Mena Calc scores, only eight were evaluated within the same ROI type and tissue coverage. Thus, Combined Marker was evaluated in these 8 valid combinations.
Measurement of Performance
The performance of each individual marker was determined by establishing a cut-off point which divided the cohort into "Low" and "High" score groups. The progression-free survival curves of these groups were then compared using Kaplan-Meier analyses, which use disease progression, i.e., the development of distant metastasis in this study, as the endpoint [23]. In order to quantify the separation between survival curves, we employed the restricted mean survival time (RMST) difference. RMST is a well-established method for evaluating survival data in clinical trials [24][25][26][27][28][29][30][31][32]. This metric quantifies the average event-free survival time, up to (or restricted to) a pre-specified, clinically important time point, which corresponds graphically to the area under the survival curve [33] (Figure 4A). The absolute difference in RMST between two study groups thus provides an estimate of the event-free life expectancy difference between the groups [32]. Graphically, it is the difference between the areas under the survival curves (Figure 4B), i.e., the group separation. Thus, in order to determine the best possible prognostic performance of each marker individually, we varied the cut-off point over the range of possible values to establish the optimal cut-off point, i.e., the cut-off point value which produced the best group separation (Figure 5A,B; Figure S2). Since the RMST difference calculation depends upon having two survival curves, the calculation breaks down when all patients fall into a single group. Furthermore, the RMST difference calculation may produce artificially high differences when a group contains only a few patients whose survival time is very short. Observing very few patients in either group would be an unrealistic situation since it is expected that ~20-40% of ER+ patients would experience distant metastasis [34]. Thus, we limited the range of the cut-off points so that the "Low" and "High" groups both contained at least 10% of the entire cohort size.
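The analyses in this work were automated in R (see Statistical and Survival Analysis below); purely as an illustration of the metric, the following Python sketch computes RMST for a group as the area under its Kaplan-Meier step curve up to a restriction time tau and reports the between-group difference. All event times, censoring flags, and the value of tau are invented for the example.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate: returns (event/censor times, survival probability just after each).
    times: follow-up times; events: 1 = distant metastasis observed, 0 = censored."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    out_t, out_s, s = [], [], 1.0
    for t in np.unique(times):
        d = int(np.sum((times == t) & (events == 1)))   # events at time t
        n = int(np.sum(times >= t))                      # number still at risk at time t
        if d > 0:
            s *= 1.0 - d / n
        out_t.append(t)
        out_s.append(s)
    return np.array(out_t), np.array(out_s)

def rmst(times, events, tau):
    """Restricted mean survival time = area under the KM step curve up to tau."""
    t, s = kaplan_meier(times, events)
    keep = t <= tau
    grid = np.concatenate(([0.0], t[keep], [tau]))   # interval edges
    heights = np.concatenate(([1.0], s[keep]))       # survival on each interval
    return float(np.sum(np.diff(grid) * heights))

# Hypothetical example: "High" vs "Low" biomarker groups, tau = 10 years
high_t, high_e = [1.2, 3.5, 4.0, 7.1, 9.0], [1, 1, 0, 1, 0]
low_t,  low_e  = [5.0, 8.2, 9.5, 10.0, 10.0], [1, 0, 0, 0, 0]
diff = rmst(low_t, low_e, 10.0) - rmst(high_t, high_e, 10.0)
print(f"RMST difference (Low - High): {diff:.2f} years")
```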
To construct a Combined Marker, we used the logical "AND" operation between the two best-performing TMEM doorway and Mena Calc tests so that cases were considered "High" for the Combined Marker if they were "High" for both the TMEM doorway and Mena Calc markers individually, and "Low" for the Combined Marker if they were "Low" for at least one ( Figure S4). Given the small cohort size of this study, we limited the analysis to just two groups. To objectively evaluate the comparison between the performance of the Combined Marker and that of the individual tests, cut-off points were not altered from their optimal values. This prevents "over-training" of the system and allows evaluation of the increase in performance of the Combined Marker over the best possible performance of each individual marker alone. The RMST difference values and cut-off points for all analyses are tabulated in Table 3. Among the eight Combined Marker analyses, five divided the two groups with less than 10% of the population in one group (Table 4). All three of the remaining analyses showed improved differentiation in progression-free survival compared to their respective TMEM doorway and Mena Calc analyses alone (Table 3).
Determination of Best Performer
The analysis which resulted in the largest RMST difference value was TMEM1-MC2, the test where the TMEM Score and the Mena Calc Score were evaluated over the entire area of the whole tumor tissue ROI (Figure 6A) and utilized a cytokeratin mask to limit the Mena Calc evaluation to tumor cell cytoplasm (Figure 6B). TMEM1-MC2 gave an RMST difference of 5.27 years (95% CI: 1.71-8.37), compared to 3.56 years (95% CI: 0.95-6.1) for TMEM1 and 2.94 years (95% CI: 0.25-5.87) for MC2, i.e., 1.71 years longer in progression-free survival than TMEM1 alone and 2.37 years longer than MC2 alone. Despite the large and overlapping confidence intervals, this Combined Marker analysis shows a markedly improved ability to discriminate between those patients who experience distant recurrence and those who do not (Figure 7). The number of patients in each individual risk group is shown in Table 5.
Discussion
Our study of the different methods of combining TMEM doorway and Mena Calc analyses showed a noticeable improvement in prognostic performance when measuring TMEM doorways and Mena Calc over the entire range of the tumor tissue and utilizing a cytokeratin mask to limit the Mena Calc evaluation to tumor cell cytoplasm only. This result can be understood by considering the interaction between tumor cell intravasation sites (TMEM doorways) and highly motile (Mena Calc-Hi ) tumor cells ( Figure 1D).
Using high-resolution intravital imaging, we have previously shown that, during intravasation, tumor cells respond to chemotactic signals, migrate towards blood vessels, and enter the blood stream through TMEM doorways [6,7,14]. Tumor cells within a 500 µm radius of blood vessels are attracted towards the blood vessels by hepatocyte growth factor (HGF) gradients [14]. In tumor cells lying between 500 µm and 1000 µm from blood vessels, migration is not directed towards the blood vessels but is toward macrophages, which draws the tumor cell-macrophage pairs into the HGF gradient. The tumor-cell-macrophage paired chemotaxis is driven by a paracrine loop between tumor-cell-secreted colony-stimulating factor 1 (CSF-1) and macrophage-secreted epidermal growth factor (EGF) [13]. Both chemotaxis-mediated tumor cell movements are greatly amplified in tumor cells with high Mena Calc levels [14,[16][17][18]. Thus, there may be a high probability of tumor cell intravasation when Mena Calc-Hi cells are close (within an area ~1 mm in radius) to blood vessels that contain TMEM doorways (Figure 1A). By centering this area upon each TMEM doorway, we can define a "TMEM interaction zone" wherein Mena Calc-Hi tumor cells are likely to intravasate.
In patients with a high TMEM Score, the density of TMEM doorways is high enough that the interaction zone for one TMEM doorway may overlap with that of adjacent TMEM doorways and thus the total interaction zone may cover the tumor volume nearly completely. Therefore, the improved performance when the Combined Marker is evaluated over the entire range of the tumor tissue (vs. just 10 fields of view) is to be expected as a result of increased sampling of the tissue which has the net effect of "averaging out" tissue heterogeneity.
Furthermore, the improved performance of the Combined Marker when utilizing the cytokeratin mask is to be expected as well, for two reasons. The first is that a cytokeratin mask limits the signal to only the tumor cells and excludes non-specific binding of antibody to stromal cells. Secondly, since Mena is a cytoplasmic and membrane-bound protein, exclusion of nuclei narrows the area for analysis to only the one where Mena is expected to be found. Both of these effects improve the specificity of detection and better separate signal from noise.
Cohort
Patient cases were taken from the Calgary Tamoxifen cohort (described previously in [35]), a large retrospective cohort of breast cancer patients diagnosed between 1985 and 2000. A subset of 130 patients (randomly chosen based on recurrence status, 65 patients with no distant recurrence and 65 patients with a distant recurrence at 5 years) that had previously been utilized to investigate the influence of ATM protein in both tumor and cancer-associated stroma on clinical outcome [36] was used for this study. The information regarding overall survival was not available at the time of the study. Patients had confirmed invasive breast cancer (74 ductal, 8 lobular, 2 other, 1 unknown) and received adjuvant tamoxifen therapy. Patients were excluded from this analysis if they had a prior cancer diagnosis (except non-melanoma skin cancer). Because most patients with ER+ disease do not show additional benefits from chemotherapy compared to endocrine therapy alone [37,38], and it is expected that most patients with ER+ breast cancer will be treated with endocrine therapy alone, patients who received neo- or adjuvant chemotherapy were also excluded from the study. Moreover, chemotherapy may increase TMEM Score in some patients with ER+ disease [39]. In addition, it should be noted that chemoendocrine therapy in node-positive ER+ breast cancer patients was shown to be beneficial in pre-menopausal women (RxPONDER trial), and continues to be included as a standard-of-care treatment option for these patients. Of the 130 patients, pathological review determined that 86 cases had sufficient tissue for analysis. The characteristics of the study cohort are summarized in Table 6. The maximum follow-up time is 15 years.
IHC Triple Staining
TMEM IHC staining was carried out by a commercial entity (MetaStat, Boston, MA, USA) using the MetaSite Breast™ assay which measures TMEM Score as described previously [10]. According to the company, formalin-fixed paraffin-embedded invasive breast cancer samples were stained for TMEM doorways using a modified triple chromogen immunohistochemical stain for CD31-positive blood vessels using a rabbit anti-CD31 monoclonal antibody (AbCam/Epitomics Clone EP3095, Burlingame, CA, USA), CD68-positive macrophages using an anti-CD68 mouse monoclonal antibody (Thermo Scientific Clone PGM1, Waltham, MA, USA), and Mena-positive tumor cells using a proprietary anti-PanMena mouse monoclonal antibody. CD31-positive blood vessels, CD68-positive macrophages, and Mena-expressing tumor cells were visualized using brown, blue, and red chromogens, respectively.
Digital Whole Slide Scanning
Slides were digitized on the PerkinElmer Pannoramic 250 Flash II digital whole-slide scanner using a 20×, 0.8 NA Plan-Apochromat objective (PerkinElmer, Hopkinton, MA, USA). A typical digital whole-slide scan consists of thousands of fields which are mosaicked to form a high-resolution image for analysis. TMEM doorway slides were imaged in bright field mode (pixel size = 0.24 µm, bit depth = 8) and Mena Calc slides were imaged in fluorescence mode (pixel size = 0.33 µm, bit depth = 8).
Automated TMEM Doorway Quantification
After scanning, the digital slides were imported into Visiopharm's image analysis software, Vis (Visiopharm, Hørsholm, Denmark). In Vis, Mena Calc slides were aligned to TMEM doorway slides using the Tissue Align module. The boundaries of each tissue were determined by heavily smoothing (51 × 51 px kernel) the negated brightfield image of the TMEM doorway stained slide and thresholding the resulting signal ("Whole Tumor Tissue ROI", Figure 2A). Next, three pathologists identified more limited regions of interest ("Path ROIs", Figure 2B) for analysis based upon appropriate pathological criteria (e.g., high density of blood vessels, low levels of stromal tissue) and image quality (e.g., out of focus regions). In addition, staining quality and adequacy of tissue on both TMEM doorway and IF-stained slides were checked in all ROIs.
The 10 highest TMEM doorway scoring high power fields were identified using an automatic ranking mechanism as previously published [5,10]. Briefly, the MicroImager module in Vis divided the entire area of Whole Tumor Tissue or the Path ROIs into subfields of equal area with each subfield equivalent to a pathologist's microscope high-power field (330 × 440 µm² each, Figure 2G). Next, TMEM doorways were quantified in all of the high-power fields and the 10 fields with the highest number of TMEM doorways were identified and the sum of TMEM doorways in these fields was used as the TMEM Score for the patient sample (Top 10 TMEM Score) [5].
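The ranking itself is performed inside Visiopharm's Vis software; the short sketch below only illustrates the "Top 10 fields" scoring logic on a hypothetical array of per-field TMEM doorway counts (the field counts are simulated, not real data).

```python
import numpy as np

def top10_tmem_score(field_counts, n_fields=10):
    """Sum the TMEM doorway counts of the n_fields highest-scoring high-power fields.
    field_counts: 1-D array of doorway counts, one entry per 330 x 440 um^2 field."""
    counts = np.sort(np.asarray(field_counts))[::-1]   # sort descending
    return int(np.sum(counts[:n_fields]))

# Hypothetical per-field counts for one tumor ROI
rng = np.random.default_rng(0)
counts = rng.poisson(lam=1.5, size=200)                # 200 simulated high-power fields
print("Top 10 TMEM Score:", top10_tmem_score(counts))
```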
Automated Mena Calc Quantification
Mena Calc was quantified similarly to previous publications as the difference between PanMena and Mena11a z-scores (Equations (1)-(3)) ( Figure S3) [21,22]. Following TMEM doorway quantification, Mena Calc was measured within the ROIs and tissue coverage area as described in Table 1 and Figure 2. In addition, Mena Calc quantification was further measured across all cells ( Figure 3A) or restricted to only the area with a mask generated by a cytokeratin stain ( Figure 3B). This resulted in a total of eight different methods of Mena Calc quantification (Table 1).
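Equations (1)-(3) are referenced above but not reproduced in this excerpt. Based on the description of Mena Calc as the difference between the PanMena and Mena11a z-scores [21,22], a plausible reconstruction is the following (the symbols I, µ, and σ are assumptions used for illustration, not notation taken from the original):

z_PanMena = (I_PanMena − µ_PanMena) / σ_PanMena (1)

z_Mena11a = (I_Mena11a − µ_Mena11a) / σ_Mena11a (2)

Mena Calc = z_PanMena − z_Mena11a (3)

where I is the measured staining intensity of the indicated Mena signal within the analysis area for a given case, and µ and σ are the corresponding mean and standard deviation across the cohort.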
Statistical and Survival Analysis
All statistical analyses were performed and automated using R 4.0.4 in RStudio (RStudio, Inc., Boston, MA, USA).
Standalone TMEM Doorway and Mena Calc Analyses
In each of the four TMEM doorway tests (Table 1), patient cases were separated into two risk groups (high and low risk) by a cut-off point value so that the TMEM Score of the high-risk group was equal to or greater than the cut-off point value and the TMEM Score of the low risk group was less than the cut-off point value. A custom-written R script was used to automate the analysis by incrementally varying the cut-off point value across the range of the TMEM scores (60 equally spaced cut-off points were used), constructing a Kaplan-Meier survival curve, and evaluating the group separation with a RMST difference value for each cut-off point. Only the analyses which generated both high-risk and low-risk group sizes greater than 10% of the population were recorded. The optimal cut-off point value for the TMEM doorway analysis was chosen as the cut-off point which generated the maximum RMST difference value within the series. Bootstrapping with 1000 repetitions was performed to estimate confidence intervals for the optimal cut-off point.
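The cut-off scan described above was implemented as a custom R script; as a sketch of the same logic, the function below (reusing the hypothetical rmst() helper from the earlier example) evaluates 60 equally spaced cut-offs, enforces the 10% minimum group-size rule, and returns the cut-off giving the largest RMST difference. Bootstrapping of confidence intervals is omitted for brevity.

```python
import numpy as np

def best_cutoff(scores, times, events, tau, n_cutoffs=60, min_frac=0.10):
    """Scan equally spaced cut-off points and keep the one maximizing the RMST difference
    between 'Low' (score < cut-off) and 'High' (score >= cut-off) groups.
    Uses the hypothetical rmst() helper sketched earlier; all inputs are illustrative."""
    scores = np.asarray(scores, float)
    times, events = np.asarray(times, float), np.asarray(events, int)
    best_cut, best_diff = None, -np.inf
    for cut in np.linspace(scores.min(), scores.max(), n_cutoffs):
        high = scores >= cut
        # Enforce the 10% minimum group-size rule described in the text
        if high.mean() < min_frac or (~high).mean() < min_frac:
            continue
        diff = rmst(times[~high], events[~high], tau) - rmst(times[high], events[high], tau)
        if diff > best_diff:
            best_cut, best_diff = cut, diff
    return best_cut, best_diff   # (optimal cut-off, RMST difference in years)
```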
The eight standalone Mena Calc analyses were performed in a similar way to the four TMEM doorway analyses.
Combined Marker Analysis
For the Combined Marker analyses, a TMEM doorway test and a Mena Calc test were paired to form a combined test pair (Table 2). Among the total 32 possible Combined Marker analyses, only 8 were consistent in the range and quantity of tissue analyzed ( Table 2). In the Combined Marker analysis, a patient was included in the high-risk group only if they fell within the high-risk group in both the associated TMEM doorway and Mena Calc analyses ( Figure S4). Otherwise, they were deemed low risk. In each Combined Marker analysis, the TMEM cut-off point and Mena Calc cut-off point were taken directly from the corresponding standalone TMEM doorway analysis and Mena Calc analysis. Only the analyses which generated both group sizes greater than 10% of the population were recorded. The RMST difference values and cut-off point values of the Combined Marker analyses and the associated TMEM doorway and Mena Calc analyses are given in Table 3. The entire Combined Marker analysis was also automated in a custom-written R script.
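The combination rule itself reduces to a logical AND of the two standalone risk calls; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def combined_marker_high(tmem_scores, menacalc_scores, tmem_cutoff, menacalc_cutoff):
    """Combined Marker risk call: 'High' only if High for both standalone tests (logical AND)."""
    tmem_high = np.asarray(tmem_scores) >= tmem_cutoff
    mena_high = np.asarray(menacalc_scores) >= menacalc_cutoff
    return tmem_high & mena_high   # True = High risk, False = Low risk
```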
Conclusions
In conclusion, we have developed and evaluated a potential new biomarker for prognosticating metastatic progression in ER+/HER2− breast cancer patients that combines the previously published TMEM doorway and Mena Calc prognostic biomarkers. Our results show that the Combined Marker potentially improves prognostication over that observed with TMEM or Mena Calc Score alone. While promising, the patient cohort in this study is limited in size, only considers HR+/HER2− breast cancer, and lacks an independent validation cohort to confirm the results. Future work currently in progress is focused on validating these results in a larger, independent cohort (including HER2 patients), expanding the analysis to include other breast cancer subtypes and, importantly, accounting for other variables with a multivariate analysis.

Informed Consent Statement: Patient consent was waived due to the retrospective nature of the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Numerical and Experimental Research on Thermal Insulation Performance of Marine Diesel Engine Piston Based on YSZ Thermal Barrier Coating
Although YSZ ceramic coating has been used in the field of aeroengines for a long time to protect blades from high temperature erosion, its application on marine engines is still very rare. In this study, YSZ powder was sprayed onto the upper surface of the Al-Si alloy piston by atmospheric plasma spraying. The piston with or without ceramic coatings was applied to the diesel engine bench, and the ship propulsion characteristics test was carried out to study the effect of the coating on the performance of the diesel engine when the ship was sailing. The temperature field results show that under 25% load, the temperature of the top surface of the coated piston is about 30.91 °C higher than that of the conventional piston. The increase in the temperature of the combustion chamber is conducive to better combustion of the fuel in the cylinder of the diesel engine. Therefore, when the marine diesel engine is tested for propulsion characteristics, the thermal efficiency is increased by 5% under the condition of 25% load.
Introduction
The main transportation method of global trade is still carried out by sea, and diesel engines are favored by ship power due to their high reliability. At present, diesel engines are in a leading position in the field of marine propulsion power [1]. If an innovation can effectively improve the thermal efficiency of marine diesel engines and reduce fuel consumption, it will have a positive impact on global trade. In order to improve the thermal efficiency of diesel engines, many studies have found that the ceramic thermal barrier coating (TBC) can well reduce the heat loss in the cylinder of the reciprocating internal combustion engine, thereby affecting the working performance of the engine [2,3].
With the continuous improvement of the power of marine diesel engines, the parts of the combustion chamber suffer more and more thermal damage, which has a great impact on the safe and stable operation of the engine. As the piston is the most important part of the combustion chamber of the engine, it is necessary to study the heat transfer of the piston to ensure the safe and reliable operation of the engine. Ceramic thermal barrier coating (TBC) has been widely used in blades of aircraft engines and gas turbines to protect the superalloy parts of the blades from high temperature and oxidation, thereby prolonging the service life of the engine [4][5][6][7]. The NiCrAlY coating is widely used in the bonding layer between the thermal insulation coating and the metal substrate because the coefficient of thermal expansion is between the metal substrate and the ceramic heat insulation layer [8]. In this study, NiCrAlY is also selected as the bonding layer of the coating. In addition, ceramic coatings can also provide thermal damage protection for the heated parts of the combustion chamber of marine diesel engines [9,10]. Hiregoudar [11] found that the use of coatings is more conducive to the combustion of low 16-paraffin fuels. The thickness of the coating and the coating process can affect the surface temperature of the piston, thereby affecting the combustion in the cylinder and having an impact on the fuel efficiency of the engine [12].
In this study, the atmospheric plasma spraying method was used to spray the TBC on the top surface of the piston, and the ship propulsion characteristics test research was carried out on the marine diesel engine. By comparing the test results of pistons with thermal barrier coatings to conventional pistons, the effect of coatings on the thermal efficiency of the engine is analyzed. In addition, this study presents finite element modeling of conventional diesel engine pistons and ceramic coating diesel engine pistons, and analyzes the effect of the TBC on the heat transfer of the piston by means of finite element simulation.
Piston and Coating Materials
The traditional material of internal combustion engine pistons is generally cast iron, but the density of cast iron pistons is relatively high, which makes the reciprocating inertia force of the piston at high speeds very large, thus restricting the development of internal combustion engine power. Therefore, the piston material of medium and small power internal combustion engines is generally Al-Si alloy. Because the density of aluminum alloy is about one-third that of a cast iron piston, the reciprocating inertia force produced by the piston's motion is greatly reduced.
The piston body used in this study is made of Al-Si alloy, and the ring groove is made of wear-resistant inlay made of cast iron (see Figure 1). In this way, the piston can have the characteristics of the light weight of Al-Si alloy pistons, and at the same time, it can avoid the jam fault due to the difference of thermal expansion performance between piston and piston ring. Although Al-Si alloy has good plasticity, it can make the piston bear more stress. However, aluminum alloys cannot withstand too high temperatures. This study provides the thermal insulation effect after spraying YSZ thermal barrier coating powder (see Figure 2) on the Al-Si alloy piston of a marine diesel engine. As shown in Figure 3, the top surface of the Al-Si alloy piston is coated with a YSZ ceramic thermal insulation coating, which completely covers the surface of the combustion chamber on the top surface of the piston. This greatly improves the reliability of the piston at high temperature.
Due to its simple operation and high preparation efficiency, plasma spraying equipment is used in the preparation of most thermal barrier coatings [13][14][15][16][17].
The YSZ ceramic coatings are used as thermal insulation coatings because of their low thermal conductivity and relatively high coefficient of thermal expansion, thereby reducing harmful coating interface stress. In this study, a two-layer thermal barrier coating system with a thermal barrier layer and bonding layer was used (see Figure 4). The top coating comprises a 300 µm thick YSZ ceramic. The second layer is a bond coating containing a 100 µm thick NiCoCrAlY compound (see Figure 5). Before spraying the thermal barrier coating, it is necessary to cut the top surface of the piston to ensure that the compression ratio of the engine remains unchanged.
Thermal Analysis by Finite Element Method
Based on the marine diesel engine pistons made of Al-Si alloy and cast iron, this research conducted finite element thermal analysis on uncoated piston and coated piston.
The finite element model of this study uses a total of 4,382,973 elements and 7,783,387 nodes (see Figure 6). The meshing used for heat transfer analysis of the YSZ-coated piston is refined in ANSYS Workbench. In the thermal analysis, the material properties of the YSZ barrier layer, bonding layer, and piston substrate are taken from Table 1. The marine diesel engine piston used is shown in Figures 1 and 3. Because the piston is axisymmetric, only half of the piston model is selected for simulation analysis in this paper. The half model of the piston and the coating thickness are shown in Figure 5. The YSZ thermal barrier coating and NiCoCrAlY bond coating are sprayed on the top of the piston by air plasma spraying. The temperature distribution on different surfaces of the TBC piston is shown by simulation analysis, and their results are compared with those of the uncoated piston.
The thermal boundary conditions of the piston are very complicated, mainly including: the combustion chamber, the piston and the cooling oil cavity, the area between the piston and the cylinder liner, the bottom of the engine piston and the crankcase mixed gas. In this study, the thermal boundary conditions of the temperature and the heat transfer coefficient of the medium are calculated based on the empirical formula and the experimental measured values under the maximum power situation.
Because the analysis of the piston temperature field is often performed under the maximum load, this article collects relevant data and calculates the relevant boundary conditions when the engine speed is 945 rpm and the load is 15 kW.
In this way, thermal boundary conditions of the different elements are shown in Table 2. The surface temperature of the combustion chamber on the top surface of the piston was estimated to be 460 °C, and the corresponding convection coefficient was 507 W/m²·K. The temperature of the piston top land was assumed to be 130 °C with a convection coefficient of 127 W/m²·K. The temperatures of the upper, middle, and lower parts of the first ring were estimated to be 90 °C, 85 °C, and 85 °C, and the corresponding convection coefficients were 127 W/m²·K, 73 W/m²·K, and 1261 W/m²·K, respectively. The temperature of the second ring was estimated to be 80 °C, and the corresponding convection coefficient was 326 W/m²·K. The surface temperature of the piston skirt was specified as 75 °C, and the corresponding convection coefficient was 215.5 W/m²·K. The surface temperature of the cooling oil cavity of the piston was defined as 80 °C, and the corresponding convection coefficient was 2000 W/m²·K.
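As a rough illustration of why the coating lowers the substrate temperature, a one-dimensional series-resistance estimate through the coating stack can be made from these boundary conditions. The sketch below is only indicative: the gas-side values (460 °C, 507 W/m²·K) are taken from the text, the oil-side values are simplified, and the layer conductivities are typical literature values rather than the Table 1 data.

```python
# Rough 1-D steady-state estimate of the temperature drop across the coating stack.
# Assumed (typical) conductivities; NOT the Table 1 values from the paper.
h_gas, T_gas = 507.0, 460.0          # combustion-side convection, W/m^2K and deg C
h_oil, T_oil = 2000.0, 80.0          # cooling-oil side (simplified), W/m^2K and deg C
layers = [                            # (name, thickness in m, conductivity in W/mK)
    ("YSZ top coat",    300e-6, 1.2),
    ("NiCoCrAlY bond",  100e-6, 16.0),
    ("Al-Si substrate", 10e-3, 150.0),
]

# Series thermal resistances per unit area (m^2K/W): convection + conduction + convection
R = [1.0 / h_gas] + [t / k for _, t, k in layers] + [1.0 / h_oil]
q = (T_gas - T_oil) / sum(R)          # heat flux through the stack, W/m^2

T = T_gas - q * R[0]                  # temperature at the coating surface
print(f"heat flux ~ {q/1000:.1f} kW/m^2, coating surface ~ {T:.0f} C")
for name, t, k in layers:
    T -= q * t / k                    # temperature drop across each layer
    print(f"after {name}: ~ {T:.0f} C")
```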
Experimental Setup
In order to study the influence of thermal barrier coatings on diesel engines during ship voyage, the diesel engine test in this paper is a propulsion characteristic test carried out according to the working conditions of the ship. The engine used in this analysis was a WP4 direct-injection marine diesel engine equipped with a turbocharger. When the rated speed was 1500 rpm, the corresponding rated power was 60 kW. The details of the test engine are given in Table 3. By comparing the experimental data of marine diesel engines with coated pistons to conventional pistons, the effect of coatings on the thermal efficiency of the engine is analyzed. The marine diesel engine bench is shown in Figure 7.
Temperature Measurements
In this paper, the hardness plug made of GCr15 steel is used as the measuring method to measure the temperature value of the corresponding point on the piston. The hardness of GCr15 steel changes differently at different temperatures. The hardness plug is taken out after the test, and the maximum temperature experienced by the hardness plug is back-calculated from its Vickers hardness result. Due to the low temperature of the piston under 25% load, in order to avoid the large measurement error of the hardness plug at low temperature, this paper calculates the highest temperature value of the corresponding point of the piston by measuring the result of the hardness plug under 50% load, then compares the experimental results with the simulation results to verify the reliability of the piston model. Figure 8 shows the location of reference points on the conventional piston. The temperature field of a conventional aluminum-silicon piston at 50% load can be seen in Figure 9.
Comparison of Temperature Measurement Results
The temperature results of the corresponding reference point of the piston under the 50% working condition of the simulation analysis were compared with the test results of the hardness plug, as shown in Figure 10. The deviation between these results is less than 6%, which is within an acceptable range, indicating that the simulation results are highly accurate and reliable.
Temperature Distribution
According to the established piston model, the temperature fields of the conventional and the thermal barrier coating piston were evaluated through numerical analysis, as shown in Figure 11.
It can be seen from Figure 11a that the maximum temperature on the ceramic coating of the piston with thermal barrier coating reaches 217.66 °C, which appears in the throat ring area of the piston combustion chamber. Figure 11b shows the temperature of the thermal barrier coating piston substrate; the maximum temperature of the piston is 182.24 °C, and the lowest temperature is 93.15 °C.
The temperature is visually analyzed by selecting the path AB from point A on the edge of the piston top surface to point B in the radial direction (shown by the line in Figure 12) and extracting the temperature value of the node on the path. The temperature values on the path AB of the top ceramic coating, the bonding layer and the top of the piston substrate of the TBC piston are shown in Figure 12a-c, respectively.
From the path analysis of the temperature field distribution of the thermal barrier coating piston in Figure 12d, it is apparent that the temperature of the TBC piston is the highest in the top ceramic coating, and the temperature gradient is larger between the ceramic coating and the bonding layer. The temperature difference between the bonding layer and the piston substrate is relatively small.
The simulation analysis of the temperature field distribution cloud diagram of the engine piston without thermal barrier coating is shown in Figure 13. Comparing Figures 11 and 13, it can be seen that after the marine diesel engine piston is processed by thermal barrier coating, the highest temperature of the piston drops from 186.75 to 182.24 °C, therefore the temperature is reduced by 4.5 °C. Through performing comparative analysis of the temperature field cloud diagram of the piston substrate with TBC (see Figure 11b) and that of the piston without TBC (see Figure 13), and using the software to extract the temperature value of the node on the path AB, the temperature distribution of the path AB at the top edge of the piston with or without TBC was obtained as shown in Figures 12 and 14.
Through performing comparative analysis of the temperature field cloud diagram of the piston substrate with TBC (see Figure 11b) and that of the piston without TBC (see Figure 13), and using the software to extract the temperature value of the node on the path AB, the temperature distribution of the path AB at the top edge of the piston with or without TBC was obtained as shown in Figures 12 and 14. The temperature is visually analyzed by selecting the path AB from point A on the edge of the piston top surface to point B in the radial direction (shown by the line in Figure 12) and extracting the temperature value of the node on the path. The temperature values on the path AB of the top ceramic coating, the bonding layer and the top of the piston substrate of the TBC piston are shown in Figure 12a-c, respectively. Analyzing the path analysis of the temperature field distribution of the thermal barrier coating piston in Figure 12d, it is apparent that the temperature of the TBC piston is the highest in the top ceramic coating, and the temperature gradient is larger between the ceramic coating and the bonding layer. The temperature difference between the bonding layer and the piston substrate is relatively small.
The simulation analysis of the temperature field distribution cloud diagram of the engine piston without thermal barrier coating is shown in Figure 13. Comparing Figures 11 and 13, it can be seen that after the marine diesel engine piston is processed by thermal barrier coating, the highest temperature of the piston drops from 186.75 to 182.24 °C, therefore the temperature is reduced by 4.5 °C. Through performing comparative analysis of the temperature field cloud diagram of the piston substrate with TBC (see Figure 11b) and that of the piston without TBC (see Analyzing the path analysis of the temperature field distribution of the thermal barrier coating piston in Figure 12d, it is apparent that the temperature of the TBC piston is the highest in the top ceramic coating, and the temperature gradient is larger between the ceramic coating and the bonding layer. The temperature difference between the bonding layer and the piston substrate is relatively small.
The simulation analysis of the temperature field distribution cloud diagram of the engine piston without thermal barrier coating is shown in Figure 13. Comparing Figures 11 and 13, it can be seen that after the marine diesel engine piston is processed by thermal barrier coating, the highest temperature of the piston drops from 186.75 to 182.24 °C, therefore the temperature is reduced by 4.5 °C. Through performing comparative analysis of the temperature field cloud diagram of the piston substrate with TBC (see Figure 11b) and that of the piston without TBC (see Figure 15 shows the temperature value of the node on the path AB. The red line indicates the temperature of the uncoated piston substrate, and the black line indicates the temperature of the piston substrate after spraying the coating. It can be seen from the figure that the highest temperature is at the throat of the piston combustion chamber of the diesel engine. After the marine diesel engine piston is processed by the thermal barrier coating, the temperature of the Al-Si alloy engine piston substrate decreases significantly, indicating that the thermal damage to the piston substrate is lower, which is better than the piston without TBC. diesel engine. After the marine diesel engine piston is processed by the thermal barrier coating, the temperature of the Al-Si alloy engine piston substrate decreases significantly, indicating that the thermal damage to the piston substrate is lower, which is better than the piston without TBC. Figure 15 shows the temperature value of the node on the path AB. The red line indicates the temperature of the uncoated piston substrate, and the black line indicates the temperature of the piston substrate after spraying the coating. It can be seen from the figure that the highest temperature is at the throat of the piston combustion chamber of the diesel engine. After the marine diesel engine piston is processed by the thermal barrier coating, the temperature of the Al-Si alloy engine piston substrate decreases significantly, indicating that the thermal damage to the piston substrate is lower, which is better than the piston without TBC.
The Effect of Coating on Thermal Efficiency
Due to the heat insulation effect of the TBC, the heat transfer loss of the engine through the end face of the piston combustion chamber is reduced, thereby enhancing the brake thermal efficiency of the diesel engine [18][19][20][21].
In this research, by applying the piston sprayed with YSZ barrier coating to the marine diesel engine bench, the ship propulsion characteristics test was carried out to study the effect of the coating on the performance of the diesel engine when the ship was sailing.
The brake specific fuel consumption (BSFC) is the ratio between the fuel mass consumption rate (ṁ_f) and the brake power (P_b), obtained from:

BSFC = ṁ_f / P_b (1)

Figure 15. The temperature distribution along the AB path. Red indicates the temperature of the uncoated piston substrate, and black indicates the temperature of the piston substrate after spraying the coating.
The brake thermal efficiency (η_th) is the ratio between the work produced per cycle and the chemical energy in the fuel; the calculation formula is as follows:

η_th = P_b / (ṁ_f · Q_HV) = 1 / (BSFC · Q_HV) (2)

where Q_HV is the fuel lower heating value. In this test, six measurement points of ship propulsion characteristics were selected to carry out the test research: 25%, 50%, 75%, 80%, 90% and 100% load. Figure 16a shows the BSFC comparison of the uncoated and coated piston marine diesel engine under different propulsion characteristic loads. Figure 16b shows the change of engine brake thermal efficiency with the change of ship propulsion power.
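For illustration, both quantities can be computed from a measured fuel flow and brake power. The operating point and the lower heating value of 42.7 MJ/kg below are hypothetical assumptions, not test data from this engine.

```python
def bsfc(fuel_flow_kg_per_h, brake_power_kw):
    """Brake specific fuel consumption in g/kWh (Equation (1))."""
    return fuel_flow_kg_per_h * 1000.0 / brake_power_kw

def brake_thermal_efficiency(bsfc_g_per_kwh, q_hv_mj_per_kg=42.7):
    """Brake thermal efficiency from BSFC and the fuel lower heating value (Equation (2))."""
    # 1 kWh = 3.6 MJ, so eta = 3.6 / (BSFC[kg/kWh] * Q_HV[MJ/kg])
    return 3.6 / (bsfc_g_per_kwh / 1000.0 * q_hv_mj_per_kg)

# Hypothetical operating point (not measured data): 45 kW brake power, 10.3 kg/h fuel
b = bsfc(10.3, 45.0)
print(f"BSFC ~ {b:.0f} g/kWh, brake thermal efficiency ~ {brake_thermal_efficiency(b):.1%}")
```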
It can be seen from Figure 16 that as the speed and load increase, the thermal efficiency of the coated engine will increase. Venkadesan's research also showed that pistons coated with TBC materials increase the brake thermal efficiency due to reduction in heat loss and convert it into more available work, as the BTE increased by 8% [2]. Compared with the uncoated engine, the thermal efficiency of the piston engine coated with YSZ is increased by 5% and 2.2%, respectively, under the conditions of low and medium load. At the same speed and load, the fuel consumption of the engine after adopting the YSZ ceramic coating is significantly reduced in the case of low and medium loads, indicating that the coated engine can achieve the same output as the original engine while consuming less fuel. Combined with the temperature field analysis data, it is shown that the coating reduces the cooling loss of the engine and improves the thermal efficiency of the engine. Under high load conditions, because the surface temperature of the coating is too high, the air intake efficiency is reduced, and the thermal efficiency is not increased significantly.
Conclusions and Future Work
Through comparative analysis, it is found that the highest temperature on the top surface of the diesel engine piston always appears in the throat ring area of the combustion chamber. It has also been observed that the temperature of the top surface of the YSZ coated engine piston is higher than that of the uncoated engine piston. After the piston is coated, the maximum temperature of the piston substrate is about 4.5 °C lower than that of the uncoated piston, indicating that the piston substrate suffers less thermal damage, which will have a positive impact on the reliability of the Al-Si alloy piston and engine life.
In addition, according to the propulsion characteristics test of the marine diesel engine, after adopting the YSZ ceramic coated piston, the fuel consumption of the coated engine is lower when the output is the same, especially under low load conditions. Compared with the uncoated engine, the YSZ-coated piston engine has the most improved thermal efficiency under low load conditions, with a maximum increase of 5%. Therefore, this study shows that the YSZ coating can effectively improve the thermal efficiency of the diesel engine when the ship is sailing at medium and low speeds and has a positive effect on the development of the shipping industry.
Since this study only considered 300 µm thick YSZ coatings, coating applications of different thicknesses and materials will be developed on marine engines in the future to provide an evaluation basis for finding the best coating application program.
miRNAs: EBV Mechanism for Escaping Host’s Immune Response and Supporting Tumorigenesis
Epstein-Barr virus (EBV) or human herpesvirus 4 (HHV-4) is a ubiquitous human oncogenic virus, and the first human virus found to express microRNAs (miRNAs). Its genome contains two regions encoding more than 40 miRNAs that regulate expression of both viral and human genes. There is ample evidence that EBV miRNAs impact immune response, affect antigen presentation and recognition, change T- and B-cell communication, drive antibody production during infection, and have a role in cell apoptosis. Moreover, the ability of EBV to induce B-cell transformation and take part in mechanisms of oncogenesis in humans is well known. Although EBV infection is associated with development of various diseases, the role of its miRNAs is still not understood. There is abundant data describing EBV miRNAs in nasopharyngeal carcinoma and several studies that have tried to evaluate their role in gastric carcinoma and lymphoma. This review aims to summarize the data known so far about the role of EBV miRNAs in altered regulation of gene expression in human cells in EBV-associated diseases.
Introduction
Epstein-Barr virus (EBV) or human herpesvirus 4 (HHV-4) is a ubiquitous human oncogenic virus that belongs to the family Herpesviridae, subfamily Gammaherpesvirinae, and genus Lymphocryptovirus [1]. Since its first description in 1964 and subsequent recognition of its ability to induce B-cell transformation in vitro, EBV has been extensively used as a model for research focusing on fundamental mechanisms of oncogenesis in humans. It is also used as a model in devising diagnostic, therapeutic, and prevention strategies in malignant diseases [2][3][4][5]. Approximately 95% of the human population is infected with EBV and will remain carriers of the virus for the rest of their lives. By modulating its own transcriptional patterns during lytic and latent stages of infection, EBV establishes a lifelong persistence in both immunocompetent and immunocompromised hosts and modifies the host's immune system effector mechanisms [6]. However, in the context of immunosuppression, regardless of the specific cause, the immune system fails to efficiently control EBV replication [7]. Subsequently, latent infection with EBV may be associated with development of various malignancies originating from epithelial cells, lymphocytes, and mesenchymal cells, including posttransplant B-cell lymphomas, Hodgkin's and non-Hodgkin's lymphomas, diffuse large B-cell lymphoma, Burkitt's lymphoma, natural killer (NK)/T-cell lymphoma, nasopharyngeal carcinoma (NPC), and gastric carcinoma (GC) [7,8]. MicroRNAs (miRNA) are small, non-coding RNA molecules of cellular or viral origin that consist of 18-22 nucleotides and play an important role in regulation of gene expression. Consequently, they affect key events in cell biology such as proliferation, apoptosis, and lipid metabolism. By binding to messenger RNAs (mRNAs), miRNAs induce degradation of mRNAs or inhibition of translation, thus reducing levels of expression of target genes. Virus-encoded miRNAs (v-miRNAs) are considered an important non-immunogenic tool for post-transcriptional regulation of both host and viral gene expression in infected cells [9]. The majority of literature data on v-miRNAs, particularly in the context of tumorigenesis, focuses on v-miRNAs in EBV, Kaposi's sarcoma-associated herpesvirus (KSHV/HHV8), human papillomaviruses (HPV), hepatitis C virus (HCV), hepatitis B virus (HBV), and Merkel Cell Polyomavirus (MCPyV) [9][10][11]. EBV was the first virus shown to express miRNAs [12]. EBV-encoded miRNAs (EBV miRNAs) target both viral and cellular mRNAs in infected cells, extending their role beyond regulating various stages of the EBV replication cycle. They influence cellular proliferation and apoptosis and play a part in driving diverse molecular pathways of oncogenesis and evading innate and adaptive immune responses. The aim of this review is to summarize current views on the role of EBV miRNAs in altered gene expression associated with immune evasion and tumorigenesis.
EBV miRNAs
EBV genome is a double-stranded DNA molecule that consists of 175 kbp containing nearly 100 genes and coding for 44 microRNAs [13][14][15]. It was first sequenced in 1984 by using M13 libraries made from viral EcoR I and BamH I fragments gathered after sonification [16]. Today, after more than 100 EBV genomes from tumor cell samples and healthy individuals' tissue have been sequenced, the complete EBV genome can be found in the NCBI GeneBank [17]. In 2004, Pfeffer et al. first described two clusters in the EBV genome responsible for production of EBV miRNAs [12]. The first cluster was found in the sequence for BamH I fragment H rightward open reading frame 1 (BHRF1) mRNA. It produces three miRNA precursors (ebv-miR-BHRF1-1, -2, and -3) and, subsequently, four mature miRNAs [18]. The second cluster is BamH I-A region rightward transcript (BART) and it encodes 22 miRNA precursors (ebv-miR-BART1-22) of 40 mature miRNA molecules [12,19] (Figure 1).
EBV miRNAs
EBV genome is a double-stranded DNA molecule that consists of 175 kbp containing nearly 100 genes and coding for 44 microRNAs [13][14][15]. It was first sequenced in 1984 by using M13 libraries made from viral EcoR I and BamH I fragments gathered after sonification [16]. Today, after more than 100 EBV genomes from tumor cells samples and healthy individuals' tissue have been sequenced, the complete EBV genome can be found in the NCBI GeneBank [17]. In 2004, Pfeffer et al. first described two clusters in the EBV genome responsible for production of EBV miRNAs [12]. The first cluster was found in the sequence for BamH I fragment H rightward open reading frame 1 (BHRF1) mRNA. It produces three miRNA precursors (ebv-miR-BHRF1-1, -2, and -3) and, subsequently, four mature miRNAs [18]. The second cluster is BamH I-A region rightward transcript (BART) and it encodes 22 miRNA precursors (ebv-miR-BART1-22) of 40 mature miRNA molecules [12,19] (Figure 1). Expression of BHRF1 miRNAs is latency stage-dependent (they are mainly expressed in type III latency), while BART miRNAs are transcribed in all latency stages [12]. Despite the coordinated Expression of BHRF1 miRNAs is latency stage-dependent (they are mainly expressed in type III latency), while BART miRNAs are transcribed in all latency stages [12]. Despite the coordinated expression of each miRNA cluster, significant differences in the expression levels of individual EBV miRNAs in the same type of human cells (as high as 50-fold) have been observed [12,22]. It has been suggested that different genotypes as well as genomic variants of EBV could be associated with different patterns of individual miRNA expression in infected cells [23][24][25]. A number of studies on EBV miRNA biosynthesis (reviewed by Wang et al., 2018) suggest that EBV gene products are not necessary for expression of each miRNA cluster, significant differences in the expression levels of individual EBV miRNAs in the same type of human cells (as high as 50-fold) have been observed [12,22]. It has been suggested that different genotypes as well as genomic variants of EBV could be associated with different patterns of individual miRNA expression in infected cells [23][24][25]. A number of studies on EBV miRNA biosynthesis (reviewed by Wang et al., 2018) suggest that EBV gene products are not necessary for v-miRNA production. This proposition indicates that the molecular mechanisms responsible for the synthesis of v-miRNAs and cellular miRNAs in host cells are similar [20]. Analysis of EBV genomic sequences in gastric carcinoma and EBV-associated lymphoma has showed that despite the genetic diversity in almost the entire EBV genome, the regions encoding miRNAs are highly conserved [26,27].
The Role of EBV miRNAs in Immune Evasion
Alongside immune evasion strategies mediated by host miRNAs and viral glycoproteins, EBV miRNAs are another means by which the virus successfully avoids effector mechanisms of the host's immune system [28]. The main strategies for immune evasion used by EBV miRNAs are the following:
Pattern-Recognition Receptor-Mediated Signaling Pathways and Interferons
EBV interferes with efficient initiation of the innate immune response at the very first step, e.g., by targeting expression of pattern recognition receptors (PRR). The impaired expression of PRR affects subsequent signal transduction as well as cytokine synthesis [29]. The two main targets for EBV miRNAs are retinoic acid-inducible protein 1 (RIG-I) receptors (mediated by miR-BART6-3p) and Toll-like receptors [20,30]. The lack of EBV molecular pattern recognition by PRR is associated with the absence of the JAK (Janus kinase)-STAT (Signal Transducer and Activator of Transcription)-mediated transduction pathway. The ineffectiveness of this signaling pathway leads to impaired synthesis of type I interferons (IFNs) and other pro-inflammatory cytokines that are essential for innate immune responses [31]. Studies on nasal NK-cell lymphoma (NNL) cells showed that EBV miRNAs (miR-BART20-5p and miR-BART8) impact the signal transduction pathway for IFN-γ by targeting STAT1, which enables the virus to avoid antiviral activity of both type I and type II IFNs [32] (Figure 2).
Natural Killer (NK) Cells
The role of EBV miRNAs in the evasion of NK-cell-mediated responses has been analyzed in nasopharyngeal carcinoma cells in vitro. Interaction between the natural killer group 2 member D (NKG2D) receptor on NK cells and major histocompatibility complex class I chain-related peptide A (MICA) is considered the key step in the recognition and killing of cancer cells. Expression of MICA in NPC is positively regulated by transforming growth factor β-1 (TGF-β1). Wong et al. (2018) showed that miR-BART7 reduces the expression of TGF-β1 in NPC cells and impairs the NK-cell-mediated recognition of virus-infected cells [33] (Figure 2).
Cytokines and Chemokines
miR-BART1, miR-BART2, miR-BART10, and miR-BART22 suppress the efficient CD8+ T-cell-mediated antiviral immune response by targeting IL-12, a pro-inflammatory cytokine responsible for differentiation of naive T-cells into mature type 1 helper T (Th1) cells, increased synthesis of IFN-γ, activation of NK- and T-cells, as well as inhibition of angiogenesis [36,37]. In addition, miR-BART6-3p interferes with the biological activity of IL-6 by targeting expression of the IL-6 receptor [38]. EBV miRNAs also enable the virus to evade Th1-mediated antiviral immunity by modulating expression of chemokines. For example, miR-BHRF1-3 targets an IFN-inducible chemokine CXCL11 (CXC-chemokine ligand 11) responsible for the selective homing of Th1 effector cells and NK-cells to the sites of infection [20,39] (Figure 2).
Antigen Presentation
EBV miRNAs impair mechanisms of specific immunity by affecting adequate antigen recognition at the level of antigen processing (reduced expression of lysosomal enzymes), transport of processed antigenic peptides to major histocompatibility complex (MHC) molecules (by targeting peptide transporter subunit TAP2), and antigen presentation (reduction of lymphocyte antigen 75 expression on dendritic cells) [20,37,[40][41][42] (Figure 2).
Specific Cellular Immunity
T-cell-mediated immunity can maintain long term immune control over EBV replication (for >50 years in some individuals) while clinical consequences associated with EBV infection in persons with impairment of T-cell development or function are shown to be very severe. This suggests that virus-specific T-cell responses represent the main means of protective immunity in EBV infection [37]. EBV miRNAs specifically target host genes coding for key regulators of T-cell responses including T-bet (miR-BART20-5p), Mucosa-associated lymphoid tissue lymphoma transport protein 1 (MALT1) (miR-BHRF1-2-5p), and C type lectin superfamily 2 member D (CLEC2D) (miR-BART1-3p and miR-BART3-3p) [43] (Figure 2).
T-bet belongs to the T-box family of transcription factors that are the main enhancers of the Th1 differentiation pathway and subsequent Th1 cell-specific IFN-γ synthesis, which are important for efficient antiviral immunity. Inhibition of T-bet translation (with subsequent suppression of p53) by EBV miRNAs, originally shown in invasive nasal NK/T-cell lymphoma cells, is supposedly associated with the inhibition of Th1 differentiation pathways. As a result, the control of EBV replication is less efficient [43].
However, T-bet also regulates transcriptional networks that are common among other types of immune cells (including dendritic cells and innate lymphoid cells) and is currently considered to have an important role in bridging innate and adaptive immunity (for review see Lazarevic et al., 2017) [44].
In addition, T-bet acts as a selective repressor of transcriptional pathways associated with type I IFNs subsequent to IFN-γ-induced signaling [44,45]. Therefore, EBV miRNA-mediated inhibition of T-bet's biological activity may have a significantly broader effect on the evasion of antiviral immunity in EBV infection than originally thought.
The Role of EBV miRNAs in Tumorigenesis
EBV association with development and progression of malignant tumors is well known, especially its frequent infection of lymphoma and carcinoma cells. Nevertheless, the role of EBV miRNAs in tumorigenesis has only recently come into focus.
The presence of diverse EBV miRNAs in different types of lymphoma was analyzed not only in patient samples, but also in cell line-based studies. In the BL41/95 cell line, derived from Burkitt's lymphoma (BL), BHRF miRNAs were detected [12]. The study by Ambrosio et al., in which the BL-derived cell line Akata was used as a model, revealed that the expression of PTEN and IL6 receptor subunits was lower in the presence of miR-BART6-3p and was restored if the cells were simultaneously transfected with a miR-BART6-3p inhibitor [48]. Zhou et al. found that the Ramos cell line (derived from EBV-negative BL), transfected simultaneously with oligonucleotides mimicking cellular miRNA-142 and miR-BART6-3p, displayed lower expression of PTEN compared to the negative control and to cells transfected with the same oligonucleotides separately. This suggests that miR-BART6-3p downregulates PTEN, a tumor suppressor that regulates the PI3K/Akt pathway, in cooperation with cellular miRNA-142 [51]. Moreover, in two cell lines derived from lymphomas of NK-cell origin (YT and NK92), it was shown that miR-BART20-5p was responsible for downregulation of T-bet, and therefore of p53 and IFN-γ [43].
EBV miRNAs in Carcinoma
EBV is also known to be associated with the development of NPC and GC. Recently, the role of EBV miRNAs was thoroughly studied in these malignancies. BART miRNAs were first found in NPC, xenografted and propagated in nude mice [52,53], and subsequently in NPC patient biopsies [54]. They were found to be highly expressed in NPC and GC, but BHRF1 miRNAs were generally not present in those entities. At least 105 host genes were shown to be regulated by BART miRNAs during carcinoma development [55], but genes recognized as EBV miRNAs' targets in B-cell lymphomas were not confirmed in carcinomas. Increased expression of BART miRNA clusters and individual BART miRNAs correlated with higher tumor grade and poor patient survival in NPC and GC.
Generally, it is believed that BART miRNAs in carcinoma act in synergy or achieve significant effects by combining their individual activities, in cooperation or competition with cellular miRNAs. It was shown that BART miRNAs target numerous transcripts of different genes, thus deregulating various downstream molecules and signaling pathways [56]. Consequently, BART miRNAs allow infected cells to avoid apoptosis by inactivating different pro-apoptotic molecules, influence cell proliferation, inhibit the expression of regulatory tumor suppressors, mediate cancer metabolism, stimulate cell migration and metastasis, inhibit cell differentiation, and manage immune evasion and regulation of virus latency through coordination of cellular and viral signaling pathways [49,57,58] (Table 1). BART miRNAs also modulate host cell pathways by mimicking cellular miRNAs. Although this feature remains underexplored, several BART miRNAs were shown to have "seed" sequences similar to those of cellular miRNAs: miR-BART5 compared to miR-18a and miR-18b [66,79], miR-BART9 to miR-200a and miR-141 [72], miR-BART15-3p to miR-223-3p, and miR-BART18-5p to miR-26a [35,80].
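To make the notion of seed mimicry concrete, the toy Python sketch below compares the seed regions (nucleotides 2-8) of two miRNA sequences; the sequences and the simple identity check are illustrative assumptions only, not the actual miR-BART or cellular miRNA sequences discussed above.

```python
# Toy illustration of "seed" comparison between two miRNAs.
# Sequences below are invented placeholders, NOT real viral or cellular miRNAs.

def seed_region(mirna: str) -> str:
    """Return the seed region, conventionally nucleotides 2-8 (1-based)."""
    return mirna[1:8]

viral_mirna = "UACGUACGUACGUACGUACGUA"     # hypothetical EBV BART-like miRNA
cellular_mirna = "AACGUACGGGUUUACCAAGGUU"  # hypothetical cellular miRNA

if seed_region(viral_mirna) == seed_region(cellular_mirna):
    print("Identical seed regions -> potentially overlapping target repertoires")
else:
    print("Different seed regions -> likely distinct target sets")
```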
Overall, BART miRNAs are highly expressed in NPC and GC types associated with EBV infection where they act synergistically and have redundant activities, but also possibly differ in their target genes in different intracellular milieus.
Conclusions
We envision that further identification of the cellular processes affected and regulated by EBV miRNAs will contribute to a better understanding of the role of viral non-coding RNAs in the development of virus-induced cancers in humans. In the case of EBV, despite extensive research, there are currently no antiviral drugs or EBV vaccines approved for use in humans. In years to come, better evaluation and understanding of viral miRNA mechanisms might reveal new biomarkers and potential therapeutic targets.
|
2020-05-14T13:03:13.738Z
|
2020-05-01T00:00:00.000
|
{
"year": 2020,
"sha1": "468199e8e31bf06fc2944167a9096db10ec44ad6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0817/9/5/353/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f9c10478587a9cab154fb16d8ea8533820eaa23",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
255456850
|
pes2o/s2orc
|
v3-fos-license
|
Regulation of Osteoblast Differentiation and Iron Content in MC3T3-E1 Cells by Static Magnetic Field with Different Intensities
Many studies have indicated that static magnetic fields (SMFs) have positive effects on bone tissue, including bone formation and the bone healing process. Evaluating the effects of SMFs on bone cell (especially osteoblast) function and exploring the underlying mechanism is critical for understanding the possible risks or benefits of SMFs for the balance of bone remodeling. Iron and magnetic fields have a natural relationship, and iron is an essential element for normal bone metabolism. Iron overload or deficiency can cause severe bone disorders, including osteoporosis. However, there are few reports regarding the role of iron in the regulation of bone formation under SMFs. In this study, a hypomagnetic field (HyMF) of 500 nT, a moderate SMF (MMF) of 0.2 T, and a high SMF (HiMF) of 16 T were used to investigate how the osteoblast (MC3T3-E1) responds to SMFs and how osteoblast iron metabolism behaves under SMFs. The results showed that SMFs did not pose severe toxic effects on osteoblast growth. During cell proliferation, the iron content of osteoblast MC3T3-E1 cells was decreased in HyMF, but was increased in MMF and HiMF after exposure for 48 h. Compared to the untreated control (i.e., geomagnetic field, GMF), HyMF and MMF exerted deleterious effects on osteoblast differentiation by simultaneously retarding alkaline phosphatase (ALP) activity, mineralization, and calcium deposition. However, when exposed to HiMF of 16 T, the differentiation potential showed the opposite tendency, with enhanced mineralization. The iron level was increased in HyMF, constant in MMF, and decreased in HiMF during cell differentiation. In addition, the mRNA expression of transferrin receptor 1 (TFR1) was promoted by HyMF but was inhibited by HiMF. At the same time, HiMF of 16 T and MMF of 0.2 T increased the expression of ferroportin 1 (FPN1). In conclusion, these results indicate that osteoblast differentiation can be regulated by altering the strength of the SMF, and that iron is possibly involved in this process.
Introduction
All organisms on the Earth are continuously exposed to the intrinsic geomagnetic field (GMF, 25-65 μT), which plays an essential role in life. Besides the GMF, the chances of humans being exposed to various static magnetic fields (SMFs) have increased considerably with the rapid development of science and technology, for example through magnetic resonance imaging (MRI), overhead cables with high-voltage direct current, and some transportation systems based on magnetic levitation. Furthermore, the intensity of the SMF in deep space, as experienced by astronauts, is much lower than the GMF: < 300 nT on the moon and ~6 nT in interplanetary space. Growing evidence suggests that deprivation of the GMF (i.e., hypomagnetic field, HyMF) has adverse impacts on many functional states of organisms (reviewed in [1]). Thus, studying the biological effects under HyMF can help not only to better understand the GMF's role in human health, but also to predict the potential effects of HyMF on astronaut health during interplanetary navigation. Under a high SMF (HiMF) of up to several teslas, vertigo, nausea, and phosphenes may occur in some people due to peripheral nerve stimulation and perturbation of the vestibular system. Nevertheless, there is no convincing evidence that a moderate SMF (MMF) or HiMF would induce any adverse effects [2]. Certain SMFs are also used nowadays to maintain health or treat some diseases [3]. Therefore, it is necessary to systematically elucidate the biological effects and mechanisms of SMFs ranging from hypomagnetic (HyMF, < 5 μT) and weak (WMF, 5 μT-1 mT) to moderate (MMF, 1 mT-1 T) and high (HiMF, > 1 T).
Many animal studies concerned with health effects have demonstrated that SMFs of moderate intensity can improve bone formation, with increased bone mineral density (BMD), and enhance bone healing in numerous circumstances, such as bone surgical invasion [4], ischemic bones [5], adjuvant arthritis rats [6], bone fracture [7], ovariectomized rats [8], and bone grafts [9]. HyMF aggravates bone loss induced by hindlimb unloading in rat femurs [10]. Most studies attribute these positive impacts on bone to enhanced osteoblast function [11]. Osteoblasts arise from mesenchymal stem cells and function in bone synthesis and mineralization. At the cellular level, SMFs do modulate cell behaviors and functions [12], such as morphology, proliferation, cell cycle distribution, apoptosis, differentiation, and gene expression. Previous studies demonstrate that SMFs, especially MMF, promote osteoblast differentiation. The ability of osteogenic differentiation in various osteoblastic cells is enhanced under moderate SMF, including the human osteosarcoma cell line MG63 [13][14][15], mouse calvarial osteoblast MC3T3-E1 [16][17][18], and rat calvaria cells [19]. It should be noted that almost all of this research focuses on MMF generated by permanent magnets because it is easy to realize. Moreover, the experimental designs, exposure facilities, magnetic inductions (ranging from HyMF to HiMF), and types of biological samples used (animals, cells, or molecules) are largely heterogeneous. Thus, it is difficult to draw a definite conclusion about how bone cells respond to SMFs.
Iron is essential for almost all living organisms and is crucial for many biological processes such as oxygen transport and enzymatic reactions [20]. In recent years, preclinical and clinical studies have demonstrated a close relationship between iron metabolism and bone metabolism [21]. Iron overload or iron deficiency can cause abnormal bone metabolism or osteoporosis. Excess iron has been shown in vitro to inhibit the biological activity of osteoblasts [22][23][24]. Low iron, in contrast, inhibits osteoblastogenesis in vitro as well [24]. Although iron and magnetic fields have a natural relationship, there are few studies concerning the effect of SMFs on iron at the biochemical level. Recently, a study showed that SMF exposure at 128 mT alters the plasma levels of iron in rats [25].
In order to comprehensively examine the regulatory role of SMFs on the osteoblast, and to determine whether iron is involved in the altered bone formation by osteoblasts under SMFs, the present study investigated differentiation and iron changes of osteoblastic MC3T3-E1 cells under SMFs ranging from HyMF and GMF to MMF and HiMF. Three types of SMF exposure systems were used according to our previous research [26,27]: HyMF of about 500 nT (created by a magnetic shielding box), MMF of 0.2-0.4 T (in the space surrounding a superconducting magnet), and HiMF of 16 T (generated by a superconducting magnet) were employed to simultaneously investigate the SMF effects.
Materials and Methods
The Facilities of SMF Exposure for Cell Culture

HiMF of 16 T was generated by a superconducting magnet (JASTEC, Kobe, Japan), and the cell culture was maintained in the central bore of the magnet (Fig. 1a and d). The magnetic field was highest along the Z axis at 238.7 mm, where its intensity was about 16 T (Fig. 1e). MMF of 0.2 T was achieved in a circular space around the superconducting magnet, where the distribution of the magnetic field was about 0.2-0.4 T with a decreasing gradient of 2 T/m along the radial direction (Fig. 1d and f). For cell culture, we established an experimental platform for biological research in and around the superconducting magnet as previously described [28][29][30]. The temperature was kept at 37 °C by heated circulating water baths, and the concentration of CO2 was 5%, calibrated by a CO2 analyzer (Geotech, Leamington, UK).
HyMF was achieved by magnetic shielding technology [10]. A magnetic shielding box (550 mm × 420 mm × 420 mm) made of permalloy (NORINDAR International, Shijiazhuang, Hebei, China) was used to create a hypomagnetic condition in which the magnetic field strength was approximately 500 nT (Fig. 1b and c). The shielding box was placed in a cell incubator (Thermo Fisher Scientific, Waltham, MA, USA) and a fan was installed to ensure optimal cell culture conditions (5% CO2, 37 °C).
Cells of the GMF control were cultured in a normal cell incubator (Thermo Fisher Scientific) where the magnetic field was about 45 μT, slightly lower than the local GMF in the laboratory (~55 μT) due to the magnetic shielding effect of the incubator. The intensity of the magnetic field was measured by a gaussmeter (Lake Shore Cryotronics, Westerville, OH, USA). The alternating current (AC) magnetic fields generated by the incubator and by the fans of the magnetic shielding box were measured previously [31]. The AC field in the GMF control incubator and in the magnetic shielding chamber was 1013.2 ± 157.5 nT and 12.0 ± 0.0 nT, respectively, which is much smaller than the intensity of the GMF. Besides, the predominant frequency was 50 Hz, equal to the power line frequency used. The temperature and CO2 concentration were set at 37 °C and 5%, respectively, to ensure optimal cell culture conditions.
Cell Culture
Murine osteoblastic cell line MC3T3-E1 Subclone 4 [32] was used in this study and kindly provided by Prof. and Dr. Hong Zhou of the University of Sydney. The osteoblastic MC3T3-E1 cells were maintained by α-Minimum Essential Medium (α-MEM; Gibco, Grand Island, NY, USA), supplemented with 2 mM L-glutamine, 10% (v/v) fetal bovine serum (FBS; Gibco) in a humidified 5% CO 2 atmosphere at 37°C.
Hematoxylin-Eosin Staining
Cell morphology was monitored by hematoxylin-eosin (HE; Beyotime, Shanghai, China) staining. The cells were seeded on coverslips and pre-cultured for 24 h at a density of 3000 cells/cm 2 and then continuously exposed to SMF for 2 days. After that, cells were fixed by 4% paraformaldehyde, and then stained by 0.5% hematoxylin for 7 min and 0.5% eosin for 7 min. Digital images were obtained by using a Nikon Eclipse 80i microscope (Nikon, Tokyo, Japan). For statistical analysis, we selected 100 cells per group to quantify cell area and diameter of MC3T3-E1 cells by Image J software (National Institutes of Health, USA; http://imagej.nih.gov/ij/).
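As a rough scripted analogue of the ImageJ area and diameter measurement described above, the following Python sketch uses scikit-image; the file name, thresholding rule, and debris cutoff are illustrative assumptions rather than the exact workflow used in the study.

```python
# Rough analogue of the ImageJ cell-area/diameter measurement using scikit-image.
# File name, thresholding rule, and the debris cutoff are illustrative assumptions.
from skimage import io, filters, measure

image = io.imread("he_stained_cells.png", as_gray=True)  # hypothetical image file
threshold = filters.threshold_otsu(image)
mask = image < threshold          # HE-stained cells appear darker than background

labels = measure.label(mask)
for region in measure.regionprops(labels):
    if region.area > 50:          # skip small debris (area in pixels)
        print(f"cell area: {region.area} px^2, "
              f"equivalent diameter: {region.equivalent_diameter:.1f} px")
```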
Cell Proliferation Assay
The cells (8000 cells/cm2) were plated in 96-well plates (Corning, NY, USA). The proliferation of MC3T3-E1 cells was measured by MTT assay. Briefly, osteoblasts were cultured continuously in SMFs for 48 h; thereafter, MTT dye solution was added. After a further 4 h of incubation, the supernatant was removed and DMSO was added to solubilize the MTT formazan. The absorbance was read at 570 nm using a microplate reader (Bio-Rad Laboratories, Hercules, CA, USA).
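For illustration, a minimal sketch of how relative proliferation could be derived from the 570 nm readings is shown below; the absorbance values and the blank correction are hypothetical, not data from this study.

```python
# Hypothetical MTT readings at 570 nm (replicate wells); values are illustrative only.
blank = 0.05
control_gmf = [0.62, 0.65, 0.60]   # GMF control wells
treated_16t = [0.78, 0.75, 0.80]   # 16 T-exposed wells

def mean(values):
    return sum(values) / len(values)

control_signal = mean(control_gmf) - blank   # blank-corrected control mean
treated_signal = mean(treated_16t) - blank   # blank-corrected treated mean

relative_proliferation = 100.0 * treated_signal / control_signal
print(f"Proliferation vs. GMF control: {relative_proliferation:.1f}%")
```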
Cell Cycle Distribution Assay
MC3T3-E1 cells were first seeded at 3000 cells/cm2 in petri dishes of 35 mm diameter and pre-cultured for 24 h. After that, the cells were synchronized at the G0/G1 phase by serum starvation (α-MEM with 1% FBS) for 24 h. Then, the cells were transferred into normal medium and released in SMFs for 24 h. For cell cycle analysis, cells were washed with ice-cold phosphate-buffered saline (PBS), fixed in 75% ice-cold ethanol overnight, and stained with 50 μg/ml propidium iodide (PI; Sigma-Aldrich) and 1 mg/ml RNase A (Sigma-Aldrich) for 60 min. The cell cycle was detected and analyzed with a flow cytometer (BD Bioscience, Franklin Lakes, NJ, USA).

Fig. 1 In this study, HiMF and MMF were produced by a vertical cylindrical-type superconducting magnet (a). The side view of the magnet (d) illustrates the position of the cell culture, where the arrow represents the magnetic field direction. Panels e and f show the distribution of the magnetic field in the center bore of the magnet along the Z axis and in the circular area around the magnet, respectively. A permalloy magnetic shielding box designed for the realization of HyMF was placed in a CO2 incubator (b). As shown in the side view of the magnetic shielding box (c), a fan was installed on the top to facilitate gas and heat exchange with the cell culture incubator. B magnetic flux density, T tesla, R radius from the center of the superconducting magnet
Mineralization Assay
The MC3T3-E1 cells (5 × 10^4 cells/cm2) were seeded into 35 mm petri dishes. At confluence, osteogenesis by osteoblast MC3T3-E1 cells was induced by cell culture medium with ascorbic acid (50 μg/ml; Sigma-Aldrich) and β-glycerophosphate disodium salt hydrate (10 mM; Sigma-Aldrich). The cells were then continuously exposed to SMF for 8 days, and the osteogenic medium was changed every 48 h. For the mineralization assay, mineralized osteoblast cultures were fixed in 4% paraformaldehyde and then stained with 0.1% Alizarin red S (Sigma-Aldrich). Positive alizarin red staining for calcium represented the calcium phosphate of osteoblast culture mineralization. Alizarin red-stained osteoblast cultures were photographed with a scanner, and the total area of red calcified nodules was measured by Image J software (National Institutes of Health).
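A minimal sketch of how the red-stained nodule area could be estimated from a dish scan is given below as an approximation of the ImageJ measurement; the file name and the red-minus-green threshold are illustrative assumptions.

```python
# Sketch of estimating the alizarin red-positive area fraction from an RGB dish scan.
# File name and threshold rule are illustrative assumptions.
from skimage import io

scan = io.imread("alizarin_dish_scan.png")   # hypothetical RGB scan of a 35 mm dish
red = scan[..., 0].astype(float)
green = scan[..., 1].astype(float)

red_positive = (red - green) > 40            # pixels markedly redder than background
area_fraction = 100.0 * red_positive.mean()
print(f"Calcified nodule area: {area_fraction:.1f}% of the scanned dish")
```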
ALP Activity Assay
The MC3T3-E1 cells (5 × 10^4 cells/cm2) were seeded into 96-well plates. At confluence, osteogenic medium was used and changed every 48 h. After uninterrupted treatment with SMF, cells were harvested at certain time points (from 2 to 8 days at 2-day intervals). Intracellular ALP activity was evaluated using the p-nitrophenyl phosphate (pNPP; Sigma-Aldrich) assay, based on the ability of phosphatases to hydrolyze pNPP to p-nitrophenol (pNP), a yellow soluble product under alkaline conditions with absorbance at 405 nm. Cells were washed twice with PBS and then lysed by three repeated freeze-thaw cycles in which cells were placed at −80 °C and at room temperature for 15-min intervals. One hundred and fifty microliters of pNPP was added into each well and incubated at 37 °C for 30 min. Absorbance was read at 405 nm. ALP activity was expressed as nanomoles of pNPP hydrolyzed per 30 min per well (Corning, Tewksbury, MA, USA). Total protein was measured with a BCA kit (Thermo Fisher Scientific).
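As an illustration of converting A405 readings into ALP activity (nmol pNP per 30 min per well), a sketch based on a linear pNP standard curve is shown below; the standard-curve points and the sample reading are hypothetical.

```python
# Sketch of converting A405 readings to ALP activity via a linear pNP standard curve.
# Standard-curve points and the sample reading are hypothetical placeholders.
import numpy as np

std_nmol = np.array([0, 10, 20, 40, 80])            # pNP standards (nmol/well)
std_a405 = np.array([0.02, 0.11, 0.21, 0.40, 0.79]) # corresponding A405 readings

slope, intercept = np.polyfit(std_nmol, std_a405, 1)  # A405 = slope*nmol + intercept

def alp_activity(a405_sample):
    """ALP activity as nmol pNP formed per 30 min per well."""
    return (a405_sample - intercept) / slope

print(f"A405 = 0.35 -> {alp_activity(0.35):.1f} nmol pNP / 30 min / well")
```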
Calcium Deposition and Iron Content Assay
The calcium deposition and iron content in osteoblast cultures were determined by atomic absorption spectrometry (AAS; Analytik Jena, Jena, Germany) as previously described [26]. At the mineralization period, the cell culture was washed with 0.9% NaCl and then dissolved in 1 ml of 65% HNO3 at 60 °C for 2 h. The dried samples were dissolved in 10 ml of 0.1% HNO3. Calcium content in cells and iron content in the culture supernatant were detected by flame AAS, while iron content in cells was determined by graphite furnace AAS.
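To illustrate the normalization applied to the AAS iron measurements in the proliferation experiments, a small sketch is given below; all numbers are hypothetical placeholders.

```python
# Sketch of normalizing total iron (from AAS) to cell number and total protein per dish.
# All numbers are hypothetical placeholders, not measured values from this study.

total_iron_ng = 0.5        # total iron per dish (ng), e.g., from graphite furnace AAS
cell_count = 2.4e5         # cells per dish
total_protein_mg = 0.21    # total protein per dish (mg), from a BCA assay

iron_per_cell_fg = total_iron_ng * 1e6 / cell_count    # ng -> fg, then per cell
iron_per_protein = total_iron_ng / total_protein_mg    # ng iron per mg protein

print(f"Iron per cell: {iron_per_cell_fg:.2f} fg")
print(f"Iron per unit protein: {iron_per_protein:.2f} ng/mg")
```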
qPCR

After 2 days of mineralization, total RNA was extracted with the TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol and was used to synthesize cDNA with the PrimeScript™ RT reagent kit (TaKaRa, Liaoning, China). The mRNA expression levels of the genes of interest were then analyzed with quantitative real-time polymerase chain reaction (qPCR) assays performed on a CFX96 Touch qPCR System (Bio-Rad Laboratories, Hercules, CA, USA) using SYBR Premix Ex Taq™ (TaKaRa). The specific pairs of primers are listed in Table 1. The data were calibrated to GAPDH and analyzed via the 2^−ΔΔCt method.
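The 2^−ΔΔCt calculation mentioned above can be written out as a short sketch; the gene names and Ct values below are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-quantification method.
# All Ct values below are hypothetical placeholders, not measured data.

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Return fold change of a target gene relative to a reference gene
    (e.g., GAPDH) using the 2^(-ΔΔCt) method."""
    dct_treated = ct_target_treated - ct_ref_treated   # ΔCt in treated sample
    dct_control = ct_target_control - ct_ref_control   # ΔCt in control sample
    ddct = dct_treated - dct_control                   # ΔΔCt
    return 2 ** (-ddct)

# Example: hypothetical TFR1 Ct values under HiMF vs. GMF control
fold = ddct_fold_change(ct_target_treated=26.1, ct_ref_treated=18.0,
                        ct_target_control=24.9, ct_ref_control=18.1)
print(f"Relative TFR1 expression (treated vs. control): {fold:.2f}")
```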
Statistical Analysis
All experiments were performed at least in triplicate. Summary data are reported as mean ± standard deviation (SD) and were compiled and analyzed with GraphPad Prism software (GraphPad, La Jolla, CA, USA). Group differences were assessed by one-way analysis of variance (ANOVA) with the Newman-Keuls post hoc test. P < 0.05 was considered statistically significant.
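For reference, the ANOVA step can also be reproduced outside Prism as sketched below with SciPy; the group values are hypothetical, and the Newman-Keuls post hoc comparison performed in Prism is not reimplemented here.

```python
# Minimal sketch of the one-way ANOVA step using SciPy; group values are hypothetical.
# The Newman-Keuls post hoc test used in GraphPad Prism is not reproduced here.
from scipy import stats

gmf  = [1.00, 0.97, 1.03]   # hypothetical normalized measurements, GMF control
hymf = [0.84, 0.88, 0.86]   # 500 nT
mmf  = [0.92, 0.95, 0.90]   # 0.2 T
himf = [1.21, 1.18, 1.25]   # 16 T

f_stat, p_value = stats.f_oneway(gmf, hymf, mmf, himf)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```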
Effects of SMFs on the Growth of Osteoblastic MC3T3-E1 Cells
In the present study, we first evaluated whether or not osteoblastic MC3T3-E1 cells could survive and grow well under SMF. The MC3T3-E1 morphology was examined. After SMF treatment for 2 days, MC3T3-E1 cells did not detach or become apparently thinner (Fig. 2a). However, when grown under 500 nT, MC3T3-E1 cells showed an increase in spread area (Fig. 2b).
In order to further investigate whether cells can grow well in these extreme man-made environments, cell proliferation was evaluated. After exposure to 16 T for 48 h, the proliferation of MC3T3-E1 cells was distinctly accelerated compared with that of cells cultured under GMF, while the proliferation of cells exposed to HyMF and MMF for the same time was not significantly changed (Fig. 2c). This indicated that MC3T3-E1 cells could grow well regardless of whether they were treated with SMFs, including HyMF, MMF, and HiMF.
To verify whether the SMF effects on cell proliferation were associated with cell cycle distribution, we tested whether or not SMF could alter the cell cycle distribution of MC3T3-E1 cells using flow cytometry. After MC3T3-E1 cells were synchronized at the G0/G1 phase by serum starvation and then released in normal medium under SMFs for 1 day, SMF exposure caused an increase in the proportion of G2/M-phase cells and a significant decrease in S-phase cells (Fig. 2d). These results may in part account for the stimulatory effect on cell proliferation.
Effects of SMFs on Iron Level of Osteoblastic MC3T3-E1 Cells During Proliferation
During cell proliferation, osteoblastic MC3T3-E1 cells were exposed to SMFs for 48 h; there was no significant change in the total iron content of the cell culture dishes (Fig. 3a), but the level of iron in the cell culture medium was decreased significantly under SMFs (Fig. 3b). In order to eliminate the effect of inconsistent cell proliferation under SMFs, we normalized the total iron content to the number of cells and to the total protein content per dish. The results indicated that elemental iron in each cell was increased under MMF of 0.2 T and HiMF of 16 T, but showed no significant alteration in HyMF of 500 nT (Fig. 3c). Similarly, iron content per unit protein was elevated in MMF and HiMF, but did not show any changes in HyMF (Fig. 3d).
Effects of SMFs on the Formation of Mineralization in Osteoblastic MC3T3-E1 Cells
Osteogenesis from osteoblast is a complex process that involves three stages: cell proliferation, matrix maturation, and matrix mineralization [32]. When matrix maturation occurs, there is extensive expression of ALP and several matrix proteins, including BSP, Col, DMP1, OC, OPN, etc. Then, minerals mainly in the form of hydroxyapatite crystals are deposited in the matrix.
The effects of SMFs on matrix mineralization of MC3T3-E1 cells were examined after 8 days of continuous exposure. Matrix mineralization was characterized by analyzing the formation of calcified nodules. For the osteoblasts treated for 8 days, the area of formed nodules in the 16 T group was significantly larger than that of the control group, while the formed nodules in the 500 nT and 0.2 T groups were fewer than in GMF (Fig. 4a and b). In order to further evaluate the degree of mineralization, atomic absorption spectrometry was utilized to precisely analyze the calcium content in the mineralized cultures. Calcium deposition was enhanced by HiMF and declined under HyMF and MMF at day 8 (Fig. 4c).
Effects of SMFs on ALP Activity of Osteoblastic MC3T3-E1 Cells
Mineralization is accompanied by increased activity and expression of ALP, which is regarded as a marker of osteoblast differentiation. Consistently, the alteration of ALP activity followed a similar tendency to matrix mineralization. Compared to the GMF control, treatment with 16 T significantly increased ALP activity at day 8 (Fig. 5a). Moreover, total protein during osteoblast differentiation was significantly higher than in the GMF control (Fig. 5b). These results indicate that the promotion of ALP activity under HiMF of 16 T may be associated with enhanced expression and secretion of ALP.
Effects of SMFs on mRNA Expressions of Matrix Proteins
During differentiation, the osteoblast needs to synthesize and secrete bone matrix proteins including BSP, Col I, OC, OPN, DMP1, etc. In this study, total proteins were determined to represent the expression of matrix proteins during osteoblast differentiation. These osteogenic gene markers were detected by polymerase chain reaction. Exposure to 16 T significantly elevated the content of total protein at day 8 (Fig. 5b), which may be related to increased expression of these bone matrix proteins after a 2-day stimulus of HiMF exposure (Fig. 6). The mRNA expression of ALP, BSP, and DMP1 was suppressed in MC3T3-E1 cells treated under 0.2 T. Moreover, cells from the 500 nT group expressed less BSP and DMP1, but more Col I and OC, than the control group.
Effects of SMFs on Iron Metabolism of Osteoblasts During Differentiation
During mineralization, HyMF of 500 nT increased the level of iron in cells (Fig. 7a) and slightly reduced the iron content in the medium (Fig. 7b). In contrast, the level of iron was decreased in cells and elevated in the culture supernatant under HiMF of 16 T. However, there were no significant changes in the iron content of cells or medium under 0.2 T (Fig. 7).
Under physiological conditions, iron in the circulation is generally bound to transferrin (Tf). The uptake of Tf-bound iron through the membrane-bound transferrin receptor 1 (TFR1) is the main source of iron for most cells [33]. Once inside the cell, iron can be utilized and stored. Intracellular iron can be stored in ferritin; ferritins are composed of 24 similar subunits of two types, H and L. The H-subunit (H-ferritin) is responsible for the rapid oxidation of ferrous to ferric iron at a dinuclear center, whereas the L-subunit (L-ferritin) appears to help iron clearance from the ferroxidase center of the H-subunit and support iron nucleation and mineralization [34]. During iron egress, iron is exported by ferroportin 1 (FPN1), which is currently the only iron exporter identified in mammals [35]. In this experiment, the mRNA expression of several genes related to cellular iron metabolism was assessed, including TFR1, H-ferritin, L-ferritin, and FPN1 (Fig. 7c).
Discussion
To date, SMF has received increased attention because exposure from many different sources occurs in various situations, and its possible health effects have been studied in many fields such as the cognitive, cardiovascular, immune, and skeletal systems. In the present study, we have assessed the influence of a broad range of SMFs, from HyMF and GMF to MMF and HiMF, on osteoblast differentiation and have tried, for the first time, to understand the potential mechanism via iron metabolism. We found that such extreme SMF environments as GMF deprivation or HiMF did not have lethal effects on osteoblast viability. Osteoblast differentiation, as well as cellular iron uptake and iron efflux, can be controlled by altering the parameters of the SMF, such as the magnetic flux intensity.

Fig. 4 Effects of SMFs on the mineralization process of osteoblastic MC3T3-E1 cells. Osteogenic differentiation was confirmed by alizarin red S staining (a) and analyzed by nodule area per dish (diameter, 35 mm) at day 8 (b). Calcium deposition during mineralization was detected by flame atomic absorption spectrometry and expressed as milligram per dish at day 8 (c), n = 3. All SMF groups were compared with the GMF of 0.05 mT group. Data shown are mean ± SD, *P < 0.05

Fig. 5 Effects of SMFs on ALP activity during osteoblast differentiation. a ALP activity in MC3T3-E1 cultures was detected by the pNPP method at 8 days and expressed as micromoles of pNPP hydrolyzed per 30 min per well. b During differentiation, total protein was measured by BCA kit and expressed as microgram per well, n = 3. All SMF groups were compared with the GMF of 0.05 mT group. Data shown are mean ± SD, *P < 0.05
We first examined whether or not MC3T3-E1 cells could survive well under such a broad range of SMF intensities by means of morphology, proliferation, and cell cycle distribution. Cell morphology demonstrated that SMFs did not result in distinct modifications of cell shape, but osteoblasts showed an increase in spread area under HyMF. However, a recent study showed that HyMF inhibited cell adhesion in human neuroblastoma cells, and the cells were smaller in size and rounder in shape [36]. These seemingly contradictory findings can be attributed to different cell types.
In this study, the results indicated that the effect of SMFs on cell proliferation differed in its characteristic responses to different magnetic intensities. Numerous studies have investigated SMF effects on proliferation in various osteoblastic cell lines, and the results are controversial. Our previous studies showed that 16 T promotes osteoblast proliferation in MC3T3-E1 and MG63 cells [37,38], but exposure to HiMF of 8 T exerts no effect on the proliferation of MC3T3-E1 cells [18]. On the other hand, the proliferation rate of MC3T3-E1 cells was decreased after treatment with MMF of 250 mT [17]. Results from Huang et al. [14] indicate that the effects of SMF on osteoblast proliferation are associated with the initial cell densities. Cell-cell contact may influence the degree of reorientation and deformation of the lipid bilayer or of proteins embedded in the membrane by SMF. As the SMF affected cell proliferation, we monitored the cell cycle progression of MC3T3-E1 cells throughout the incubation period. Exposure to 500 nT, 0.2 T, and 16 T not only decreased the percentage of S-phase cells, but also increased the number of G2/M-phase cells, indicating transition through the DNA-synthesis S phase into the mitotic G2/M phase. These results suggest that the altered proliferation in osteoblasts may be due to the S-to-G2/M transition in the cell cycle. Together, these published reports, along with our findings, suggest that SMFs from HyMF and MMF to HiMF do not have lethal effects on osteoblast viability.
Iron is essential for cellular growth and crucial to many fundamental cellular processes, including DNA synthesis, respiration, cell cycle regulation, and the function of proteins [33]. In this study, we observed the effects of SMF on cell proliferation together with changed iron levels. Under 16 T, osteoblast proliferation and the elemental iron of each cell were dramatically promoted at 48 h. Therefore, it is possible that increased elemental iron promotes cell proliferation. Under MMF of 0.2 T, although the iron content in each osteoblast was increased, cell proliferation did not change significantly. This inconsistent result remains a puzzle, because there is no study on changes in iron content during cell proliferation under SMFs.
Bone ALP, specifically synthesized by osteoblasts, removes phosphate groups to form hydroxyapatite deposited in bone, reflecting the biosynthetic activity of the osteoblast [39]. ALP activity is a sensitive and reliable indicator of osteoblast differentiation. The ALP activity and mineralization function of human osteoblast cells (hFOB1.19) were decreased by ferric ammonium citrate (FAC, a complex salt composed of iron, ammonia, and citric acid) and increased by deferoxamine (DFO, an iron chelator) in a concentration-dependent manner [22,40]. Here, we found that the iron content in osteoblasts under HiMF of 16 T was decreased at day 2. Consistently, ALP activity and mineralization of 16 T-treated osteoblasts were found to be facilitated after 8 days of continuous exposure. These results indicate that osteoblast differentiation under 16 T is promoted possibly by reducing the level of iron in cells. In addition, ALP activity and osteoblast differentiation were restrained under 500 nT, with increased iron levels. However, although osteoblast differentiation was inhibited under 0.2 T, no changes were found in the iron content of osteoblasts. Our studies demonstrate that the SMF effect is highly dependent on flux intensity, and iron is potentially involved in this effect. The flux intensities used in this study are discussed individually below.
GMF is an essential environmental element on the Earth. Conversely, elimination of GMF (i.e., HyMF) poses many adverse impacts on living organisms [1]. We previously found that HyMF of 300 nT alone did not lead to bone loss, microstructure alterations, or changes in mechanical properties in rat femurs, but it elevated the concentration of serum iron and aggravated bone loss induced by hindlimb unloading in rats [10]. Iron is a trace element that has important functions in vivo. In the skeletal system, both excess and insufficient iron can reduce bone mass. In vitro, iron overload inhibited osteoblast differentiation, including the activity of ALP, the deposition of calcium, and the growth of hydroxyapatite crystals [23,41]. In the current study, HyMF of 500 nT increased the level of iron in cells at the early stage. Afterwards, osteoblast differentiation and mineralization were inhibited in HyMF of 500 nT through restraint of ALP activity, calcium deposition, and mineralized nodule formation. During long-term space exploration, osteoporosis tends to occur in astronauts, especially in load-bearing bone, due to the lack of gravitational stress [42,43]. Data from the Mir and ISS space stations show that mechanical stimulation in the form of exercise is still not enough to prevent bone loss in long-duration spaceflight [44,45]. Considering the lack of GMF in outer space, this indicates that HyMF may aggravate the bone loss due to microgravity [10]. Moreover, data from the Spacelab 1 mission showed that ferritin increased by 53% by the seventh day of spaceflight and by 62% by landing day [46]. Iron storage and availability are increased after spaceflight [47]. Therefore, it is possible that increased tissue iron availability contributes to spaceflight-induced bone loss.
The results demonstrated that MMF of 0.2 T decreased nodule formation, which is not consistent with the prevalent opinion. Most research shows that osteoblast differentiation can be accelerated under MMF, while several studies report inconsistent results. For example, osteoblast differentiation in MG63 cells was unaltered at 0.25 and 0.32 T, but increased at 0.4 T [48,49]. For MC3T3-E1 cells, intensities of 150 mT [16], 250 mT [17], and 8 T [18] have a beneficial effect, while the 0.2 T used in the present study decreased bone formation. Considering the diversity of intensity, duration, cell type, and treatment, it is impossible to draw a convincing conclusion about the lowest or highest intensity needed to enhance or decrease bone formation. In this study, the iron content of osteoblasts did not show any change after 0.2 T treatment for 2 days. However, our previous study observed increased iron accumulation after 10 days of differentiation under 0.2 T. These seemingly contradictory findings can be attributed to the length of exposure time and the different stages of mineralization. The exact mechanisms are not clear yet. There may be an "amplitude window" around 0.2 T for the promotion of differentiation and the variation of iron in osteoblasts. On account of this, it is necessary to investigate the MMF effects by using the same stimulus parameters and cell type and to find the corresponding threshold. This will be our next research aim.
HiMF generated by superconducting technology has been widely used in medical and engineering fields, so there is a great potential for exposure to higher magnetic flux densities up to the tesla order. Our studies showed that HiMF of 16 T promoted not only osteoblast proliferation but also the mineralization process. Furthermore, during differentiation, HiMF of 16 T decreased the level of iron in cells and increased it in the medium; meanwhile, the mRNA expression of TFR1 was suppressed while the expression of FPN1 was increased. These results indicate that HiMF inhibits iron uptake while promoting iron efflux in osteoblastic MC3T3-E1 cells. In a previous study, however, we showed that iron levels in osteoblasts were increased, with more bone nodules, after 10 days of differentiation in HiMF of 16 T [26]. This may be because the cellular and molecular modifications induced by SMF, even with the same parameters, are highly dependent on the biological status of the exposed cells, such as the age of the cells, mitogen activation, and so on [50,51]. Osteoblast growth and differentiation can be characterized by three principal periods: proliferation, matrix maturation, and mineralization. Under most circumstances, the different stages are not all promoted or inhibited under a specific intensity of SMF. Imaizumi et al. [17] investigated the long-term effects of SMF on osteoblast differentiation in MC3T3-E1 cells. After 1 month of continuous exposure, ALP activity was not altered at an early phase, but the mineralization process was enhanced. These findings suggest that different stages of osteoblast differentiation may have different responses to SMF stimulus.
Iron can exist in two valence states, Fe(II) and Fe(III), whose magnetic properties are quite different [52]. Fe(II) can be either paramagnetic with an effective spin of 2 (high-spin state) or diamagnetic (low-spin state), while Fe(III) is always paramagnetic, with an effective spin of 5/2 (high-spin state) or 1/2 (low-spin state). All these states depend on the ligand atoms, and paramagnetic ions interact with the magnetic field in proportion to their effective spin. In cells, Fe(III) is generally present in various proteins or enzymes that participate in different biological activities of cells; Fe(II), as a crossroads of cellular iron metabolism, exists in the cellular labile iron pool (LIP), which is defined as a pool of redox-active iron complexes [53]. Therefore, in our next study, it will be necessary to detect the changes of Fe(II) and Fe(III) under SMFs, respectively, which would help us to understand the mechanism by which SMFs affect iron metabolism in cells.
Several studies have been conducted to address the potential action mechanism of SMF. The cell membrane is implicated as a primary target for transmitting extracellular signals inside the cell. Under SMF, membrane phospholipids are subjected to magnetic torque due to diamagnetic anisotropy and rotate according to the direction of the SMF, which deforms the cell membrane and influences its fluidity [54]. This rearrangement of phospholipids may lead to changes in ion channels and in proteins embedded in the membrane. Lin et al. [49] proved that an SMF of 0.4 T affected membrane fluidity in osteoblastic MG63 cells. Importantly, it is well known that Tf-bound iron is transported into cells via membrane endocytosis [55], and Fe(II) is transported into cells through the divalent metal transporter 1 (DMT1) on the membrane [56]. Nevertheless, we still do not know the exact mechanism of action, that is, which molecules and channels are activated or disabled. Also, the relationship with direction, amplitude, time, and their combination is more complicated and not yet clear. SMF plays a great role in modulating the generation or reduction of reactive oxygen species (ROS). ROS can cause lipid peroxidation, alter cell membrane composition and fluidity, and damage proteins and DNA [57]. In addition, it was reported that ROS could inhibit differentiation of osteoblast MC3T3-E1 cells [58]. In our work, the iron element, which is associated with regulating oxidative stress, was found to be altered during osteoblast differentiation. It is speculated that the decreased osteoblast mineralization under 500 nT may be due to accumulated iron. Our previous study [59] also demonstrated that the modulation of cell biomechanical properties under 0.2 T was accompanied by alterations in proliferation, adhesion, cytoskeleton arrangement, etc. During differentiation, the elastic modulus of the osteoblast gradually decreases [60]. It is suggested that the response of osteoblast differentiation to SMFs may be associated with cell biomechanical properties. Although many questions regarding the action mechanism remain unclear, study of the correlation between magnetic fields and osteoblast activity will shed new light that could improve our understanding of bone health under SMFs and help determine therapeutic parameters for treating or preventing human bone disorders on the Earth or in outer space.
In summary, SMFs do not have acute lethal effects on osteoblasts, offering opportunities for osteoblast study in basic and applied research. Osteoblast differentiation was controllable by SMFs with different flux intensities. Moreover, the iron element was altered by SMFs during osteoblast proliferation and differentiation. These results will shed new light on the corresponding mechanisms and on osteoporosis treatment. From this perspective, SMF could be used as a noninvasive physical therapy to maintain health and treat bone disorders.
|
2023-01-06T15:09:29.847Z
|
2017-10-19T00:00:00.000
|
{
"year": 2017,
"sha1": "186251b6b2aac95b92443934beede9bb493a0a6e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12011-017-1161-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "186251b6b2aac95b92443934beede9bb493a0a6e",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": []
}
|
271382101
|
pes2o/s2orc
|
v3-fos-license
|
Advances in Targeted Glioma Therapy with Transferrin-Conjugated Gemcitabine-Loaded PLGA Nanoparticles
Glioma, a primary brain tumor, has one of the highest fatality rates among brain cancers. Conventional chemotherapy for glioma often suffers from off-target drug loss and suboptimal drug availability in brain tissue. This study aimed to develop a targeted strategy for brain cancer cells using transferrin-conjugated, gemcitabine-loaded poly(lactic-co-glycolic acid) nanoparticles (Tf-GB-PLGA-NPs). GB-PLGA-NPs were prepared via solvent evaporation and nanoprecipitation, followed by conjugation with transferrin. The formulation was characterized for physicochemical properties, in-vitro release, cytotoxicity, apoptosis (U87MG cell line), and in-vivo pharmacokinetics. Tf-GB-PLGA-NPs exhibited a particle size of 143±6.23 nm, a PDI of 0.213, a zeta potential of -25 mV, and an entrapment efficiency of 77.53±1.43%. These nanoparticles showed a spherical morphology and sustained release of gemcitabine (76.54±4.08%) over 24 hours. Tf-GB-PLGA-NPs demonstrated significantly higher cell inhibition against the U87MG cell line compared to GB-PLGA-NPs and pure gemcitabine (P<0.05). Apoptosis in U87MG cells was higher with Tf-GB-PLGA-NPs (61.25%) than with GB-PLGA-NPs (31.61%). Additionally, Tf-GB-PLGA-NPs achieved significantly higher concentrations in the brain than pure gemcitabine and GB-PLGA-NPs, with an 11.16-fold increase in AUC0-t (bioavailability) compared to pure gemcitabine solution and a 2.23-fold increase compared to GB-PLGA-NPs. These findings suggest that Tf-GB-PLGA-NPs could be a potent alternative carrier for delivering gemcitabine to the brain for glioma treatment.
Introduction
Glioma exhibits one of the highest mortality rates among primary brain tumors, constituting approximately 50%-60% of all instances of such tumors (Seker-Polat F et al., 2002). The typical survival period for individuals suffering from glioma remains approximately fourteen months, and 25% of patients survive more than one year. Only 5% of patients survive more than five years post-diagnosis. Glioma is linked with an unfavorable prognosis and is identified by histological features such as angiogenesis and tissue necrosis (Kumar LA et al., 2021). The microvasculature within gliomas exhibits heterogeneity, potentially impacting the distribution of chemotherapeutic agents to brain tumors. The primary approach for clinical intervention is surgical removal. However, owing to gliomas' specific growth patterns, complete surgical eradication of the tumor is usually unattainable (Hormuth II DA et al., 2022).
Further, chemotherapeutic drug delivery to brain tumor cells is significantly restricted by the blood-brain barrier (BBB) (Fathi K et al., 2020). Additionally, chemotherapeutic medications frequently impose adverse effects on healthy tissues and cells, aggravating the problem. All these factors collectively contribute to the suboptimal outcomes in glioma treatment.
In this context, the use of nanoparticles (NPs) made from biodegradable polymers has been investigated to address the conventional drawbacks of traditional chemotherapy (Li L, Zhang X, Zhou J, Zhang L, and Tao J, 2022). NPs can be designed as passive and active targeting tools to transport drug candidates effectively to tumor sites (Shabani L et al., 2023). Actively targeted NPs offer unique advantages in chemotherapeutic drug delivery. They contribute to the retention of drugs within tumor cells by boosting cellular binding and drug accumulation. Furthermore, such NPs enhance the uptake of drugs by leveraging receptor-mediated endocytosis mechanisms (Ferraris C, Cavalli R, Panciani PP, and Battaglia L, 2020). Now, with deepening insights into the physiological processes governing the development and sustenance of gliomas, the integration of molecularly targeted substances into conventional chemotherapy regimens emerges as a promising strategy with favorable outcomes. Simultaneously, the angiogenesis pathway generates a multitude of invasion-inhibiting agents, among which integrin has been demonstrated to participate actively in enhancing glioma adhesion, migration, and angiogenesis (Luo H et al., 2023). Thus, integrating integrin into the design of nano-platforms for targeting could be an interesting strategy for efficiently delivering chemotherapeutic agents to glioma. Because of the active role of transferrin receptor (TfR) expression in many cancer types, including glioma, TfR has become a widely used target on endothelial cells in NP-based active targeting therapy (Su X et al., 2022).
Tf is a plasma glycoprotein (79.5 kDa) containing binding domains for Fe³⁺ ions. Tf plays a significant role in binding and transporting iron throughout the body. TfR, a type II transmembrane receptor protein, is heavily expressed on the surface of endothelial cells in glioma (Wang D, Wang C, Wang L, and Chen Y, 2019). It primarily regulates intercellular iron transport and metabolism. It is postulated that an increase in the iron requirements of cancer cells acts as a key factor driving the upregulation of TfR in cancerous cells compared to healthy cells.
Thus, drug-loaded NPs with Tf surface modification can precisely target TfR through the formation of a ligand-receptor complex and can be internalized by the cancer cells through receptor-mediated transcytosis (Tiwari P et al., 2023).
Previous studies have already documented the potential of Tf for improving tumor-specific delivery of anticancer drugs loaded into NPs (Hersom et al., 2016) (Koneru T et al., 2021). A study reported by Ramalho et al. showed that Tf-modified PLGA NPs maintained the anti-glioma activity of the loaded phytocomponent and increased cellular internalization in U87MG glioblastoma cells (Ramalho et al., 2022). Tf-conjugated PLGA-NPs were found to be highly adsorbed and endocytosed by BBB endothelial cells (Chang J et al., 2009). A fluorescence microscopy study confirmed higher internalization of Tf-NPs by the tested cell line compared to unconjugated NPs/free drug (Cui Y, Xu Q, Chow PK, Wang D and Wang CH, 2013). Cui and associates developed Tf-modified magnetic silica PLGA-NPs of doxorubicin and paclitaxel for targeted delivery in glioma. The Tf-modified NPs showed the highest cytotoxicity in U-87 cells relative to those treated with unconjugated NPs, free drug, or free Tf (Chang J et al., 2012). Tf-conjugated PLGA-NPs improved the anticancer effectiveness of temozolomide and bortezomib in glioblastoma cells, as reported in another recent study (Ramalho MJ et al., 2023).
Gemcitabine (GB) is an FDA-approved, clinically established chemotherapy medication that functions as a nucleoside analog.
GB is a prodrug; it is converted into active metabolites that substitute for nucleic acid building blocks during DNA elongation (Dyawanapelly et al., 2017). The active metabolites of GB inhibit malignant cell growth by interfering with DNA synthesis. It destroys cells synthesizing DNA and prevents them from passing the G1/S phase threshold (Shah MA and Schwartz GK, 2006). In clinical practice, Gemcitabine is widely used to treat bladder, breast, pancreatic and non-small cell lung cancer, among others. However, reports on the active targeting of Gemcitabine through Tf-functionalized Gemcitabine PLGA-NPs in glioblastoma cell lines in in vitro/in vivo models are scarce. Based on previous literature and the gap in detailed studies of Tf-functionalized GB-loaded PLGA-NPs, we had already developed and optimized Gemcitabine-loaded PLGA-NPs (Kumar LA et al., 2024). In the present study, we successfully developed Tf-functionalized GB-loaded PLGA-NPs (Tf-GB-PLGA-NPs) for glioma cell-specific delivery. The formulation was characterized for physicochemical parameters, in vitro release, in vitro cytotoxicity, cellular internalization and apoptosis against human glioblastoma cells (U87MG), and a pharmacokinetic study was performed in Wistar albino rats.
Experimental
Gemcitabine (GB) was obtained from Cipla Laboratories (Goa, India) as a gift sample. Poly(lactic-co-glycolic acid) (PLGA 50:50) and Transferrin (apo-transferrin bovine, T1428) were procured from Sigma-Aldrich (Mumbai, India). All other chemicals and reagents used in the study were of analytical grade.
Preparation of gemcitabine-loaded PLGA nanoparticles
GB-PLGA-NPs were prepared by a slightly modified solvent evaporation and nanoprecipitation technique (Kumar LA, 2024). The formulation was developed in two steps: emulsification of the organic polymer solution into the aqueous surfactant solution, followed by organic solvent evaporation, which led to polymer precipitation and nanoparticle formation. PLGA (200 mg) and GB (15 mg) were dissolved in acetone (10 mL). The aqueous phase containing PVA (1.5%) and 4 mL of Tween 80 was prepared separately. The organic phase was added to the aqueous phase under continuous stirring (15,000 rpm) for two hours. The formed primary emulsion was ultrasonicated for 5 min at 30 s intervals (Sonics, Vibra cell, VCX750, New Town, USA). The GB-PLGA-NPs were left to stand overnight to allow evaporation of the organic solvent and were then stored at 25 ºC. Details of the optimization process are discussed in our previous work (Kumar LA et al., 2024).
Preparation of Tf-decorated Gemcitabine PLGA nanoparticles
An EDC/NHS activation and grafting procedure was used to conjugate Tf onto the surface of GB-PLGA-NPs (Nogueira Librelotto DR et al., 2017). Briefly, 2.1 mL of the GB-PLGA-NPs suspension was incubated with EDC (200 μL, 30 mg/mL) and NHS (200 μL, 30 mg/mL) under gentle stirring at 25 °C for 3 h to obtain amine-reactive esters from the carboxylic-acid-terminated polymer.
After incubation, the sample was filtered using a centrifugal ultrafiltration unit (Centrisart® 10 kDa MWCO) followed by centrifugation at 2000 g for 40 min to eliminate excess EDC and NHS. The retained GB-PLGA-NPs were collected, and the volume of the suspension was adjusted to 2.1 mL with deionized water. Tf solution (300 μL, 10 mg/mL) was added to the GB-PLGA-NPs suspension, which was maintained under gentle stirring at 25 ºC for 2 h to complete the Tf conjugation. Finally, the sample was filtered with a centrifugal ultrafiltration unit, and the retained Tf-GB-PLGA-NPs were collected and stored until further analysis.
Fourier-transformed infrared spectroscopy
Fourier-transform infrared (FT-IR) spectroscopy was used to study the interactions between the drug and the other components of the formulation (FTIR-8400S, Shimadzu, Tokyo, Japan). Pure GB, PLGA, Transferrin, GB-PLGA-NPs, and Tf-GB-PLGA-NPs were placed in the instrument and scanned from 4000 to 400 cm⁻¹ at 25 ºC (Rai VK, Mishra N, Yadav KS, and Yadav NP, 2018).
Particle size, Zeta Potential, and PDI
A Zetasizer (Nano ZS, Malvern Instruments, U.K.) was employed to assess the mean diameter, size distribution (PDI), and zeta potential of the GB-PLGA-NPs and Tf-GB-PLGA-NPs by the dynamic light scattering (DLS) method. The diluted sample (1:10) was filled into the appropriate cuvette and analyzed at 25 ºC with a 90° scattering angle; the results are represented in Figure 2 (Kumar LA et al., 2024).
Encapsulation efficiency
The dialysis technique was employed to determine the encapsulation efficiency. Six mL of the GB-PLGA-NPs or Tf-GB-PLGA-NPs suspension was filled into a dialysis bag. Fifty mL of 0.1 N NaOH was placed in a beaker and stirred at 75 rpm using a magnetic stirrer. The dialysis bag was submerged in the beaker for 60 min of dialysis. One milliliter of the dialysis medium was withdrawn periodically at fixed intervals and analyzed in a UV-visible spectrophotometer (Lab India 3200) at 281 nm until no further drug was detected in the medium (Kumar LA et al., 2024).
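As an illustration of this calculation, the sketch below converts the dialysate absorbance into unentrapped drug and computes EE%. All numerical values (calibration slope, intercept, nominal drug load, absorbance) are hypothetical placeholders and not data from this study.

def concentration_from_absorbance(absorbance, slope=0.045, intercept=0.002):
    # Hypothetical linear calibration at 281 nm: A = slope * C + intercept (C in ug/mL)
    return (absorbance - intercept) / slope

def encapsulation_efficiency(total_drug_ug, free_drug_ug):
    # EE% = (total drug added - unentrapped drug recovered in the dialysate) / total drug * 100
    return (total_drug_ug - free_drug_ug) / total_drug_ug * 100.0

# Hypothetical example: 6 mL of suspension nominally containing 450 ug/mL GB,
# dialysed against 50 mL of 0.1 N NaOH; final dialysate absorbance 0.28.
total_drug = 450.0 * 6.0
free_conc = concentration_from_absorbance(0.28)
free_drug = free_conc * 50.0
print(f"EE = {encapsulation_efficiency(total_drug, free_drug):.1f} %")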
Morphology analysis
The morphology of the GB-PLGA-NPs and Tf-GB-PLGA-NPs was examined by transmission electron microscopy (TEM, Jeol JEM 1400 electron microscope, Japan). Briefly, drops of GB-PLGA-NPs and Tf-GB-PLGA-NPs were mounted on copper grids and stained with aqueous uranyl acetate (2% v/v). The grids were then air-dried and visualized under the electron microscope at an accelerating voltage of 80 kV.
In vitro Release of drug
The in vitro drug release profiles of GB-PLGA-NPs and Tf-GB-PLGA-NPs were determined using a dialysis bag in phosphate-buffered saline (PBS, pH 7.4) (Nogueira-Librelotto et al.). The required amounts of GB-PLGA-NPs and Tf-GB-PLGA-NPs (equivalent to 5 mg of GB) were filled into dialysis tubes (MWCO 100 kDa) tied at both ends. The tubes were immersed in PBS medium at 37 °C and stirred at 100 rpm. One mL of medium was withdrawn at defined intervals, and an equal volume of fresh PBS was added to maintain a constant volume. The drug concentration in each sample was analyzed by UV-visible spectroscopy at 281 nm.
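A minimal sketch of how such sampling data can be converted into a cumulative release profile is given below, including the correction for the 1 mL of medium replaced with fresh PBS at each time point. The time points, concentrations, and receptor-medium volume are hypothetical, not measurements from this study.

def cumulative_release(concentrations_ug_per_ml, medium_volume_ml=50.0,
                       sample_volume_ml=1.0, dose_ug=5000.0):
    # Returns cumulative release (%) at each sampling time.
    # The correction term adds back the drug removed with earlier samples.
    released_percent = []
    removed_so_far = 0.0
    for c in concentrations_ug_per_ml:
        in_medium = c * medium_volume_ml          # drug currently in the receptor medium
        cumulative = in_medium + removed_so_far   # plus drug withdrawn in previous samples
        released_percent.append(100.0 * cumulative / dose_ug)
        removed_so_far += c * sample_volume_ml
    return released_percent

# Hypothetical concentrations (ug/mL) at 1, 2, 4, 8, 12 and 24 h
print(cumulative_release([12.0, 25.0, 42.0, 58.0, 68.0, 80.0]))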
Cytotoxicity study by MTT
The in vitro cytotoxic effect of GB-PLGA-NPs and Tf-GB-PLGA-NPs was investigated using the MTT assay on the U87MG glioma cell line (Cui Y et al., 2013). U87MG glioma cells were procured from NCCS, Pune, India. The cell line was cultivated in DMEM (Himedia, India) supplemented with 10% fetal bovine serum (FBS), streptomycin (1%), and glutamine (3 mM) at 37 °C in a humidified CO2 incubator up to a cell density of 2×10⁶ cells/mL.
Then, cells were seeded into 96-well plates (Sigma Aldrich, Mannheim, Germany). The plated cells were treated with different concentrations (0.01-100 µM) of PLGA-NPs, GB-PLGA-NPs, and Tf-GB-PLGA-NPs formulations and incubated for 72 h under the same culture conditions. Then, 20 µL of cold trichloroacetic acid was added to the plates, which were incubated for 1 h at 4 °C. A microplate spectrophotometer (Model 680, Bio-Rad, Japan) was used to measure the optical density (OD) at 490 nm.
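For context, the sketch below shows how such OD readings are typically converted into % cell viability and an IC50 estimate via a four-parameter logistic fit. All concentrations and OD values are hypothetical placeholders rather than data from this study.

import numpy as np
from scipy.optimize import curve_fit

def viability_percent(od_treated, od_control, od_blank=0.05):
    # % viability relative to the untreated control, after blank subtraction
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

def four_pl(conc, bottom, top, ic50, hill):
    # Four-parameter logistic dose-response curve
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0])           # uM, hypothetical
od_control = 1.20
od_treated = np.array([1.15, 1.05, 0.80, 0.45, 0.20])    # hypothetical OD at 490 nm
viab = viability_percent(od_treated, od_control)

popt, _ = curve_fit(four_pl, conc, viab, p0=[5.0, 100.0, 5.0, 1.0], maxfev=10000)
print(f"Estimated IC50 = {popt[2]:.2f} uM")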
Apoptotic analysis
A fluorescence-activated cell sorting (FACS) instrument (BD Biosciences FACS Aria, Germany) was utilized to conduct the apoptosis study on U87MG glioma cells. The cells were maintained in alpha-MEM medium supplemented with 10% FBS and 1% antibiotic-antimycotic solution in an atmosphere of 5% CO2 and 18-20% O2 at 37 °C in a CO2 incubator and were subcultured every two days. The cells were seeded at a density of 2×10⁶ cells/mL in six-well plates (Sigma, Germany), suspended in fresh medium, and incubated for 12 h in a humidified CO2 incubator at 37 °C. The cells were then treated with blank PLGA-NPs, GB-PLGA-NPs, and Tf-GB-PLGA-NPs and incubated for 24 h. At the end of the treatment, the cells were harvested directly into 12×75 mm polystyrene tubes and centrifuged for five minutes at 300 g and 25 °C. The supernatant was carefully decanted and the cells were washed twice with PBS. After the PBS was decanted completely, 5 μL of FITC Annexin V in 100 µL of Annexin V binding buffer was added; the cells were gently vortexed and incubated for 15 minutes at 25 °C in the dark. Then, 5 μL of PI (propidium iodide) and 400 μL of 1X Annexin binding buffer were added to each tube and vortexed gently. The samples were analyzed by flow cytometry immediately after PI addition, and the percentages of apoptotic and necrotic cells were measured for all conditions (Derycke ASL et al., 2004).
Internalization efficiency
The internalization efficiency of blank PLGA-NPs, GB-PLGA-NPs and Tf-GB-PLGA-NPs was determined by confocal microscopy (LSM 880 live cell imaging confocal system, Carl Zeiss, Germany).
The cells were incubated with FITC-labeled GB-PLGA-NPs and Tf-GB-PLGA-NPs for 0.5 h. The cells were washed with PBS (pH 7.4) to remove formulation remaining on the cell surface and then placed in DMEM (Dulbecco's Modified Eagle Medium) culture medium. After that, the cells were trypsinized using 0.1% w/v trypsin and incubated for 5 min. The cells were harvested using 1 mL of PBS (pH 7.4) and sonicated for 5 min to obtain cell lysates.
Further, it was centrifuged at 8,000 rpm for 15 min, and the supernatant was collected for fluorescence assay (Gurumukhi VC and Bari SB, 2022).
Animal
Institutional ethical committee approval of the study protocol was obtained from Jeeva Life Sciences, Telangana, India (approved protocol number CCSEA/IACE/JLS/20/11/059). Male albino Wistar rats (130-150 g) were used for the study. The rats were housed under standard conditions (22 °C, dark/light cycle) with free access to food and water.
Pharmacokinetic study
Eighteen albino rats of either sex (130-150 g) were selected for the pharmacokinetic study.We divided the rats into three groups, each with six rats.Before drug administration, rats were fasted overnight.
Group I received pure GB solution (10 mg/kg), Group II received GB-PLGA-NPs, and Group III received Tf-GB-PLGA-NPs (10 mg/kg in PBS), all administered orally. Blood samples were collected from the rat retro-orbital plexus at various time intervals (0.5, 1, 2, 3, 4, 6, 8, and 12 hours) and stored in heparinized tubes. Plasma was separated from blood by centrifugation at 3000 rpm for 15 minutes (Remi centrifuge, India) and stored at -70 °C. After the final blood collection, rats were anesthetized with diethyl ether and sacrificed by cervical dislocation.
Extracted brains were then washed with saline solution and stored at -70 °C. To extract GB, 0.5 mL of plasma or brain homogenate was added to 500 µL of 0.5 M PBS (pH 5.9) and 2 mL of diethyl ether. The organic layer was separated after vortexing and centrifuging for 5 minutes at 3500 rpm. After 0.2 M HCl (200 µL) was added to the organic layer, the tube was vortexed for 5 minutes and then centrifuged at 3000 rpm for 5 minutes. After the organic layer was separated off, the acidic residue was dried under a nitrogen stream (HGC-12, Tianjin Hengao Ltd., Tianjin, China). The dried samples were reconstituted in 100 µL of mobile phase, and the GB concentration was analyzed using the HPLC method described below.
HPLC conditions
The HPLC technique was employed to measure the concentration of GB in blood and brain samples (LC 10 AD model, Shimadzu, Tokyo, Japan). The HPLC setup included a C18 column (250 × 4.6 mm, 5 µm) with an auto-sampler, UV detector, and isocratic pump. Acetonitrile and phosphate buffer (0.023 M, pH 6.6) were used as the mobile phase at 39:61 v/v with a flow rate of 1 mL/min. TEA (0.1%) was added to the mobile phase to reduce peak tailing. The run time was 10 min. A 20 µL sample was injected onto the column at 25 °C, and detection was performed at 281 nm.
Formulation and Characterization of nanoparticles
The GB-PLGA-NPs and Tf-GB-PLGA-NPs were prepared using solvent evaporation and nanoprecipitation techniques with slight modifications (Kumar LA et al., 2024).The GB-PLGA-NPs were successfully conjugated by a two-step EDC/NHS activation and grafting method and denoted as Tf-GB-PLGA-NPs.
The formulation was characterized in terms of physicochemical parameters and in vitro and preclinical evaluations.
Particle size and zeta potential analysis
The average PS and zeta potential of GB-PLGA-NPs and Tf-GB-PLGA-NPs are shown in Figure 2C-D. The increase in surface charge of the Tf-GB-PLGA-NPs could be attributed to the presence of Tf on the NP surface. A higher zeta potential would be helpful to maintain the stability of the formulation in the suspended state (Rai VK et al., 2018).
Encapsulation efficiency (EE, %)
The EE of GB-PLGA-NPs and Tf-GB-PLGA-NPs was analyzed and found to be 78.3 ± 1.65% and 77.53 ± 1.43%, respectively. The small difference in EE might be due to dissolution and eventual loss of surface-adsorbed drug from Tf-GB-PLGA-NPs (Cui Y et al., 2013) (Derycke ASL et al., 2004). A high encapsulation efficiency of the NPs is crucial for effective in vivo applicability. Further, it was found that surface modification did not significantly affect the drug-carrying capacity of Tf-GB-PLGA-NPs.
Morphological analysis
TEM images of GB-PLGA-NPs and Tf-GB-PLGA-NPs were analyzed and are depicted in Figure 3A-B. Both GB-PLGA-NPs and Tf-GB-PLGA-NPs showed a smooth surface without any imperfections or aggregation. The particles were spherical with a clear core-shell structure, as shown in Figure 3.
Modifying the NP surface with Tf might result in a slight increase in particle size, as observed in the TEM images.
In vitro drug release
In vitro release of GB from GB-PLGA-NPs and Tf-GB-PLGA-NPs was analyzed by the dialysis bag method, and the results are depicted in Figure 4.
In-vitro Cytotoxicity assay
To evaluate the cytotoxic potential of GB-PLGA-NPs and Tf-GB-PLGA-NPs relative to free GB, an MTT assay was carried out on U87MG glioma cells. The cell viability (%) of GB-PLGA-NPs, Tf-GB-PLGA-NPs, and pure GB is shown in Figure 5. It was observed that cell inhibition depends upon the concentration of GB. The cell viability for GB-PLGA-NPs, Tf-GB-PLGA-NPs, and pure GB was 46.31 ± 8.24%, 19.05 ± 5.24%, and 9.04 ± 3.41%, respectively. The IC50 of Tf-GB-PLGA-NPs was found to be significantly lower than that of GB-PLGA-NPs (41.13 µM) and pure GB solution (81.17 µM).
Tf-GB-PLGA-NPs exhibited significantly higher cell inhibition than GB-PLGA-NPs and pure GB at each concentration, owing to their greater internalization into the cells, as can be observed in Figure 5.
Internalization efficiency
Figures 6A-C show the results. The internalization efficiency of the formulations was analyzed in terms of fluorescence intensity. The order of fluorescence intensity of the NPs in U87MG cells was blank PLGA-NPs < GB-PLGA-NPs < Tf-GB-PLGA-NPs at 0.5 h of incubation. Tf-GB-PLGA-NPs showed significantly (P < 0.05) higher internalization into cells (a higher density of particles within U87MG cells) than blank PLGA-NPs and GB-PLGA-NPs, as can be observed in Figure 6. In the apoptosis analysis, the respective percentages for GB-PLGA-NPs, Tf-GB-PLGA-NPs, and blank PLGA-NPs were found to be 98.70%, 28.10%, and 24.09%. The percentage of early apoptotic cells (lower right) was 0.35% for blank PLGA-NPs, 18.76% for GB-PLGA-NPs, and 14.68% for Tf-GB-PLGA-NPs. The % apoptosis of the cells was found to be 0.78%.
Discussion
The recent trend in oncology-based research is the active targeting of conventional anticancer drugs through receptor-mediated endocytosis. The Tf receptor is highly expressed by vascular endothelial cells in the brain. Thus, modification of the NP surface with Tf could be a potential approach for improving the targeting efficiency of drugs to the brain. The experimental Tf-modified PLGA-based NPs were formulated and evaluated successfully.
FTIR spectroscopy provided an essential clue for the successful conjugation of Tf over PLGA-NPs surface as a specific characteristic peak of Tf appeared in the Tf-modified GB-PLGA-NPs.However, there was no significant physical/chemical interaction between GB and NP components.
The PS of the GB-PLGA-NPs and Tf-GB-PLGA-NPs was found to be in the acceptable nanosize range (below 200 nm) for oral delivery targeting the brain. The PS of Tf-GB-PLGA-NPs is slightly larger than that of GB-PLGA-NPs, indicating that Tf is decorated over the NPs.
Owing to the cationic nature (positive charge) of the amine groups of Tf, the zeta potential of Tf-GB-PLGA-NPs was higher than that of GB-PLGA-NPs. A higher surface charge (positive or negative) on NPs is beneficial for maintaining the required formulation stability in the dispersed state. Higher like surface charges between particles cause a stronger repulsive force among particles, which helps to keep the suspension deflocculated. The desired PS and zeta potential of Tf-GB-PLGA-NPs would be helpful for effective brain delivery (Tanavano L et al., 2013). The Tf coating did not affect the morphology of the NPs, which was found to be spherical for both formulations, with a homogeneous distribution.
The % EE is an essential factor for NPs, as it helps to decide the dose and therapeutic efficacy (Nogueira Librelotto D et al., 2017). A higher EE paves the path for future large-scale production of NPs, whereas a low EE hampers industrial and clinical suitability. In our case, both GB-PLGA-NPs and Tf-GB-PLGA-NPs have a reasonable EE (~80%), which suggests good formulation characteristics. Further, it was noted that surface modification did not affect the drug-carrying capacity of Tf-GB-PLGA-NPs.
The release profile of GB from the GB-PLGA-NPs and Tf-GB-PLGA-NPs was evaluated using the dialysis bag method and showed an initial burst release, which might be due to the release of GB adsorbed onto the NP surface. Overall, however, the release profile was sustained over an extended period of time. Drug diffusion through the NP polymer matrix or erosion of the polymer could be responsible for the extended release of drug molecules (Derycke ASL et al., 2004). Sustained release of the drug over an extended period would favor a decrease in the dose and potential toxicity of the drug (Rai VK, Mishra N et al., 2018).
The MTT assay showed that both GB-PLGA-NPs and Tf-GB-PLGA-NPs exhibited significantly higher cytotoxicity in U87MG cells than the pure GB solution (Rai VK, Mishra N et al., 2018). Tf-GB-PLGA-NPs exhibited the highest activity due to sustained drug release and effective internalization into cells. The higher cell inhibition by Tf-GB-PLGA-NPs could be due to greater internalization into cells (cellular uptake) than GB-PLGA-NPs. In addition, Tf-GB-PLGA-NPs may undergo less exocytosis than GB-PLGA-NPs, further increasing cellular uptake (Fazil M et al., 2015). The higher cytotoxic effect of Tf-GB-PLGA-NPs against glioma cells supports its potential future application in cancer therapy. A similar observation was reported for Tf-conjugated paclitaxel-PLGA-NPs (Doijad RC et al., 2008).
Higher cellular uptake suggests the internalization of an adequate quantity of GB-PLGA-NPs and Tf-GB-PLGA-NPs inside the U87MG cells during the pre-fixed incubation period. Owing to higher internalization, the drug could effectively exert its anticancer effect. Further, FITC-labelled Tf-GB-PLGA-NPs showed much higher internalization than the unconjugated NPs. Tf-GB-PLGA-NPs showed nearly 4-fold higher late apoptotic cell death than the GB-PLGA-NPs (46.5% vs. 12.8%) at 0.5 h incubation. There was a notable difference in the percentage of apoptosis between Tf-GB-PLGA-NPs and GB-PLGA-NPs during both early and late phases. Tf could direct the NPs into U87MG cells through receptor-mediated endocytosis (Maghsoudi S et al., 2020).
The results of the PK studies support the advantage of the conjugated system over the individual drug regimen. For in vivo application and future pre-clinical/clinical suitability, the estimation of PK parameters is an indispensable tool for targeted and non-targeted nanoformulations. Improved AUC/AUMC of the formulations directly indicates higher bioavailability.
Similarly, higher MRT/T1/2 signifies higher residence time and therapeutic stability of the formulation in an in vivo environment.
Tf-GB-PLGA-NPs exhibited significantly higher Cmax, AUC0-t, AUC0-inf, AUMC0-t, AUMC0-inf, and half-life in the brain than pure GB and GB-PLGA-NPs after oral administration, revealing that Tf-GB-PLGA-NPs cross the blood-brain barrier (BBB) readily and deliver more drug into the brain than into plasma upon Tf decoration. Higher values of AUC and AUMC indicate a prolonged circulation profile in vivo with higher drug accumulation in brain tissue. The higher half-life and lower elimination rate constant in the brain support the targeting efficiency of Tf-GB-PLGA-NPs. Tf-GB-PLGA-NPs exhibited a significantly higher (P < 0.05) MRT in the brain than GB-PLGA-NPs and pure GB, revealing significant brain targeting. The poor brain uptake of free GB is due to its inability to permeate across the BBB and its being a substrate of P-gp (MDR1), an efflux transporter present at the blood-brain barrier (Mitra S et al., 2001). Both the unconjugated and Tf-conjugated nanoparticles improved brain uptake of the drug. The preferential accumulation of Tf-GB-PLGA-NPs across the BBB may result from several events; in particular, the abundance of transferrin receptors on the BBB could have enabled receptor-mediated endocytosis (Qian ZM et al., 2002). Overall, surface modification with Tf brought a significant improvement in the PK profile compared to non-targeted nano-formulations and the free drug.
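The non-compartmental quantities discussed above (Cmax, AUC, AUMC, MRT, and terminal half-life) can be obtained directly from a concentration-time profile. The sketch below is a generic non-compartmental calculation using hypothetical numbers; it is not a reproduction of the study's data or of its exact analysis workflow.

import numpy as np

t = np.array([0.5, 1, 2, 3, 4, 6, 8, 12])                 # h, sampling times as in the protocol
c = np.array([0.8, 1.6, 2.4, 2.1, 1.7, 1.1, 0.7, 0.3])    # ug/mL, hypothetical concentrations

cmax, tmax = c.max(), t[c.argmax()]
auc = np.trapz(c, t)                 # linear trapezoidal AUC(0-t)
aumc = np.trapz(c * t, t)            # AUMC(0-t)
mrt = aumc / auc                     # mean residence time

# Terminal half-life from a log-linear fit of the last three points
k_el = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
t_half = np.log(2) / k_el

print(f"Cmax={cmax:.2f} ug/mL at {tmax} h, AUC(0-t)={auc:.2f}, AUMC(0-t)={aumc:.2f}, "
      f"MRT={mrt:.2f} h, t1/2={t_half:.2f} h")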
Conclusion
The work presents an active targeting approach to glioblastoma treatment through the developed ligand-modified PLGA-NPs. As a result of surface modification with transferrin, GB could be successfully loaded into PLGA-NPs and delivered to the brain effectively and in a sustained manner. The FTIR study demonstrated the presence of Tf on the NP surface without any significant physical/chemical interaction. Conjugation of Tf did not significantly alter the entrapment efficiency of the drug in the NPs. Tf-GB-PLGA-NPs showed significantly higher apoptosis (cell death) in U87MG cells compared with GB-PLGA-NPs and free GB, and also exhibited significantly higher internalization into U87MG cells within 0.5 h of incubation. The PK study further confirmed higher drug accumulation in brain tissue. Owing to improved drug availability and higher cytotoxic potential, Tf-GB-PLGA-NPs could be taken forward for further studies in glioma-bearing xenograft models to pave the way for future clinical applications. In conclusion, the developed formulation strategy could open exciting avenues for advanced glioma treatment.
Significance: Glioma's high mortality and treatment challenges necessitate innovative strategies like nanoparticle-based drug delivery, enhancing therapy efficiency and prognosis.
FTIR
FT-IR spectroscopy depicted the successful conjugation of Tf over the NP surface and the absence of any significant interaction between the drug, excipients, and targeting ligand. The FTIR spectrum of pure GB showed peaks at 3344 cm⁻¹ (O-H stretching), 1674 cm⁻¹ (C=O stretching), and 1602 cm⁻¹ (C-N stretching coupled with N-H bending), as can be observed in Figure 1A. PLGA showed peaks for O-H stretching (3333 cm⁻¹), C-H stretching (2997 cm⁻¹), carbonyl C=O stretching (1757-1628 cm⁻¹), and C-O stretching (1237 cm⁻¹) (Fig. 1B). GB-PLGA-NPs and Tf-GB-PLGA-NPs showed distinctive bands at 3405-3349 cm⁻¹ (O-H stretching) and at 1540 cm⁻¹ and 1646 cm⁻¹ (carbonyl C=O stretching) (Fig. 1C-D). These stretching vibration peaks are also present in the FTIR spectrum of pure GB, indicating no significant physical/chemical interaction between the drug and the excipients (Nogueira Librelotto DR et al., 2017). Additionally, the characteristic peaks of Tf in the free Tf solution, arising from amide II (∼1540 cm⁻¹) and amide I (∼1650 cm⁻¹) vibrations and amine N-H stretching (3300-2500 cm⁻¹), were also observed at lower intensity in the Tf-GB-PLGA-NPs spectrum, confirming the presence of Tf molecules on the tailored NPs. In addition, the characteristic bands of PLGA and Tf showed minor shifts in the Tf-GB-PLGA-NPs spectrum, with peaks in free Tf shifting from 1540 to 1539 cm⁻¹ and from 1652 to 1648 cm⁻¹, respectively, suggesting chemical conjugation of PLGA with Tf molecules. The characteristic peaks of GB in the spectra of GB-PLGA-NPs and Tf-GB-PLGA-NPs are nearly identical, with no major shifts, again indicating the absence of any significant physical/chemical interaction.
Figure 2C-D. Particle size and zeta potential of GB-PLGA-NPs and Tf-GB-PLGA-NPs; the increase in surface charge could be attributed to the presence of Tf on the NP surface.
Figure 4.
Figure 4. The GB release from GB-PLGA-NPs and Tf-GB-PLGA-NPs was found to be 80.13 ± 5.24% and 76.54 ± 4.08%, respectively, in 24 h. Both formulations exhibited sustained release of GB over 24 h. Sustained release of drugs from the NPs would be helpful for prolonged therapeutic action with reduced dose-related side effects (Gurumukhi VC and Bari SB, 2022).
Figure 6. Figure 7. Figure 8.
Figure 6. Cellular internalization of labeled formulations in U87MG cells: (A) blank PLGA-NPs, (B) GB-PLGA-NPs, (C) Tf-GB-PLGA-NPs. Tf-GB-PLGA-NPs showed much higher internalization than GB-PLGA-NPs, as confirmed by fluorescence microscopy, suggesting improved targeting efficiency with Tf conjugation. Fluorescence intensity was highest with FITC-labeled Tf-GB-PLGA-NPs due to increased cell membrane permeability and receptor-mediated adhesion of Tf-GB-PLGA-NPs to the cell surface (Zeb A et al., 2022). Modifying the NP surface with Tf clearly brought higher internalization, favoring its in vivo applicability in a brain tumor model. In addition, FACS was used to assess cellular uptake as a function of Tf conjugation and incubation time. FACS studies are extensively used to quantify the percentage of cells in a sample that are actively undergoing apoptosis. In the early phases of apoptosis, cells tend to lose membrane asymmetry. Cells staining positive for both FITC Annexin V and PI are in end-stage apoptosis, necrosis, or already dead, whereas cells that stain negative for both FITC Annexin V and PI are alive and do not show measurable apoptosis. Tf-GB-PLGA-NPs showed substantially higher apoptosis than the unconjugated NPs (GB-PLGA-NPs) and blank PLGA-NPs.
Florid Cemento-Osseous Dysplasia : A Case Report
The term florid cemento-osseous dysplasia (FCOD) refers to a group of fibro-osseous (cemental) exuberant lesions with multi-quadrant involvement (1). FCOD is a very rare condition presenting in the jaws. These lesions are most commonly seen in middle-aged black women, although they may also occur in Caucasians and Asians (2,3), and have been termed sclerosing osteitis, multiple enostoses, diffuse chronic osteomyelitis and gigantiform cementoma (3,4). The cause of FCOD is still unknown, and there is no satisfactory explanation for the reported gender and racial predilection (2-5). FCOD lesions have a striking tendency towards bilateral, often quite symmetrical, location, and it is not unusual to find extensive lesions in all 4 posterior (molar-premolar region) quadrants of the jaws (5). Clinically, these lesions are often asymptomatic and may present as incidental radiological findings. Symptoms such as dull pain or drainage are almost always associated with exposure of the sclerotic calcified masses in the oral cavity.
INTRODUCTION
The term florid cemento-osseous dysplasia (FCOD) refers to a group of fibro-osseous (cemental) exuberant lesions with multi-quadrant involvement (1).FCOD is a very rare condition presenting in the jaws.These lesions are most commonly seen in middle-aged black women, although it also may occur in Caucasians and Asians (2,3), and have been entitled as sclerosing osteitis, multiple enostoses, diffuse chronic osteomyelitis and gigantiform cementoma (3,4).The cause of FCOD is still unknown, and there is no satisfactory explanation for the reported gender and racial predilection (2)(3)(4)(5).
FCOD lesions have a striking tendency towards bilateral, often quite symmetrical, location, and it is not unusual to find extensive lesions in all 4 posterior (molar-premolar region) quadrants of the jaws (5). Clinically, these lesions are often asymptomatic and may present as incidental radiological findings. Symptoms such as dull pain or drainage are almost always associated with exposure of the sclerotic calcified masses in the oral cavity.

CASE REPORT

The patient, a 43-year-old Caucasian woman, presented with teeth in the maxilla and mandible and a partially edentulous area in the left mandible. Teeth were non-tender to percussion except for the left maxillary second molar. The left maxillary second molar exhibited a significant degree of mobility. On palpation, bilateral buccal bone expansion was noted in the posterior mandible. The overlying gingiva and mucosa in the mandible were normal, without any clinical signs of inflammation. An orthopantomograph showed diffuse, lobular, irregularly shaped radiopacities with a cotton-wool appearance throughout the alveolar process of both quadrants of the maxilla and mandible (Fig. 1). Cyst-like radiolucent areas were observed around some radiopaque lesions. There was no root resorption or fusion of the lesions to the involved teeth. A periapical radiograph showed the presence of a periodontal bone defect at the left maxillary second molar. Additionally, the left maxillary canine and the right mandibular premolar were impacted.
When these findings were compared with a panoramic radiograph taken one year earlier, it appeared that no radiographic changes had occurred. The lesion was presumed to be FCOD; however, in the absence of a more specific diagnosis, a decision was made to perform additional imaging. Axial CT scans showed buccolingual bone expansion in the posterior maxilla and mandible (Figs. 2 and 3). The relationship between the mandibular canal and the pathologic masses was visualized. Panoramic-like reconstructions showed bilateral extensions into the floor of the antrum and lesions at the level of the root apices in both the maxilla and mandible (Fig. 4). Complete blood count and other laboratory data were within normal limits. Biopsy was not done, as the case was diagnosed as FCOD on the basis of the characteristic features seen on the radiographs. Because of the periodontal defect and the mobility, a decision was made to extract the left maxillary second molar. The case has been followed up over the last 11 months, and the patient remains asymptomatic.
DISCUSSION
Exuberant fibro-osseous lesions occurring in multi-quadrants of the jaws were designated as gigantiform cementomas or familial multiple cementomas in the first edition of the World Health Organization's Histological Typing of Odontogenic Tumours, Jaw cysts and Allied Lesions (10).Melrose et al. ( 4) introduced the term florid osseous dysplasia.The authors, however, prefer the term FCOD proposed by Waldron (3) because the dense, sclerotic masses closely resemble cementum.This term was also advocated by Kramer et al. (11) in the most recent histological classification of odontogenic tumors.
In our school, we have diagnosed cases of FCOD based on clinical findings, localization of the lesions, the patient's age, gender and ethnicity, and radiological features. Since our patients have remained asymptomatic, the lesions have not been subjected to biopsy procedures. As reassuring as a biopsy might be in this condition, it may precipitate infection that is difficult to control without extensive surgical intervention. Several authors have observed that patients presented with poor socket healing and even sequestrum formation following extraction of teeth. It is recommended that every effort be made to avoid extraction or even elective surgical procedures. After the initial acute lesion in the maxillary molar region resolved, the clinical situation was unremarkable. Clinical examination was normal, with no spontaneous pain or dental sensitivity. All teeth were vital.
FCOD is a reactive, non-neoplastic process confined to tooth-bearing areas of the jaws that is seen most frequently in middle-aged and older women of African descent (3,10).Melrose et al. ( 4) reported a study of 34 cases of such lesions, of which 32 were black women (in a predominantly Caucasian population) with a mean age of 42 years.In the Oriental population, Loh and Yeo (12) reported 9 cases diagnosed over a 34-year period, of whom 8 were middle-aged Chinese women and the other was an Indian woman.The definite female gender predilection of the condition cannot be explained.In the present case, the patient was a 43-year-old Caucasian female.
When patients are asymptomatic, conventional radiographs that exhibit multiquadrant diffuse radiopaque masses in the alveolar processes play an important role in the diagnosis. The lesions are typically found in the tooth-bearing areas of the jaws. The radiographic appearance, though not pathognomonic, is quite characteristic and very helpful in establishing the diagnosis (13). The use of CT has been previously reported (5,8). Axial CT images clearly show the location and extent of the lesion, especially in the maxilla. The expansion of the cortical bones is clearly evaluated on CT, even if it is slight. CT can be used to differentiate FCOD from lesions that exhibit a similar sclerotic appearance on conventional radiographs. In enostosis or exostosis, the high-density masses are more clearly observed on axial CT images than on occlusal radiographs, and they are found to be continuous with the cortical plates (8). Odontogenic tumors, especially cemento-ossifying fibroma, usually exhibit more buccolingual expansion than does FCOD (14). Paget's disease of the bone may have a cotton-wool appearance (15). On the other hand, that condition affects the bone of the entire mandible and exhibits loss of lamina dura, whereas FCOD is localized above the mandibular canal and two thirds of its cervical region are normal (16). Paget's disease is also characterized by deformities of multiple bones and produces biochemical serum changes, such as elevated alkaline phosphatase levels (12). No biochemical alterations or other bone involvement were found in the present case. The differential diagnosis of FCOD should also include sclerosing osteomyelitis, which can be a complication of the disease (5,17,18).
Once the diagnosis has been established in an asymptomatic patient, under normal circumstances there is no need for further treatment. The patient should have regular follow-up and recall examinations with prophylaxis and reinforcement of good home hygiene care to control periodontal disease and prevent tooth loss (5,13). In the absence of clinical signs, re-evaluation with panoramic or CT imaging every 2 or 3 years is adequate (5). Management of the symptomatic patient is more difficult because chronic inflammation and infection develop within densely mineralized tissue. Antibiotics are generally not effective in FCOD because their tissue diffusion is poor. Indeed, biopsy increases the risk of infection or may cause jaw fractures, and it is not normally justified to surgically remove these lesions, as this often requires extensive surgery (2,5-8,13).
Normally, a diagnosis of FCOD in the jaws is made by clinical findings, radiographic features and histology.However, this is a condition in which the diagnosis relies on radiological and clinical findings alone.As reassuring as a biopsy might be, in this situation, it may precipitate infection that is difficult to control without extensive surgical intervention.The patient of the present case has been followed up over the last 11 months and FCOD has remained asymptomatic.
Figure 1 .
Figure 1. Panoramic radiograph showing diffuse, lobular, irregularly shaped radiopacities with a cotton-wool appearance throughout the alveolar process of both quadrants of the maxilla and mandible.
Figure 2 .
Figure 2. Axial CT scan showing high-density masses and bilateral buccolingual bone expansion in posterior mandible.
Figure 3 .
Figure 3. Axial CT scan showing high-density masses and buccolingual bone expansion in premolar-molar region of left maxilla.
Figure 4 .
Figure 4. Panoramic-like reconstructions showing root apices, the mandibular nerve, their relationship to the radiopaque masses, and bilateral extensions into the floor of the antrum.
Feasibility of measuring EDM in spin transparent colliders
A new polarization control mode called a Spin Transparency (ST) mode is currently being actively developed for new rings. The ST mode is an intrinsic feature of figure-8 rings such as the JLEIC at Jefferson Lab. A racetrack collider can be converted to the ST mode by inserting two identical Siberian snakes into its opposite straights as at NICA, JINR. The ST mode allows one to develop a completely new approach to the measurement of the EDMs of both protons and deuterons. The idea of the method is to use the significant enhancement of the EDM signal by the interaction of the EDM with arc magnets, which has an interference effect. A unique feature of this technique is the conceptual capability of measuring the EDM signal in the whole energy range of a ring without introducing additional electric fields using only its magnetic fields. We describe an experimental setup and provide estimates of the limiting EDM values that can be measured at the JLEIC and NICA colliders.
Introduction
The search for the proton and deuteron EDMs has been a goal of accelerators and storage rings for the past few decades. The concept of "frozen spin" was proposed at BNL in the US for measuring the proton EDM in a ring consisting of only electric field elements [1]. The idea of this technique is to choose the "magic" energy at which there is no effect of the magnetic dipole moment on the spin. However, this technique is not applicable to the search for the deuteron EDM. The ideas of this technique were further developed for measurement of the deuteron EDM using magnetic and electric fields simultaneously [2]. References [3,4] propose the use of spin rotators based on static combined electro-magnetic fields (Wien filters) for measuring the EDM of protons and deuterons. A technique for the search for the EDM signal in a purely magnetic ring based on the spin rotation in a Wien filter was developed in [5,6]. The accuracy limit for measuring the EDM in the above techniques is determined by lattice element errors and beam orbital emittances. The Juelich Electric Dipole Investigation (JEDI) collaboration actively studies the question of possible systematic false effects using the COSY accelerator (Juelich, Germany) as an example.
Below we propose a technique to measure the proton and deuteron EDMs based on using the ring's spin transparency mode, which requires neither additional electric fields nor Wien filters.
Generalized Thomas-BMT equation
The spin dynamics in electro-magnetic fields E and B is described by the generalized Thomas-BMT equation. For a particle with magnetic dipole moment (MDM) µ and electric dipole moment (EDM) d, the equation for the spin S takes the following form: where γ is the Lorentz factor, v is the velocity in units of the speed of light c, and B_rest and E_rest are the field values in the particle rest frame, related to the B and E fields in the laboratory frame by the Lorentz transformations. The first term of the equation describes the spin rotation due to the particle MDM, the second one describes the spin rotation due to the EDM, while the third one is due to the Thomas precession. The particle orbital and spin motion are described using the accelerator reference frame e_x(z), e_y(z) and e_z(z). The radius vector r of a particle near the design closed orbit r_c(z) is expressed in terms of these unit vectors and the coordinates x and y as: where z is the longitudinal coordinate along the closed orbit and x and y are the coordinates of the transverse deviation from it. The longitudinal unit vector e_z(z) = d r_c/dz is directed along the closed orbit.
There is a wide class of closed orbits for which one can choose continuous unit vectors such that the frame angular rotation rate is determined only by the orbit curvature and is perpendicular to the curvature plane: The equation of motion in the accelerator frame is obtained by subtracting the frame angular rotation rate W_b from the particle equation of motion (the Lorentz equation). The equations for the change of the particle energy and velocity are then given by: where E = γmc² is the particle energy, B = χB/(Bρ) and E = χE/(Bρ) are the normalized magnetic and electric fields, respectively, Bρ = −pc/e is the magnetic rigidity, and each such parameter here is a vector composed of its components in the accelerator reference frame. The generalized Thomas-BMT equation can analogously be written in the accelerator reference frame. However, when describing the spin dynamics, the characteristic vector is the exact direction of the particle velocity. The spin equation has a much simpler form in the natural reference frame connected with the velocity direction of each particle in the beam. The natural reference frame can be expressed in terms of the unit vectors of the accelerator frame in the following way: The unit vectors e_1 and e_2 are transverse to the particle velocity (e_1·τ = e_2·τ = 0). The natural reference frame differs only slightly from the accelerator frame and, in the linear approximation, is related to it as: e_1 = e_x − τ_x e_z, e_2 = e_y − τ_y e_z, e_3 = e_z + τ_x e_x + τ_y e_y.
To write the spin equation in the natural frame, the rate of change of the reference frame (e_1, e_2, e_3) must be subtracted from the precession rate Ω. The angular rotation rate of the natural frame with respect to the stationary frame describes the oscillations of the unit vectors e_1 and e_2 about the velocity. Thus, the generalized Thomas-BMT equation in the natural reference frame becomes: The above spin equation is equivalent to the generalized Thomas-BMT equation and can serve as a basis for beam polarization calculations in accelerators.
Spin dynamics in JLEIC in the ST mode
Let us consider the spin dynamics in the ST mode using the figure-8 collider of the JLEIC as an example. For relativistic particles, the "direct" contribution of the electric field to the EDM signal is significantly weaker than the contribution of the arc dipoles. To demonstrate the technique, we limit our consideration to writing the spin motion equations in the linear approximation in the absence of additional electric fields. In the case of a flat orbit (K_x = 0), the angular velocity components W = W_0 + w in the magnetic field of a ring are given in the natural reference frame by: Let us switch to the spin reference frame (e_1^s, e_2^s, e_3^s), which describes the dynamics of the spins initially directed along the accelerator frame unit vectors (e_x, e_y, e_z) when a particle moves along the closed orbit: where Ψ_y(z) is the phase accumulated by the spin in the arc magnets, which is a periodic function of the ring, Ψ_y(z) = Ψ_y(z + L).
The components of the spin field ω in the spin reference frame have the following form: The angle brackets here denote averaging over the orbit, while L is the orbit length. When averaging, using the fact that, in the ST mode, the phase Ψ_y(z) contains only frequencies that are multiples of the revolution frequency, we find that, in a real ring structure, the components of the spin field lie in the plane of the ring (ω_2 = 0) and are equal to: where D_x is the radial dispersion function, ∆p is the momentum deviation, and B_z are the solenoid fields for spin control.
The spin field consists of three terms, each having its own nature. The contribution of the EDM to the spin field is determined by the first term, which is proportional to the momentum deviation. The second term is due to the effect of the control solenoids. The third term determines the coherent part of the resonance strength ω_coh, which is related to the closed orbit excursion.
After compensation of the coherent resonance part by small solenoids, the ring becomes equivalent to an ideal one, which has no errors in the construction and setup of its magnetic elements. The spin field is then determined by only one term: In the linear approximation, when averaging over the synchrotron oscillations, the dependence of the spin field ω on the EDM term (⟨∆p⟩ = 0) disappears. The dependence on the EDM term appears only in the next, second-order approximation. The largest contribution to the spin field is related to the spread of the spin precession phase Ψ_y when the spin makes a large number of turns (γG ≫ 1) in the ring arcs. The average spin field lying in the orbit plane becomes: Taking into account the betatron oscillations in the second order of the averaging method gives a vertical component of the spin field proportional to the vertical emittance [7], ω_2 = ω_emitt.
The "EDM signal" is related to the synchrotron oscillations and is determined by the ring structure. Let us emphasize that the contribution to the EDM signal from the structure depending on the spin rotation angle in the arc magnets (energy) has interference character. Thus, there is a possibility of manifold enhancement of the EDM signal by the correlated contributions of all arc dipoles of the ring.
Measurement of the EDM at the JLEIC
Measurement of the EDM signal can be done by increasing the momentum spread after compensation of the coherent part of the spin field. The number of turns N_rev needed to rotate the polarization by an angle α_s is determined by: where T_exp is the time of the experiment. The number of turns N_rev is limited by the incoherent part of the resonance strength and cannot be greater than ω_emitt⁻¹. The limiting precision of the EDM measurement occurs in the region of the interference maximum k_E and minimum ω_emitt: Figs. 1 and 2 show the values of ω_coh and ω_emitt for the JLEIC lattice [7], which indicate that the limiting precision of the EDM measurement at the JLEIC is G_E ∼ 10⁻⁵ for protons and G_E ∼ 10⁻⁶ for deuterons. Our calculations assume ∆p/p = 10⁻³ and α_s = 0.1 and that the EDM measurement takes place in the region of the interference maximum k_E (∼10³ for protons and ∼10 for deuterons) and minimum ω_emitt (∼10⁻⁶ for protons and ∼10⁻⁹ for deuterons).
Measurement of the EDM at the NICA collider
The spin dynamics at the NICA collider with two solenoidal snakes becomes equivalent to the dynamics in a figure-8 ring, and the EDM measurement method can be analogous. For the NICA collider, this estimate gives a limiting precision of the EDM measurement of G_E ∼ 10⁻⁴ for protons and G_E ∼ 10⁻⁵ for deuterons. These calculations assume that ∆p/p = 3·10⁻³ and α_s = 0.1 and that the EDM measurement takes place in the region of the interference maximum k_E (∼100 for protons and ∼1 for deuterons) and minimum ω_emitt (∼10⁻⁵ for protons and ∼10⁻⁸ for deuterons).
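To see how these order-of-magnitude estimates relate to the quoted parameters, the short sketch below evaluates the simple scaling relation G_E,min ≈ ω_emitt/(α_s k_E ∆p/p). This closed form is an assumption made here purely for illustration and merely reproduces the orders of magnitude quoted above for the JLEIC and NICA cases; it is not the paper's own limiting-precision formula.

def limiting_G_E(omega_emitt, k_E, dp_over_p, alpha_s=0.1):
    # Assumed scaling: incoherent spin-field floor divided by the
    # interference enhancement k_E, the momentum spread and the rotation angle.
    return omega_emitt / (alpha_s * k_E * dp_over_p)

cases = {
    "JLEIC proton":   dict(omega_emitt=1e-6, k_E=1e3, dp_over_p=1e-3),
    "JLEIC deuteron": dict(omega_emitt=1e-9, k_E=10,  dp_over_p=1e-3),
    "NICA proton":    dict(omega_emitt=1e-5, k_E=100, dp_over_p=3e-3),
    "NICA deuteron":  dict(omega_emitt=1e-8, k_E=1,   dp_over_p=3e-3),
}
for name, pars in cases.items():
    print(f"{name}: G_E_min ~ {limiting_G_E(**pars):.1e}")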
An additional issue with the ST mode at the NICA collider is the necessity to account for a strong betatron oscillation coupling and alignment errors of strong snake solenoids.
Further enhancement of the limiting EDM measurement precision
To further enhance the limiting precision of the EDM measurement, one must complete a detailed analysis of:
• the system for compensation of the coherent part of the spin resonance strength, which has all three components, using control solenoids (3D spin rotators);
• the system for compensation of the incoherent part of the spin resonance strength, which is determined by beam emittances;
• the impact of the betatron oscillation coupling on the spin dynamics in the ST mode;
• the contribution of higher-order orbital and spin resonances;
• the collider optics allowing one to increase the interference maxima of the EDM signal;
• the EDM signal measurement methods;
• the beam polarization measurement methods.
Conclusion
In conclusion, let us summarize the main results presented in this paper.
• A theoretical analysis of the spin dynamics of a particle with the MDM and EDM in ST mode rings is completed in the linear approximation.
• It is proposed to measure the EDM at the JLEIC and NICA colliders using the method based on the interference enhancement of the EDM signal in the whole energy range using only the magnetic fields of the ST mode ring.
• Estimates of the EDM signal in the ST modes of the JLEIC and NICA colliders are presented demonstrating the possibility of measuring the EDM of proton and, especially, deuteron beams.
• Questions are formulated about the study of further enhancement of the limiting precision of the EDM measurement in ST mode rings.
A systematic framework for urban smart transportation towards traffic management and parking
Considering the wide applications of big data in transportation, machine learning and mobile internet technology, artificial intelligence (AI) has largely empowered transportation systems. Many traditional transportation planning and management methods have been improved or replaced with smart transportation systems. Hence, considering the challenges posed by the rising demand for parking spaces, traffic flow and real-time operational management in urban areas, adopting artificial intelligence technologies is crucial. This study aimed to establish a systematic framework for representative transportation scenarios and design practical application schemes. This study begins by reviewing the development history of smart parking systems, roads and transportation management systems. Then, examples of their typical application scenarios are presented. Second, we identified several traffic problems and proposed solutions in terms of a single parking station, routes and traffic networks for an entire area based on a case study of a smart transportation systematic framework in the Xizhang District of Wuxi City. Then, we proposed a smart transportation system based on smart parking, roads and transportation management in urban areas. Finally, by analyzing these application scenarios, we analyzed and predicted the development directions of smart transportation in the fields of smart parking, roads and transportation management systems.
Smart transportation
Smart transportation mainly relies on big data from mobile Internet and Internet-of-Things (IoT) technologies, providing answers to four typical questions about travelers: who they are, where they are, when they travel and what they have done [1]. Thus, these systems can precisely track individual characteristics and activity patterns of travelers. Using machine learning and other artificial intelligence (AI) technologies, smart transportation can mine and analyze big data on transportation, build a picture of individual trips, realize comprehensive data integration in the transportation field and support traffic planning and management [2-5].
Since the start of the 21st century, governments worldwide have been progressively promoting research on, and the transformation of, intelligent transportation systems towards smart transportation. In recent years, a series of innovative technologies, such as big data, cloud computing, IoT and mobile communication, have pushed intelligent transportation systems to the new development stage of smart transportation [6-12]. Studies have shown that smart transportation was first proposed in developed countries. For example, the United States formulated a two-stage development strategy: the first stage realized the goal of building a multi-mode transportation system nationwide in 2014, emphasizing vehicle, infrastructure and passenger interconnection and vehicle-to-vehicle interconnection; the second stage focuses on the construction goal of full life-cycle development of smart transportation, emphasizing emerging technologies, network security, data access and exchange and autonomous driving [13,14]. Since 2013, Japan has focused on developing autonomous driving and the Internet-of-Vehicles (IoV) to realize a highly informatized traffic system. Furthermore, in 2019, Japan issued the ITS Handbook, outlining a strategic plan and development trends for autonomous driving and vehicular collaboration; this plan includes the application of ETC2.0 technology for smart roads [15]. In 2020, to realize smart travel, Europe published a sustainable and intelligent transportation strategy for promoting the development of vehicle navigation, smart parking, car sharing and driving assistance [16-18].
Smart transportation systems are widely accepted by many local governments in China, and several key technologies have been applied in many cities.For instance, Hangzhou created an urban brain in 2016 by integrating big data, cloud computing, machine learning and other innovative methods.
These technologies can significantly improve traffic control and the quality of service [19,20].In 2018, Beijing established a smart traffic management system for subcenters.The aim is to offer refined and smart traffic management methods [21].In 2019, Shanghai constructed a smart traffic management system empowered by smart public security, thus enabling governance approaches for several transportation-related departments, including police, management and construction [22,23].
In addition, to promote the development of smart transportation, transportation network companies (TNCs) have developed information system platforms such as the urban brain system proposed by Alibaba, Tencent and Huawei [24].A typical example is the Apollo program launched by Baidu, which provides solutions for intelligent transportation, autonomous driving and autonomous vehicles [25].Smart transportation products and system functions offer many advantages for developing transportation systems.More comprehensive applications, wider monitoring areas, more precise supervision, quicker construction and more effective management are all benefits of these new technologies.However, there are still several issues with the top-level design, including standard specifications and system designs [26].
Key applications
This study focuses on studying smart parking, smart roads and smart management systems to address the problems caused by smart transportation development.This study builds a systematic framework of smart transportation and realizes an organic combination of these three aspects.
Communication technologies allow smart parking systems to collect, manage and analyze massive amounts of data on parking spaces in real time.In a smart parking system, high-quality services are provided, including mobile terminal searches, suggestions, reservations and navigation [27].Therefore, smart parking offers multiple benefits, such as increasing the rate at which available parking spaces are utilized, optimizing parking service levels, reducing instances of unlawful parking and resolving parking issues in metropolitan areas [28].A smart parking system is the key link of an intelligent transportation system, and IoT is at the core of its development [29].
A smart road collects and analyzes transportation data using roadside devices, such as radar, cameras and smart street lighting, and it monitors traffic flow and road hazards in real time.Hence, a smart road can be considered a combination of smart pavements, smart intersections and smart road perception systems that can perceive all objects.Through the processing and publishing of transportation information, a smart road can realize road operation status perception, traffic guidance, vehicle tracking, flow warning, pedestrian warning and rapid emergency response.Finally, it provides important support for emerging vehicle-road cooperation.
Smart transportation management systems promote the application of urban transportation management and control technologies that rely on artificial intelligence methods and fully integrate information, communication and software technologies. The aim is to enhance the traditional method of supply-side management to meet travel demand, improve the quality of service and adopt intelligent methods to achieve a balance of supply and demand, eventually accomplishing the goals of ensuring safety, improving efficiency, improving the environment and saving energy.
Challenges and contributions
Table 1 summarizes the current status of the smart transportation literature. We found that few studies have fully considered all three research areas or proposed a framework for a smart transportation construction system. Therefore, to develop an entire ecosystem, in this study we present the concept of a systematic smart transportation framework that spans the construction of all three fields.
Although artificial intelligence technology has been applied in some typical transportation engineering scenarios, it still faces major technical challenges: the current level of artificial intelligence technology struggles to meet the needs of smart transportation development, the reliability of smart transportation systems still needs to be verified, and the regulations and standards for intelligent transportation systems have yet to be established and improved.
This paper aims to solve a series of traffic problems in the Xizhang area by considering smart parking, smart roads and smart transportation management systems within a single framework. The main contributions of the paper are as follows.
1) This paper proposes a new concept for a smart transportation systematic framework that relies on artificial intelligence methods, and it realizes the organic integration of smart parking, smart roads and smart transportation management systems.
2) The smart parking system proposed in this paper uses multiscale parking warning technology to realize parking warnings, parking management and control, and parking guidance in the Xizhang area. The smart road system can use radar-vision fusion machines to obtain the real-time traffic flow status and predict parking demand, parking duration and the number of empty parking spaces.
3) The smart transportation management system combines the smart parking and road systems. It can predict road congestion situations based on the analysis of massive transportation datasets and achieve smart parking control based on multiscale parking warnings.

The paper is structured as follows. Section 2 introduces the design of the framework for smart transportation. In Section 3, the problem statement and sub-system applications are presented. Section 4 includes remarks on our work. Section 5 concludes our work.
Design of the framework for smart transportation
This study proposes a new concept for a smart transportation systematic framework, as shown in Figure 1, to achieve an organic integration of smart parking, smart roads and smart transportation management systems. A full-range transport ecosystem will be established to improve the quality of travel services in urban areas, based on new technologies including transportation big data, cloud computing, cloud-edge collaboration and 5G communication. The goal of systematic smart transportation framework construction is to create a new pattern of regional smart transportation development. It primarily provides services to governments, enterprises and individual travelers. Its primary functions cover eight aspects: more scientific government management and decision making, wiser traffic management, people-oriented transportation services, comprehensive transportation services, more intelligent transportation infrastructure and equipment, sharing of traffic information, accurate perception of traffic information and more effective enterprise organization and operation.

The systematic framework is focused on parking, road and transportation management system scenarios. To build it, three essential procedures must be followed: (i) it is important to develop data-sharing platforms and industry standards, which play an important role in developing big data transportation platforms and standard designs that cover the entire smart transportation system; (ii) it is necessary to improve the smart perception ability of transportation networks: the realization of all-object perception is the primary goal, including applying roadside perception equipment for smart intersections, smart road facilities and other all-object perception, together with full time-space perception of the state of people, complex traffic flows, parking demand and environmental information; moreover, based on full-granularity video acquisition and wide-area radar detection, lighting and vision can be integrated, and a smart transportation management system based on transportation analysis of the entire space-time road network can then be established; (iii) by relying on important support from the smart parking and road systems, the smart transportation management system is enriched.
Smart parking system
Multi-scale parking warnings, which serve as the basis for smart parking systems, are based on single-point prediction and a group clustering algorithm [51,52]. Smart parking management therefore relies on two basic algorithms: regional parking situation analysis and occupancy prediction for a single parking space. Analyzing the historical time series of parking space data is the first stage in implementing smart parking: to estimate the mass parking situation, a short-term prediction of the occupancy rate of a single parking spot is first constructed before expanding the scope to include all parking spaces in an area. Based on the parking occupancy prediction results, the second step is to integrate the spatial location of each parking space for three-dimensional spatial clustering. The last step involves conducting a multi-level parking warning from the viewpoint of the point (parking space), line (road) and network (area), based on the single-point prediction and spatial clustering results. In this way, areas with a prominent contradiction between parking space supply and demand can be identified, warnings of unusual events can be provided, and the rapid calculation of parking guidance can be realized.
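As a rough illustration of this pipeline, the sketch below chains a single-point occupancy prediction, three-dimensional spatial clustering and a two-level warning check. The toy data, the exponential-smoothing predictor, the KMeans clustering and all thresholds are assumptions made for the example; the paper does not prescribe these particular algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy history: occupancy rate (0-1) of 200 parking spaces over 96 fifteen-minute slots.
history = rng.uniform(0.2, 1.0, size=(200, 96))
xy = rng.uniform(0, 1000, size=(200, 2))             # space coordinates in metres

# Step 1 - single-point prediction: exponential smoothing of each space's series.
alpha = 0.3
pred = history[:, 0].copy()
for t in range(1, history.shape[1]):
    pred = alpha * history[:, t] + (1 - alpha) * pred  # predicted next-slot occupancy

# Step 2 - three-dimensional spatial clustering on (x, y, predicted occupancy).
features = np.column_stack([xy, pred * 500])          # crude scaling of occupancy
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

# Step 3 - multi-scale warning: flag individual spaces, then whole clusters.
space_warning = pred > 0.90                           # point (parking space) level
for k in range(5):
    mean_occ = pred[labels == k].mean()
    if mean_occ > 0.85:                               # area (cluster) level
        print(f"area warning: cluster {k}, mean predicted occupancy {mean_occ:.2f}")
print(f"{space_warning.sum()} individual spaces flagged")
```

In a deployed system, the single-point predictor and the clustering method would be replaced by whatever models the platform actually uses; only the point-line-area structure of the warning is taken from the description above.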
Smart roads in urban areas
It is important to promote smart road construction in urban areas and adopt radar-vision fusion machines. Compared with video flow detectors and lidar, a radar-vision fusion machine has the advantages of all-weather, wide-coverage, high-precision, multi-function and low-cost applications. The main operational process includes collecting data, sending information to an edge cloud platform, conducting fusion computing, commanding decisions and sending decisions back to the front-end display device. Finally, it can realize the functions of smart transportation timing and guidance [53].
Smart transportation management system
Based on machine learning methods and big data transportation technologies, we can accurately predict the traffic flow of a road network and then conduct transportation management [54,55]. The main methods are to realize the encapsulation of an automatic large-scale complex road network based on GIS, OSM and XML. Important tasks are attaching rich traffic attributes, including road speed limitations, length and grade information, identifying the transportation operation status of each area of the road network in grid form and evaluating road network traffic efficiency.
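A minimal sketch of the grid-form status evaluation is given below: each road segment carries attached attributes (length, speed limit) and an observed speed, and a length-weighted speed ratio is aggregated per grid cell as a simple congestion index. The table contents, column names and the index definition are illustrative assumptions rather than the system's actual implementation.

```python
import pandas as pd

# Toy road segments, each assigned to a grid cell of the road network.
segments = pd.DataFrame({
    "grid_id":     ["A1", "A1", "A2", "A2", "B1"],
    "length_km":   [0.8, 1.2, 0.5, 0.9, 1.1],
    "speed_limit": [60, 60, 40, 50, 60],   # km/h, attached road attribute
    "obs_speed":   [22, 35, 30, 12, 55],   # km/h, from detectors
})

# Ratio of observed speed to speed limit; lower means more congested.
segments["ratio"] = segments["obs_speed"] / segments["speed_limit"]

# Length-weighted congestion index per grid cell.
weighted = segments["ratio"] * segments["length_km"]
grid_index = weighted.groupby(segments["grid_id"]).sum() / \
             segments.groupby("grid_id")["length_km"].sum()
print(grid_index.sort_values())   # most congested grid cells first
```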
Problem statement and sub-system applications
The Xizhang area is an important subcenter of Wuxi City located at the junction of the Huishan District and the Xishan District. This area was selected for the case study because of its high travel demand and intricate traffic management. Current transportation management measures face many difficulties in providing wide-range, high-precision, low-latency and quick-response transportation services. Considering the problem of increasing travel demand, a systematic transportation framework must be established for the Xizhang area. This can fully realize smart parking, road and transportation management system applications, as shown in Figure 2.

The development of the Xizhang area is limited by a series of parking problems, including serious illegal parking, randomly occupied road resources and inefficient operations. The main reasons are scarce land resources and insufficient parking space. At present, parking guidance in the surrounding area, and even in the entire Xizhang area, relies on traditional static road signs and simple electronic signs. This can only realize guidance in the direction dimension and does not take into account the relationship between the supply and demand of parking spaces, because smart parking guidance is lacking. Recent data show that the Xizhang area has approximately 3400 parking spaces. However, the demand for parking spaces on a typical working day is about 3300, on weekends it is over 5000, and on a day with large events it reaches 7000. Hence, it is evident that the number of parking spaces can only meet the demand on working days.
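The supply-demand gap implied by these figures can be made explicit with a trivial calculation; the sketch below simply restates the numbers quoted above.

```python
supply = 3400   # approximate parking spaces in the Xizhang area
demand = {"working day": 3300, "weekend": 5000, "large-event day": 7000}

for day, d in demand.items():
    shortfall = max(d - supply, 0)
    print(f"{day}: demand {d}, shortfall {shortfall} spaces ({d / supply:.0%} of supply)")
```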
Road and intersection problem
There are many abnormal intersections in the Xizhang area, such as the Tianyi Road-Fengxiang Road intersection, as shown in Figure 3. Tianyi Road is the main road to the Tianyi Primary School. However, no railings separate the entrance and exit of the parking space for this school, and pedestrians and non-motor vehicles must cross the street, which leads to a queue of vehicles on Tianyi Road. Because vehicles waiting at the west exit road cannot pass quickly, traffic congestion at this intersection increases further. In addition, there is only one straight-through lane at the western entrance of the intersection, which easily leads to a long queue of vehicles and more serious road congestion. Moreover, vehicles entering and leaving from branch roads significantly affect the passage of vehicles at the intersection.
Transportation management system problem
With the rapid development of road networks in the Xizhang area, traffic demand and infrastructure construction have increased, so it is important to build a smart transportation management system to address these growing traffic problems. At present, two major problems exist. First, current traffic monitoring methods are outdated. Current technologies rely only on cameras for monitoring, which cannot collect real-time traffic flow information and cannot help ease traffic congestion quickly. Additionally, standard text alarm information can only represent simple facts and has a restricted range of expression. Hence, a smart transportation management system is required for traffic flow monitoring, congestion identification and emergency handling. Second, information- and data-dispersion problems exist. There are many applications and sources of transportation information in current transportation management systems, including mobile phone signaling data, video data and checkpoint (bayonet) camera data. Consequently, it is challenging to make accurate decisions.
Smart parking application
A smart parking system can identify areas with demand and supply imbalance problems using multiscale parking warning technology. Early warning information on unusual stations, streets and regions of parking spaces is then provided, which enables rapid calculation of parking guidance for vehicles.
To achieve smart parking, three important steps must be followed. First, we build a smart IoT front-end device layer. Parking space detectors based on the principle of geomagnetic induction are used for data collection and are installed at the entrances and exits. The obtained parking data are transmitted to the control center through 5th-generation (5G) mobile communication technology. The main aim is to make single-point predictions of parking demand, parking duration and vacant parking spaces by integrating the total-factor information from front-end perception and control equipment in the various parking lots, together with real-time information on parking demand, the number of parking spaces, parking times, parking duration and other key indicators. Second, group clustering is conducted based on location and occupancy rate, which identifies areas with a prominent contradiction between parking space supply and demand, and parking guidance information is provided to users via mobile and vehicle-mounted information-display terminals. Parking-related services include parking searches and recommendations, parking reservations, integrated parking navigation and indoor navigation. Finally, variable electronic information signage is adopted to display real-time information on parking spaces for parking guidance. This smart parking system can realize the following functions to address the parking problem in the Xizhang area.
1) Multiscale parking warning: There is a huge demand for parking spaces in the Xizhang area, but the utilization rate of each parking space is low because of weak information sharing. Hence, it is necessary to realize multiscale warnings at the level of points, lines and planes. We consider the parking lots of the outlets as points, roads such as Tianyi Road and North Ring Road as lines, and the Xizhang area around the outlets as the plane. As shown in Figure 4, a shortage in the parking space supply at stations, in-road illegal parking problems and supply-demand imbalance problems in the region are identified. The smart parking system can then identify key areas with serious demand and supply imbalances, provide data support for smart parking guidance and conduct exact parking management.

2) Parking management and control: Based on the results of the multi-scale parking warning and identification, the smart parking system can identify real-time in-road parking and illegal parking problems and then perform parking management and control, as shown in Figure 5. First, single-point prediction for in-road parking identifies roads with major in-road parking issues, and electronic signage indicates places where in-road parking is serious; smart parking control is then implemented to inform drivers about these locations. Second, to address illegal parking, a spatial clustering method identifies high-probability areas, electronic signs forbidding parking are installed there, and traffic is guided to low-demand parking spots. These single-point prediction results can be transmitted to a smart parking control center in real time, so the traffic management department can deal with illegal parking and reduce its impact.

3) Parking guidance: A smart parking system can guide cars to recommended parking spaces in areas with significant supply-demand imbalance issues. This function is based on the single-point prediction results, the variable electronic signage setup, and real-time and predicted information on empty parking spaces, ultimately generating the optimal recommendation scheme (a simple sketch of this recommendation step is given after this list). As shown in Figures 6 and 7, multi-level smart parking guidance can optimize the utilization rate of each outlet parking space, reduce the time users spend searching for empty parking spaces and ultimately address the parking difficulty problem in the outlet area.
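The sketch below illustrates the guidance step only: among lots predicted to have free spaces, recommend the one closest to the vehicle. The lot names, coordinates, predicted vacancies and the minimum-vacancy threshold are all invented for the example and are not taken from the deployed system.

```python
import math

# Hypothetical parking lots with predicted free spaces.
lots = [
    {"name": "Outlet P1",     "xy": (120.0, 80.0),  "predicted_free": 0},
    {"name": "Outlet P2",     "xy": (400.0, 150.0), "predicted_free": 35},
    {"name": "North Ring P3", "xy": (650.0, 90.0),  "predicted_free": 120},
]

def recommend(vehicle_xy, lots, min_free=5):
    """Return the closest lot predicted to have at least `min_free` empty spaces."""
    candidates = [lot for lot in lots if lot["predicted_free"] >= min_free]
    return min(candidates, key=lambda lot: math.dist(vehicle_xy, lot["xy"]), default=None)

print(recommend((100.0, 100.0), lots))   # -> Outlet P2: nearest lot with free spaces
```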
Smart road application
In this scheme, radar-vision fusion machines are placed at intersections in the Xizhang area. The main methods are obtaining the real-time traffic flow status and predicting parking demand, parking duration and the number of empty parking spaces. Furthermore, roadside parking space data and parking station information data are collected using flow detection sensors and parking detectors based on the principle of geomagnetic induction. Figure 8 shows a smart road construction of Tianyi Road based on radar-vision fusion machines. The smart road system serves smart intersection and smart road scenarios.

1) Variable electronic signage is installed at the exit and entrance of the parking station at Tianyi Primary School. Based on the detection results of the radar-vision fusion machine, real-time control information is sent to the display device via an edge cloud platform, which displays motor vehicle priority or non-motor vehicle priority, as shown in Figure 9.

2) Smart road systems can control the signal timing at intersections in real time for different traffic flow scenarios. When the detection result of a radar-vision fusion machine indicates that the traffic flow on the main road at an intersection is large, the edge cloud platform adjusts the signal timing according to the real-time traffic demand and appropriately increases the red-light duration of the branch road to improve traffic efficiency. As shown in Figures 10 and 11, when the detection results indicate that the traffic flow on the main road at an intersection is small, the edge cloud platform adjusts the signal timing and appropriately reduces the red-light duration of the branch road to improve traffic efficiency (a toy version of this rule is sketched after this list).
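A toy version of the signal-timing rule in item 2 is sketched below: the branch-road red time is lengthened or shortened within bounds according to the detected main-road flow. The flow thresholds, timing bounds and step size are illustrative assumptions; an actual controller would derive its timings from the edge cloud platform's optimization rather than a fixed rule.

```python
def branch_red_duration(main_flow_vph, base_red=40, step=10,
                        high_flow=900, low_flow=300, min_red=20, max_red=70):
    """Adjust the branch-road red time (seconds) from detected main-road flow (veh/h)."""
    if main_flow_vph >= high_flow:            # heavy main-road traffic
        return min(base_red + step, max_red)  # hold the branch road longer
    if main_flow_vph <= low_flow:             # light main-road traffic
        return max(base_red - step, min_red)  # release the branch road sooner
    return base_red                           # otherwise keep the base timing plan

for flow in (1200, 600, 150):
    print(f"{flow} veh/h -> branch red = {branch_red_duration(flow)} s")
```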
Smart transportation management application
A smart transportation management system combines the smart parking and road systems, as shown in Figure 12. It provides real-time prediction of road congestion and multi-scale parking warnings based on multi-source heterogeneous transportation big data. Different from a traditional intelligent transportation system, the system proposed in this paper is supported by multi-source traffic big data and information collected at all levels of the city, and it performs real-time calculations. This requires high data quality and a more advanced algorithm design than existing systems. A smart transportation management system can predict road congestion situations based on the analysis of massive transportation datasets. Such a system can comprehensively perceive the human-vehicle-road-environment and provide rich traffic index data. By integrating traffic data of the road network with information on congestion caused by traffic accidents, intersection deadlocks, traffic overflow and other unusual events, and then adopting information on danger points and accident points reported by travelers, it can accurately discover the source of congestion and predict the evolution trend of unusual congestion. Furthermore, the system will recommend reasonable travel routes to the public after determining the degree of road congestion by analyzing the road traffic situation, average speed and congested mileage.
A smart transportation management system can achieve smart parking control based on multiscale parking warnings. It integrates parking space resources across a city through edge computing, AI algorithms, intelligent maps and time-space big data technologies. First, it conducts single-point predictions of parking demand, parking duration, empty parking spaces and parking space information at stations and on the roadside, and then combines these with spatial locations to create three-dimensional spatial clusters. Finally, it extends this model to the entire parking space in the region to make mass parking situation predictions, aiming to provide a convenient, efficient and comprehensively supervised city-level solution to parking problems. Hence, a smart transportation management system can improve the intelligence level of parking resources and enhance the quality of parking services.
Remark
The smart transportation system of Xizhang provides a rich experience for the construction of a safe, green and smart systematic transportation framework. It builds a smart parking system based on multiscale parking warnings, a smart road system in an urban area based on multisource data fusion and a smart transportation management system based on real-time big data. It also integrates the smart parking and road systems into the smart transportation management system, thereby contributing to building an integrated transportation ecosystem. The proposed framework has strong practicability and extendibility and can be divided into several subsystems. For new application scenarios, corresponding intelligent transportation systems can easily be built by inputting the corresponding data.
Single-point prediction and group clustering are at the core of smart parking system construction. Emerging technologies for edge computing, AI algorithms, AI maps and spatiotemporal big data provide technical support for smart parking systems. In the near future, a smart parking data map could be constructed by integrating the various parking resources of a city, from which optimal solutions for parking space resource allocation can be obtained.
With radar-vision fusion machine applications and the support of smart transportation management systems, a smart road can provide holographic awareness of all objects, across the full time-space domain and at all scales. In a smart road system, information and sensor technologies can provide a high-speed and reliable communication network, which can ensure the future integration of cloud-edge-end collaborative management of smart roads. Moreover, these technologies will provide support for building a smart road system with a high degree of data sharing.
In the future, smart transportation management systems can bridge the gap between government administration and new technologies and then promote transportation big data sharing. A series of innovative technologies, including 5G, cloud computing and machine learning, can enable scientific, standardized and refined transportation management. Proactive, accurate and timely information publishing can improve the utilization rate of urban transportation big data and increase the traffic capacity of roads.
Conclusions
This paper reviews the development status of smart transportation system construction and identifies three research directions: smart parking, smart roads and smart transportation management systems. The aim is to overcome the drawbacks of the current development of transportation systems. The application scenarios and key technologies of the three application directions are highlighted, and a case study is presented based on current transportation problems in the Xizhang area.
The case study shows that a series of traffic problems exist in the Xizhang area, which can be addressed by adopting a smart transportation system. These problems, such as the imbalance between parking space demand and supply, illegal parking, unreasonable road and intersection designs and low levels of service, hinder rapid and smooth transportation operations. Moreover, the following issues also affect transportation planning and management: superfluous transportation equipment, outdated traffic flow monitoring technologies and scattered multi-source heterogeneous data. Hence, given the existing problems of parking difficulties, traffic congestion and outdated management technologies in the Xizhang area, a framework for smart transportation construction based on smart parking, roads and transportation management systems in urban areas is proposed in this study. This can improve the level of transportation services in the Xizhang area.
Herein, a research framework is proposed for smart transportation systems, but the theoretical research should be expanded in future studies. The limitations are as follows. This paper has not considered the challenges and difficulties of applying emerging technologies in the Xizhang area, and how to combine innovative technologies and equipment has not been adequately addressed. In addition, the optimality of the proposed framework cannot be guaranteed, so demonstration applications should be carried out around Tianyi Road to verify the practicability of the smart systematic transportation framework, and more analyses before and after case applications should be conducted as a next step.
Figure 1. The smart transportation systematic framework.
Figure 3. Complex traffic organization problem at Tianyi Road.
Figure 5. Parking management and control.
Figure 7. Vehicle guidance for entering parking spaces.
Figure 12. Framework of a smart transportation management system.
Table 1. Literature of related studies.
Isolating Biomarkers for Symptomatic States: Considering Symptom-Substrate Chronometry
A long-standing goal of psychopathology research is to develop objective markers of symptomatic states, yet progress has been far slower than expected. While prior reviews have attributed this state of affairs to diagnostic heterogeneity, symptom comorbidity, and phenotypic complexity, little attention has been paid to the implications of intra-individual symptom dynamics and inter-relatedness for biomarker study designs. In this critical review, we consider the impact of short-term symptom fluctuations on widely-used study designs that regress the “average level” of a given symptom against biological data collected at a single time-point, and summarize findings from ambulatory assessment studies suggesting that such designs may be sub-optimal to detect symptom-substrate relationships. While such designs play a crucial role in advancing our understanding of biological substrates related to more stable, longer-term changes (e.g., grey matter thinning during a depressive episode), they may be less optimal for the detection of symptoms that exhibit high-frequency fluctuations, are susceptible to common reporting biases, or may be heavily influenced by the presence of other symptoms. We propose that a greater emphasis on intra-individual symptom chronometry may be useful for identifying subgroups of patients with common, proximal pathological indicators. Taken together, these three recent developments in the areas of symptom conceptualization and measurement raise important considerations for future studies attempting to identify reliable biomarkers in psychiatry.
These terms resulted in an initial list of 477 articles. Of these 477, studies were included if they featured one or more self-completed assessments per day, lasted longer than four days, and included group average SDs of the individual EMA measures, and group SDs of the individual SDs, as measures of variability. These criteria resulted in 162 studies. Studies were further excluded if they did not include an AA protocol, if they did not measure an affect-related symptom, or if the authors measured the effect of a specific variable on mood, or only reported on the feasibility of a new technology or compliance rates in a specific population. Further, studies that did not involve a healthy control population or a target psychological patient population were excluded, as well as those where participants did not complete the assessments for themselves. Lastly, studies published before 2000, unavailable in English, or with outdated author contact information were excluded.
In cases where studies were missing some of the required descriptive statistics, study authors were contacted by email. Ultimately, we were able to use data from a total of 49 unique studies in total, representing data from a combined total of 9,628 unique healthy/low-symptom controls, and 2,815 psychiatric patients. A summary of studies and sample characteristics is provided in Table S1.
Within-Subject Variability Estimates
All included studies were selected based on the requirement that they provided descriptive statistics for both the mean level of positive or negative affect (or both) as well as mean within-subject variability for each group (psychiatric patients and/or healthy or low-risk controls). To compare group-average within-subject variability across studies, we first looked at the ratio of average within-subject variability to average mean level for a given group (patients or controls/low symptom group). This provided an index of the proportion of mean-level affect (for a given group/rating instrument) that could be expected to vary within subjects over time. The distribution of within-subject variability across studies was non-normal, and non-parametric tests were used for the analyses described below.
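For concreteness, the index can be computed as in the short sketch below; the ratings matrix is invented and stands in for one group's repeated affect ratings from an ambulatory assessment protocol.

```python
import numpy as np

# Each row: one participant's repeated negative-affect ratings.
ratings = np.array([
    [2.1, 2.8, 3.0, 2.4, 2.9],
    [4.0, 3.2, 3.8, 4.5, 3.9],
    [1.5, 1.9, 2.2, 1.7, 2.0],
])

within_sd = ratings.std(axis=1, ddof=1)     # within-subject SD per participant
index = within_sd.mean() / ratings.mean()   # group-average WS variability / group mean level
print(f"within-subject variability index = {index:.2f}")
```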
Relationships between Within-Subject Variability and Study Design
One important consideration for our estimate of within-subject variability is the extent to which affective variability measurement may be influenced by study design. To assess this, we looked at non-parametric associations between within-subject variability and study design characteristics. Because AA studies can differ in their timing of data collection (e.g., at fixed intervals during the day or in response to random signals sent to a portable collection device), we were also interested to see whether this appeared to have any effect on estimates of within-subject variability. Using Mann-Whitney tests, we did not find any evidence that fixed vs. random signals influenced within-subject variability for PA (p = 0.428). For measures of NA, within-subject variability was slightly higher for studies using random (M = 0.37) as compared to fixed (M = 0.34; Mann-Whitney p = 0.0486) modes of data collection.
Taken together, these data suggest that our estimates of within-subject variability were not heavily influenced by basic factors surrounding study design.
Relationships between Within-Subject Variability and Patients vs. Controls
Using the within-subject variability index described above, we compared healthy control/low-symptom groups to patient groups using a Mann-Whitney test. Variability was significantly higher for PA in patients than controls (p = 0.003), but not for NA (p = 0.785). For the ratio of within-subject to between-subject variability, there were no significant group differences between patients and controls for either PA (p = 0.98) or NA (p = 0.24).
Lilac and honeysuckle phenology data 1956–2014
The dataset is comprised of leafing and flowering data collected across the continental United States from 1956 to 2014 for purple common lilac (Syringa vulgaris), a cloned lilac cultivar (S. x chinensis ‘Red Rothomagensis’) and two cloned honeysuckle cultivars (Lonicera tatarica ‘Arnold Red’ and L. korolkowii ‘Zabeli’). Applications of this observational dataset range from detecting regional weather patterns to understanding the impacts of global climate change on the onset of spring at the national scale. While minor changes in methods have occurred over time, and some documentation is lacking, outlier analyses identified fewer than 3% of records as unusually early or late. Lilac and honeysuckle phenology data have proven robust in both model development and climatic research.
Background & Summary
Although phenology is now understood to be a key indicator of climate change impacts 1-3 , large-scale, coordinated phenological monitoring of lilac and honeysuckle-which respond predictably to air temperature and accumulated heat in a regionally coherent pattern-was initiated in the United States to supplement the use of weather observations in agricultural forecasts. In the western United States, monitoring of purple common lilac (Syringa vulgaris) was initiated in the late 1950's, and monitoring of two cloned honeysuckle cultivars (Lonicera tatarica 'Arnold Red' and L. korolkowii 'Zabeli') was initiated in the late 1960s 4 . The effort was replicated in the eastern US in the early 1960's, but included a cloned lilac cultivar (S. x chinensis 'Red Rothomagensis') instead of the common lilac 3 .
The western program ended in the mid-1990s, except for a few dozen sites reactivated in 1997 (these data are also available at http://meteora.ucsd.edu/cap/lilac.html#Clim) 5 . The eastern network was terminated in 1986, re-initiated in 1988 and then expanded into a broader nationwide online phenology monitoring effort in 2009 (ref. 3).
The resulting dataset has been used for applications far beyond the original vision, from understanding vegetation feedbacks to climate 6 , to measuring large-scale variations in plant climatic adaptation 7 . The Extended Spring Indices 8-10 , a set of bioclimatic models based on these lilac and honeysuckle data, have been used to calibrate remote sensing imagery 11 and to advance our understanding of the effects of climate variability and change on spatial and temporal variations in spring onset in the U.S. [12][13][14] .
The observational dataset is unique in both its geographic and temporal coverage, and has considerable potential to support additional research and applications. Beginning in 2009, data collection on common and cloned lilacs continued alongside new data collection for hundreds of native species following very similar protocols 15 . The deep temporal record of the lilac and honeysuckle dataset has the potential to extend our understanding of trends among native species 9 . Long-term, continental-scale datasets are both rare and critically important to understand the causes and consequences of changing phenologies among cloned plants, ornamentals and native species [16][17][18] .
Methods
In the historic Eastern 19 and Western 20 programs, participants were directed to plant new lilac or honeysuckle clones, and/or to observe established common purple lilac shrubs in unshaded, flat, convenient locations, away from roads and away from microclimatic pockets (e.g., cold air drainages). A single clone line for lilacs (Syringa x chinensis 'Red Rothomagensis') and two clone lines for honeysuckles (Lonicera tatarica 'Arnold Red' and L. korolkowii 'Zabeli') were planted and monitored throughout the period of record. Observers were asked to record on paper cards the dates that each of five phenological events, such as first leaf or full bloom, occurred on each of their selected or planted lilac or honeysuckle individuals (Table 1). Early program coordinators noted that in some cases all individuals of a species at a site were aggregated for reporting (i.e., the observer reported a single date representing an 'average' for all individuals at the site for each phenological event); however, there is no documentation to indicate for which sites this occurred.
After all five phenological events had occurred for the year, observers mailed the cards to the program coordinators. Data were collated on paper by program coordinators, and isobar maps of spring arrival were developed and shared with observers (e.g., Fig. 1) 21,22 . The duration of record at each site is mapped in Fig. 2. The Western and Eastern programs differed in number of active sites (those with at least one observation record) by decade, as shown in Fig. 3.
The data were digitized and curated by Mark D. Schwartz, University of Wisconsin-Milwaukee, until the establishment of the USA National Phenology Network (USA-NPN) and the launch of its broader native plant monitoring program, Nature's Notebook 3,23 . The remaining active lilac and honeysuckle observers and their observation sites were incorporated into this program in 2009.
In 2009, the lilac and honeysuckle protocols were changed from their original event-based format (e.g., 'What was the date of first bloom?') to a status-based format (e.g., 'Do you see open flowers today?') to be consistent with the new USA-NPN plant protocols 15 . The rationale for this change in approach is that recording of the phenophase status on each day that the plant is observed enables detection of repeated phenological events within a single year (e.g., a second flush of breaking leaf buds after a killing frost), as well as calculation of the uncertainty in the date of phenological events (e.g., open flowers were first observed on April 6, but the plant had not been checked since April 3) 15 . Definitions provided to observers for the five leaf and flower phenophases, along with their changes over time between 1956 and 2014, are in Table 1.
Nature's Notebook observers are still encouraged to plant and observe cloned and common lilacs as part of a campaign (https://www.usanpn.org/nn/campaigns), with electronic messages that provide real-time feedback on the progression of spring, information-rich species profile pages, and digital merit badges for tracking lilacs 23 . The cloned honeysuckle cultivars, however, are considered invasive and their planting is no longer encouraged.
Data Records
The lilac and honeysuckle phenological dataset is an Excel workbook, stored in the Dryad Digital Repository (Data Citation 1) and USGS ScienceBase (Data Citation 2). The first tab (observation_data) contains 116,662 records for the period 1956 to 2014 across the United States. Each record includes a unique identifier, a site identifier and geographical information (latitude, longitude and elevation), species name, individual plant identifier, phenophase identifier and description, date of onset for each phenophase, and quality control flags. The second tab (site_data) details the number of years in record, and years missing, by site.
For observational data prior to 2009, phenophase onset dates ('events') were the only records submitted for each of the five phenophases on each individual plant in a given year. Beginning in 2009, status records could be submitted for each phenophase on each individual plant 15 . For temporal consistency in this dataset, we converted status data for the period 2009-2014 to onset dates by using the date of the first 'Yes' record of the calendar year for each phenophase on each individual plant. Where one or more 'No' records precede this first 'Yes' record, the number of days since the last prior 'No' is also provided, from which the uncertainty in the onset date can be quantified. Because this conversion can only be applied to status data, this feature is available for roughly 5% of the records in the dataset.
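A hedged sketch of this conversion for a single plant-phenophase-year is shown below: take the first 'Yes' of the calendar year as the onset date and report the days since the last preceding 'No' as the uncertainty. The data frame and column names are assumptions for illustration, not the published schema.

```python
import pandas as pd

# Toy status records for one plant and one phenophase within a single year.
status = pd.DataFrame({
    "date":   pd.to_datetime(["2012-03-28", "2012-04-03", "2012-04-06", "2012-04-09"]),
    "status": ["No", "No", "Yes", "Yes"],
})

first_yes = status.loc[status["status"] == "Yes", "date"].min()
prior_no = status.loc[(status["status"] == "No") & (status["date"] < first_yes), "date"].max()
uncertainty_days = (first_yes - prior_no).days if pd.notna(prior_no) else None

print("onset date:", first_yes.date(), "| days since last prior 'No':", uncertainty_days)
```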
Although 2009 marked significant changes to the program (including the conversion from a paper to a digital database, and the conversion from 'event-based' to 'status-based' monitoring), the meaning of each phenophase definition remained unchanged from the Eastern program definitions, in terms of pinpointing an onset date. However, there are some differences to note between the Eastern and Western program definitions for a few phenophases. After 2009, observers were also able to provide additional information, including the absence of a phenophase prior to onset date (preceding 'No' records) and dates of phenophase presence after onset (subsequent 'Yes' records; e.g., indicating that open flowers were still present on the plant), as well as ancillary information about individual plants (e.g., shade status) and sites (e.g., degree of development). Descriptions of ancillary data are available at http://www.usanpn.org/files/shared/files/USA-NPN_suppl_info_tech_info_sheet_v1.0.pdf. Ancillary data and raw status data (for the period 2009-2014) are available at https://www.usanpn.org/results/data. Dataset identifiers are also provided to distinguish data from the Eastern, Western and Nature's Notebook sets. The geographic division between the Eastern and Western programs falls along the Great Plains states, with the Dakotas, Nebraska, Kansas and Oklahoma in the Eastern program, and Texas in the Western program.
Technical Validation
While information on monitoring frequency, observers and data management practices prior to 2009 is not available, the dataset is otherwise well-documented and has been proven reliable in several publications. Early work with the dataset explored the relationship between biological and climatological spring 6,24,25 . Further demonstration of the utility and validity of the lilac and honeysuckle dataset is found in the Spring Indices 8 . These models, developed using the dataset presented here, predict lilac leafing and blooming dates based on the strong relationship between day of year, heat accumulation, number of high-energy synoptic events and the timing of these phenological events 8 . Recent work has extended these indices across the continental United States, using both weather station data 9 and gridded climate products 26 , and found strong relationships between Spring Indices (SI) predictions and the timing of phenological events for native species and crops (Fig. 4) 9 .
Minor cleaning of the Eastern and Western program data was conducted by the authors in 2013, and included the removal of duplicate records (4 records), those with null location information (35 records), and those with phenophase onset dates reported in an implausible order (72 records). For the publication of this dataset, we additionally excluded onset dates where status records conflicted (21 records; i.e., a 'Yes' and a 'No' status were reported on the same day; these flagged records are available via download at https://www.usanpn.org/results/data). In the field, under natural or managed ecosystems and for a variety of plant species, it is possible to have multiple onsets of the same phenophase on an individual plant in a calendar year (e.g., a second flush of breaking leaf buds after a killing frost, or a second round of flowering in the fall). These multiple onsets are detected in the status data when the first series of consecutive 'Yes' records is concluded with at least one 'No' record, then a subsequent 'Yes' record is present. The second onset date is the date of this subsequent 'Yes' record. A flag field ('Multiple_FirstY') in the data file indicates those cases where more than one onset for a phenophase occurs in a calendar year; this occurs in less than 1% of the records in the data file.
Occasionally, at sites where several observers monitor an individual plant, multiple onsets are calculated from the raw status data as a result of different observers 'leapfrogging' over each other with slightly different phenophase interpretations. For example, one observer might report a 'Yes' for 'Open flowers' when they judge conditions on individual lilac have just passed the threshold for occurrence. On a subsequent day, another observer might report 'No' if they judge conditions on the same individual have not quite passed the threshold for occurrence. If the first observer subsequently reports a status of 'Yes' there will appear to be two distinct onsets. Where these likely false onsets occur in the data (i.e., where associated 'MultipleFirst_Y' records are separated by very short time periods), it is reasonable to consider the first onset of the calendar year as the true onset, unless quality control flags indicate that the first onset is implausible. Groups are encouraged to resolve these issues and the raw status data can be explored for comments from observers, to confirm the occurrence of true multiple onsets within a calendar year.
We identified outliers in phenophase onset dates for individual plants, across the period of record, using Tukey boxplots 27 . We flagged records with onset day of year values greater/less than 1.5 times the interquartile range for each individual. We excluded individual plants with less than 10 years of data, and used the first onset of the calendar year in cases where there were multiple first 'Yes' records. Of 67,068 tested records (57% of the dataset), 0.88% were flagged as unusually early ('Individual_Outlier' of −1 in the dataset) and 0.58% were flagged as unusually late ('Individual_Outlier' of 1).
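A minimal sketch of this per-individual Tukey check is given below; the toy onset series and column names are invented, and the 1.5 x IQR fences follow the rule described above.

```python
import pandas as pd

# Toy onset records for one plant with at least 10 years of data.
obs = pd.DataFrame({
    "plant_id":  ["p1"] * 12,
    "year":      range(2000, 2012),
    "onset_doy": [95, 98, 92, 101, 97, 60, 99, 96, 103, 94, 140, 100],
})

def tukey_flags(doy):
    q1, q3 = doy.quantile([0.25, 0.75])
    iqr = q3 - q1
    flags = pd.Series(0, index=doy.index)
    flags[doy < q1 - 1.5 * iqr] = -1   # unusually early
    flags[doy > q3 + 1.5 * iqr] = 1    # unusually late
    return flags

obs["individual_outlier"] = obs.groupby("plant_id")["onset_doy"].transform(tukey_flags)
print(obs[obs["individual_outlier"] != 0])
```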
We applied a more complex outlier analysis 28 for the dataset as a whole, to identify unusually early or late phenological events, based on the location of the observed plant and the interannual variation in climatological conditions at that location. This analysis relies on the assumption that plant phenology is influenced by environmental conditions, and therefore the variability of phenological observations made under similar environmental conditions must be relatively small (given unknown variation in microclimate, or genetic variation among organisms). To apply this quality-control check, we used DAYMET, which is the highest spatial resolution, gridded dataset freely available for the United States. Because DAYMET is only available since 1980 and was not yet available for 2014 at the time of this analysis, we were able to apply this check to 39,791 records (34% of the dataset); this quality check could be applied to earlier data if additional fine-scale climatological data become available. For each observation year and location where observations were made (including all multiple first 'Yes' dates), we calculated cumulative values from 1 January to phenophase onset date (expressed as day of year; DOY), for the following meteorological parameters: surface minimum and maximum temperatures, precipitation, humidity, shortwave radiation, snow water equivalent, and day length. Using the daily maximum and minimum temperatures, average temperatures were calculated and then summed up in a similar way. To this set of meteorological data, we added the latitude, longitude and elevation of each observation. Then, we applied the t-SNE dimensionality reduction algorithm 29 to project the climatological and geographic variables into a two-dimensional space. This allows visualization and interpretation of the results. After that, we applied a model-based clustering 30 to group the observations into sets made under similar environmental conditions. Finally, the Tukey boxplot 27 was applied to highlight the outliers present in each of the clusters. These outliers (with values greater/less than 1.5 times the interquartile range of each cluster) are highlighted as inconsistent observations. We flagged 0.74% of the records as unusually early ('Inconsistency_Flag' of −1 in the dataset) and 1.35% of the records as unusually late ('Inconsistency_Flag' of 1). The impact of inclusion or exclusion of these observations has yet to be explored.
Usage Notes
The raw status data used to produce this dataset, and information about the Nature's Notebook sites, plants and observers is housed in the National Phenology Database and is available for download via the USA-NPN website (https://www.usanpn.org/results/data). Lilac observations submitted after 2014, as well as honeysuckle fruiting data from the Western program for the period 1968-2009, can also be downloaded from this website.
SARS-CoV-2 Permissive glioblastoma cell line for high throughput antiviral screening
Despite the great success of the administered vaccines against SARS-CoV-2, the virus can still spread, as evidenced by the current circulation of the highly contagious Omicron variant. This emphasizes the additional need to develop effective antiviral countermeasures. In the context of early preclinical studies for antiviral assessment, robust cellular infection systems are required to screen drug libraries. In this study, we reported the implementation of a human glioblastoma cell line, stably expressing ACE2, in a SARS-CoV-2 cytopathic effect (CPE) reduction assay. These glioblastoma cells, designated as U87.ACE2+, expressed ACE2 and cathepsin B abundantly, but had low cellular levels of TMPRSS2 and cathepsin L. The U87.ACE2+ cells fused highly efficiently and quickly with SARS-CoV-2 spike expressing cells. Furthermore, upon infection with SARS-CoV-2 wild-type virus, the U87.ACE2+ cells displayed rapidly a clear CPE that resulted in complete cell lysis and destruction of the cell monolayer. By means of several readouts we showed that the U87.ACE2+ cells actively replicate SARS-CoV-2. Interestingly, the U87.ACE2+ cells could be successfully implemented in an MTS-based colorimetric CPE reduction assay, providing IC50 values for Remdesivir and Nirmatrelvir in the (low) nanomolar range. Lastly, the U87.ACE2+ cells were consistently permissive to all tested SARS-CoV-2 variants of concern, including the current Omicron variant. Thus, ACE2 expressing glioblastoma cells are highly permissive to SARS-CoV-2 with productive viral replication and with the induction of a strong CPE that can be utilized in high-throughput screening platforms.
Introduction
Undoubtedly, the current coronavirus disease 2019 pandemic has not only posed a serious threat to the international health, but it has also impacted people's daily lifestyle and work, and the global economy. Since its outbreak in December 2019, SARS-CoV-2, the causative agent of COVID-19, has quickly spread around the world, leading to over 400 million confirmed cases and more than 5.8 million deaths worldwide as of February 11, 2022 (https://covid19.who.int/). Despite the great success of the current vaccines against SARS-CoV-2, we still cannot control the spread of new variants and/or prevent re-infections.
This emphasizes the urgent need to develop effective antiviral countermeasures.
Major efforts are ongoing to develop novel therapeutics for COVID-19 treatment and/or effective prophylactic approaches to prevent viral spread (Yadav et al., 2021). In the context of early preclinical studies for SARS-CoV-2 antiviral assessment, screening platforms for drug libraries rely on the use of robust cellular infection systems, mostly based on immortalized cell lines originating from respiratory, but most often non-respiratory, tissues (Murgolo et al., 2021;Uemura et al., 2021;Chiu et al., 2022;Ramirez et al., 2021;Grau-Exposito et al., 2022;Ko et al., 2021;Chu et al., 2020). In fact, many initial screenings for SARS-CoV-2 antivirals have largely utilized the African green monkey kidney cell line Vero E6 as the host cell. However, it is known that Vero E6 cells exploit intrinsic nonspecific endocytic mechanisms for virus uptake and viral entry (Murgolo et al., 2021). Furthermore, these cells have a low expression of angiotensin converting enzyme 2 (ACE2), the main virus receptor (Hoffmann et al., 2020;Wu et al., 2020a;Zhou et al., 2020;Wrapp et al., 2020;Yan et al., 2020), and are deficient in expressing TMPRSS2, a membrane-anchored serine protease localized to the plasma membrane that is involved in the activation of the viral Spike (S) protein for cell surface membrane fusion (Hoffmann et al., 2020;Laporte et al., 2021;Bestle et al., 2020). In addition, SARS-CoV-2 uses an alternative route of cell entry via the endocytic pathway, in which endosomal cysteine proteases, such as cathepsin L (CTSL) and cathepsin B (CTSB) are able to substitute for TMPRSS2 to cleave and prime the S protein into its fusogenic state (Shang et al., 2020;Evans and Liu, 2021). Notably, reports on current circulating SARS-CoV-2 variants of concern demonstrate that Omicron infection is not enhanced by TMPRSS2 but is largely mediated via the endocytic pathway (Zhao et al., 2021;Meng et al., 2022).
Evaluation of virus-induced cytopathic effect (CPE) is a low-cost read-out for a quick and robust analysis in antiviral screening platforms. However, not every cell type permissive to SARS-CoV-2 infection will present CPE during effective virus replication. Also, the characteristics of CPE is cell type and virus dependent, ranging from mild damaging events such as cell rounding and detachment, over cell degeneration, to severe syncytium formation. The latter may ultimately result in cell lysis and complete destruction of the integrity of the cell monolayer.
The astroglioma cell line U87 has been successfully used in previous research on human immunodeficiency virus (HIV) (Hatse et al., 2001). Upon infection with HIV, these receptor-transfected U87 cells present a remarkable full-blown CPE that could be used as an easy readout in antiviral screening assays for HIV. Based on these characteristics, we explored in our current study the use of U87 cells for infection with SARS-CoV-2. By means of several readouts we show that U87 cells stably expressing ACE2 (U87.ACE2 + cells) are highly permissive for SARS-CoV-2, with productive viral replication and with the induction of a strong CPE that can be utilized in high-throughput screening platforms.
Viruses. Severe Acute Respiratory Syndrome coronavirus 2 isolates (SARS-CoV-2) were recovered from nasopharyngeal swabs of RT-qPCRconfirmed human cases obtained from the University Hospital Leuven (Leuven, Belgium). SARS-CoV-2 viral stocks were prepared by inoculation of confluent Vero E6 cells as described in detail in a recent report (Vanhulle et al., 2022).
Plasmids
Second generation lentiviral support plasmids pMD2.G and pSPAX2 were obtained from Addgene (www.addgene.org, Cat. n • 12259 and 12260, respectively). Cell lines expressing ACE2 were made by second generation lentiviral transduction with a lentiviral transfer vector containing the human ACE2 coding sequence. Using the NEBuilder HiFi DNA assembly cloning kit (NEB), ACE2, derived from a pcDNA3.1-hACE2 vector (Cat. n • 145033, Addgene), was inserted into a pLenti6.3 vector (Thermo Fisher Scientific) in which the Cytomegalovirus (CMV) immediate-early enhancer/promoter region was replaced with a 0.6 kb minimal Ubiquitous Chromatin Opening Element (UCOE) sequence and a Spleen Focus Forming Virus (SFFV) promoter derived from pMH0001 (Cat. n • 85969, Addgene). An internal ribosomal entry site (IRES) was inserted behind the ACE2 sequence via restriction digestion of a pEF1a-IRES vector (Cat. n • 631970, Takara Bio Inc.) with NheI-HF and SalI-HF (NEB). A synthesized fragment (Integrated DNA Technologies [IDT]) containing ten betasheets of a modified mNeon-Green and a porcine teschovirus-1 2A peptide (P2A) coding sequence was inserted between the IRES sequence and the Blasticidin resistance cassette. pCAGGS.SARS-CoV-2_SΔ19_fpl_mNG2(11)_opt was generated through gibson assembly (NEBuilder, New England Biolabs) of a pCAGGS vector backbone cleaved using EcoRV-HF and HindIII-HF (New England Biolabs) and a PCR fragment of a codon-optimized SARS-CoV-2 Wuhan spike protein with a C-terminal 19 amino acid deletion as described in (Ou et al., 2020). A 12 amino acid flexible protein linker and a modified 11th betasheet of mNeonGreen (Feng et al., 2017) were added at its C-terminus. pcDNA3.1.mNG2(1-10) was generated through gibson assembly (NEBuilder, New England Biolabs) of a pcDNA3.1 vector (Thermo Fisher Scientific), amplified by PCR, and a PCR fragment of the first 10 betasheets of a modified mNeonGreen, synthesized by Genscript. Plasmids were sequence-verified with Sanger sequencing (Macrogen Europe, Amsterdam, The Netherlands) before use.
Lentiviral transduction of ACE2
For lentivirus production, 50-70% confluent HEK293FT cells in T-25 flasks were transfected with Lipofectamine LTX & PLUS Reagent (Thermo Fisher Scientific). LTX solution and transfection mixes containing 3 μg of lentiviral ACE2 vector, 5.83 μg of psPAX2 vector, 3.17 μg of pMD2.G vector and 12 μL PLUS reagent were prepared in serum-free Opti-MEM (Thermo Fisher Scientific). Following a 5-min incubation at room temperature, solutions were mixed and incubated for an additional 20 min. Cell medium was replaced with 5 mL of fresh medium, after which transfection complexes were added, followed by a 21-h incubation at 37 • C. Next, sodium butyrate (10 mM) was added, and cells were incubated for an additional 3 h, after which the medium was replaced with 5 mL of fresh medium. Virus-containing supernatants were harvested into 15 mL conical tubes 24 h after sodium butyrate addition and centrifuged at 2,000 g for 15 min at 4 • C to pellet cell debris. Transduction of cell lines with the harvested lentivirus was performed according to the ViraPower HiPerform T-Rex Gateway Expression System (Thermo Fisher Scientific) manufacturer's protocol. One μg/ml Polybrene (Sigma-Aldrich) was used to increase transduction efficiency.
Following transduction, cell medium was supplemented with 10 μg/ml blasticidin (InvivoGen) during passaging. Monoclonal cell populations were generated by seeding 96-well plates with a dilute cell solution of 0.5 cells/well, monitoring wells with monoclonal cell growth and expanding to larger culture vessels.
Reverse transcription quantitative PCR (RT-qPCR)
The reverse transcription quantitative PCR (RT-qPCR) assays for the detection of TMPRSS2 were performed using the QuantStudio 5 Real-Time PCR system (Applied Biosystems). Premixes were prepared for each amplification reaction using TaqMan Fast Virus 1-Step Master Mix (Cat. no. 4444434, Thermo Fisher Scientific) according to the manufacturer's protocol. The TaqMan gene expression assay for TMPRSS2 was obtained from Thermo Fisher Scientific (Hs01122322_m1). Standard curve DNA fragments were obtained via PCR of a full-length TMPRSS2 sequence. PCR fragments were purified with the Gel and PCR clean-up kit (Cat. no. 740609.250, Macherey-Nagel) according to the manufacturer's protocol. DNA purity and concentration were measured on the NanoPhotometer N60 (Implen). Samples, stored at −80 °C, were thawed on ice in preparation for qPCR runs. 5 μL of sample was used in each reaction, with a total reaction volume of 20 μL. Final concentrations of 900 nM of each primer and 250 nM of probe were used in each reaction. PCR run conditions were 1 cycle of 50 °C for 5 min and 95 °C for 20 s, followed by 40 cycles of 95 °C for 15 s and 60 °C for 60 s. Data analysis was performed with the QuantStudio 3&5 Design and Analysis software (version v1.5.1, Applied Biosystems). Cq values were obtained via automated threshold determination. Standard curve runs were included in duplicate on each plate, and efficiencies were calculated based on the slope. qPCR runs with standard curve efficiencies not ranging between 90% and 110%, or with no-template controls showing amplification, were excluded from the data.
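The efficiency criterion above follows the standard relationship E = 10^(−1/slope) − 1 for a Cq-versus-log10(copies) standard curve. The sketch below illustrates how such a quality check and copy-number interpolation could be scripted; it is not part of the study's workflow (the QuantStudio software was used), and all function names and example numbers are illustrative.

```python
import numpy as np

def standard_curve_qc(log10_copies, cq_values, min_eff=0.90, max_eff=1.10):
    """Fit a qPCR standard curve (Cq vs log10 copies) and check amplification efficiency.

    Returns slope, intercept and efficiency; raises if the run fails the
    90-110% efficiency acceptance window described above.
    """
    slope, intercept = np.polyfit(log10_copies, cq_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0          # E = 10^(-1/slope) - 1
    if not (min_eff <= efficiency <= max_eff):
        raise ValueError(f"Run rejected: efficiency {efficiency:.1%} outside 90-110%")
    return slope, intercept, efficiency

def copies_from_cq(cq, slope, intercept):
    """Interpolate the copy number of an unknown sample from its Cq value."""
    return 10 ** ((cq - intercept) / slope)

# Made-up standard-curve data for a 10-fold dilution series (illustration only)
log10_std = np.array([6, 5, 4, 3, 2], dtype=float)
cq_std = np.array([18.1, 21.5, 24.9, 28.3, 31.7])
slope, intercept, eff = standard_curve_qc(log10_std, cq_std)
print(f"slope={slope:.2f}, efficiency={eff:.1%}, "
      f"unknown at Cq 26 ≈ {copies_from_cq(26.0, slope, intercept):.0f} copies")
```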
The duplex RT-qPCR assay for the detection of SARS-CoV-2 E and N gene has been described in detail in a recent report (Vanhulle et al., 2022).
Transient transfection
HEK293T cells were seeded at 400,000 cells/mL and incubated overnight at 37 °C prior to transfection the next day. Lipofectamine LTX (Invitrogen) was used for the transfection of plasmid DNA according to the manufacturer's protocol.
Cell-cell fusion assay
U87.ACE2 + cells were plated (20,000 cells/well of a 96-well plate) and incubated for 2 days at 37 °C to become a confluent cell monolayer. In parallel, HEK293T cells were transfected with transfection mixes containing 2.5 μg pCAGGS.SARS-CoV-2_SΔ19_fpl_mNG2(11)_opt plasmid encoding for the SARS-CoV-2 spike protein. The next day, the transfected HEK293T cells were collected, resuspended, counted on a Luna automated cell counter (Logos Biosystems), and administered to the U87.ACE2 + cells (20,000 cells/well). Fusion events were visualized using the IncuCyte® S3 Live-Cell Analysis System (Sartorius). Phase contrast and GFP images (4 per well) were taken using a 20x objective lens at 10 min intervals for a 5 h period, and at 1 h intervals afterwards. Image processing was performed using the IncuCyte software.
Wild type virus infection assays
All virus-related work was conducted in the high-containment biosafety level 3 facilities of the Rega Institute of the Katholieke Universiteit (KU) Leuven (Leuven, Belgium), according to institutional guidelines. One day prior to the experiment, U87.ACE2 + cells were seeded at 20,000 cells/well in 96-well plates. Serial dilutions of the test compound were prepared in infection medium (DMEM + 2% FCS) and overlaid on the cells, and then virus was added to each well (MOI indicated in the figure legends). Cells were incubated at 37 °C under 5% CO2 for the duration of the experiment. Viral cytopathic effect and GFP expression were scored in a blinded manner at different time points post infection, and the supernatants were collected and stored at −20 °C.
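For orientation, the MOI values quoted in the figure legends translate into inoculum volumes via the cell number per well and the titer of the virus stock. A minimal sketch of that calculation is given below; the titer value, its units (TCID50/mL) and the function name are assumptions for illustration only and are not taken from the study.

```python
def inoculum_volume_ul(target_moi, cells_per_well, stock_titer_per_ml):
    """Volume of virus stock (µL) needed per well to reach a target MOI,
    assuming MOI = infectious units per cell and a titer given per mL."""
    infectious_units_needed = target_moi * cells_per_well
    volume_ml = infectious_units_needed / stock_titer_per_ml
    return volume_ml * 1000.0

# e.g. 20,000 cells/well and an assumed stock of 1e6 infectious units/mL, MOI 0.3
print(f"{inoculum_volume_ul(0.3, 20_000, 1e6):.1f} µL of stock per well")
```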
Flow cytometry
For the cell surface staining of transfected HEK293T cells, cells were collected, washed in PBS, resuspended, transferred to tubes, and samples were centrifuged in a cooled centrifuge (4 °C) at 500 g for 5 min. After removal of the supernatant, cells were incubated with the primary (anti-Spike) antibody (30 min at 4 °C), washed in PBS, followed by a 30 min incubation at 4 °C with the secondary (labeled) antibody, and washed again. Finally, samples were stored in PBS containing 2% formaldehyde (VWR Life Science AMRESCO). For the intracellular staining of infected U87.ACE2 + cells, cells were collected at different time points as indicated in the figure legends, and the Fixation/Permeabilization kit from BD Biosciences was used (Cat. no. 554714). At the time of collection, supernatant was removed, and cells were washed in PBS. Then, trypsin (0.25%) was added and plates were incubated for 3 min at 37 °C to detach the cells from the plate, followed by the addition of cold culture medium with 2% FCS. Next, cells were resuspended, transferred to tubes and samples were centrifuged in a cooled centrifuge (4 °C) at 500 g for 5 min. After removal of the supernatant, cells were fixed and permeabilized by the addition of 250 μL of BD Cytofix/Cytoperm buffer and incubated at 4 °C for 20 min. Samples were washed twice with Perm/Wash buffer before the addition of the primary (anti-Nucleocapsid) antibody (0.3 μg per sample). After a 30 min incubation at 4 °C, samples were washed twice in BD Perm/Wash buffer, followed by a 30 min incubation at 4 °C with the secondary (labeled) antibody, and washed again. Finally, samples were stored in PBS containing 2% formaldehyde (VWR Life Science AMRESCO).
Acquisition of all samples was done on a BD FACSCelesta flow cytometer (BD Biosciences) with BD FACSDiva v8.0.1 software. Flow cytometric data were analyzed in FlowJo v10.1 (Tree Star). Subsequent analysis with appropriate cell gating was performed to exclude cell debris and doublet cells, in order to acquire data on living, single cells only.
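As a rough illustration of the gating logic described above (excluding debris and doublets before scoring GFP- or N-positive events), the sketch below applies simple scatter-based filters to exported event data. Channel names, thresholds and the fluorescence cutoff are placeholders; in the study this gating was performed interactively in FlowJo.

```python
import numpy as np

def gate_single_cells(fsc_a, fsc_h, debris_fsc_min=50_000, doublet_ratio_min=0.85):
    """Boolean mask of events kept after simple debris and doublet gating.

    Events with very low forward scatter are treated as debris, and events whose
    FSC-H/FSC-A ratio deviates strongly from 1 are treated as doublets.
    Thresholds are placeholders and would normally be set per experiment.
    """
    not_debris = fsc_a > debris_fsc_min
    singlets = (fsc_h / np.maximum(fsc_a, 1)) > doublet_ratio_min
    return not_debris & singlets

def percent_positive(signal, mask, cutoff):
    """Percentage of gated events above a fluorescence cutoff (e.g. GFP or anti-N)."""
    gated = signal[mask]
    return 100.0 * np.mean(gated > cutoff) if gated.size else 0.0
```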
MTS-PES assay
Four days after infection, the cell viability of mock- and virus-infected cells was assessed spectrophotometrically via the in situ reduction of the tetrazolium compound 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium inner salt, using the CellTiter 96 AQueous One Solution Cell Proliferation Assay (Promega), as described previously. The absorbances were read in an eight-channel computer-controlled photometer (Multiscan Ascent Reader, Labsystem, Helsinki, Finland) at two wavelengths (490 and 700 nm). The optical density (OD) of the samples was compared with that of the cell control replicates (cells without virus and drugs) and virus control wells (cells with virus but without drugs). The median inhibitory concentration (IC50), i.e. the concentration that inhibited SARS-CoV-2-induced cell death by 50%, was calculated from the concentration-response curve.
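One way the OD values can be converted into an inhibition estimate is to scale each reading between the virus control (0% protection) and the cell control (100% protection) and then locate the 50% crossing on the concentration-response curve. The sketch below does this with purely illustrative numbers; the study itself derived IC50 values from fitted concentration-response curves, so the interpolation used here is an assumption.

```python
import numpy as np

def percent_protection(od_sample, od_cell_ctrl, od_virus_ctrl):
    """Percent protection against virus-induced CPE, scaled between the
    virus control (0%) and the uninfected cell control (100%)."""
    return 100.0 * (od_sample - od_virus_ctrl) / (od_cell_ctrl - od_virus_ctrl)

def ic50_by_interpolation(concentrations, protection):
    """Estimate the concentration giving 50% protection by log-linear
    interpolation between the two points bracketing 50%."""
    conc = np.asarray(concentrations, dtype=float)
    prot = np.asarray(protection, dtype=float)
    order = np.argsort(conc)
    conc, prot = conc[order], prot[order]
    for i in range(len(conc) - 1):
        lo, hi = prot[i], prot[i + 1]
        if (lo - 50.0) * (hi - 50.0) <= 0 and lo != hi:
            frac = (50.0 - lo) / (hi - lo)
            log_ic50 = np.log10(conc[i]) + frac * (np.log10(conc[i + 1]) - np.log10(conc[i]))
            return 10 ** log_ic50
    return float("nan")

# Illustrative OD values for a compound dilution series (not the study's data)
conc_nM = [16, 80, 400, 2000]
od = [0.25, 0.95, 1.10, 1.12]
prot = [percent_protection(x, od_cell_ctrl=1.15, od_virus_ctrl=0.20) for x in od]
print(f"IC50 ≈ {ic50_by_interpolation(conc_nM, prot):.0f} nM")
```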
Statistical analysis
Data were visualized as means ± standard deviation (SD) and analyzed using GraphPad Prism v8.3.0. Unpaired t-tests were used to compare Wuhan with Omicron; P-values below 0.05 were considered statistically significant.
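An equivalent unpaired t-test can also be reproduced outside GraphPad, for example as below; the replicate values shown are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative replicate OD values only
wuhan = np.array([0.82, 0.79, 0.85])
omicron = np.array([0.41, 0.46, 0.38])

t_stat, p_value = stats.ttest_ind(wuhan, omicron)   # unpaired (two-sample) t-test
print(f"mean±SD Wuhan: {wuhan.mean():.2f}±{wuhan.std(ddof=1):.2f}, "
      f"Omicron: {omicron.mean():.2f}±{omicron.std(ddof=1):.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference considered statistically significant (p < 0.05)")
```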
Generation of a stably ACE2-transduced U87 cell line
In our search for cells that can be used as an in vitro model for SARS-CoV-2 infection and high-throughput evaluation of antiviral drugs, we first tested different cell lines that were available in our antiviral screening platform (Sinisi et al., 2017) for their sensitivity to SARS-CoV-2 infection. One of the candidates was the glioblastoma cell line U87, which in previous work (Hatse et al., 2001) has been reported as being highly valuable for Human Immunodeficiency Virus (HIV) research, based on the strong cytopathic effect (CPE) induced in these U87 cells. Preliminary experiments with U87 cells did not result in successful infection with SARS-CoV-2 (data not shown). Therefore, we determined the cellular expression of ACE2, known as the main receptor for SARS-CoV-2 (Hoffmann et al., 2020; Wu et al., 2020a; Zhou et al., 2020; Wrapp et al., 2020). As shown in Fig. 1A, the basal level of ACE2 was undetectable in U87 cells, as measured by immunoblotting, whereas in control Calu-3 cells traces of ACE2 protein could be visualized. In contrast, Vero E6 cells expressed endogenous ACE2 at detectable amounts. Thus, we subsequently introduced the coding sequence for ACE2 into the U87 cells to enhance their sensitivity to SARS-CoV-2. Briefly, cells were stably transduced with a lentiviral vector that encodes for ACE2. After clonal expansion, the cells, designated as U87.ACE2 +, were evaluated for their ACE2 protein levels. As shown in Fig. 1B, the U87.ACE2 + cells were successfully transduced with ACE2, as evidenced by the very dense protein band on the immunoblot that corresponded to the ACE2 receptor. Furthermore, passaging of the cells for several months did not impact the high ACE2 expression level (Fig. 1B), which demonstrated that the U87.ACE2 + cells were stably expressing high amounts of the SARS-CoV-2 receptor of interest.
Following ACE2 binding, the spike protein also needs priming by cellular proteases before membrane fusion and cellular entry of the virus can take place (Hoffmann et al., 2020; Laporte et al., 2021; Shang et al., 2020; Evans and Liu, 2021). Thus, we additionally measured the expression of TMPRSS2 and CTSB/L in the U87.ACE2 + cells. As multiple attempts with different antibodies did not result in successful and reliable immunoblotting of TMPRSS2, we chose quantitative PCR (qPCR) as an alternative for the detection of endogenous TMPRSS2. As shown in Fig. 1C, U87.ACE2 + cells had detectable mRNA levels of TMPRSS2, comparable to those of A549 cells, but clearly lower than what is measured in Calu-3 cells. In line with other reports (Murgolo et al., 2021; Laporte et al., 2021), our qPCR data verified that Vero E6 cells are deficient in TMPRSS2. In addition, immunoblots of CTSB demonstrated a very high endogenous expression of this protease in the U87.ACE2 + cells, exceeding the CTSB protein levels in A549, Calu-3 and Vero E6 cells (Fig. 1D). In contrast to the very weak detection of CTSB, Vero E6 cells expressed CTSL at high levels, as evidenced by the intense protein band that corresponded with the mature non-glycosylated form of CTSL (Fig. 1D). Immunoblotting results for CTSL in the U87.ACE2 + cells revealed detectable endogenous expression of CTSL at a level comparable to that measured in the A549 cells (Fig. 1D). However, in the U87.ACE2 + cells mainly the glycosylated mature form of CTSL was detected, whereas the parental U87 cells expressed low amounts of Pro-CTSL and both the non-glycosylated and glycosylated forms of mature CTSL. Thus, the high abundance of CTSB and detectable levels of CTSL suggest the potential for an endocytic pathway for SARS-CoV-2 entry in the U87.ACE2 + cells.
U87.ACE2 + cells readily fuse with SARS-CoV-2 Spike-expressing HEK293T cells
Having established a U87.ACE2 + cell line, we next determined whether these cells express a functional ACE2 receptor that can activate the SARS-CoV-2 Spike (S) protein for membrane fusion. Therefore, we assessed the cell-cell fusion capability of the U87.ACE2 + cells with cells expressing the S protein of SARS-CoV-2 (see scheme in Fig. 2A). Briefly, HEK293T cells were transfected with a plasmid encoding a codon-optimized S protein of SARS-CoV-2 (Wuhan strain). In parallel, U87.ACE2 + cells were plated in a 96-well plate to generate a confluent cell layer. Next, the transfected HEK293T cells, expressing the S protein at their cell surface in sufficient amounts (Fig. 2B), were administered to the U87.ACE2 + cells. As depicted in Fig. 2C, successful fusion between the U87.ACE2 + cells and S-expressing HEK293T cells was obtained, leading to the formation of many fused multinucleated giant cells (syncytia) that ultimately resulted in the complete destruction of the cell monolayer (see movie in Supplementary Fig. 1). This fusion event already manifested within a few hours of cell overlay and depended on the complementary expression of ACE2 and S. This was evidenced by the absence of cell-cell fusion either when U87.ACE2 + cells were combined with non-transfected HEK293T cells, or when receptor-negative U87 control cells were combined with S-expressing HEK293T cells (Fig. 2C).
Supplementary video related to this article can be found at https://doi.org/10.1016/j.antiviral.2022.105342.
U87.ACE2 + cells are highly permissive to infection with SARS-CoV-2
Next, we determined whether the U87.ACE2 + cells could be infected with wild-type SARS-CoV-2 virus. Therefore, the U87.ACE2 + cells were exposed to SARS-CoV-2 (strain 20A.EU2) and microscopically evaluated for cytopathic effect (CPE). As depicted in Fig. 3A, strong CPE was observed at 3 days post infection when a multiplicity of infection (MOI) of 0.3 or 0.03 was used. Moreover, the virus-induced CPE resulted in a complete destruction of the cell monolayer. Even at a low dose of virus input (MOI of 0.003), incubation of the infected cells resulted in a gradual increase in CPE over time, reaching complete full-blown CPE already after 3 days post infection (p.i.) (Fig. 3B and Supplementary Fig. 2). In parallel, cells were exposed to a recombinant GFP-expressing SARS-CoV-2 variant (Wuhan strain) (Thi Nhu Thao et al., 2020). As shown in Fig. 3C, a productive SARS-CoV-2 infection was obtained when the U87.ACE2 + cells were exposed to a MOI of 3, as evidenced by the profound GFP expression at 22 h p.i. Further dilution of the virus stock resulted in a gradually lower infectivity rate of the cells (Fig. 3C). In fact, GFP expression was already clearly detectable in the U87.ACE2 + cells at 16 h post infection in a virus concentration-dependent manner, as quantified by flow cytometry (Fig. 3D). As expected, the permissivity of the U87.ACE2 + cells to SARS-CoV-2 was solely dependent on ACE2 expression, given that the receptor-negative U87 control cells remained resistant to virus infection (Fig. 3D). These data were further verified by complementary viral nucleocapsid (N) staining via flow cytometry, as described recently (Vanhulle et al., 2022). At 16 h post infection, 56% of the U87.ACE2 + cells stained positive for N (Fig. 4A), and this percentage gradually decreased with lower MOI of virus, whereas no infection could be detected in the receptor-negative U87 control cells (Fig. 4A, bottom).
The infected U87.ACE2 + cells efficiently initiated the viral replication cycle, as demonstrated by the gradual increase in viral RNA copies released into the supernatant over time, which reflects the amount of newly produced virions budding from the infected cells (Fig. 4B). Released virus particles could be detected as early as 14 h p.i., and at 18 h p.i. a nearly three-log increase in viral RNA was obtained (Fig. 4B).
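The "three-log" figure refers to a log10 fold change in released RNA copies between two time points, which can be computed directly from RT-qPCR copy numbers. The numbers below are illustrative only, not the study's measurements.

```python
import math

def log10_fold_increase(copies_start, copies_end):
    """log10-fold change in viral RNA copies between two time points."""
    return math.log10(copies_end / copies_start)

# Illustrative copy numbers only (per mL of supernatant)
copies_14h, copies_18h = 2.0e3, 1.5e6
print(f"{log10_fold_increase(copies_14h, copies_18h):.1f} log10 increase between 14 h and 18 h p.i.")
```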
U87.ACE2 + cells can be utilized for screening of anti-SARS-CoV-2 agents
Having successfully developed a cell line that is highly permissive to SARS-CoV-2 and that produces full-blown CPE, we next determined if this cell line could be implemented in an antiviral screening platform. Thus, the U87.ACE2 + cells were seeded in 96-well plates, treated with test compounds, and exposed to SARS-CoV-2. As proof-of-concept, we first tested Remdesivir (GS-5734), a nucleotide analog prodrug that inhibits the SARS-CoV-2 RNA-dependent RNA polymerase (RdRp) (Murgolo et al., 2021; Frediansyah et al., 2021; Zhang and Tang, 2021; Wang et al., 2020; Choy et al., 2020), and a spike-neutralizing antibody (R001), acting as a SARS-CoV-2 entry inhibitor (Vanhulle et al., 2022). At 2 days p.i., samples were collected and analyzed, either by flow cytometry (Fig. 5A) or by duplex RT-qPCR (Fig. 5B). Both techniques demonstrated full protection against SARS-CoV-2 replication by a 2 μM Remdesivir treatment, and a profound inhibition of virus entry by the addition of 10 μg/ml R001 antibody. Furthermore, RT-qPCR analysis on a dilution range of Remdesivir revealed a clear concentration-dependent antiviral effect of Remdesivir in SARS-CoV-2 infected U87.ACE2 + cells.

Fig. 2. U87.ACE2 + cells readily fuse with SARS-CoV-2 Spike-expressing HEK293T cells. A) Cartoon of the fusion between U87.ACE2 + cells and transfected HEK293T cells expressing the SARS-CoV-2 S protein, created with BioRender.com. B) HEK293T cells were transfected with a plasmid encoding for a codon-optimized S protein. At 24 h post transfection, cells were collected and stained with an anti-Spike antibody to determine the cell surface expression of S. Histogram plots show the mean fluorescence intensity (MFI) of S protein expression for non-transfected (top) and transfected (bottom) HEK293T cells. Flow cytometric data were collected from approximately 8,000 analyzed cells. C) U87.ACE2 + cells were overlaid with HEK293T cells either non-transfected (NT) or transfected (TF) with a plasmid encoding for the SARS-CoV-2 spike protein. As an additional control, U87 cells (without ACE2) were combined with NT or TF HEK293T cells (top panels). Pictures were taken 24 h after the co-cultivation of both cell types (4X magnification). Only the combination of the U87.ACE2 + cells with the spike-transfected HEK293T cells resulted in cell-cell fusion with clear syncytia formation and destruction of the cell monolayer. Note that the U87.ACE2 + cells grow in a more uniform monolayer as compared to the parental U87 cells, suggesting that the morphology of the U87 cells has changed after the introduction of ACE2 and the clonal selection.
Finally, in view of a high-throughput antiviral screening application, we explored a colorimetric read-out of the SARS-CoV-2 infected U87.ACE2 + cells. First, a density range of the cells was evaluated in our MTS-PES assay to assess their ability for in situ reduction of the tetrazolium compound MTS. This cell dilution range revealed a clear density-dependent, step-wise difference in optical density (OD), roughly between 100 and 10,000 cells/well (Fig. 6A). Also, a longer incubation time (3 days) of the U87.ACE2 + cells returned a comparable result, with maximum values plateauing at 12,500 cells/well and minimum values below 500 cells/well. Next, the U87.ACE2 + cells were exposed to a dilution range of SARS-CoV-2 (20A.EU2 strain), and CPE was assessed spectrophotometrically. As depicted in Fig. 6B, a clear difference in OD between virus-infected and mock-infected control cells was obtained in a reproducible manner (4 replicate wells), resulting in a virus concentration-dependent change in OD values (Fig. 6B). Furthermore, the colorimetric read-out could be easily applied to determine the cytotoxicity (Fig. 6C and Supplementary Fig. 3) and antiviral activity (Fig. 6D) of Remdesivir. Full protection of virus-exposed U87.ACE2 + cells was obtained with Remdesivir at concentrations as low as 80 nM, whereas the next compound dilution (16 nM) evoked a complete CPE. Based on these OD values, an IC50 value of 35 nM could be calculated for Remdesivir, which corresponded well with the value obtained through RT-qPCR analysis (Fig. 5C).
In the context of the continuous identification of new circulating SARS-CoV-2 variants, our CPE reduction assay with the U87.ACE2 + cells should also be applicable to newly arising SARS-CoV-2 mutants. To present a complete analysis of our U87.ACE2 + cells in terms of susceptibility to SARS-CoV-2, we exposed the U87.ACE2 + cells to the different SARS-CoV-2 variants of concern available in our laboratory (Vanhulle et al., 2022). As shown in Fig. 7A and B, our cells could be readily infected by all tested variants, including the recently circulating Omicron variant (Planas et al., 2022). Furthermore, all these SARS-CoV-2 variants induced strong CPE that could be exploited to evaluate antiviral compounds in our MTS-PES assay. A side-by-side comparison in the U87.ACE2 + cells of the original Wuhan strain with the latest Omicron variant revealed a significantly improved infectivity of Omicron relative to Wuhan, as quantified by MTS (Fig. 7B) and by RT-qPCR analysis (Fig. 7C), suggesting a productive endocytic viral entry pathway in the U87.ACE2 + cells, as has been proposed for Omicron (Zhao et al., 2021; Meng et al., 2022). Interestingly, Remdesivir proved to remain active also in the Omicron-infected U87.ACE2 + cells. In addition, Nirmatrelvir (PF-07321332), an irreversible inhibitor of the SARS-CoV-2 viral main protease Mpro, also exerted antiviral activity in the nanomolar range, as did Molnupiravir (MK-4482 or EIDD-2801), a nucleoside analogue prodrug, although with a slightly higher IC50 value in the low micromolar range (Fig. 7D and Supplementary Fig. 3).
Discussion
Based on the interesting characteristics (i.e., rapid and full-blown CPE) of the U87 cells used in previous HIV work (Hatse et al., 2001), we assessed in our current study the feasibility of employing U87 cells for infection with SARS-CoV-2. By simply overexpressing ACE2 in the U87 cells, we generated a human glioblastoma cell line that became highly permissive for SARS-CoV-2. These U87.ACE2 + cells initiated productive viral replication that induced a severe CPE, which in turn could be successfully exploited as an easy colorimetric read-out of viral infection.
Despite the numerous reports on COVID-19, it is still debatable whether SARS-CoV-2 invades the Central Nervous System (CNS) via olfactory neurons (Brann et al., 2020) and infects neurons and glia of the CNS (Zhan et al., 2021; Jacob et al., 2020; Song et al., 2021; Dobrindt et al., 2021; Pellegrini et al., 2020; Pedrosa et al., 2021; Wu et al., 2020b; Yang et al., 2020). Different groups have shown that human neural progenitor cells, grown either as brain organoids or as neurospheres, are permissive to SARS-CoV-2 infection (Song et al., 2021; Yang et al., 2020; Bullen et al., 2020; Ramani et al., 2020; Zhang et al., 2020). However, studies exploring the putative mechanisms whereby SARS-CoV-2 could enter the CNS are still awaited. Although Bielarz et al. (2021) reported that SARS-CoV-2 was able to invade neuroblastoma and glioblastoma cell lines, including the U87 cells, our preliminary experiments with the parent U87 cells did not result in successful infection with SARS-CoV-2. Strikingly, in the study of Bielarz et al. (2021), a high virus input (MOI of 5) was needed to detect low amounts of viral RNA copies in the U87 cells, and the amount of viral RNA detected at 24 h p.i. was even lower as compared to the 2 h time point. These data are more indicative of an unproductive viral uptake, or nonspecific attachment of the virus to the U87 cells, rather than a successful infection. In addition, our control experiments with a GFP-expressing SARS-CoV-2 variant indicated a lack of active viral replication of SARS-CoV-2 in the U87 cells, given that GFP expression depends on the RdRp activity of the virus in the infected cells. Altogether, we concluded that the parent U87 cells are non-permissive to SARS-CoV-2.

Fig. 3. U87.ACE2 + cells are permissive to wild-type SARS-CoV-2. A) U87.ACE2 + cells were exposed to different MOI of a wild-type strain of SARS-CoV-2 (20A.EU2) and microscopically evaluated for cytopathic effect (CPE) after 3 d of infection (4X magnification). B) U87.ACE2 + cells were infected with a low MOI (0.003) of a wild-type strain of SARS-CoV-2 (20A.EU2) and daily monitored for CPE (4X magnification). C) U87.ACE2 + cells were infected with different doses of a GFP-expressing SARS-CoV-2. Pictures were taken 22 h post infection and show GFP expression in the infected cells (10X magnification). D) U87 and U87.ACE2 + cells were exposed to different doses of a GFP-expressing SARS-CoV-2. Samples were collected at 16 h post infection and 5,000-6,000 cells were analyzed by flow cytometry for GFP expression. Bars show the absolute number of GFP-positive cells (mean ± SD from 2 biological replicates, n = 2).
Undoubtedly, successful infection of SARS-CoV-2 is correlated to cell surface expression of Angiotensin-Converting Enzyme 2 (ACE2), the well-accepted main receptor for SARS-CoV-2 (Hoffmann et al., 2020; Wu et al., 2020a; Zhou et al., 2020; Wrapp et al., 2020; Yan et al., 2020; Walls et al., 2020). An earlier report showed that in the brain, ACE2 is expressed only in endothelium and vascular smooth muscle cells (Hamming et al., 2004). Also, a single-cell sequencing study on glioblastoma demonstrated a relatively high expression of ACE2 in endothelial cells, bone marrow mesenchymal stem cells, and neural precursor cells (Wu et al., 2020b). In that study, Wu et al. (2020b) reported a low but detectable protein level of ACE2 in glioma cell lines, including the U87 cells, whereas our immunoblot analysis did not detect any ACE2 in U87 cells. Nevertheless, our study clearly demonstrated that ACE2 needs to be expressed abundantly before glioma cells become permissive to SARS-CoV-2. In addition, our data also reveal that (temporary) upregulation of ACE2 in glioma cells (e.g., in some malignancies) might turn neuronal cells into highly permissive target cells for SARS-CoV-2. Thus, the U87.ACE2 + cells might represent useful tools to disentangle the entry and replication of SARS-CoV-2 in the CNS.

Fig. 4. U87.ACE2 + cells actively replicate wild-type SARS-CoV-2. A) U87 and U87.ACE2 + cells were exposed to different doses of SARS-CoV-2 (Wuhan strain). Samples were collected at 16 h post infection and 5,000-6,000 cells were analyzed by flow cytometry. Cells were stained for intracellular nucleoprotein (N) expression. Dot plots show N expression in noninfected (lower left quadrant) and infected (lower right quadrant) cells. The numbers in each quadrant refer to the distribution of the cells (i.e., percentage of total cell population). The dot plot in grey (left panel) represents the noninfected cell control. B) U87.ACE2 + cells were exposed to SARS-CoV-2 (20A.EU2 strain; MOI 5) for 1 h (indicated by the grey boxed V). Next, cells were washed extensively to remove unbound virus and were given fresh medium. Supernatant was collected at the indicated time points and a duplex RT-qPCR was performed to quantify the copy numbers of the N and E genes. The dotted horizontal line refers to the background signal. For each sample, the average (mean ± SD) of two technical replicates is given (n = 2), plotted on a log10 scale.
The viral spike protein of SARS-CoV-2 drives ACE2 receptor binding and subsequent membrane fusion of the viral envelope with the cell membrane. However, in order to initiate this membrane fusion reaction, the S protein needs to be proteolytically cleaved at the S2' cleavage site to make its fusion peptide accessible for membrane insertion. Several different proteases have been identified that can prime SARS-CoV-2 S, either at the plasma membrane (TMPRSS2) or in endosomes/lysosomes (cathepsins) (Hoffmann et al., 2020; Laporte et al., 2021; Tang et al., 2020; Muralidar et al., 2021). This flexibility in protease usage and entry route appears to be a consistent strategy used by coronaviruses (Murgolo et al., 2021; Evans and Liu, 2021). Thus, besides ACE2 expression, viral tropism will also largely depend on the availability of those cellular proteases, generating a cell type-dependency for SARS-CoV-2 (Evans and Liu, 2021). The U87 glioblastoma cells used in our study did express both TMPRSS2 and CTSB/L. Whereas the level of TMPRSS2 in the U87.ACE2 + cells was low as compared to the lung epithelium-derived Calu-3 cell line (often considered a physiologically relevant cell type for SARS-CoV-2) (Laporte et al., 2021), it might still be sufficient to initiate cell surface SARS-CoV-2 entry. This is in contrast with the widely used Vero E6 cells, which are TMPRSS2-deficient, as confirmed by our results. However, levels of CTSB were extremely high in the U87.ACE2 + cells, whereas Vero E6 cells had barely detectable CTSB, and vice versa, protein levels of CTSL were high in Vero E6 cells and much lower in the U87.ACE2 + cells. These data suggest that the U87.ACE2 + cells might accommodate an endosomal entry route for SARS-CoV-2 and, if so, that CTSB can also be a relevant cysteine protease for S priming. In that perspective, the U87.ACE2 + cells can be considered a valuable additional cell line for further cathepsin studies in SARS-CoV-2 research. Furthermore, based on recent reports on the currently circulating Omicron variant, demonstrating that SARS-CoV-2 Omicron infection is not enhanced by TMPRSS2 but is largely mediated via the endocytic pathway (Zhao et al., 2021; Meng et al., 2022), more attention should be given to the endosomal entry route for SARS-CoV-2. Of note, our U87.ACE2 + cells remained highly permissive for other SARS-CoV-2 variants of concern, including the Omicron variant. In fact, we observed a significantly enhanced infectivity of Omicron in the U87.ACE2 + cells as compared to the Wuhan strain, suggesting that the U87.ACE2 + cells support an endosomal entry route.
An important parameter in antiviral screening is the determination of the cytotoxic effect of compounds in addition to their antiviral potency. Whereas the Vero E6 cells are known for high resistance to cellular toxicity, the U87.ACE2 + cells showed reasonable sensitivity (Supplementary Fig. 3), making these cells more interesting for the selection of potential candidate drugs with promising therapeutic indices that can be further explored in follow-up in vivo studies. In addition, Vero E6 cells might be less suitable for the antiviral screening of small molecules due to their limited capacity to metabolize drugs compared to human-derived cells, as demonstrated for remdesivir in several reports (Uemura et al., 2021; Chiu et al., 2022; Ramirez et al., 2021) and confirmed by our study (Supplementary Fig. 3). Moreover, the antiviral potency of both remdesivir and nirmatrelvir was approximately 1,000 times higher in the U87.ACE2 + cells as compared to the Vero E6 cells, indicating that potential new SARS-CoV-2 antivirals can be more successfully identified in the U87.ACE2 + cells than in the Vero E6 cells.

Fig. 5. A) U87.ACE2 + cells were exposed to SARS-CoV-2 (Wuhan strain, MOI 1) in absence (Virus control) or presence of inhibitors (2 μM Remdesivir or 10 μg/ml antibody R001). At 2 d p.i. cells were collected and stained for intracellular nucleoprotein (N) expression. Dot plots show N expression in noninfected (lower left quadrant) and infected (lower right quadrant) cells, based on the analysis of 5,000-6,000 cells by flow cytometry. The numbers in each quadrant refer to the distribution of the cells (i.e., percentage of total cell population). The dot plot in grey (left panel) represents the noninfected cell control. B) U87.ACE2 + cells were exposed to SARS-CoV-2 (Wuhan strain, MOI 0.3) for 2 h in absence (Virus control) or presence of inhibitors (2 μM Remdesivir or 10 μg/ml antibody R001). Supernatant (= virus input) was then removed, and cells were washed in PBS and given fresh medium. In the conditions with the antiviral treatment, compound was administered together with the virus, and administered again after the wash step. At day 2 p.i., supernatant was collected and RT-qPCR was performed to quantify the copy numbers of the N gene and the E gene. The dotted colored lines refer to the residual amount (background) of virus that attached nonspecifically to the cells and was measured immediately after the wash step (2 h p.i.). For each sample, the average (mean ± SD) of two technical replicates is given (n = 2), plotted on a log10 scale. C) Same as in (B) but for a dilution range of Remdesivir. U87.ACE2 + cells were exposed to two different variants of concern of SARS-CoV-2 (MOI 0.3), as indicated. RT-qPCR data of the N gene were used to calculate the % inhibition of viral replication and to plot a concentration-response curve for Remdesivir. The calculated IC50 value of Remdesivir for each virus strain is indicated in the matching color.

Fig. 6. Colorimetric analysis of SARS-CoV-2 infected U87.ACE2 + cells. A) U87.ACE2 + cells were plated in 96-well plates at a 1:2 dilution range (starting from 50,000 cells/well) and incubated for 1 day (left) or 3 days (right) before analysis by MTS-PES. Graphs show optical density (OD) values (mean ± SD; n = 2). The dotted line represents the OD value measured for medium only. B) U87.ACE2 + cells were exposed to a virus stock dilution range (1:10) of SARS-CoV-2 strain 20A.EU2 (starting from a 10^1× dilution), incubated for 4 days and analyzed by MTS-PES. On the left is a picture taken from the 96-well plate after the in situ reduction of the tetrazolium compound MTS, for 4 replicates. The wells indicated by the black rectangle represent the virus-infected cells. The 2 columns outside the black rectangle are the mock-infected control cells. The graph on the right shows the corresponding OD values (mean ± SD; n = 4). The dotted blue line represents the OD value measured for cells only without virus (CC). The dotted black line represents the OD value measured for medium only. C) U87.ACE2 + cells were incubated with Remdesivir for 3 days and analyzed by MTS-PES. Graph shows OD values (mean ± SD; n = 2) for a concentration range of compound, with the calculated cytotoxic CC50 value of Remdesivir. The blue dotted line represents the cell control (CC) and the black dotted line represents the OD value measured for medium only. D) U87.ACE2 + cells were infected with SARS-CoV-2 (Wuhan strain, MOI 0.1) in absence or presence of Remdesivir, incubated for 4 days and analyzed by MTS-PES. Graph shows OD values for a concentration range of compound, with the calculated IC50 value of Remdesivir. The blue dotted line represents the cell control (CC) and the red line the virus control (VC).
Finally, one of the main advantages of the human U87.ACE2 + cells over existing SARS-CoV-2 infection models is their ability to induce a rapid (within 2 days) and full-blown CPE with complete cell lysis (e.g., compared to Vero E6 cells that slowly present clear CPE as cell detachment rather than cell lysis) in a consistent and reproducible way. This feature of cell destruction is of great value for an easy and clear-cut read-out in a CPE reduction assay, especially when a colorimetric quantification of the CPE-induced reduction in cell viability is utilized. Thus, infection of glioblastoma cells might serve as an easy and low-cost model for testing drugs in high-throughput screening platforms, and hopefully contribute to the development of effective antiviral countermeasures to tackle SARS-CoV-2.

Fig. 7. U87.ACE2 + cells are permissive to different SARS-CoV-2 variants of concern and preserve remdesivir activity against Omicron. A) U87.ACE2 + cells were exposed to a virus stock dilution range (1:10) of different SARS-CoV-2 variants of concern (starting from a 10^1× dilution), incubated for 4 days and analyzed by MTS-PES. The blue dotted line represents the cell control (CC) and the dotted black line represents the OD value measured for medium only. Graphs show optical density (OD) values (mean ± SD; n = 2). B) U87.ACE2 + cells were exposed to a virus stock dilution range (1:10) of SARS-CoV-2 variants Wuhan and Omicron (starting at an MOI = 1), incubated for 4 days and analyzed by MTS-PES. The blue dotted line represents the cell control (CC). Graphs show optical density (OD) values (mean ± SD; n = 3). *, P < 0.05; unpaired t-test. C) Same as in (B) but supernatant was collected at 24 h p.i. and RT-qPCR was performed to quantify the copy numbers of the E gene and to determine the amount of virion release into the culture medium. For each sample, the average (mean ± SD) of two technical replicates is given (n = 2), plotted on a log10 scale. *, P < 0.05; unpaired t-test. D) U87.ACE2 + cells were infected with SARS-CoV-2 (Omicron strain, MOI 0.08) in absence or presence of Remdesivir, Nirmatrelvir or Molnupiravir, incubated for 4 days and analyzed by MTS-PES. OD values were used to calculate the % inhibition of viral replication and to plot a concentration-response curve. For each compound, the IC50 value is given. Values are mean ± SD; n = 4.
Declarations of interest
None.
Author contributions
K.V., E.V. and J.S. designed research; E.V., A.C., B.P., J.S. and S.N. performed research; E.V., S.N. and K.V. analyzed the data; E.V. and K.V. wrote the manuscript; P.M. contributed new reagents/analytic tools. All of the authors discussed the results and commented on the manuscript.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
2022-02-17T16:02:47.208Z
|
2022-02-14T00:00:00.000
|
{
"year": 2022,
"sha1": "1b4db340526bd735140f9de4e552c96dcc4b29c7",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.antiviral.2022.105342",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf783cdc793a72ce2c8dde982cc998f3ca588234",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
54939964
|
pes2o/s2orc
|
v3-fos-license
|
Bioremediation of Chlorpyrifos Contaminated Soil by Microorganism
India is an agriculture-based country where 70% of the population depends on farming. To increase field production, various pesticides are used. Chlorpyrifos (O,O-diethyl O-3,5,6-trichloro-2-pyridyl phosphorothioate) is an organophosphate pesticide widely used as an insecticide for crop protection. However, due to its persistent nature in the environment, it leads to various hazards, including neurotoxic effects, cardiovascular diseases and respiratory diseases. Bioremediation is a technology to eliminate chlorpyrifos efficiently from the environment. In the bioremediation of chlorpyrifos, potential degradative microorganisms possess the opd (organophosphate degrading) gene, which hydrolyses chlorpyrifos and allows its use as a sole carbon source. Thus, the present review discusses how the pesticide chlorpyrifos can be degraded through bioremediation using potential soil microorganisms.
I. IMPACT OF MODERN AGRICULTURE ON ENVIRONMENT
Until about four decades ago, in agricultural systems, crop yields depended totally on internal resources like recycling of organic matter, biological control mechanisms and rainfall patterns. Agricultural yields were modest, but stable. To safeguard production, a variety of crops were grown in space and time in a field as insurance against pest outbreaks or severe weather. Inputs of nitrogen were gained by rotating major field crops with legumes. In turn, rotations suppressed insects, weeds and diseases by effectively breaking the life cycles of these pests. In these types of farming systems the link between agriculture and ecology was quite strong and signs of environmental degradation were seldom evident. But as agricultural modernization progressed, the ecology-farming linkage was often broken as ecological principles were ignored or overridden. In fact, several agricultural scientists have arrived at a general consensus that modern agriculture confronts an environmental crisis. Evidence has accumulated showing that whereas the present capital- and technology-intensive farming systems have been extremely productive and competitive, they also bring a variety of economic, environmental and social problems (Altieri and Rosset, 1995). The very nature of the agricultural structure and prevailing policies in a capitalist setting have led to an environmental crisis by favouring large farm size, specialized production, crop monocultures and mechanization. Today, as more and more farmers are integrated into international economies, the biological imperative of diversity disappears due to the use of many kinds of pesticides and synthetic fertilizers, and specialized farms are rewarded by economies of scale. In turn, lack of rotations and diversification takes away key self-regulating mechanisms, turning monocultures into highly vulnerable agro-ecosystems dependent on high chemical inputs. Also, fields that in the past contained many different crops, or a single crop with a high degree of genetic variability, are now entirely devoted to a genetically uniform single crop. The specialization of farms has led to the image that agriculture is a modern miracle of food production. However, excessive reliance on farm specialization has negatively impacted the environment (Altieri and Rosset, 1995).

The earliest pesticides were either inorganic products or derived from plants, for example burning sulphur to control insects and mites. Other early insecticides included hellebore to control body lice, nicotine to control aphids, and pyrethrin to control a wide variety of insects. Lead arsenate was first used in 1892 as an orchard spray, while at about the same time it was accidentally discovered that a mixture of lime and copper sulphate controlled downy mildew, a serious fungal disease of grapes. It is still one of the most widely used fungicides. Many of these early chemicals had disadvantages. They were often highly toxic and very persistent, posing a threat to the environment (Jayashree and Vasudevan, 2007).
III. USE OF CHLORPYRIFOS IN CROP PROTECTION
Chlorpyrifos is used in the field protection of corn, cotton, peaches, apples, etc. Termites and insects are susceptible to chlorpyrifos [3]. While originally used primarily to kill mosquitoes, chlorpyrifos is effective in controlling cutworms, corn rootworms, cockroaches, grubs, flea beetles, flies, termites, fire ants, and lice. It is used as an insecticide on grain, cotton, field, fruit, nut and vegetable crops, as well as on lawns and ornamental plants. It is also registered for use in domestic dwellings, farm buildings, storage bins, and commercial establishments. Chlorpyrifos acts on pests primarily as a contact poison, with some action as a stomach poison. It is available as granules, wettable powder, dustable powder, and emulsifiable concentrate.
V. HAZARDS OF CHLORPYRIFOS
Chlorpyrifos is moderately toxic to humans. It primarily affects the nervous system through inhibition of cholinesterase, an enzyme required for proper nerve functioning. Poisoning from chlorpyrifos may affect the central nervous system, the cardiovascular system, and the respiratory system. It is also a skin and eye irritant. Symptoms of acute exposure to organophosphate or cholinesterase-inhibiting compounds may include the following: numbness, tingling sensations, incoordination, headache, dizziness, tremor, nausea, abdominal cramps, sweating, blurred vision, difficulty breathing or respiratory depression, and slow heartbeat. Very high doses may result in unconsciousness, incontinence, and convulsions or fatality (Eaton, 2008). Transport of chlorpyrifos into humans results in neural disorders, inhibition of DNA synthesis, interference with gene transcription, altered function of the neurotrophic signaling cascade and altered synaptic function.
VI. SOIL PERSISTENCY AND ENVIRONMENT FATE OF CHLORPYRIFOS
Chlorpyrifos is moderately persistent in soils. The half-life of chlorpyrifos in soil is usually between 60 and 120 days, but can range from 2 weeks to over 1 year, depending on the soil type, climate, and other conditions. Chlorpyrifos was less persistent in soils with a higher pH. Soil half-life was not affected by soil texture or organic matter content. In anaerobic soils, the half-life was 15 days in loam and 58 days in clay soil. Adsorbed chlorpyrifos is subject to degradation by UV light, chemical hydrolysis and soil microbes. Chlorpyrifos adsorbs strongly to soil particles and is not readily soluble in water. It is therefore immobile in soils and unlikely to leach or to contaminate groundwater (Ajaz et al., 2005). TCP (3,5,6-trichloro-2-pyridinol), the principal metabolite of chlorpyrifos, adsorbs weakly to soil particles and appears to be moderately mobile and persistent in soils. The US EPA considers that there are insufficient data to fully assess the environmental fate of chlorpyrifos. Chlorpyrifos is tightly adsorbed by soil and not expected to leach significantly. Volatilization from the soil surface will contribute to loss. Depending on soil type, microbial metabolism of chlorpyrifos may have a half-life of up to 279 days. Higher soil temperatures, lower organic content and lower acidity increase degradation of chlorpyrifos. Chlorpyrifos inhibits nitrification and nitrogen fixation marginally, and many bacterial strains have been unable to degrade it.
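The persistence figures above can be translated into residue estimates if one assumes simple first-order decay, in which the remaining fraction after time t is (1/2)^(t/t_half). This is a simplification of real soil behaviour, which, as noted above, depends on pH, temperature, organic content and microbial activity; the code below is illustrative only.

```python
import math

def fraction_remaining(days, half_life_days):
    """Fraction of the initial chlorpyrifos residue remaining after a given time,
    assuming simple first-order decay (a simplification of real soil behaviour)."""
    return 0.5 ** (days / half_life_days)

# Span of half-lives cited above (roughly 2 weeks to over 1 year)
for t_half in (14, 60, 120, 365):
    print(f"half-life {t_half:>3} d: {fraction_remaining(180, t_half):.1%} left after 180 days")
```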
VII. FATE IN HUMANS AND ANIMALS
Chlorpyrifos is readily absorbed into the bloodstream through the gastrointestinal tract if it is ingested, through the lungs if it is inhaled, or through the skin if there is dermal exposure. In humans, chlorpyrifos and its principal metabolites are eliminated rapidly. Chlorpyrifos is eliminated primarily through the kidneys. It is detoxified quickly in rats, dogs, and other animals. Chlorpyrifos is moderately to very highly toxic to birds. Chlorpyrifos is very highly toxic to freshwater fish, aquatic invertebrates and estuarine and marine organisms. Cholinesterase inhibition was observed in acute toxicity tests of fish exposed to very low concentrations of this insecticide. Aquatic and general agricultural uses of chlorpyrifos pose a serious hazard to wildlife and honeybees (Cho et al., 2004).
VIII. BIODEGRADATION OF CHLORPYRIFOS BY MICROORGANISMS
The availability of different pesticides in the field exposes several different kinds of microorganisms to pesticides. Most of the organisms die under the toxic effect of pesticides, but a few of them evolve in different ways and use pesticide compounds in their metabolism. Several reports are available indicating degradation of different pesticides when they are available in nature in excess (Horvath, 1972; Hussain et al., 2007). Successful removal of chlorpyrifos by the addition of bacteria (bioaugmentation) has been reported (Singh et al., 2004). Degradation strategies exhibited by microbes include: co-metabolism, the biotransformation of a molecule coincidental to the normal metabolic functions of the microbe; catabolism, the utilization of the molecule as a nutritive or energy source; and extracellular enzymes (phosphatases, amidases and laccases) secreted into the soil, which can act on the molecule as a substrate. Three basic types of reactions can occur: degradation, conjugation, and rearrangement, all of which can be microbially mediated. Complete degradation of a chemical in the soil to carbon dioxide and water involves many different types of reactions. Microorganisms are key players in determining the environmental fate of novel compounds because these compounds can be used as carbon and energy sources by microorganisms (Singh, 2008).
Biodegradation of chlorpyrifos by Bacteria
Bacteria use natural organics such as proteins, carbohydrates, and many others as carbon and energy sources. Many of the xenobiotic compounds of environmental concern are naturally occurring relatives of these organics. For other xenobiotics, repeated exposure has resulted in the adaptation and evolution of bacteria capable of metabolizing these man-made compounds. Microbial degradation of organophosphate pesticides like chlorpyrifos is of particular interest because of the high mammalian toxicity of such compounds and their widespread and extensive use. Chlorpyrifos has been shown to be degraded co-metabolically in liquid media by bacteria [14]. Pseudomonas aeruginosa is the most common Gram-negative bacterium found in soil. Isolates of this bacterium have been found to have the potential to degrade chlorpyrifos (Fulekar and Geetha, 2008). Enhanced degradation of chlorpyrifos by Enterobacter strain B-14 has been reported (Singh et al., 2004). Yang et al. (2005) isolated Alcaligenes faecalis DSP3, which is also capable of degrading chlorpyrifos, resulting in the formation of the by-product 3,5,6-trichloro-2-pyridinol (TCP) (Rani et al., 2008). A chlorpyrifos-degrading Flavobacterium sp. was reported by Jilani and Khan (2004). A few chlorpyrifos-degrading bacteria, including Enterobacter strain B-14, Stenotrophomonas sp. YC-1, and Sphingomonas sp. Dsp-2, have been studied. The metabolism of chlorpyrifos by microorganisms in soil has been reported by many scientists. Chlorpyrifos is oxidized to the oxon analogue [O,O-diethyl-O-(3,5,6-trichloro-2-pyridinyl) phosphate, III] of the insecticide and finally into 3,5,6-trichloropyridinol (II) (Mukherjee et al., 2004). Various opd (organophosphate degrading) genes have been isolated from different microorganisms from different geographical regions, which have been shown to hydrolyze chlorpyrifos (Hussain et al., 2007). Chlorpyrifos has also been reported to be degraded co-metabolically in liquid media by Flavobacterium sp. and by an Escherichia coli clone carrying an opd gene (Singh et al., 2004). Six chlorpyrifos-degrading bacteria were isolated using chlorpyrifos as the sole carbon source by an enrichment procedure (Rani et al., 2008). Chlorpyrifos hydrolysis was greatly accelerated under low moisture conditions, both in acidic and alkaline soils (Ajaz et al., 2005). Arthrobacter sp. strain B-5 hydrolyzed chlorpyrifos at rates dependent on the substrate. Chlorpyrifos (10 mg/L) was completely degraded in mineral salts medium by Flavobacterium sp. ATCC 27551 within 24 h and by Arthrobacter sp. within 48 h, respectively. The degradation of chlorpyrifos by an isolate from a flooded soil re-treated with methyl parathion was also reported (Xu, 2007). The degradation of chlorpyrifos was reported in mineral salt medium by an Arthrobacter species that was initially isolated from methyl parathion-enriched soil (Jilani and Khan, 2004). Bacillus sp. and Micrococcus sp. possess potential to degrade chlorpyrifos (Getzin, 1981; Gomez et al., 2007).
Biodegradation by Fungi
Chlorpyrifos has also been reported to be effectively degraded by two soil fungi, Trichoderma viride and Aspergillus niger [16]. Several chlorpyrifos-degrading fungi, such as Phanerochaete chrysosporium, Aspergillus terreus, and Verticillium sp. DSP, have also been reported [17]. Verticillium sp. and Brassica chinensis are reported for degradation of chlorpyrifos in culture medium at concentrations ranging from 1 to 100 mg/L. Methods for in-situ bioremediation with Verticillium sp. have also been developed and have achieved good results (Arisoy, 1998; Trejo and Quintero, 2000; Bhalerao and Puranik, 2007). Fungal degradation of chlorpyrifos by Verticillium sp. DSP in pure cultures, and its use in bioremediation of contaminated soil, has been reported (Xu, 2007).
A fungal strain capable of utilizing chlorpyrifos as sole carbon and energy source has been isolated from soil, and the degradation of chlorpyrifos in pure cultures and on vegetables by this fungal strain and its cell-free extract has also been reported. This strain was identified as an unknown species of Verticillium. It opens a new research direction for the development of novel bioremediation processes. The chlorinated pyridinyl ring of chlorpyrifos undergoes cleavage during biodegradation by P. chrysosporium. However, the degradation of chlorpyrifos proves more efficient by mixed populations than by pure cultures of fungi. Mixed populations of fungi, such as Alternaria alternata, Cephalosporium sp., Cladosporium cladosporioides, Cladorrhinum brunnescens, Fusarium sp., Rhizoctonia solani, and Trichoderma viride, degrade chlorpyrifos in liquid culture more efficiently (Singh et al., 2004).
IX. ENZYMATIC ACTIVITY IN CHLORPYRIFOS BIODEGRADATION
Organophosphorus hydrolase is one of the important hydrolytic enzymes in detoxification technology that hydrolyzes organophosphate pesticides, such as chlorpyrifos, containing P-O, P-F and P-S bonds. The OPH enzymes, including O-Phenylenediamine Dihydrochloride (OPD), Methyl Parathion Hydrolase (MPH), Mevalonate Pyrophosphate Decarboxylase (MPD), etc., were identified for the hydrolysis of chlorpyrifos (Meysami and Baheri, 2003). Chlorpyrifos degradation induced organophosphorus phosphatase (OPP) production, and its concentration was 28 times higher extracellularly than inside the cells. The chlorpyrifos degradation efficiencies for L. fermentum, L. lactis and E. coli were reported to be 70 per cent, 61 per cent and 16 per cent, with 3,5,6-trichloro-2-pyridinol (TCP), chlorpyrifos-oxon and diethyl-phosphate as end products, respectively. Purification and characterization of a novel chlorpyrifos hydrolase from the fungus Cladosporium cladosporioides Hu-01 has also been performed (Bhagobaty et al., 2007).
X. GENES INVOLVED IN CHLORPYRIFOS BIODEGRADATION
The organophosphate-degrading opd and mpd genes have been isolated from species capable of degrading chlorpyrifos. Most of them are plasmid-based or located on the chromosome. The opd gene from Agrobacterium radiobacter is located on the chromosome. A novel phosphotriesterase enzyme, whose coding gene differs from the organophosphate degradative gene (opd), has been identified in an Enterobacter strain (Yang et al., 2005).
XII. CONCLUSION
The organophosphate pesticide chlorpyrifos used against pests not only protects crops but also causes havoc in the environment through its accumulation. Bioremediation is emerging as a beneficial tool to create a pesticide-free environment. Potential microorganisms have the ability to degrade the pesticide completely. However, more research is needed to bring the technique into field practice and make it more efficient. For large-scale culture of such bacterial isolates to be used for bioremediation purposes, it is essential to determine the optimum growth conditions, such as temperature and pH. These isolated strains of bacteria are highly adapted to existing environmental conditions and thus could be effectively utilized for bioremediation and metabolic detoxification of chlorpyrifos.
|
2019-04-27T13:05:44.664Z
|
2017-07-20T00:00:00.000
|
{
"year": 2017,
"sha1": "c168abcb062515604c48ab926abc125203fc5ebd",
"oa_license": "CCBYSA",
"oa_url": "https://ijeab.com/upload_document/issue_files/21%20IJEAB-JUL-2017-36-Bioremediation%20of%20Chlorpyrifos%20Contaminated.pdf",
"oa_status": "GOLD",
"pdf_src": "Neliti",
"pdf_hash": "1251c75deba1f36757f6992af0ef3d2f3d36723c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
258365980
|
pes2o/s2orc
|
v3-fos-license
|
Role of untargeted omics biomarkers of exposure and effect for tobacco research
Tobacco research remains a clear priority to improve individual and population health, and has recently become more complex with emerging combustible and noncombustible tobacco products. The use of omics methods in prevention and cessation studies is intended to identify new biomarkers of risk, to compare risks related to other products and to never use, to assess compliance for cessation and reinitiation, and to assess the relative effects of tobacco products to each other. They are also important for the prediction of reinitiation of tobacco use and for relapse prevention. In the research setting, both technical and clinical validation are required, which presents a number of complexities in the omics methodologies, from biospecimen collection and sample preparation to data collection and analysis. When the results identify differences in omics features, networks or pathways, it is unclear whether the results reflect toxic effects, a healthy response to a toxic exposure, or neither. The use of surrogate biospecimens (e.g., urine, blood, sputum or nasal samples) may or may not reflect target organs such as the lung or bladder. This review describes the approaches for the use of omics in tobacco research and provides examples of prior studies, along with the strengths and limitations of the various methods. To date, there is little consistency in results, likely due to the small number of studies, limitations in study size, variability in the analytic platforms and bioinformatic pipelines, and differences in biospecimen collection and/or human subject study design. Given the demonstrated value of omics in clinical medicine, it is anticipated that their use in tobacco research will be similarly productive.
Introduction
The worldwide burden from the use of combustible and non-combustible tobacco products remains a substantial public health issue. There are known methods to foster cigarette smoking cessation, which have broad applicability to other tobacco products. Biomarkers in tobacco studies are intended to elucidate pathways for disease risk, and more recently to compare emerging tobacco products with conventional tobacco products and never use. The primary use of biomarkers in smoking cessation studies focuses mainly on compliance, using specific tobacco-smoke constituents such as cotinine, anatabine, anabasine and nicotelline [1,2]. However, with a rapidly expanding tobacco marketplace, mostly with non-combustible products (e.g., electronic cigarettes [e-cigs], heat-not-burn tobacco products [HTPs] and oral nicotine pouches), there is a need for additional study of the relative harms for tobacco users switching to other products rather than complete cessation of all nicotine products. There are additional needs for biomarker studies to assess compliance for some of these new products. A summary of commonly used biomarkers has been published following a 2015 Food and Drug Administration (FDA) Center for Tobacco Products (CTP) workshop on tobacco-related biomarkers of exposure and harm [3,4], and in other reviews [5,6]. However, this workshop only superficially addressed omics biomarkers and appropriately referred to them as novel.
Biomarkers are generally classified in several ways, with some overlap, as biomarkers of exposure and biomarkers of effect (or harm). Biomarkers of exposure are typically chemically-specific measurements of smoke constituents and their metabolites, such as tobacco-specific nitrosamines and tobacco alkaloids that can only come from tobacco leaves [6]. Other chemically-specific biomarkers are not unique to tobacco exposure, but also derive from other sources (e.g., diet and environment). Biomarkers of effect represent changes in cell function, cannot be specific to any particular exposure, and include those derived from endogenous exposures, such as markers of DNA damage, proliferation, cell death, alterations in xenobiotic metabolism, and changes in gene expression, methylation, proteins, and endogenous metabolites.
Biomarkers may be targeted, e.g., the measurement of a specific protein or panel of proteins, carcinogen metabolites, or specific type of DNA damage.Untargeted biomarkers, which include omics platforms, are broad screen approaches that are generally agnostic to specific hypotheses and are used for biomarker discovery (including profiles and unique phenotypes) with the intent to be useful as clinical assays, or for large scale studies impacting policy.They also are used to discover effects on disease pathways and networks.There is a desire for validated tobacco-related omics biomarkers that would be clinically relevant as predictive of future harm.
Omics strategies focus on specific biological types of molecules, including proteins, gene expression (including non-coding RNAs such as micro-RNAs), gene methylation, small molecules, and lipids [7][8][9]. These all yield complex datasets ranging from hundreds to 100,000 features (e.g., chromatographic mass spectroscopy peaks that have not yet been identified) that require a deep understanding of data processes (pipelines) and analysis incorporating systems biology approaches [10,11]. They can be done using focused arrays that offer ease of use through manufacturer-supplied reagents and analysis software, or they can be more comprehensive, such as next generation sequencing for RNA transcriptomics, or mass spectroscopy methods for proteomics, metabolomics and lipidomics.
The focus of this manuscript is on the use of untargeted screening strategies through omics assays for biomarker discovery and correlative assessments of the biological impacts of tobacco products relative to each other, complete cessation, smoking reinitiation and relapse prevention. (Epigenomics and genomic risks will not be discussed herein as they are addressed in a separate article in this monograph.) The various omics are summarized in Table 1, including a ranking of the number of available studies and applications. This manuscript reviews general principles for the validation and use of omics biomarkers.
Methods: systematic review of tobacco use and untargeted omics in healthy individuals
This systematic review reports studies of tobacco use and cessation only in healthy individuals, which therefore avoid disease bias (e.g., not studying lung cancer or COPD patients) and provide data for toxicological changes prior to the onset of disease. Most available studies were about tobacco use, and only a few cessation studies have been conducted. A systematic search for all papers was conducted through PubMed and Google Scholar. Search terms included smoking and either transcriptomics, proteomics, metabolomics or lipidomics. Studies must have used untargeted methods, and those with specific a priori hypotheses assessing individual or groups of related biomarkers were excluded. Studies that did not provide data for healthy individuals were excluded. Additionally, cited papers in those publications were reviewed, as well as the related articles and publications citing the study as listed in PubMed. As this manuscript shows, there are only limited studies across many types of biospecimens (serum, plasma, urine, saliva, sputum and lung samples). The cited studies are summarized in Supplementary Table 1. Cited studies in this review are those that provided data for healthy individuals of any age (often referred to as "controls") using untargeted omics methods and provided data specifically about the healthy individuals.
General principles for validation and use of omics biomarkers
The use of omics approaches in epidemiology and prevention intervention studies is an evolving field, and the application to tobacco research has been quite limited. The use of omics-derived signatures or profiles has been somewhat validated for some applications in the clinical setting for susceptibility, early detection, diagnosis, treatment decision making and prognosis of various illnesses [12][13][14][15]. Thus, there are proof-of-principle applications for the use of omics in tobacco risk and cessation (and relapse) studies. Validation of omics biomarkers such that they can be applied to the clinical setting or inform policy includes both technical (laboratory) validation and clinical validation [7].
Technical validation
The technical validation of a biomarker addresses the variability of assay results due to methodological issues and instrument performance, with the intent of improving accuracy and reproducibility at levels observed in human biofluids [3,7,14]. Consideration should be given to each step of the process, namely how biospecimens are collected, time to processing, quality and consistency of reagents, instrument variability, choice of control samples, data pre-processing for analysis (transformation and normalization), and data analysis (Table 2) [16][17][18]. In the research setting, samples should be blinded to laboratory staff with respect to experimental design, subject characteristics and intervention. Although not typically available for omics assays, "gold standards" or well-defined standards are used to develop and monitor assay performance. Errors can be random or systematic. Some assays may be less informative for disease pathways when study subjects have a concurrent disease (e.g., high blood lipids or bilirubin), at the levels that features are found in human biospecimens (e.g., below a limit of detection or very high), or might be affected during sample collection and transport. Several statistical approaches should be used, including the analytic sensitivity, coefficient of variation (for each feature), precision, accuracy, interferences (including from specimen containers), and allowable total error [16]. These can vary based on the type of biospecimen and can differ based on the detectable levels; e.g., lower biomarker levels might have higher coefficients of variation, and higher biomarker levels may lead to cross-contamination of samples. Quality control procedures are put into place to monitor assay performance over time, for example using representative control samples (e.g., pooled subject samples with high and low results), spiked samples and blank reagent samples. It should be noted that quantitative biomarker levels are only applicable to the laboratory and method being used, and should not be extrapolated to results from other laboratories, unless a cross-laboratory validation has been done or assays are developed with the same biospecimen standards. Manufacturers' claims for assay performance should be independently validated by the investigator. In the clinical laboratory, or for research where biomarker results are used to influence decision making no matter how trivial (e.g., subjects receive an intervention based on an omics assay result), the assay needs to be developed under the Clinical Laboratory Improvement Amendments (CLIA) methodology as a high-complexity "Laboratory Developed Test". The CLIA process does not ensure laboratory validation, only that the laboratory has addressed this, has standard operating procedures and regularly follows them.
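As a minimal illustration of the per-feature quality-control statistics mentioned above, the following Python sketch computes the coefficient of variation for each feature across repeated injections of a pooled QC sample; the feature names, simulated values and 30% CV cutoff are illustrative assumptions, not a recommendation from the cited guidance.

```python
# Hypothetical sketch: per-feature coefficient of variation (CV) from repeated
# injections of a pooled QC sample, used to flag unreliable omics features.
# Feature names, values, and the 30% CV cutoff are illustrative assumptions.
import numpy as np
import pandas as pd

def qc_feature_cv(qc_matrix: pd.DataFrame, cv_cutoff: float = 0.30) -> pd.DataFrame:
    """qc_matrix: rows = repeated pooled-QC injections, columns = omics features."""
    mean = qc_matrix.mean(axis=0)
    sd = qc_matrix.std(axis=0, ddof=1)
    cv = sd / mean
    return pd.DataFrame({"mean": mean, "sd": sd, "cv": cv, "pass_qc": cv <= cv_cutoff})

# Example with simulated intensities for three features measured in five QC injections
rng = np.random.default_rng(0)
qc = pd.DataFrame({
    "feature_A": rng.normal(1000, 50, 5),   # stable feature
    "feature_B": rng.normal(200, 80, 5),    # noisy feature, likely fails QC
    "feature_C": rng.normal(5000, 150, 5),
})
print(qc_feature_cv(qc))
```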
The laboratory validation of tobacco-focused omics studies is not commonly done, although it should be. The above procedures to validate assays for the clinical arena are described by the FDA for therapeutic drug development. The FDA CTP does not have similar guidance for the analysis of biospecimens, but does have a draft guidance for the validation of chemical analytic testing of tobacco products that includes many of the same principles that would be used for tobacco biomarker assays (https://www.regulations.gov/docket/FDA-2021-D-0756). Another available resource is a summary of a CTP workshop for biomarkers of exposure and harm, which does not address the validation of omics assays but contains principles that can be applied to omics biomarkers [3,4].
Clinical validation (Fig. 1)
Clinical validation refers to the biological implications for assay results [7].The intent of clinical validation is to justify use of the biomarkers in the clinical setting that are directly predictive of future disease risk.Biomarkers of harm generally reflect a surrogate endpoint, e.g., a urine or blood test predicting lung or heart disease [3].However, there are currently no validated tobacco-related biomarkers of harm of any type, although some epidemiology studies indicate the potential.To become validated, the best evidence shows predictivity in long term observational studies where human disease develops after many years of use.The challenge of these studies is the limited availability of biospecimens, but these are available when there is sufficient preliminary data to justify the use of the precious biospecimens, including for omics research.While study designs and human interventions for omics biomarkers might be used to assess the biological impact of tobacco cessation, or switching from combustible to non-combustible tobacco products, the data analysis requires different approaches than typically used for behavioral or other interventions.The large number of features increases the risk of false positives, and so statistical analysis plans include the use of statistical significance testing using false discovery rates, or other methods such as Bonferroni corrections.As with all human studies, replication of omics studies is critical, which is often initially done within a study as internal replication, or across studies as external replication [7,12].Internal replication is done within a single data set with a two-step process for splitting the sample set into two groups as an initial discovery set as an unbiased approach and then a validation set seeking to replicate the results of the discovery set [10].The best replication would come from external datasets in different populations and using different study methods showing the same results (e.g., the perturbation of the biological pathway in two study sets associated with the same omics profile).During this process of replication, consideration also should be given to differing results by gender, race, ethnicity, and/or age [17].An important research gap for any omics method is the understanding of the variability of results over time unrelated to changes in tobacco use, for example day to day variation in diet, environmental exposures, season, health, and physical activity.It should be noted that with the use of omics, perturbation of biological systems (e.g., increased xenobiotic metabolism) alone may not necessarily infer an adverse toxic effect increasing disease risk, but also can be a normal host response to reduce the harm from a toxic exposure.It also is important to note that statistical significance may not equate to clinical significance, and a substantial limitation of omics studies is the lack of clinical correlates to statistically altered features or pathways [12].Thus, the determination of a biomarker's clinical utility is a function of how well a biomarker is a surrogate for a risk or disease outcome [7].
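The internal replication and multiple-testing strategies described above can be sketched as follows; this is a hedged, simplified example (two-group t-tests with Benjamini-Hochberg FDR control in a discovery set and nominal replication in a held-out validation set), and the variable names and thresholds are assumptions for illustration rather than the approach of any cited study.

```python
# Simplified internal-replication sketch: split subjects into discovery and
# validation sets, control the false discovery rate in discovery, and require
# nominal replication of the surviving features in the held-out set.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def discover_and_validate(X, y, discovery_frac=0.5, fdr=0.05, seed=0):
    """X: subjects x features array; y: 0/1 group labels (e.g., never-smoker vs. smoker)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(len(y) * discovery_frac)
    disc, val = idx[:cut], idx[cut:]

    def group_pvalues(rows):
        sub_X, sub_y = X[rows], y[rows]
        return stats.ttest_ind(sub_X[sub_y == 0], sub_X[sub_y == 1], axis=0).pvalue

    disc_hits = multipletests(group_pvalues(disc), alpha=fdr, method="fdr_bh")[0]
    val_p = group_pvalues(val)
    replicated = disc_hits & (val_p < 0.05)   # discovery hit must replicate nominally
    return np.flatnonzero(replicated)          # indices of replicated features
```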
The CTP provides regulations, guidance and premarket approvals based on a population health standard, and therefore assessing human impacts are central to their considerations.In 2019, the CTP recommended for product applications seeking premarket approval of e-cigs and other electronic nicotine delivery devices to include biomarkers of exposure and harm (https://www.fda.gov/regulatory-information/search-fda-guidance-documents/premarket-tobacco-product-applications-electronic-nicotine-delivery-systems-ends).There is no guidance, however, about how to assess the clinical implications of biomarkers.
Omics studies utilize the full range of tissues. Urine, blood, sputum, and nasal epithelia, for example, are considered surrogates for tissues that are more difficult to collect, such as the lung and bladder epithelia. How well surrogate studies represent target organs is generally unclear and needs further study. Target organ analysis has typically had limitations, as these tissues may require invasive procedures (e.g., bronchoscopy) and are typically collected from unhealthy subjects with cancer or COPD. These studies are less informative because the presence of the disease may affect assay results rather than a pathogenic process, even if surrounding "normal" tissues are used [19]. Biospecimens such as sputum are predicated on the concept that there will be a combination of broncho-epithelial cells and inflammatory cells, or that proteins in sputum are secreted from lower airways; optimally this is done with sputum induction techniques, which are often not successful and carry some risk [20]. Nasal epithelium is considered to have similar transcriptomic changes as broncho-epithelium within the context of field cancerization [21]. While there is a strong rationale to place greater emphasis on interpretation of studies using target organ tissues, databases that provide correlations between target and surrogate tissues enhance the value of surrogate studies, although availability is limited for the omics assays discussed herein for tissues from healthy individuals (in contrast to projects such as the Genotype-Tissue Expression (GTEx) Project).
Analytics
Omics approaches can focus on individual features and their biological significance, establish profiles or molecular phenotypes related to a particular trait (e.g., smokers vs. quitters), and/or identify perturbations of disease pathways. Profiling can be focused on a priori hypotheses; for example, in tobacco research this can assess pathways known to be involved in carcinogenesis as hallmarks of cancer [1], or xenobiotic metabolism of carcinogens and other toxicants. For each omics type, bioinformatic methods and databases are available to associate features with pathways and assess the magnitude of impacts on the pathway [2].
Pre-processing of data readies the data for analysis by assessing quality, removing assay artifacts, summarizing raw data into analysis-ready features, normalization, and transformation of the data [3]. Further processing can include QC-based filtering (e.g., samples or features with a high coefficient of variation, low signal, or excessive missingness) and feature selection for analysis (e.g., pathway-specific feature analysis). Analytic approaches have been summarized by Kaur and coworkers [4], Kim and Tagkopoulos [3] and Reel and coworkers [2]. They note advantages and disadvantages of analytic tools for machine learning. Frequently, principal components analysis is used to show the clustering of subjects or other characteristics by omics, check for batch effects, and identify outliers and the features driving the clustering. Correlation, regression, group comparisons, and other statistical methods are used to determine features associated with an outcome/measurement. Due to the large number of tests required for most omics analyses, the false discovery rate [22] or similar methods such as Bonferroni corrections are typically employed to identify features of interest. Analytic methods for modeling/prediction can use supervised learning (labeled data where the inputs or features and their effects are known) or unsupervised learning (drawing inferences blind to data labels, seeking associations without prior knowledge of feature function) [2][3][4]23]. As an example of the latter, 'natural' clusters (unsupervised) derived from the data may be indicative of different disease states and be associated with outcome measures of interest. Supervised learning methods can be more accurate and are simpler to apply, but more upfront knowledge is needed. Methods incorporating both, as semi-supervised learning, also are available [3].
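To make the principal components step concrete, the following is a minimal sketch (not taken from any cited paper) that log-transforms and scales a feature matrix and plots the first two principal components colored by a label such as batch or smoking status; all variable names are assumptions.

```python
# Illustrative sketch: PCA overview of an omics feature matrix to visualize
# subject clustering and check for batch effects before group comparisons.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_overview(X, labels, n_components=2):
    """X: subjects x features intensity matrix; labels: e.g., batch or smoking status."""
    labels = np.asarray(labels)
    Xs = StandardScaler().fit_transform(np.log1p(X))     # transform and normalize
    pcs = PCA(n_components=n_components).fit_transform(Xs)
    for lab in np.unique(labels):
        sel = labels == lab
        plt.scatter(pcs[sel, 0], pcs[sel, 1], label=str(lab), alpha=0.7)
    plt.xlabel("PC1"); plt.ylabel("PC2"); plt.legend()
    plt.title("Subject clustering / batch-effect check")
    plt.show()
```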
Pitfalls include overfitting and the curse of dimensionality, both related to having complex models with too few data points, and imbalanced class sizes due to skewed distributions of data [2,3]. Specific analytic applications are reviewed by Kaur and coworkers (2021) [4].
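One common guard against overfitting in such high-dimensional settings is cross-validation with feature selection nested inside each fold; the sketch below is a hedged illustration of that idea using a regularized classifier, and the parameter choices (50 selected features, 5 folds, L2-penalized logistic regression) are assumptions rather than recommendations from the cited reviews.

```python
# Hedged sketch: stratified cross-validation of a regularized classifier, with
# feature selection kept inside each fold to avoid optimistic (overfit) estimates.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

def cross_validated_auc(X, y, k_features=50, folds=5, seed=0):
    """X: subjects x features; y: binary labels (e.g., smoker vs. e-cig user)."""
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=min(k_features, X.shape[1])),  # selection inside fold
        LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
    )
    cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
    return cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
```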
For most, if not all, of the omics types, databases and informatics approaches are available to relate omics features to specific molecules and biological pathways [5]. For many omics types there is also a plethora of previously analyzed datasets available for analysis or comparison of results. For example, there are more than 800,000 cancer-focused gene expression datasets in the Gene Expression Omnibus (GEO) database of the National Center for Biotechnology Information at NIH.
Integrative analysis of multi-omics data holds the promise of a better understanding of the biological impact of conventional and new tobacco products [6]. The combined analysis will, for example, employ machine learning methods to profile individuals using a combination of gene expression, epigenetics, proteins and small-molecule metabolomics to provide a comprehensive molecular picture of a disease state or the involvement of particular biological pathways [7]. Analytic methods and deep learning methods have been summarized by Correa-Auila and coworkers [8] and Grapov and coworkers [9]. Integrative analysis can include, for example, the combination of different omics data for cluster analysis, clustering of clusters, and interactive clustering, which has been summarized recently by Zhang and coworkers [6]. However, there are substantial challenges that must be met before the promise of integration for better predictions is realized. It is important to note that the challenges of using single omics for both laboratory and clinical validation are not reduced by integrative analysis; rather, they are amplified [9]. Integrating omics platforms will require larger datasets and novel analytics not yet developed, does not obviate the need for good study design, and has limitations because omics platforms frequently do not have overlapping features to integrate related genes, proteins, etc.
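As one hedged, simplified sketch of early integration, the code below z-scores each omics block, concatenates the features for subjects measured on all platforms, and applies unsupervised clustering; the block names, simulated data and choice of three clusters are assumptions for illustration only and do not represent any specific published pipeline.

```python
# Naive "early integration" sketch: scale each omics block, concatenate features
# across matched subjects, and look for multi-omics subject clusters.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def integrate_and_cluster(omics_blocks, n_clusters=3, seed=0):
    """omics_blocks: dict of name -> (subjects x features) arrays, same subject order."""
    scaled = [StandardScaler().fit_transform(block) for block in omics_blocks.values()]
    combined = np.hstack(scaled)                      # concatenate feature blocks
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(combined)

# Example: simulated transcriptomics and metabolomics for 40 matched subjects
rng = np.random.default_rng(1)
cluster_labels = integrate_and_cluster({
    "transcriptomics": rng.normal(size=(40, 500)),
    "metabolomics": rng.normal(size=(40, 120)),
})
print(cluster_labels)
```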
Omics methods and examples of applications (Table 1)
Transcriptomics
Transcriptome-wide analysis through mRNA profiling can be done by both array methods and next generation sequencing (e.g., RNASeq) [24]. Both coding mRNA and non-coding RNA have been studied (e.g., miRNAs and long non-coding RNAs). A note of caution for mRNA-based assays is that mRNA is not stable and needs to be collected and processed in media that stabilize it; results are subject to sample quality; and different mRNAs may have different stabilities, biasing results, with miRNAs being the most stable because of their small size. Kopa and Pawliczak (2018) summarized various human studies of transcriptomics associated with smoking and e-cig use [25], while Silva and Kamens (2020) published a transcriptomics review for smoking [26]. A recent review by Kopa-Stojak and Pawliczak (2022) summarized reported differential miRNA expression in smokers and e-cig users, in experimental animal models and in in vitro cell cultures, noting that e-cig exposures had lower impacts compared to smoking, but there were some unique effects in the experimental models compared to air controls [27]. Devadoss and coworkers (2019) provided a summary of long non-coding RNAs associated with smoking and the pathogenesis of COPD, including a summary of available databases [28].
Blood testing comparing smokers to never-smokers uses white blood cells, and sometimes mononuclear cell subsets, as the source of RNA. This may or may not be corrected for the relative numbers of white blood cell subtypes, which can vary greatly among individuals, as does the gene expression of the different subtypes. A meta-analysis of whole blood transcriptomics in 6 cohorts of smokers, former smokers and never smokers yielded associations with platelet and lymphocyte activation, inflammatory response, and protein biosynthesis [29]. Interestingly, 12 genes, including LRRN3, GPR15, and CLDND1, remained differentially expressed in former vs. never-smokers 30 years after quitting. Consistently, blood studies show differences for xenobiotic metabolism, immune-related pathways and AHRR pathways in both cross-sectional studies and switching studies [29][30][31][32][33].
Lung sampling by bronchoscopy in a cross-sectional study of smokers, never smokers and long-term e-cig users demonstrated clear differences between smokers and never-smokers, and the e-cig users had gene expression patterns similar to the never-smokers, indicating an attenuation of smoking effects [34]. In a very small study using nasal epithelium, e-cig users of 6 months or more did not show the same attenuation, indicating that in this case the nasal epithelium is not a surrogate for the lung [35]. In a separate lung study, never-smokers were provided nicotine- and flavor-free e-cigs for one month, where a small increase in inflammatory cells and immune cytokines was noted, but there was no change in gene expression [36]. A smaller study performed pre- and post-bronchoscopies following only 10 e-cig puffs; some genes within immune pathways were affected, but only minimal effects were reported, likely due to the limited exposure [37]. The brief exposure is a limitation of this study, and whether the lack of changes was due to a lesser impact of e-cigs or the duration of exposure is unclear.
Biopsies have been used to assess transcriptomics in the oral mucosa of current smokers and never smokers, and similar to other tissues, expression differences were found for xenobiotic metabolism, especially CYP1A1, although no effects were observed for AHRR expression (there were changes reported for AHRR methylation) [38].
Sputum is another surrogate biospecimen used to assess lung effects. For example, Titz and coworkers (2015) induced and processed cell-free sputum from smokers with and without COPD, and never smokers [39]. They found transcriptome differences for xenobiotic metabolism and oxidative stress response, and former smokers' profiles were the same as never smokers'. The former smokers were on average 8.85 years since quitting, and so it is unclear how long it takes after cessation to attenuate the smoking effect.
In lung tissue, Szymanowska-Narloch, et al. (2013) reported in a small number of subjects that gene expression of untreated lung cancer patients differed by smoking status in the lung tumors, but not in surrounding non-cancer tissues, indicating that the impact of smoking may have happened after the development of cancer [40]. Morissette et al. (2014) studied adjacent non-cancer tissues in lung cancer patients, comparing smokers to never smokers [41]. They concomitantly identified differences in gene expression for the cancer patients and used a mouse model of cigarette smoke exposure (BALB/c female mice, 8-week whole body exposure). It was reported that 17 genes, 11 pathways, and 58 affected biological functions were identified in both. Included were previously known effects such as AHRR, CYP1B1, and NQO1. Separately, there were 6 new associations with smoking, namely ACP5, ATP6V0D2, BHLHE41, NEK6, DCSTAMP and LCN2. Using Ingenuity Pathway Analysis, the impacted pathways were xenobiotic response/detoxification, phospholipid metabolism/degradation and oxidative stress defense/generation. As biological functions affected in both humans and mice, there were associations with immune processes, metabolism and cell functions, diseases, tissue injury and repair, and organ development. Han and coworkers (2021) used publicly available datasets from the GEO database that had transcriptomics data for smokers and healthy non-smokers for alveolar macrophages collected by lavage [42]. They reported, by pathway analysis, effects on myeloid leukocyte migration, cytokine activity, and leukocyte chemotaxis and migration. They also studied smokers with COPD, but the relationship of the healthy subjects' differences to the development of COPD and its transcriptomics was unclear.
Proteomics
The study of proteins can be more attractive than gene expression studies because gene expression may not correlate with protein expression and function [43]. The study of proteins is somewhat more complex than gene expression because protein biological function may be modified by post-translational modifications, so that protein levels might not reflect the actual biologically relevant levels [43]. Proteomics assays typically use mass spectroscopy or arrays. The workflow for assay development and implementation, and assay procedures, was recently described by Nakayasu and coworkers [17], Khan and coworkers [44] and Smith and coworkers [43]. Normalization of results, for example per total protein, can be complicated for some biospecimens such as broncho-alveolar lavage or sputum because recovery can differ by smoking status, but it is critical to do for any type of biospecimen.
A literature search for the use of proteomics to study tobacco use in blood identified only a few studies. While urine has been used for proteomic analysis, no studies were identified assessing smokers and non-smokers. Methods and studies for plasma analysis have been summarized by Smith and Gerszten [43]. Several large cohorts have been utilized for cross-sectional plasma studies by mass spectroscopy. This includes the assessment of 897 Framingham Heart Study participants by Corlin and coworkers (2021), in which smoker and never-smoker comparisons showed statistical differences for cytokine-cytokine receptor interaction, Th1 and Th2 cell differentiation, the interleukin-17 signaling pathway, the chemokine signaling pathway, and complement and coagulation cascades [45]. The authors used a second Framingham dataset for replication and were able to assess exposure-response relationships (pack-years). Iglesias and coworkers (2021) studied 1005 subjects from the population-based Swedish Cardiopulmonary Bioimage Study, reporting somewhat different statistically significant results, such as differences for ERG, VWF, RHOJ, GABRE and GUCY1A3. Bortner and coworkers (2010) used plasma from 14 smokers and non-smokers, noting differential levels of proteins in immunity and inflammatory responses. A single pilot study examined 5 smokers who smoked one cigarette in the laboratory, comparing pre- and post-smoking saliva; the saliva was pooled and compared to 4 control subjects [46]. In this limited study, changes were found in stress response proteins including fibrinogen alpha, cystatin A and sAA. Some proteomics studies have been conducted examining bronchoalveolar lavage and sputum in smokers and non-smokers. Bronchoscopies were performed on 25 smokers and 17 never-smokers selected from the Karolinska COSMIC cohort, reported by Yang and coworkers (2018) [47]. They found 610 significantly altered proteins and 15 molecular pathways (i.e., oxidative phosphorylation, citrate cycle, ribosomal and antigen presentation, phagosome pathway and lysosomal pathway). Ghosh and coworkers (2018) conducted a cross-sectional study of 22 smokers, non-smokers and e-cig users [48]. They found some unique effects for e-cig users; for example, CYP1B1 (cytochrome P450 family 1 subfamily B member 1), MUC5AC (mucin 5AC), and MUC4 levels were increased in vapers. Franiosi, et al. (2014) conducted a broncho-alveolar lavage study of occasional smokers who were abstinent for 2 days and then smoked 3 cigarettes followed by bronchoscopy [49]. The context was that acute responses are predictive of future COPD risk, and so they studied these occasional smokers with and without a family history of COPD (susceptible and non-susceptible). While differences were found between the two groups, the protein changes were not consistent with those reported for COPD. Titz and coworkers (2015) assessed the proteome in the sputum of smokers and early-stage COPD patients, using cell-free material (so as to assess secretory proteins from the lung lining), reporting alterations in mucin/trefoil proteins and a prominent xenobiotic/oxidative stress response [39]. Importantly, former smokers' sputum was more similar to never-smokers' than to current smokers', indicating the utility of biomarkers to assess changes in smoking cessation studies. Baraniuk, et al. (2015) also conducted a study of 20 smokers and nonsmokers using sputum, and identified increases in Mucin 5A consistent with other studies, and also SGNB1A1, Prr4, AZGP1, DEF1&3 and S100A8 [50].
Metabolomics
There are numerous methodological studies about sample collection and processing, for example summarized by Bi and coworkers [78] and Dominguez and coworkers [79]. Serum and plasma (and by type of anticoagulant) may not yield the same results [80][81][82]. Some methods studies have assessed the time of day of blood collection, season, hours of fasting, physical activity, NSAID use, tobacco use (and time since last cigarette), and alcohol consumption, with variable results; given the potentially large metabolic impact of these parameters, which did not apply to all types of features, careful study design should assess these as possible confounders [52,83-85]. Gender, race, ethnicity, and age are other confounders that should be assessed (because nicotine metabolism differs by race, metabolism decreases with aging, and gender differences may be related to hormonal metabolism) [84,86]. Specimen and laboratory processing should be standardized within studies to increase the accuracy and validity of metabolite measurements [87,88]. Repeated freeze-thaw cycles should be avoided, although this might only affect <3% of features [89]. Another study indicated that storage temperature can affect some features, and that a serum glutamate/glutamine ratio greater than 0.20 can serve as a biomarker of storage at −20 °C vs. −80 °C [90].
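As a small illustrative sketch of how such a pre-analytical check might be applied in practice, the function below flags samples whose serum glutamate/glutamine ratio exceeds the reported 0.20 threshold; the column names and data frame layout are assumptions, and the threshold should be taken from the cited study rather than from this example.

```python
# Illustrative pre-analytical QC check based on the reported serum
# glutamate/glutamine ratio (> 0.20 suggesting storage at -20 °C rather than -80 °C).
# Column names are assumptions about how the metabolite table might be organized.
import pandas as pd

def flag_storage_suspect(df: pd.DataFrame, threshold: float = 0.20) -> pd.Series:
    """df must contain 'glutamate' and 'glutamine' columns of serum concentrations."""
    ratio = df["glutamate"] / df["glutamine"]
    return ratio > threshold   # True = possible storage-temperature artifact
```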
Two studies of smoking cessation were identified. Goettel and coworkers (2017) assessed the urine, plasma and saliva of 39 males before and after 3 months of smoking cessation [91,92]. There were 26 altered features in plasma, 20 in saliva, and 12 in urine, including those in fatty acid and amino acid metabolism. Liu and coworkers (2021) studied smokers switching to an e-cig for 5 days, assessing plasma and urine (the number of subjects was unclear from the manuscript) [93]. A 30-metabolite signature was reported that could distinguish smoking from e-cig use, many of which were related to smoking xenobiotics or nicotine-derived metabolites. No non-smoking controls were used, and so it is unknown how other features reflect smoking cessation and reversal of smoking effects, or which features result from e-cig use.
The blood of ex-smokers and never-smokers was compared in a cross-sectional study of 252 subjects by Liang, et al. [94]. No features were statistically different among the groups, suggesting a potential application of metabolomics in tobacco cessation studies. A larger study of 1252 participants incorporated the assessment of smoking-related weight loss; eight plasma xenobiotics were associated with former smoking, and 22 xenobiotics and 94 endogenous metabolites were significantly associated with current smoking, including α-ketobutyrate, homoarginine, β-cryptoxanthin, 1-linoleoyl-GPE (18:2) and 1-palmitoyl-GPE (16:0).
Du Toit et al., (2022) studied the urine of healthy blacks and whites aged 20-30 (n = 363 smokers and 166 controls) in a cross-sectional analysis reporting statistical differences for arginine, asparagine, glycine, serine, glutamine and other amino acids [96].Dator and coworkers (2020) conducted a small urine study of 60 African American and white smokers reporting global differences for the metabolism of carbohydrates, amino acids, nucleotides, fatty acids, and nicotine, with known differences for nicotine degradation pathway (cotinine glucuronidation) [97].
Lipidomics
The study of lipids and lipid homeostasis by lipidomics is another emerging omics approach [98][99][100]. Lipids are the major component of biological membranes, serving as a barrier for the cell body, and are critical to membrane protein functions, energy storage, signal transduction, cell growth, differentiation, and apoptosis. Altered lipid homeostasis has been implicated in carcinogenesis and respiratory disease pathways [101][102][103][104]. Surfactant, which is more than 90% lipids, plays a role in lung immune regulation [102]. There are eight major categories of lipids: a) fatty acyls, b) glycolipids, c) glycerophospholipids, d) sphingolipids, e) sterol lipids, f) prenol lipids, g) saccharolipids, and h) polyketides [101,105,106]. Like other omics, workflow assays and analytics can be challenging, and newer methods are being evaluated [107][108][109][110][111][112][113][114]. Hoffman and coworkers (2022) have published a comprehensive review of available methods [115], and Zullig and coworkers (2020) have summarized sample preparation methods [116]. Liakh and coworkers (2020) also reviewed methods for lipidomics in studies of obesity, given the unique consideration of the impact of obesity on lipids [117]. Given the newness of this approach, a major limitation is the lack of common nomenclature, robust databases for feature identification, and standardized reporting [115,118].
Very few studies are available assessing lipidomics by smoking status. One study assessed sputum in 17 subjects after 2 months of smoking cessation [119]. They found for these 17 smokers (with and without COPD) that 28 lipids were decreased after smoking cessation (3 ceramides, 6 dihydroceramides, and 17 GSLs). They also identified solanesol and its esters in the sputum, which is derived from tobacco. This indicates that solanesol could be used as a marker of compliance in smoking cessation, although there are better methods with easier-to-obtain biospecimens.
Middlekauff and coworkers (2020) conducted a cross-sectional analysis of plasma in 119 smokers, e-cig users and non-users [120]. While the authors concluded that most features were no different among groups, cholesterol esters, ceramides and hexosylceramides differed between the smokers and the other groups, and e-cig users were much more similar to the non-users. T'Kindt and coworkers (2015) examined sputum in 20 healthy smokers and 14 never smokers [121]. While most lipids were not different by smoking status, several prenol lipids were associated with smoking, and they also detected solanesol esters.
Discussion
Biomarkers for prevention studies assess disease risk for individual tobacco products compared to others or to those who have never used tobacco products. Tobacco cessation studies frequently use biomarkers to assess compliance and the reduction of tobacco toxicants. The only validated biomarkers for tobacco use are the chemically-specific carcinogen metabolites used as biomarkers of exposure. In contrast, there are no validated biomarkers of effect or harm (e.g., disease risk) for tobacco use, cessation, or later return to smoking. Omics approaches assess effect and harm, but the field is still in its early stages; research has been applied only to small studies, and essentially not to intervention cessation studies. Also emerging for early detection and guiding therapeutics are "liquid biopsies", for example with circulating DNA and extracellular vesicles. The potential of these as tobacco disease risk biomarkers or for prediction of smoking cessation or later relapse has not been studied. Omics assays have the potential to uncover a much deeper understanding of biology and disease, but also carry the considerable cost of added complexity in terms of both analytics and interpretation. While they are technically feasible to conduct in a variety of biospecimens, the analytics are complex and continuously developing. An additional challenge is the interpretation of alterations with respect to future disease risk. For example, differential gene expression might be an adverse toxic response, a pro-active response to prevent toxicity, or neither.
Long-term tobacco cessation, including for e-cigs, is a consistent goal to improve individual and population health. Pharmacological and behavioral methods to reduce tobacco use continue to evolve, and predictive markers for tailoring cessation therapies are sorely needed. Biomarkers for future disease risk also are needed. In both cases, omics methods have the potential to become validated biomarkers for cessation (and later return to smoking) and risk, given their broad assessment of biological function and their agnosticism at the outset to specific biological pathways. Combining omics platforms as an integrated approach might be more informative than using any single omics technology. However, the added complexity is substantial and becomes more problematic for small studies, and sometimes more data are not more helpful, especially if the data are of poor quality. Having larger feature sets in small studies may increase false discovery rates, reducing the chance of discovering true associations, and provide data for more features with unclear biological function.
There are well established compliance biomarkers for complete cessation of nicotine products, measuring nicotine and nicotine metabolites, and for tobacco products compared to non-tobacco products (e.g., tobacco alkaloids). For switching from products such as e-cigs to other nicotine products, such as nicotine replacement therapy, there are no validated biomarkers of compliance. It may be that such biomarkers can be discovered using a metabolomic approach. An important gap in the literature is studies of the omics discussed herein that may predict smoking reinitiation, which would consequently enable tailored approaches to relapse prevention.
The various omics studies demonstrate wide interindividual variation among smokers and never-smokers, which is due to exogenous factors (diet, lifestyle, medications) and endogenous factors (age, gender, race, ethnicity, and comorbidities). Another important consideration for inter-individual variation is host and heritable traits. While predisposing genetic variants in the general population are typically of low penetrance, specific genetic traits might be important for omics analysis because of wide variation in xenobiotic metabolism. Some aspects of interindividual variation may be related to technical issues such as differences in sample collection (time of day, fasting, season), batch effects in sample processing, and the reproducibility of the technologies.
While there has been a variable number of tobacco studies across the omics, currently there is a heterogeneity of results that precludes finding consistency for individual features or genes. This is due to variability in the analytic methods (features are not consistently detectable across platforms or bioinformatic pipelines), the small number of studies, limitations in sample size, differences in biospecimen collection, and differences in human subject study design. Also, results from the same platform may vary by the bioinformatic methods and pipelines used for data analysis. Heterogeneity also exists in study design, and the best evidence, such as would come from clinical trials for cessation, is non-existent. Thus, there are major research gaps that may need to be addressed before any omics approach can be useful clinically for tailoring cessation therapies or predicting future disease risk. The gaps need to address both technical and clinical validation, as discussed above. Also absent from current studies are considerations of outcomes based on race and ethnicity (measured by self-report or through genetics), and thus there may be substantial health disparities that are not yet considered.
|
2023-04-28T15:02:41.648Z
|
2023-04-01T00:00:00.000
|
{
"year": 2023,
"sha1": "d3f209fab284773d68abd79591f242f7280e8427",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "a95f358812dd6d73a3b4941b1c17e81f6961aa8a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
}
|
18812423
|
pes2o/s2orc
|
v3-fos-license
|
15,16-Dihydrotanshinone I from the Functional Food Salvia miltiorrhiza Exhibits Anticancer Activity in Human HL-60 Leukemia Cells: in Vitro and in Vivo Studies
15,16-Dihydrotanshinone I (DHTS) is extracted from Salvia miltiorrhiza Bunge which is a functional food in Asia. In this study, we investigated the apoptotic effect of DHTS on the human acute myeloid leukemia (AML) type III HL-60 cell line. We found that treatment with 1.5 μg/mL DHTS increased proapoptotic Bax and Bad protein expressions and activated caspases-3, -8, and -9, thus leading to poly ADP ribose polymerase (PARP) cleavage and resulting in cell apoptosis. DHTS induced sustained c-Jun N-terminal kinase (JNK) phosphorylation and Fas ligand (FasL) expression. The anti-Fas blocking antibody reversed the DHTS-induced cell death, and the JNK-specific inhibitor, SP600125, inhibited DHTS-induced caspase-3, -8, -9, and PARP cleavage. In a xenograft nude mice model, 25 mg/kg DHTS showed a great effect in attenuating HL-60 tumor growth. Taken together, these results suggest that DHTS can induce HL-60 cell apoptosis in vitro and inhibit HL-60 cell growth in vivo; the underlying mechanisms might be mediated through activation of the JNK and FasL signal pathways.
Introduction
15,16-Dihydrotanshinone I (DHTS) is extracted from Salvia miltiorrhiza Bunge (Tanshen), which is used as a dietary supplement or as an ingredient in functional foods in Asian countries. Recent reports demonstrated that Tanshen has many biological functions, such as treating cardiovascular diseases, especially angina pectoris and myocardial infarction [1,2]. Studies by us and others found that extracts of Tanshen exhibit significant antitumor activity through different mechanisms in various types of tumor cells. Among the compounds of Tanshen, DHTS has the strongest inhibitory activity against breast cancer cells, through inducing G1-phase arrest and increasing loss of the mitochondrial membrane potential and cytochrome c release [3]. DHTS also significantly induced apoptosis in colorectal cancer cells, and ATF-3 might be involved in inducing this apoptosis [4]. In addition, DHTS can induce apoptosis of prostate carcinoma cells via induction of endoplasmic reticular stress and/or inhibition of proteasome activity [5], and may have therapeutic potential for prostate cancer patients. In human hepatoma cells, DHTS also induced cell apoptosis through inducing reactive oxygen species (ROS) and the p38 pathway [6]. Tanshinone I, a compound of Tanshen, was shown to induce cancer cell apoptosis in human myeloid leukemia cells [7] and human non-small cell lung cancer [8], whereas another of Tanshen's compounds, tanshinone IIA, also induced apoptosis in human HeLa [9] and rat glioma cells [10].
At present, two major apoptotic pathways have been addressed, including the intrinsic mitochondrial pathway and extrinsic death receptor pathway [11,12]. The mitochondrial membrane plays a crucial role in initiating the intrinsic apoptosis pathway, which can occur by decreasing antiapoptotic Bcl-2 family proteins, such as Bcl-2 and Bcl-xL, and increasing proapoptotic Bcl-2 family proteins, such as Bad and Bax, with various apoptotic stimuli. Overall, a decrease in the antiapoptotic protein/proapoptotic protein ratio results in cytochrome c release into the cytosol and causes pro-caspase-9 cleavage. On the other hand, the extrinsic apoptotic pathway is activated by various death receptors, such as Fas, and finally induces pro-caspase-8 cleavage. Cleaved caspase-8 can cleave Bid into truncated (t)Bid which interacts with Bax or Bak to cause cytochrome c release from mitochondria. Cleaved caspase-9 and -8 can subsequently activate downstream effector caspases, including caspase-3, which destroys the cellular machinery and leads to eventual cell death [13].
15,16-Dihydrotanshinone I (DHTS) Inhibited Cell Proliferation and Triggered Apoptosis
To examine whether DHTS can inhibit cell proliferation in a human hematopoietic malignancy, we used human promyelocytic leukemia HL-60 cells in all experiments of this study. HL-60 cells (5 × 10⁴ cells/mL) were cultured in RPMI medium containing 10% FBS and treated with various concentrations of DHTS for 24 h. Viable cells were determined by an MTT assay, and the results showed that DHTS dose-dependently inhibited cell proliferation in HL-60 cells with a 50% inhibitory concentration (IC50) of about 0.51 μg/mL (Figure 1A). Lactate dehydrogenase (LDH) is a stable cytosolic enzyme that is released upon cell lysis. Next, we determined the cytotoxicity of DHTS toward HL-60 cells by measuring the released LDH in culture supernatants. As shown in Figure 1A, DHTS treatment significantly increased LDH release in a dose-dependent manner, indicating that DHTS caused significant cell death of HL-60 cells. In addition, DHTS also significantly inhibited the proliferation of human K562 chronic myelogenous leukemia cells, but it was less effective in K562 cells than in HL-60 cells (Figure 1B).
Figure 1. (A) HL-60 cell numbers and cytotoxicity were measured by counting viable cells using an MTT assay and by lactate dehydrogenase (LDH) release, respectively; (B) K562 cells were treated with various concentrations of DHTS for 24 h, and cell numbers were measured by counting viable cells using an MTT assay; and (C) cells were treated with various concentrations of DHTS for 24 h, and apoptotic cells were determined by FACS using the Annexin V-Alexa Fluor 488 Apoptosis Assay Kit. Data are expressed as the mean ± S.D. of three independent experiments. * p < 0.05, compared to the control.
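As a hedged illustration of how an IC50 such as the value reported above can be estimated from MTT dose-response data, the sketch below fits a four-parameter logistic curve; the concentration and viability values are simulated for illustration and are not the study's raw data.

```python
# Illustrative IC50 estimation from MTT viability data via a four-parameter
# logistic fit. The dose/viability values below are made up for demonstration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, top, bottom, ic50, hill):
    # Standard 4PL dose-response curve; value equals the midpoint at dose == ic50
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([0.1, 0.25, 0.5, 1.0, 1.5, 2.5])        # DHTS, ug/mL (illustrative)
viability = np.array([95.0, 78.0, 52.0, 30.0, 18.0, 8.0])  # % of control (illustrative)

params, _ = curve_fit(four_pl, dose, viability, p0=[100.0, 0.0, 0.5, 1.0], maxfev=10000)
print(f"Estimated IC50 ~ {params[2]:.2f} ug/mL")
```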
Next, we examined whether DHTS-caused cell death was accompanied by an induction of apoptosis in HL-60 cells. HL-60 cells were treated with 0.5, 1.0 or 1.5 μg/mL of DHTS for 24 h and stained with PI and Annexin V-Alexa Fluor 488 followed by quantification of apoptotic cells by flow cytometry. Cells stained with Annexin V (Alexa Fluor 488 dye) represent early apoptotic cells and are shown in the lower right quadrant of the FACS histogram, and those stained with both Annexin V and propidium iodide (PI) represent late apoptotic cells and are shown in the upper right quadrant of the FACS histogram. As shown in Figure 1C, the late apoptotic cell population increased from 10.96% to 89.49% in HL-60 cells treated with 1.5 μg/mL DHTS. However, the early apoptotic cell population only increased in cells treated with 1.0 μg/mL DHTS. These results suggest that DHTS can suppress cell proliferation through inducing apoptosis in HL-60 cells.
DHTS Induced Apoptosis through Increased Bad/Bax Expression and Activation of Caspases
It is known that intrinsic and extrinsic apoptotic pathways can activate initiator caspase-9 and caspase-8, respectively, which then results in activation of effector caspase-3 and poly ADP ribose polymerase (PARP) cleavage by active caspase-3. To investigate whether intrinsic and/or extrinsic apoptotic pathways are involved in DHTS-induced apoptosis, we examined the expressions of cleaved caspase-8, cleaved caspase-9, cleaved caspase-3, and PARP by Western blotting. HL-60 cells were treated with 0.5~2.5 μg/mL DHTS for 24 h, or 1.5 μg/mL DHTS for different time periods. As shown in Figure 2, DHTS significantly increased expressions of cleaved caspase-3, -8, and -9, and cleaved PARP in dose- and time-dependent manners in HL-60 cells. Bcl-2 family members are major regulators of apoptosis, including the proapoptotic Bad, Bax, Bid, and Bim, and the antiapoptotic Bcl-2 and Bcl-xL. We found that DHTS dramatically increased proapoptotic Bad and Bax expressions and slightly increased Bim expression, but did not significantly change the expressions of Bid, Bcl-2, or Bcl-xL (Figure 3A,B). Interestingly, a shifted band of Bad and Bax (Figure 3, arrowhead) appeared in the Western blot of DHTS-treated cells and could be decreased by incubation with alkaline phosphatase (Figure 3C), suggesting that DHTS might induce phosphorylation of Bad and Bax in HL-60 cells. To confirm the roles of Bad and Bax in DHTS-induced apoptosis, we used specific siRNAs to knock down Bad and Bax in HL-60 cells. As shown in Figure 3D, neither Bad siRNA nor Bax siRNA alone could reverse the DHTS-induced cell death; however, the combination of Bad siRNA and Bax siRNA significantly increased cell viability. These results suggest that DHTS might induce apoptosis through both intrinsic and extrinsic apoptotic pathways.
Figure 3. (C) Total cellular lysates were collected from cells treated with 1.5 μg/mL DHTS for 24 h and incubated with or without alkaline phosphatase (ALP) in the tube, and protein expressions were then determined by Western blotting; and (D) cells were transfected with Bad siRNA and/or Bax siRNA by electroporation and then treated with 1.0 μg/mL DHTS for 24 h. Cell numbers were measured by counting viable cells using an MTT assay (left panel), and total cell lysates were used to detect Bad and Bax protein levels by Western blotting (right panel). Arrowhead, a shifted band of Bad or Bax. Data are expressed as the mean ± S.D. of three independent experiments. * p < 0.05, compared to column 2.
Both FasL and c-Jun N-Terminal Kinase (JNK) Contributed to DHTS-Induced Apoptosis
Since caspase-8 is activated by DHTS treatment, we next examined whether DHTS increased expression of the caspase-8 upstream signal molecule, FasL. HL-60 cells were treated with 0.5~1.5 μg/mL DHTS for 24 h or 1.5 μg/mL DHTS for different time periods. As shown in Figure 4A,B, DHTS induced FasL mRNA expression and also FasL protein expression, indicating that DHTS increased FasL expression at the transcriptional level. To investigate whether FasL was involved in DHTS-induced cell death, we used an anti-Fas blocking antibody to block the Fas/FasL interaction. The results indicated that the anti-Fas blocking antibody significantly reversed the cell death in DHTS-treated cells, suggesting that FasL was involved in DHTS-induced apoptosis in HL-60 cells (Figure 4C). NF-κB is a heterodimer or homodimer composed of NF-κB p50 and NF-κB p65 (or RelA), and NF-κB activation by phosphorylation and degradation of I-κB is linked to the induction of FasL gene expression. To examine whether DHTS-induced FasL expression is mediated through activation of NF-κB, we detected the expression of phosphorylated I-κB, NF-κB p65, and NF-κB p50 in HL-60 cells treated with DHTS for different time periods. As shown in Figure 4C, the expression of phosphorylated I-κB did not change; however, NF-κB p65 significantly increased in DHTS-treated cells. Moreover, c-Jun N-terminal kinases (JNKs) are members of the MAPK family, and sustained phosphorylation of JNK has been associated with cell death in many reports. JNK phosphorylation occurred at 1 h and persisted to 24 h after DHTS treatment (Figure 4D). Moreover, phosphorylation of the other two MAPK members, ERK and p38, also increased in DHTS-treated cells in a time-dependent manner.
To further confirm the role of JNK in apoptosis, cells were treated with the JNK inhibitor, SP600125, and expressions of cleaved caspases and PARP were detected. As shown in Figure 5, 5 μM SP600125 reversed increases in cleaved caspase-3, -8, and -9, and PARP in DHTS-treated cells. These results suggest that DHTS-induced apoptosis might be mediated through increases in FasL expression and JNK activation in HL-60 cells.
DHTS Inhibited Leukemia Tumor Growth in Nude Mice
To examine the antitumor effects of DHTS on leukemia cells in vivo, we used a nude mice xenograft model. Athymic mice bearing HL-60 tumors were treated with 12.5 and 25.0 mg/kg DHTS once a day for a week. At the end of the experiment, the body weight and tumor weight were measured. As shown in Figure 6, 25 mg/kg of DHTS significantly inhibited tumor growth by about 68.0% compared to control tumors, and body weights remained unchanged between control and drug-treated mice. These results provide further evidence that DHTS might have significant applications for cancer therapeutic purposes.
Discussion
We herein showed that DHTS significantly induced apoptosis of human HL-60 leukemia cells at very low concentrations, as evidenced by a decrease in viable cell numbers and an increase in PI/annexin V-positive cells. We found that the intrinsic initiator, caspase-9, and the extrinsic initiator, caspase-8, were activated in DHTS-treated cells. DHTS also induced mRNA and protein expressions of FasL and sustained JNK phosphorylation. Blocking FasL/Fas interactions with an anti-Fas blocking antibody could reverse the DHTS-induced cell death, and pretreatment with the JNK inhibitor, SP600125, limited the cleavage of caspase-3, -8, and -9, and PARP. Finally, we used an animal model to verify the antitumor potential of DHTS. These results suggest that DHTS induces apoptosis of HL-60 leukemia cells and inhibits tumor growth in vivo, and the underlying mechanisms might be mediated through both intrinsic and extrinsic apoptotic pathways.
Bcl-2 family proteins can be divided into three categories according to the BCL-2 homology (BH) domains, including multi-domain antiapoptotic proteins, multi-domain proapoptotic proteins, and BH3-only proapoptotic proteins [14]. BH3-only proapoptotic proteins, such as Bad, are cell death initiators that promote cell death by displacing Bax from binding to Bcl-2 and Bcl-xL. Evidence indicates that Bad is a latent killer until activated through transcriptional or post-translational mechanisms. The most common post-translational modification is phosphorylation [15]. Phosphorylation of multiple serine residues has been identified in Bad. Most findings demonstrate that Bad phosphorylation events inactivate its proapoptotic function; however, Bad phosphorylation at Ser128 by Cdc2 limits its binding to 14-3-3 and therefore enhances the proapoptotic function of Bad in rat neuron cells [16]. Bax is a key component for inducing the intrinsic apoptosis pathway. Active Bax oligomers cause the release of cytochrome c from mitochondria, and cytosolic cytochrome c activates caspase-9. In mammalian cells, phosphorylation of Ser163 and Ser184 of Bax may also participate in regulating Bax activity [17]. Phosphorylation of Ser184 by nicotine-activated Akt led to inhibition of Bax-dependent apoptosis [18]; however, dephosphorylation of Ser184 by PP2A resulted in Bax activation [19]. Akt is a key signal molecule for cell survival and proliferation, and it can phosphorylate and inactivate GSK3β. Another study showed that activated GSK3β phosphorylated Bax at Ser163 and then induced activation of Bax [20]. In this study, shifted Bad and Bax bands were found in Western blots of DHTS-treated cells (Figure 3A,B, arrowhead), and the phosphate group could be removed by alkaline phosphatase (Figure 3C). They seem to be phosphorylated Bad and phosphorylated Bax; however, more experiments are needed to fully understand which residues of Bad and Bax are phosphorylated and the actual roles of phosphorylated Bad and Bax in DHTS-induced apoptosis. Bid is a BH3-only proapoptotic protein that is associated with extrinsic apoptosis pathways. FasL binding to the Fas receptor results in activation of the proapoptotic caspase-8 pathway by direct recruitment of FAS-associated protein with a death domain (FADD) [21]. Activated caspase-8 both leads directly to caspase-3 activation and cleaves Bid to generate truncated (t)Bid [22]. tBid helps the oligomerization of Bax into the outer mitochondrial membrane and stimulates the release of cytochrome c from mitochondria. Free cytochrome c finally activates caspase-3 through mediation by Apaf-1 and caspase-9. Our results found that DHTS activated caspase-8 (Figure 2) but did not change pro-Bid expression (Figure 3), suggesting that caspase-8 activated downstream caspase-3, thus bypassing the tBid/cytochrome c pathway.
Previous studies demonstrated that sustained JNK activation plays an important role in anticancer drug-induced apoptosis [23]. Activation of the JNK/c-Jun pathway can induce FasL expression in cisplatin-induced apoptosis [24]. JNK is able to induce FasL expression mediated by production of reactive oxygen species (ROS) in Epstein-Barr virus (EBV)-transformed B cells [25]. On the other hand, JNK was found to activate proapoptotic Bad, Bim, and Bax through different mechanisms. For example, JNK can phosphorylate 14-3-3, which results in Bad release from 14-3-3 [26]. In addition, JNK directly phosphorylates and inactivates the antiapoptotic Bcl-2 and Bcl-xL, thereby activating Bim and Bax [27]. In this study, we found that DHTS treatment induced JNK phosphorylation for long periods of time and FasL expression by HL-60 cells (Figure 4). However, the JNK-specific inhibitor, SP600125, and the ROS scavenger, N-acetyl cysteine (NAC), did not block FasL expression in DHTS-treated cells (data not shown). Additional experiments demonstrated that SP600125 could decrease the cleavage of PARP, and caspases-3, -8, and -9 ( Figure 5), suggesting that DHTS-activated JNK might contribute to apoptosis through inhibiting antiapoptotic proteins and activating proapoptotic proteins.
The most common form of NF-κB is the p65/p50 heterodimer, which is known to participate in many physiological functions, such as immunity, inflammation, cell proliferation, and survival [28]. NF-κB is sequestered in the cytoplasm by binding with I-κB and is activated by a cascade of events leading to the phosphorylation of I-κB and subsequent degradation of I-κB. Free NF-κB is therefore translocated to nuclei and induces gene transcription through binding with cis-acting κB elements [29]. The FasL gene promoter contains several κB elements and can be induced by activated NF-κB [30]. In this study, we found that DHTS did not increase I-κB phosphorylation but significantly increased NF-κB p65 protein expression ( Figure 4D), suggesting that DHTS-induced FasL expression might be mediated through the NF-κB pathway.
Previously, our studies found that DHTS could induce apoptosis through the intrinsic mitochondrial apoptotic pathway in breast cancer cells. Here, we further demonstrate that not only the mitochondrial pathway proteins Bad and Bax but also the extrinsic death receptor pathway ligand FasL were involved in DHTS-induced apoptosis of HL-60 leukemia cells. Upregulation of FasL expression by DHTS might be mediated through increased JNK phosphorylation. The increases in FasL, Bad, and Bax result in activation of the caspase cascade, which finally contributes to cell apoptosis.
Cell Culture and Transient Transfection
The human promyelocytic leukemia cell line, HL-60 (BCRC 60027), and human chronic myelogenous leukemia cell line, K562 (BCRC 60007), were obtained from the Food Industry Research and Development Institute (Hsinchu, Taiwan) and cultured in RPMI medium containing 10% heat-inactivated fetal bovine serum (FBS; Biological Industries, Kibbutz Beit Haemek, Israel). For knockdown of Bad and/or Bax, HL-60 cells (1.0 × 10⁷ cells) were transiently transfected with 200 nM of Bad siRNA and/or Bax siRNA (Santa Cruz) by electroporation at 250 V and 500 μF and then seeded in a 24-well plate for treatment with DHTS.
3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyl Tetrazolium Bromide (MTT) Assay and Lactate Dehydrogenase (LDH) Release Assay
HL-60 cells or K562 cells (5 × 10⁴ cells) were seeded in a 24-well plate for 24 h and then treated with DHTS for another 24 h. The cell suspension was collected and centrifuged at 700× g for 1 min, and the cell pellet and supernatant medium were used to determine viable cells and LDH release, respectively. To determine viable cells, the cell pellet was resuspended in 200 μL of fresh medium with 50 μL MTT (2 mg/mL) and incubated at 37 °C for 2 h in the dark. The medium was then removed, and 200 μL DMSO and 25 μL Sorensen's glycine buffer were added. The supernatant (100 μL) was transferred to a 96-well plate, and the absorbance at OD 570 nm was measured by an enzyme-linked immunosorbent assay (ELISA; HIDEX OY, Turku, Finland) plate reader. To determine LDH release, 50 μL of the above supernatant medium was added to a 96-well plate, and LDH activity was detected with a CytoTox 96® Non-Radioactive Cytotoxicity Assay kit [31] (Promega, Madison, WI, USA) according to the manufacturer's instructions.
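As a worked illustration of how such readings are typically converted into percentages, the sketch below assumes background-corrected OD 570 nm values, normalization of MTT signals to vehicle-treated controls, and normalization of LDH signals to maximum-release lysates; the numbers are hypothetical and the formulas are not quoted from this paper.

```python
# Illustrative conversion of MTT and LDH absorbance readings into percentages,
# assuming background-corrected OD 570 nm values; all numbers are hypothetical.

def percent_viability(od_treated: float, od_control: float, od_blank: float = 0.0) -> float:
    """MTT viability relative to the vehicle-treated control."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

def percent_ldh_release(od_experimental: float, od_spontaneous: float, od_maximum: float) -> float:
    """LDH cytotoxicity relative to fully lysed (maximum-release) cells."""
    return 100.0 * (od_experimental - od_spontaneous) / (od_maximum - od_spontaneous)

if __name__ == "__main__":
    print(f"Viability: {percent_viability(0.42, 0.85, 0.05):.1f}%")
    print(f"LDH release: {percent_ldh_release(0.60, 0.15, 1.20):.1f}%")
```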
Western Blot Analysis
Equal amounts of total cellular protein (50 μg) were resolved by 10% sodium dodecylsulfate (SDS)-polyacrylamide gel electrophoresis (PAGE) and transferred onto a polyvinylidene difluoride membrane (Millipore, Bedford, MA, USA) as described previously [32]. The membrane was then incubated with various primary antibodies and subsequently with an anti-mouse or anti-rabbit immunoglobulin G (IgG) antibody conjugated to horseradish peroxidase (Santa Cruz Biotechnology) and visualized using enhanced chemiluminescence kits (Amersham, Arlington, IL, USA). For removal of phosphate groups from proteins, total cellular protein was collected in Gold lysis buffer [32] without phosphatase inhibitors and EDTA. Before SDS-PAGE, the total cellular protein (50 μg) was incubated with 50 units of calf intestinal alkaline phosphatase (ALP) in ALP buffer (100 mM NaCl, 50 mM Tris-HCl, 10 mM MgCl2, 1 mM dithiothreitol, pH 7.9) at 37 °C for 1 h.
RNA Isolation and a Semiquantitative Reverse-Transcription Polymerase Chain Reaction (RT-PCR)
Cells were collected by centrifugation, and total RNA was prepared by directly lysing cells in Trizol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. Two micrograms of total RNA was reverse-transcribed to synthesize complementary (c)DNA using the SuperScript III First-Strand Synthesis System for RT-PCR (Invitrogen). Two microliters of the first-strand reaction was used for the PCR amplification with specific primers for FasL or glyceraldehyde 3-phosphate dehydrogenase (GAPDH) primers, and it was carried out in the Gene Amp PCR system 2400 (Perkin-Elmer). Primers for the PCR analysis were as follows: FasL forward 5′-TCTCAGACGTTTTTCGGCTT-3′ and reverse 5′-AAGACAGTCCCCCTTGAGGT-3′; and GAPDH forward 5′-ACCACAGTCCATGCCATCAC-3′ and reverse 5′-TCCACCACCCTGTTGCTGTA-3′. The PCR parameters comprised a cycle of 5 min at 94 °C, 30 cycles of 50 s at 94 °C, 50 s at 58 °C, and 50 s at 72 °C, followed by a cycle of 5 min at 72 °C. PCR products were separated on 2% agarose gels and visualized with SYBR green.
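As a rough sanity check on the 58 °C annealing step, the Wallace rule (Tm ≈ 2(A+T) + 4(G+C), intended for short oligonucleotides) can be applied to the primers listed above; this is only an approximation for illustration and is not a calculation performed in the study.

```python
# Rough melting-temperature estimate for the FasL and GAPDH primers using the
# Wallace rule Tm = 2*(A+T) + 4*(G+C). This is a coarse approximation used
# here only to illustrate why an annealing temperature near 58 °C is plausible.

PRIMERS = {
    "FasL_F": "TCTCAGACGTTTTTCGGCTT",
    "FasL_R": "AAGACAGTCCCCCTTGAGGT",
    "GAPDH_F": "ACCACAGTCCATGCCATCAC",
    "GAPDH_R": "TCCACCACCCTGTTGCTGTA",
}

def wallace_tm(seq: str) -> float:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

for name, seq in PRIMERS.items():
    print(f"{name}: ~{wallace_tm(seq):.0f} °C")
```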
Flow Cytometric Analysis
Briefly, cells were collected in phosphate-buffered saline (PBS) and resuspended in annexin-binding buffer (10 mM HEPES, 140 mM NaCl, and 2.5 mM CaCl2, pH 7.4). Cells were stained with 5 μL Annexin V-Alexa Fluor 488 and propidium iodide (PI, 1 μg/mL) at room temperature for 15 min in the dark. Apoptotic cell death was analyzed by flow cytometry using the Annexin V-conjugated Alexa Fluor 488 Apoptosis Detection Kit according to the manufacturer's instructions (Molecular Probes, Eugene, OR, USA). Stained cells were analyzed by FACScan flow cytometry using CellQuest 3.3 analysis software (Becton Dickinson, San Jose, CA, USA) [33].
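The quadrant logic behind interpreting Annexin V/PI staining can be sketched as below; the fluorescence thresholds and event values are hypothetical and are not the gates used in the study's CellQuest analysis.

```python
# Sketch of the standard Annexin V / PI quadrant interpretation for apoptosis
# assays; thresholds and event values are hypothetical placeholders.

def classify_event(annexin: float, pi: float,
                   annexin_threshold: float = 1e3,
                   pi_threshold: float = 1e3) -> str:
    annexin_pos = annexin >= annexin_threshold
    pi_pos = pi >= pi_threshold
    if annexin_pos and not pi_pos:
        return "early apoptotic"      # Annexin V+ / PI-
    if annexin_pos and pi_pos:
        return "late apoptotic"       # Annexin V+ / PI+
    if not annexin_pos and pi_pos:
        return "necrotic/damaged"     # Annexin V- / PI+
    return "viable"                   # Annexin V- / PI-

events = [(200.0, 150.0), (5e3, 300.0), (8e3, 4e3), (120.0, 2e3)]
counts = {}
for annexin_signal, pi_signal in events:
    label = classify_event(annexin_signal, pi_signal)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```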
Antitumor Nude Mice Experiment
Male BALB/c AnN-Foxn1 nu/CrlNarl nude mice (4-5 weeks old) were purchased from the National Laboratory Animal Center (Taipei, Taiwan) and kept in a specific pathogen-free environment within barrier systems. All animal experimental procedures were conducted and approved by the Institutional Animal Care and Use Committee of Taipei Medical University. At 24 and 72 h before tumor cell transplantation, mice were intraperitoneally (i.p.) administered 35 mg/kg cyclophosphamide to improve the engraftment rates. HL-60 cells (10⁷ cells) were first mixed with an equal volume of Matrigel™ Basement Membrane Matrix (BD Biosciences, Taiwan), and then nude mice were subcutaneously injected between the scapulae with HL-60/Matrigel. After transplantation for 10 days, mice received an i.p. injection of either 25 μL DMSO (vehicle), 12.5 mg/kg DHTS, or 25.0 mg/kg DHTS once a day for a week. At the end of the experiment, mice were sacrificed, and tumor specimens were excised, photographed, and weighed.
Statistical Analysis
Data are presented as the mean and standard deviation (SD) of the indicated number of independently performed experiments. Statistical analysis was done using a one-way analysis of variance (ANOVA) test, and differences were considered significant at p < 0.05.
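For illustration, a minimal version of the stated one-way ANOVA (significance at p < 0.05) is sketched below using SciPy; the group values are hypothetical placeholders, not data from this study.

```python
# Minimal one-way ANOVA sketch mirroring the stated analysis (p < 0.05);
# the group values below are hypothetical, not measurements from the study.
from scipy import stats

vehicle = [100.0, 98.5, 101.2]    # % of control, three independent experiments
dhts_low = [72.4, 75.1, 70.8]
dhts_high = [41.3, 38.9, 44.0]

f_stat, p_value = stats.f_oneway(vehicle, dhts_low, dhts_high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```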
Bacteriological Risk Assessment of Borehole Sources of Drinking Water in Some Part of Port Harcourt Metropolis of Niger Delta, Nigeria
Safe drinking water is still a challenging public health issue in the midst of the many borehole water sources in use in our various communities.
Introduction
Safe drinking water is still a challenging public health issue in the midst of the many borehole water sources in use in our various communities; this further suggests that the integrity of such sources of drinking water might not be guaranteed. Access to safe drinking water is a human right and one of the basic requirements for good health, as reported by UNICEF/WHO [1]. This has resulted in greater awareness campaigns and public health attention regarding the quality of borehole water. WHO, in conjunction with UNICEF [2], developed an indicator known as "use of an improved source", which is used to monitor access to safe drinking water globally; nevertheless, this indicator does not really measure the quality of water [2]. Furthermore, water sources are either surface water, drawn from the periphery, or groundwater, drawn from underneath. Globally, groundwater is the largest and most important source of potable water [3]. Also, according to the Department for International Development (DFID) Strategies for Achieving the International Development Targets [4], an estimated 1.5 billion people rely on groundwater as their daily potable water source, and it has proved to be the most reliable source for meeting rural water demand in sub-Saharan Africa, as reported by some studies [5,6].
Owing to the failure of governments to meet the ever-increasing water demand, most people resort to groundwater sources such as boreholes as an alternative water source. Thus, humans can access groundwater through a borehole, which is drilled into the aquifer for industrial, agricultural and domestic use. Access to water does not mean access to safe water: the United Nations International Children's Emergency Fund (UNICEF) and the World Health Organization (WHO) classified Nigeria among a group of ten (10) countries that together account for about two-thirds of the world's population without access to improved drinking water sources. Statistics have shown that over sixty million Nigerians have no access to potable water [2,7]. It is strongly believed that such a trend will favour a geometric increase in waterborne and water-washed epidemic outbreaks in our communities if an urgent multi-sectorial approach is not deployed to save the situation in good time. Nonetheless, borehole water is a form of groundwater that is usually believed to be a "safe source" of drinking water because it comes with a low microbial load and thus requires little or no treatment before drinking; this advantage is strongly believed to be due to the long retention time and natural filtration capacity of aquifers, according to Aiyesammi et al. [8].
On the other hand, groundwater sources are commonly susceptible to contamination, which may reduce their quality. In general, groundwater quality varies from place to place, sometimes depending on seasonal changes, as revealed by Trivede et al. and Vaishali & Punita [9,10], and on the types of soils, rocks and surfaces through which it runs [11,12]. Naturally occurring contaminants are present in the rocks and sediments. In addition, human activities can alter the natural composition of groundwater through the disposal or dissemination of chemicals and microbial matter on the land surface and into soils, or through injection of wastes directly into groundwater. Industrial discharges [13], urban activities, agriculture [14], groundwater plumes and disposal of waste [15] can potentially affect groundwater quality, even as pesticides and fertilizers applied to lawns and crops can accumulate and migrate to the water table, affecting the physical, chemical and microbial quality of water. Boreholes are extensively used in Nigeria, with over 120 million people relying on them as a source of drinking water, because they are suggested to be cheap, accessible and easy for local people in remote communities to operate. Also, pit latrines as well as water closets were the most common toilet systems used in the area of this study at the time of sample collection.
Nonetheless, it is believed that poor sanitary toilet systems pose a great risk to the microbial quality of drinking water. A septic tank can introduce bacteria to water, and pesticides and fertilizers that leak into farmed soils can eventually end up in the water drawn from a borehole. Poor sanitary completion of boreholes may lead to contamination of groundwater. Proximity of some boreholes to solid waste dumpsites and animal droppings littered around them [16] could also contaminate groundwater. Therefore, groundwater quality monitoring and testing is of paramount importance both in the developed and the developing world [17]. In addition, high salinity and hardness, beyond microbial problems, have also been reported in groundwater. Water quality problems have partly been associated with inadequate sanitation, as reported by Van [18]. However, studies have shown that the elevated microbial load seen in drinking water can be affected by warm climatic conditions such as high temperature, reflecting the effect of temperature on the rate of microbial replication [16]. Furthermore, potential causes of high coliform counts are traceable to the location and proximity of certain boreholes to toilet systems such as pit latrines, as well as to poor sanitary completion of boreholes, which would obviously lead to contamination of groundwater; hence this cannot be overemphasized as a major source of contamination.
In addition, there are environmental sources, such as soil or biofilms, which can contaminate borehole water with microbes. Nonetheless, microbial contamination of borehole water is linked to the depth of the borehole, as reported by previous studies [11,12]. Though the depths of the various boreholes studied were not known, borehole depth remains a critical and very important factor in the investigation of bacterial contaminants in borehole water. The recommended minimum depth of a borehole is 40 m. As groundwater passes through saturated sand and non-fissured rock, microbial contaminants tend to be removed within the first 30 m of depth; in the unsaturated zone, no more than 3 m may be necessary to purify groundwater. However, in a fractured aquifer, microbial contaminants can rapidly pass through the unsaturated zone to the water table [14]. Most boreholes are constructed using an electrical device such that the water is pumped into pipes for distribution. Old and rusty pipes affect the quality of water by allowing leakage of microbial contaminants into the borehole [19]. Furthermore, microbial contaminants in borehole water might pose a serious health threat to individuals after consumption. These microbial contaminants are pathogenic in nature and possess various genetic determinants, some of which are harmful, i.e. virulent strains such as E. coli O157:H7 and E. coli O104:H4, known to cause diseases in man [20].
It is strongly believed that the virulence of Enterohaemorrhagic Escherichia coli is relevant here, and it is established that animals are the natural reservoir hosts of Enterobacteriaceae, so indiscriminate release of faeces within the surroundings contributes to the presence of these bacteria in the environment. Nevertheless, it is firmly suggested that the provision of potable drinking water and good sanitary habits is crucial for all, particularly among vulnerable groups such as children and immuno-compromised individuals, because unsafe drinking water can contribute to several waterborne diseases; this challenge is common mainly in developing countries, as documented by Mark et al. [21], who reported over one billion annual incidences of diarrhoea and other gastroenteritis infections linked to lack of access to potable water. Borehole water is prone to physical, chemical as well as microbial contaminants. The present study considered only the bacteriological quality of borehole water in Port Harcourt.
Borehole water, being groundwater, should be one of the best types of water recommended for drinking, free from contaminants, as opposed to surface water; however, the quality of borehole water has deteriorated significantly in recent times, thereby undermining the quality of boreholes as initially perceived.
The microbial contamination of borehole water has posed health threats and complications, with some imminent waterborne disease outbreaks in some regions, especially in developing countries like Nigeria. The issue of microbial contamination of borehole water is of public health concern because most of the water for consumption in these regions (developing countries) comes from boreholes, either government facilities or private/homemade facilities; as such, the quality of borehole water should be guaranteed, it being the common source of drinking water available.
Also, since the microbial quality of borehole water is related to sanitary habits and environmental activities such as agricultural practices and storage tanks, which can be controlled, it is important to evaluate the quality of these boreholes to ascertain the presence of pathogenic microbes. Nonetheless, water remains the major source of transmission of enteric pathogens in developing countries. According to the Federal Ministry of Health, Nigeria (1991) [22], notified cases occur mostly in children under five years old.
Waterborne infections such as gastroenteritis characterized by acute diarrhoea have a high incidence, and this can be attributed to the lack of availability of safe drinking water.
Alabi et al. [23] studied some pathogenic agents associated with infantile diarrhoea in Nigeria; they reported 48.6%, 30.6%, 8.2% and 6.9% for bacteria, viruses, enteric parasites (protozoa and helminths) and dual aetiology, respectively. The construction of a borehole involves laying pipes underground; these pipes are supposed to be checked periodically (i.e. periodic maintenance), and drainages ought to be far from water pipes to avoid cross contamination. The quality of domestic water supplies and the potability of the drinking water could be in doubt due to the afore-mentioned reasons; thus, the quality of both the water for home use and the commercially sold water is not guaranteed. With the increasing demand for water in Nigeria, people most times only demand quantity and not quality; that is to say, an individual can have access to water but the quality of that water may be poor, which could in turn cause waterborne disease, a major public health problem. Therefore, the need for risk assessment of microbial contamination of borehole water cannot be overstressed. The prime health risk of waterborne diseases is from consumption of faecally contaminated water, i.e. the faecal-oral route of transmission; the faecal material contains pathogenic microbes that can cause infectious diseases. These diseases range from cholera and other diarrhoeal diseases to dysenteries and enteric fevers. Nonetheless, epidemiological evidence of waterborne diseases has been reported by the Federal Ministry of Health of Nigeria (1991) on diarrhoea and haemorrhagic colitis [22], as well as Aeromonas sobria in chlorinated drinking water supplies [23].
In addition, some outbreak investigations have suggested the presence of microbial contaminants in drinking water, as seen in a Uganda-based study by Legros et al. [24] on the epidemiology of a cholera outbreak, which related the effect of the faecal-oral route of transmission and the need for good hygiene and potable drinking water for consumption. A previous study by LeChevallier et al. [25] reported the same. Furthermore, other studies provide evidence that consumption of poor-quality water results in health issues, owing to the susceptibility of individuals to the danger and risk of waterborne infections [26]. Globally, over two billion morbidity and mortality cases due to waterborne diseases have been reported, with over 50,000 deaths per day and about four million deaths among children under five in developing countries [27,28]. Against the above backdrop, the present study was aimed at investigating the bacteriological risk quality of borehole sources of drinking water in the Diobu area of Port Harcourt.
This specifically involved evaluating the coliform count as a critical bacterial indicator of water pollution or contamination. It is, therefore, firmly believed that the data generated will help to uncover how potable the drinking water sources in the studied location are.
It will also guide and direct policy makers and especially health agencies in underpinning their strategies and policies towards improving water treatment and sanitation outcomes for the good of all.
Methodology
Faecal indicator bacteria (FIB) and source indicator (sanitary inspection) are tools for assessing safe and potable drinking water.
The recommended Guidelines for Drinking-water Quality (GDWQ) (4th Edn.) by WHO include criteria for assessing health risks and setting targets for improving water safety. Here, the WHO GDWQ-recommended E. coli was used to assess faecal contamination in drinking water as a faecal indicator bacterium (FIB), since direct measurement of pathogens is complex, whereas techniques for assessing FIB are well established and widely applied. The WHO guideline value for E. coli is "none detected in any 100-ml sample" [25].
Risk Classification of Faecal Indicator Bacteria (FIB) in Drinking Water
A commonly used risk classification is based on the number of indicator organisms in a 100 ml sample, as shown below [29,30]:
<1, "very low risk"
1-10, "low risk"
10-100, "medium risk"
>100, "high risk" or "very high risk"
Nonetheless, faecal indicator bacteria (FIB) are an imperfect measure, and their level does not necessarily equate to risk [31]; since quality varies both temporally and spatially, occasional sampling may not accurately reflect actual exposure. As a result, sanitary inspection is a complementary approach in the assessment of safe drinking water; it involves the identification of hazards and other measures, such as risk management, through hygienic checks of a drinking water source as well as its surrounding environment [30,32].
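The thresholds above translate directly into a small classification routine; the sketch below encodes them in Python, with the handling of counts that fall exactly on a boundary (1, 10, 100) being an assumption, since the text gives open ranges.

```python
# Direct encoding of the risk classification above (indicator organisms per
# 100 ml sample); boundary handling at exactly 1, 10, and 100 is an assumption.

def classify_fib_risk(count_per_100ml: float) -> str:
    if count_per_100ml < 1:
        return "very low risk"
    if count_per_100ml <= 10:
        return "low risk"
    if count_per_100ml <= 100:
        return "medium risk"
    return "high risk / very high risk"

for count in (0, 4, 55, 240):
    print(count, "->", classify_fib_risk(count))
```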
Study Location/Area
Diobu is a densely populated area within the Port Harcourt metropolis.
Laboratory Analysis
Microbial analysis of the water samples was performed by testing for total coliform count (faecal indicator bacteria) using the heterotrophic plate count and the most probable number (MPN) techniques. The heterotrophic plate count involved spreading the sample on plated media; a serial dilution of 1:10 was employed. With the use of a sterilized spreader, the sample was first inoculated on the nutrient agar (primary inoculum) and then spread out by streaking, followed by an overnight incubation at a temperature of 37 °C. The plated media were prepared in triplicate for each sample, and an average value was taken from the discrete colony counts, which became the total heterotrophic count for aerobes. Furthermore, discrete colonies were sub-cultured to obtain pure cultures [34].
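For illustration, the arithmetic behind converting triplicate spread-plate counts into a heterotrophic count is sketched below; the plated volume (0.1 ml) and the colony counts are assumptions, since the protocol above specifies only the 1:10 serial dilution and triplicate plating.

```python
# Illustrative CFU/ml calculation for the spread-plate heterotrophic count;
# plated volume (0.1 ml) and colony counts are assumed example values.

def cfu_per_ml(colony_counts, dilution_factor: float, plated_volume_ml: float = 0.1) -> float:
    mean_colonies = sum(colony_counts) / len(colony_counts)   # average of triplicate plates
    return mean_colonies * dilution_factor / plated_volume_ml

# Example: triplicate counts from the 10^-2 dilution (dilution factor 100)
print(f"{cfu_per_ml([35, 41, 38], dilution_factor=100):.2e} cfu/ml")
```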
Total Coliform Count
The total coliform count was carried out, firstly, to establish the presence of microbial contamination in the water samples, using a presumptive test method to identify the most probable number [35].
Presumptive Test Analysis of Most Probable Number
The presumptive test used in this study was the multiple tube fermentation technique. The test tubes were shaken to mix the contents properly and incubated at 37 °C for 24 hours. After incubation, the test tubes were examined for acid and gas production; the tubes were further incubated for 48 hours, after which they were again observed for positive and negative reactions and the results recorded. Acid production was indicated by a change in the colour of the medium from purple to yellow, and gas production by the collection of gas in inverted Durham tubes. Tubes found to produce acid and gas were tallied according to their number of occurrence and then referred to the McCrady table to estimate the most probable number of coliforms in each tube containing the water sample in the broth [35]. This was the presumptive test for coliform bacteria in water [35].
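The study reads MPN values from the McCrady table; as a rough cross-check of that lookup, Thomas' simple formula can approximate the MPN from the same tube results. The sketch below assumes a standard three-tube series with 10, 1, and 0.1 ml portions, which is not stated in the text.

```python
import math

# Thomas' simple formula as an approximate alternative to the McCrady lookup:
# MPN/100 ml = 100 * P / sqrt(V_neg * V_total), where P is the number of
# positive tubes, V_neg the ml of sample in negative tubes, and V_total the
# ml of sample in all tubes. Tube volumes below are assumed (3 x 3-tube series).

def thomas_mpn(positive_tubes: int, ml_in_negative_tubes: float, ml_in_all_tubes: float) -> float:
    return 100.0 * positive_tubes / math.sqrt(ml_in_negative_tubes * ml_in_all_tubes)

# Example: 3 positive tubes at 10 ml, 1 positive at 1 ml, 0 positive at 0.1 ml
positives = 3 + 1 + 0
ml_negative = (0 * 10) + (2 * 1) + (3 * 0.1)   # sample volume in negative tubes
ml_total = (3 * 10) + (3 * 1) + (3 * 0.1)      # sample volume in all tubes
print(f"{thomas_mpn(positives, ml_negative, ml_total):.0f} MPN/100 ml")
```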
Statistical analyses for this study included mean, standard deviation, paired t-test, ANOVA and correlation statistics, performed at p < 0.05 using SPSS version 21.
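A minimal sketch of the paired t-test used to compare counts between collection periods is shown below; the study used SPSS version 21, whereas SciPy is used here, and the count values are hypothetical placeholders.

```python
# Sketch of the paired t-test comparing counts between collection periods
# (significance at p < 0.05); values below are hypothetical placeholders.
from scipy import stats

morning = [3.2e3, 1.1e4, 4.5e2, 2.0e3, 8.7e3]
evening = [4.1e3, 2.1e4, 6.3e2, 1.8e3, 9.9e3]

t_stat, p_value = stats.ttest_rel(morning, evening)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant = {p_value < 0.05}")
```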
Results
A total of 30 samples collected from the ten different street boreholes at the various stipulated locations in three sessions (different times: morning 8-10 am, afternoon 12-2 pm and evening 4-6 pm) gave the results shown below. The probable organisms isolated in this study, identified using various biochemical tests, include
Similarly, the sample from Lumumber Street had the highest mean (2.1 × 10⁴ cfu/ml) while the sample from Elechi Street had the lowest (4.5 × 10² cfu/ml) for samples collected in the evening. Furthermore, samples collected in the evening hours showed Obodo Lane to have the highest count (1.8 × 10⁴ cfu/ml) and Azikiwe Street the lowest (3.5 × 10² cfu/ml). A paired t-test of bacterial counts of samples collected at different periods of time showed no significant difference (p > 0.05) for pair 1 (Morning*Afternoon) and pair 2 (Morning*Evening); however, the third pair (Afternoon*Evening) revealed a statistically significant difference (p < 0.05) (Table 1). Water from the contaminated boreholes (Table 3) should be properly treated before use to avoid the potential outbreak of a waterborne epidemic.
The observations in this study support the fact that a high heterotrophic count in water reflects a high coliform count.
The high coliform counts reported in some boreholes in this study could probably be attributed to the proximity of these boreholes to septic tanks and to the generally unhygienic and poor sanitary environment surrounding the boreholes; this agrees with Bello et al. [17]. Moreover, a shallow depth of less than 40 m could be the problem, because it contradicts the 40 m minimum depth recommendation. Depth is an issue, as reported by previous studies [11,12]. Though the depths of the various boreholes studied were unknown, borehole depth is an important factor in the investigation of bacterial contaminants in borehole water. Furthermore, it could be that the pipes used for water distribution were damaged, allowing seepage of microbial contaminants into the boreholes; it has been strongly suggested that damaged pipes allow leakage, causing contamination of the aquifer with microbial pathogens, as seen in this study, and a previous study by Ibe et al. [19] reported the same trend as a major factor to be considered when planning for potable water safety.
These and other human activities could have probably contributed to water contamination as revealed in other studies [13,14].
In addition, the bacteria isolated from the water belong to genera of potentially pathogenic bacteria; hence it is recommended that water from all the boreholes be boiled before use.
There is a need to increase public awareness of the quality of borehole drinking water, as noted by Trivede et al. [9] and Vaishali et al. [10]. Hence, the variation found in the microbial counts of the various borehole water sources supports the notion that underground water quality varies from place to place, which could be due to the soil types, rocks and surfaces through which it flows, as reported by some studies [9-12]. The area of this present study (Diobu) has a soil topography that is not consolidated; it is moderately porous and made up of permeable sands with a high propensity to drain rapidly, thereby making even borehole water prone to contamination over long periods of exposure.
The coliform enumeration (most probable number) reported in this study showed that four out of the ten borehole waters sampled contained a microbial load that makes them unsuitable for consumption, with borehole water from Lumumber Street and Obidianso Street being the most unsuitable for human consumption, followed by borehole water from Obobo Lane and Naka Street, whereas the others showed no evidence of contamination, which proves their suitability for human consumption and domestic use. Using the commonly applied risk classification, which is based on the number of indicator organisms in a 100 ml sample according to Cowan and Steel and APHA [35-39], Lumumber Street and Obidianso Street borehole waters were classified as "high risk" or "very high risk" because the indicator organisms present in the borehole water samples exceeded one hundred per one hundred millilitres of water. In addition, Obobo Lane and Naka borehole waters were categorized as "medium risk", whereas the other six borehole waters were classified as "very low risk".
Based on this classification, only the six borehole waters classified as "very low risk" seem to be potable for consumption. Sanitary inspection (hazard identification and hygienic checks) of both the water and its surroundings is a complementary approach in the assessment of safe drinking water in a cosmopolitan society like ours, with its massive anthropogenic activities.
Conclusion
The construction of a borehole requires underground pipe laying, which is supposed to meet the minimum requirements of health and safety rules; also, periodic maintenance should be carried out, and dumpsites, pit latrines, soak-aways and drainages ought to be far from borehole water pipes to avoid cross contamination. This was likely not the case in some of these areas, thus putting the quality of borehole water and its potability in doubt. Consequently, the quality of both the water for home use and the commercially sold water was not guaranteed. In the midst of the increasing demand for water in the Diobu area, due to the high number of people and its commercial activities, the people look out only for the availability of water, irrespective of its quality.
This proves that access to water may not guarantee water potability for consumption and use. Nonetheless, this is a strong public health concern, because consumption of unsafe water could lead to waterborne diseases, thereby increasing the burden of disease epidemic outbreaks, especially among vulnerable groups such as the children in our communities. However, the collective results from this study show a strong need for prompt and regular risk evaluation of microbial contamination of borehole water, since citizens residing and doing all kinds of business in this area use this water unknowingly, unaware of the health implications it could pose.
It is therefore strongly recommended that borehole water sources within the Diobu axis be properly and routinely investigated, and that the water be treated accordingly to improve its microbial quality. Also, periodic maintenance should be carried out to avoid damage to the pipes and other materials.
Furthermore, the borehole waters without contamination, as reported in this study, should continually be checked and monitored over time so that their potability is retained, and a good environmental sanitation strategy should be provided and subsequently maintained, so as to keep our drinking water sources safe for the entire populace, especially those within this area, bearing in mind that water is a fundamental human social right and thus every human must have access to safe drinking water, as declared by the United Nations bill of rights of 1948. Nonetheless, it would not be out of place for the local, regional and national governments of the region to embark on the construction and provision of well-treated potable water for their citizens; it is firmly believed that such dynamic leadership steps would certainly reduce the trend of water-based, water-washed and waterborne infections in the country in general, and, in return, the ripple effect of such a gesture would be a healthy nation and a high level of productivity in all strata of human endeavour.
Mass Spectrometry-Based Identification of MHC-Associated Peptides
Neoantigen-based immunotherapies promise to improve patient outcomes over the current standard of care. However, detecting these cancer-specific antigens is one of the significant challenges in the field of mass spectrometry. Even though the first sequencing of the immunopeptides was done decades ago, today there is still a diversity of the protocols used for neoantigen isolation from the cell surface. This heterogeneity makes it difficult to compare results between the laboratories and the studies. Isolation of the neoantigens from the cell surface is usually done by mild acid elution (MAE) or immunoprecipitation (IP) protocol. However, limited amounts of the neoantigens present on the cell surface impose a challenge and require instrumentation with enough sensitivity and accuracy for their detection. Detecting these neopeptides from small amounts of available patient tissue limits the scope of most of the studies to cell cultures. Here, we summarize protocols for the extraction and identification of the major histocompatibility complex (MHC) class I and II peptides. We aimed to evaluate existing methods in terms of the appropriateness of the isolation procedure, as well as instrumental parameters used for neoantigen detection. We also focus on the amount of the material used in the protocols as the critical factor to consider when analyzing neoantigens. Beyond experimental aspects, there are numerous readily available proteomics suits/tools applicable for neoantigen discovery; however, experimental validation is still necessary for neoantigen characterization.
Introduction: Neoantigens in Cancer Immunotherapy
Personalized immunotherapies and patient-specific methods emerged as an attractive alternative to the conventional methods and targeted therapies available to fight cancer [1,2]. Immune checkpoint inhibitors are used to modulate systems of T-cell regulation and some of them (e.g., ipilimumab) are now the standard of care for melanoma and a growing set of cancers exhibiting a high mutational burden. Furthermore, targeting of the cytotoxic T-lymphocyte-associated antigen (CTL4) immune checkpoint with ipilimumab (a human monoclonal antibody) improved overall survival of melanoma patients [3], and a combination of treatments that target the PD-1(Programmed cell death protein 1) checkpoint inhibitor resulted in more prolonged progression-free survival compared to ipilimumab alone [4].
However, some other immunotherapies consider more personalized treatments that target tumor-associated antigens (TAAs), which can serve as biomarkers and are overexpressed by the tumor cells, or with peptide neoantigens, which are derived from the tumor-specific DNA. Peptide epitopes present on the major histocompatibility complex (MHC) of the cancer cells [5,6] can uncover tumor-specific mutations and motifs. MHC molecules are encoded by human leukocyte antigen genes (HLA) which are the most polymorphic gene cluster in the human genome. Consequently, immunopeptidomes are a highly versatile group of peptides presented on the surface of the cells to T-cell receptors (TCRs), which, after their recognition, activate T-cells. However, a patient's response to therapy is a complex event related to tumor mutations, as well as genetic and transcriptomic changes. Today, the emerging field of cancer proteogenomics integrates the information obtained from the genomic and proteomic studies toward identifying and validating peptide sequences as potential therapeutic candidates [7].
Currently there are continuous efforts to apply neoantigen-based therapies in real clinical settings, and ongoing clinical trials are mostly in phase I (https://clinicaltrials.gov/). Published cases focus on assessing the safety of the vaccines, as well as on the evaluation of the generation of CD4+ and CD8+ cells reactive to the neoantigens. This approach showed the efficiency of such vaccines in melanoma, which is characterized with high mutational burden [8,9]. Ott et al. [9] applied an in silico antigen prediction method with expression validation at the RNA level to compose a peptide library which was further used to formulate a peptide-based vaccine. Authors were able to confirm the immunogenicity, as well as show the promising patient outcomes, as four out of six patients did not experience recurrence after 25 months, and, in the remaining two cases, there was complete tumor regression after therapy with anti-PD-1. In cases of trials involving glioblastoma, which shows a substantially lower level of mutations, there was a detectable immune response, as shown by the presence of neoantigen-specific lymphocytes [10] and tumor-infiltrating lymphocytes [11]. Remarkably, Migliorini et al. [10] used antigen sequences which were directly derived from mass spectrometry-based immunopeptidome sequencing. In all published cases, the vaccines were well tolerated by patients. There is a high potential of integration of knowledge about neoantigens into other methods of cancer immunotherapy, such as endogenous T-cell therapy [12]. In this application, detected antigens are used to sort reactive T-cells, which can further be expanded in vitro and used for immunotherapy. However, despite the clear advantages of predicting neoantigens from genomic data, aberrations appear across each level of the central dogma. For example, at the transcript level, RNA-editing and aberrant splicing can introduce new sources of neopeptides. Furthermore, translational errors and post-translational modifications (PTMs) of those antigens that cannot be predicted can be characterized by mass spectrometry analysis. Moreover, PTMs were found to be more efficient in triggering an immune response.
Mass spectrometry (MS) is an indispensable tool in proteome discovery and, as such, it is widely used in the clinic to profile patient samples and aid their treatment. Novel technological developments in MS improved targeting and identification of cancer-specific antigens as the basis of antigen-specific therapies. Sequencing of the immunopeptidome was pioneered by Hunt et al. in the 1990s [13]. Prediction of cancer-specific antigens is a challenge in immunotherapies [14], and the fact that the accurate identification of the peptide neoantigens can serve as the basis for personalized anti-tumor treatments makes mass spectrometry an essential tool in mapping the tumor immunopeptidome [14][15][16]. HLA-binding peptides were targeted in various cancer types of cell lines and tissues including melanoma, breast, and brain tumors [16][17][18][19][20]. Detection of the immunopeptidome may support the inclusion of tumor-associated neoantigens for the vaccine preparation, as well as expand the knowledge about the pathways leading to the peptide presentation on the cell surface.
Modern immunopeptidomics is a cutting-edge application [21]; the technical development in terms of the sample preparation, detection, and further computational identification of the peptides is still in development. Above all, low amounts of the neoantigens available per sample require improvements in the processing protocols and more efficacy of the current liquid chromatography-mass spectrometry (LC-MS) methods. Therefore, in this review, we discuss protocols for neoantigen isolation, as well as currently used LC-MS approaches for their detection.
Isolation of the Immunopeptides from MHC Complexes
In the last 30 years, the isolation of MHC-bound peptides (MBPs) was typically performed with two established methods named immunopurification/immunoprecipitation (IP) and mild acid elution (MAE) (Figure 1). In typical conditions, the identification of several thousands of MBPs requires more than 100 million antigen-presenting cells (APCs), which is the issue often encountered with a limited amount of the sample available from patients.
In 1987, Sugawara et al. described a simple, reproducible method to eliminate the antigenicity of MHC I molecules called mild acid elution. The method used two minutes of incubation of the viable cells in a citric acid buffer (pH 3), and it was applied not only for humans but also for murine cells from various origins [22]. A crucial step in MAE is maintaining a pH of 3 for an effective elution of MHC I molecules. Moreover, the authors described that removal of HLA-A, B, and C antigenicity was less significant at pH higher than 3.7 and alkaline solutions at pH 8-11. An interesting observation from the MAE method is that neither MHC class II antigens nor other non-MHC antigens were removed from the cell surface [22].
Moreover, β2-microglobulin (β2M) is the component of the MHC I molecule present on the surface of the cell, and the MAE method facilitates dissociation of the extracellular non-covalently bound β2M from the MHC I complexes on the surface of the cell [23]. The MAE method was used to investigate the MHC I-presented peptide pool in different biological samples and was successfully applied for isolation and analysis of naturally presented MHC-associated viral peptides [24]. After isolation, the detection of MHC peptides was performed with reversed-phase high-performance liquid chromatography (RP-HPLC) coupled to tandem mass spectrometry. These techniques provided a successive experimental path to sequence the MHC peptides, with a significant impact on the cancer vaccine pipeline. In another study, Storkus et al. used a single-cell suspension, incubated with a citrate-phosphate buffer for a short time (15 s) at a low pH (pH 3.3). Incubation in the citric buffer enabled swiping off the cell surface MHC-I-associated peptides while maintaining the cell viability, which was used for the identification of T-cell epitopes [18]. However, due to the stability of MHC-II-peptide complexes, this acidic treatment was unable to release MHC-II peptides [23].
The second, most successful, and well-known method for the isolation of MHC I and MHC II peptides is called immunopurification/immunoprecipitation (IP). IP is an antibody-dependent method based on the generation of robust and specific HLA-specific antibodies that recognize the pan-MHC or specific HLA alleles. This approach usually gives a higher number of MHC peptide identifications from the cell lines and clinical samples or tumor tissues [18,[26][27][28].
Hunt et al. in their pioneering study investigated IP-based identification of peptides bound to MHC I molecules and their sequencing by MS [13] using a combined set-up of microcapillary high-performance liquid chromatography-electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). This set-up allows sequencing of sub-picomolar amounts of peptides from the MHC molecule, which enhances the identification of MHC-associated peptides from transformed and virally infected cells. Currently, advanced techniques such as nano-ultra-performance liquid chromatography coupled to high-resolution mass spectrometry (nUPLC-MS/MS) [28] and 96-sample plate high-throughput platforms [27] are essential improvements in the isolation of MHC-associated peptides and their detection. These methods consisted of multiple replicates of tissues (n = 7), with multiple cell lines (n = 21, 10⁸ cells per replicate), and they showed high reproducibility (e.g., Pearson correlations for HLA I and II were up to 0.98 and 0.97, respectively) [27]. In parallel, significantly improved bioinformatic pipelines aided in the identification of a few thousand MHC peptides in a short period [27,28].
The first step in the IP method is the solubilization of the tissues or cells and the cell surface of MHC complexes in a lysis buffer with a detergent. The solubilized MHC complexes are further captured by the pan-MHC I antibody on the principle of immunoaffinity (i.e., W6/32 antibody) that recognizes all alleles (i.e., HLA class I alleles A, B, and C). Furthermore, the formed antibody-MHC complexes are thoroughly washed to remove the unspecific bindings, detergents, and contaminants. After the washing procedure, MHC-associated peptides are separated from the antibody, MHC molecules, and β2M by solid-phase extraction using C-18 discs. Finally, purified peptides are subjected to the high-resolution MS. In the above-mentioned study, Lanoix et al. observed that IP typically provides 6.4-fold higher number of MHC I-associated peptides compared to the MAE method [26]. The authors also observed that most peptides identified by the MAE were also detected in extracts from the IP method.
In 2006, Gebreselassie et al. performed immunopeptidome analyses to compare the MAE and IP methods on U937 cells and further used matrix-assisted laser desorption/ionization tandem time-of-flight (MALDI-TOF/TOF) MS for the identification of the MHC I-associated peptidome. The authors identified 64 and 21 MHC I-associated peptides by MAE and IP, respectively [29]. Later, in 2014, Hassan et al. introduced an isotopically labeled peptide-MHC monomer (hpMHC), which they added directly to the cell lysate for accurate quantification of MHC class I-presented peptides. A combination of hpMHC with the IP-based peptide isolation method on B-LCL cells revealed an average recovery yield of 1%-2% for two minor histocompatibility antigens (MiHAs) [30]. As IP is one of the widely accepted methods to study the MHC expression mechanism, Komov et al. reported that expression of MHC I peptides depends on the availability of empty MHC molecules and not on the peptide supply [31]. Furthermore, for the comprehensive quantification of immunopeptidomes, Caron et al. introduced HLA allele-specific peptide spectral and assay libraries to extract digital SWATH-MS data (sequential window acquisition of all theoretical mass spectra MS) for quantitative analysis of immunopeptidomes across several samples [32]. Very recently, in 2018, Lanoix et al. applied both MAE and IP methods for the comparative identification of the MHC I immunopeptidome repertoire of B-cell lymphoblasts [26]. Particularly in the MAE approach, the viable cells (B-LCL) were incubated with citrate buffer (pH 3.3) to disrupt the MHC I complex; hence, β2M protein and MHC-associated peptides were released into the MAE buffer. Furthermore, the peptides were desalted, and β2M was removed via ultrafiltration. Subsequently, the eluted purified peptides were subjected to MS analysis. Here, with the MAE approach, the authors identified a few hundred to a few thousand MHC-associated peptides (Table 1) [26]. The exact source of targeted tissue-specific antigens and MHC expression pathways is still unclear. Moreover, there are many potential sources of MHC-associated antigens such as classical MHC pathways, pioneer translation products (PTPs) [41], defective ribosomal products (DRiPs) [42], cis- and trans-spliced peptides [43], non-canonical reading frames [44], etc. Therefore, the combination of high-resolution MS and MHC peptide isolation methods (MAE and IP) is not only convenient for accurate identification of MHC peptide neoantigens; it also indirectly contributes to detecting the source of origin at a genomic location and to tracing the exact mechanism of MHC peptide expression, which is a pressing issue [40]. The high number of contaminating peptides in the MAE method limits the possibility of an in-depth analysis of the immunopeptidome. On the contrary, IP facilitates the direct and in-depth analysis of clinically relevant neoepitopes on human tissue by MS [18]. Therefore, while the IP method shows somewhat more convenience for use than the MAE method, both methods have their particular advantages in terms of the duration of the protocols, specificity for immunopeptides, and achievable throughput, which all aid in identifying relevant neoantigens to answer biological questions [16,45]. Therefore, it is recommended that both methods be applied in parallel to gain a maximal amount of biological information and to understand the dynamic nature of the immunopeptidome.
In Table 2, we summarize a comparison of the MAE and IP methods, with the advantages and drawbacks of each technique highlighted.

Table 2. Comparison of advantages and drawbacks between mild acid elution (MAE) and immunoprecipitation (IP) methods. HLA, human leukocyte antigen.

Immunoprecipitation (IP)
Advantage: Due to the specificity of the antibody, IP is less likely to select contaminating peptides. Drawback: Due to the binding specificity of the antibody, HLA subtype information is also lost in IP-based methods utilizing the pan-MHC antibody [25].
Advantage: Highly specific, because the MHC antibodies capture the MHC complexes, where about 90% of MHC peptides come from immunopeptidomes. Drawback: IP requires extensive washes to remove contaminants/unspecific peptides and detergent, which may lead to losses of low-affinity MHC I-associated peptides [16,29].
Advantage: Applicable to a variety of biological samples: fresh or frozen dissociated cells and solid tissues [26].

Mild acid elution (MAE)
Advantage: MAE is applicable when starting material may be limited. Drawback: After MAE, further analysis is challenging because additional cell-surface (non-MHC) contaminating peptides bound through hydrostatic interactions may also be eluted by this acid-wash process [23].
Advantage: MAE allows for cell regeneration following β2M dissociation, so the same cells can be re-analyzed after a second perturbation [23].
Advantage: The method is simple, quick, and reproducible. Drawback: It extracts not only MHC I-associated peptides but also other contaminating peptides [22,38].
Advantage: The MAE method can be used in situations where antibodies against MHC I molecules are not available. Drawback: MAE needs viable dissociated cells to perform acid elution, which is a limitation when using fragile cells/solid tumor tissue [45].
Advantage: It involves fewer purification steps and no detergents. Drawback: When used for high-throughput sequencing of the MHC I-associated peptide repertoire, the eluted peptides are assumed to contain not only MHC I-associated peptides but also "contaminant" peptides [45].
Advantage: MAE introduces no bias linked to preferential loss of low-affinity peptides [45].
Detection and Identification of Immunopeptides with Mass Spectrometry
Until recently, most efforts in neoantigen identification were performed with indirect methods, relying on genome and transcriptome sequences [18,46,47]. This information can be used to detect mutations and confirm the transcription of the variant DNA sequences. Then, it can be further analyzed to predict the probable neoantigen sequence [5], affinity to MHC molecules [48], and immunogenicity [49]. However, due to the vast amount of possible neoepitopes, additional confirmation at the protein level may be used to further narrow down the list of potential neoantigens [37].
Currently, LC-MS/MS-based immunopeptidomics is the only method that can comprehensively interrogate the repertoire of MBPs presented in vivo. The idea behind the use of MS for neoantigen discovery relies on direct detection of the sequences of neoantigens presented on the cell surface [50]. Currently, great efforts are made to make this possible, as well as to make the workflows reliable enough for application of MS-based neoantigen identification in the clinical setting [51,52].
In most of the methods used today, the starting amount of the material required for in-depth LC-MS analysis is about 10⁸ cells [28] or 1 g of the tissue sample [17]. In such cases, the amount of isolated immunopeptides should enable identification of several thousands of individual MHC ligands. After the isolation from the MHC complexes, immunopeptides are further separated by nanoscale LC. These separations of the MHC-bound peptides are carried out in a way partially similar to the typical proteomic experiments, with several differences. Firstly, the gradient used for immunopeptides is usually shallower, utilizing less than 30% acetonitrile. Secondly, the analysis time used for immunopeptides is usually longer compared to the time used for tryptic peptide separation, which leads to the use of long columns with high peak capacity. Usually, immunopeptides are analyzed without prefractionation, due to possible sample losses; however, if enough starting material is available (probably limiting the scope to the cell culture samples), prefractionation through strong cation exchange (SCX), high-pH reverse-phase chromatography, or isoelectric focusing (IEF) can lead to a substantial increase in the number of identified peptides [53,54]. As these studies reported a vast improvement of the identification number after prefractionation, it is expected that single-step separation methods are currently not sufficient to cover the complexity of the immunopeptidome.
Reports showed that MBPs have different physicochemical characteristics than typical tryptic peptides. The difference arises from the fact that they do not always contain a basic residue at the C-terminus [55]. Because of the absence or reduction in basic residues, effective ESI ionization and fragmentation may be hampered. One possibility to overcome this issue is chemical derivatization of the functional groups of the immunopeptides. There have been attempts to increase their identification rate via chemical modifications since the inception of immunopeptidomics. As an example, the N-pyridylacetyl modification of amino groups was shown to increase the completeness of b-ion series [56]. More recently, Chen et al. described two different chemical derivatization approaches for extending the identification of MBPs. Firstly, peptides were dimethylated on all amino groups including N-termini, which does not significantly affect their fragmentation but increases hydrophobicity and, therefore, improves their chromatographic properties. Secondly, they were alkylamidated on the carboxyl groups including C-termini. This modification affects the fragmentation pattern, leading to the generation of more complete y-ion series and increasing the sequence coverage (Figure 2) [39]. Analysis of the native, dimethylated, and alkylamidated peptides from the same sample enabled the detection of a subset of previously undetected peptides and, thus, led to a substantial increase in immunopeptidome coverage.
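To make the mass bookkeeping behind dimethyl labeling concrete, the sketch below computes the [M+2H]2+ m/z of a peptide before and after labeling, assuming one label per free amine (the N-terminus plus each lysine side chain) and a shift of about 28.0313 Da per label; the example peptide is a well-known HLA-A2-restricted 9-mer used purely for illustration and is not taken from the cited derivatization studies.

```python
# Rough illustration of the mass shift introduced by dimethyl labeling of MHC
# ligands; residue masses are standard monoisotopic values, and the example
# peptide is used only for illustration.

RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.010565
PROTON = 1.007276
DIMETHYL = 28.031300  # net addition of 2 x CH2 per labeled amine

def peptide_mass(seq: str, dimethylated: bool = False) -> float:
    mass = sum(RESIDUE_MASS[aa] for aa in seq) + WATER
    if dimethylated:
        labels = 1 + seq.count("K")   # N-terminus plus lysine side chains
        mass += labels * DIMETHYL
    return mass

def mz(mass: float, charge: int = 2) -> float:
    return (mass + charge * PROTON) / charge

pep = "SLYNTVATL"  # example HLA-A2-restricted 9-mer
print(f"native    [M+2H]2+ m/z = {mz(peptide_mass(pep)):.4f}")
print(f"dimethyl  [M+2H]2+ m/z = {mz(peptide_mass(pep, dimethylated=True)):.4f}")
```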
In immunopeptidomics, mass spectra are almost exclusively collected on fast, high-resolution instruments employing Orbitrap or time-of-flight analyzers for discovery-oriented experiments. Even after decades of research, the overall complexity of the immunopeptide identification represents a great challenge in terms of performance and reproducibility of the analyses. The high complexity of the sample, along with lack of clear sequence specificity of MHC ligands makes obtaining a high-quality MS/MS spectrum crucial for peptide identification. The most utilized fragmentation strategy is the use of higher-energy collision-induced dissociation (HCD) or collision-induced dissociation (CID), which are both capable of generating MS/MS spectra at high speed. Moreover, the peptide antigen sequence coverage is further improved by alternative fragmentation strategies such as electron capture dissociation (ECD) or electron transfer dissociation (ETD) and HCD combinations [57], which are particularly useful with de novo sequencing approaches that exclusively rely on fragmentation data to assign peptide sequences. Electron-transfer/higher-energy collision dissociation (EThcD) fragmentation was applied on an Orbitrap Velos instrument with modified firmware, and it was shown to generate fragmentation patterns with high sequence coverage. This method relies on all-ion HCD fragmentation of ETD fragmentation products and generates b, c, z, and y-type ions [58]; however, the tradeoff is the lower acquisition speed compared to conventional HCD [57]. Examples of immunopeptidomics studies with details on the most important parameters, such as the type of the column and instrument, amount of the starting material, and obtained main results, are summarized in Table 2. Moreover, confirmation of the presence of a particular peptide or antigen quantitation can also be done on triple quadrupole or quadrupole-ion trap type instruments [34,59]. The application of such analyses is especially useful for highly reproducible peptide detection and quantitation.
One of the novel challenges of immunopeptidomics is the characterization of post-translationally modified MHC ligands. Analysis of glycosylated [60] and phosphorylated [61,62] MHC ligands shows that these modifications do occur and influence immunogenicity. Detection and characterization of glycosylated MHC II ligands is feasible using combined HCD and ETD fragmentation techniques, even without affinity enrichment [60]. Phosphorylated peptides constitute approximately 1-2% of total antigens, and there are sequence motifs specific for phosphorylated ligands [63]; however, there is a wide variety of other possible modifications, such as acetylation, methylation, citrullination, and cysteinylation [64]. Detection of PTMs further increases the complexity of the pipeline, and analysis of neopeptide PTMs is hampered because the already large search space is further expanded by these additional possibilities [65].
The analysis of the obtained immunopeptidome LC-MS data is challenging compared to most proteomics experiments. The most popular data analysis strategy in peptide-centric proteomics is a database search that matches a theoretical fragment spectrum from a candidate peptide sequence to an observed fragment spectrum. Typically, candidate peptides originate from an in silico digested protein database and are selected for comparison based on the mass of the precursor ion. Neoantigen discovery differs in that the whole analysis workflow is highly dependent on previously acquired genomic data, as the sequence database must contain all the mutations expected in the neoantigen sequences. Mutations and aberrations can be called from sequencing data using standard pipelines that are now well optimized by the genomics community [66-69]. To date, most immunopeptidomics studies still use the conventional whole-proteome database search strategy with search engines such as MaxQuant, PEAKS, SEQUEST, and Proteome Discoverer [27,70,71], in combination with oxidation (M), deamidation (NQ), N-terminal acetylation, and phosphorylation (STY) as variable modifications. Because the cleavage specificity of MHC ligands is unknown, in silico protein digestion is performed in a nonspecific manner, which penalizes both the search performance and the quality of the search results.
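The practical consequence of nonspecific digestion is a combinatorial explosion of candidate sequences. The sketch below is illustrative only; the protein sequence and the 8-12 residue length range are assumptions. It enumerates every substring in that length range, which is what an unspecific in silico digest effectively asks a search engine to consider for each database entry.

# Illustrative sketch of an unspecific in silico "digest" for an MHC class I search:
# every substring of 8-12 residues is a candidate peptide, unlike a tryptic
# digest that only cuts after K/R. Sequence and length range are assumed here.
def nonspecific_digest(protein: str, min_len: int = 8, max_len: int = 12):
    peptides = set()
    for start in range(len(protein)):
        for length in range(min_len, max_len + 1):
            if start + length <= len(protein):
                peptides.add(protein[start:start + length])
    return peptides

toy_protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical 33-residue sequence
candidates = nonspecific_digest(toy_protein)
print(len(candidates))  # already >100 candidates from a single short protein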
After peptide-spectrum matching, post-processing pipelines also differ from those used in global proteomics. The lack of protein-level information reduces overall confidence in correct identifications, and each peptide must be scored individually, irrespective of its source protein [21]. For these reasons, an immunopeptidomic post-processing algorithm called MS-rescue [72] was developed for refining peptide-spectrum assignments. It was designed specifically for MHC studies and achieves increased sequencing depth while removing potential experimental outliers and contaminants. For the same reasons, some argue that using a biologically relevant peptide database rather than the whole proteome would significantly speed up the searches and increase sensitivity. For instance, SpectMHC [73] implements a targeted database search approach showing a two-fold increase in peptide identifications compared to an unspecific proteome search.
Mass spectrometry lags behind genomics and transcriptomics in terms of the development of file standards. Beyond the MIAPE initiative (minimal information about an immunopeptidomics experiment), there is a need for standardized and version-controlled bioinformatics workflows, especially when considering potential clinical applications. One example of such a workflow is MHCquant [74], which also shows superior identification capabilities compared to other identification engines, as well as support for label-free quantitation. Remarkably, the authors showed the identification and confirmation of previously undetected neoantigens in a publicly available dataset. Probabilistic inference of codon activities by an EM algorithm (PRICE), another MHC-related workflow [75], is a computational method for accurately resolving overlapping small open reading frames (sORFs) and noncanonical translation initiation sites, revealing their products to be a substantial fraction of the antigen repertoire.
Complete de novo peptide sequencing is required to identify part of the immunopeptidome. The recently discovered peptide splicing phenomenon could lead to the presentation of neoantigens whose sequences are not present in the genome. The proportion of spliced peptides in the immunopeptidome is not known, but some reports suggest that it may be up to 30% [76]. De novo sequencing of HLA ligands cannot be done manually because of the large volume of data generated; therefore, automated tools have recently become available, with PEAKS and PepNovo+ among the most popular [77]. Significant progress has been made in this field in recent years, owing to the availability of MS instruments capable of high-speed acquisition of high-resolution MS/MS spectra, as well as developments in computational methods such as deep learning-based workflows for peptide sequencing. For example, DeepNovo uses the peptide sequences identified via database searching to train a personalized de novo sequencing algorithm [78]. Similarly, SMSNet [79] is a deep learning-based de novo sequencing model with a post-processing strategy that pinpoints misidentified residues and utilizes a sequence database to revise the identifications. In general terms, hybrid de novo/database approaches may improve the sensitivity and accuracy of the searches [18,72]: mass tags (partial peptide sequences inferred from a spectrum) can be used to search the database with the total peptide mass as a constraint, and prior knowledge of predicted MHC-binding motifs can be used to filter out improbable peptide sequences and to rescore and rescue probable hits of lower quality.
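As a concrete illustration of the mass-tag idea, the sketch below keeps only database peptides that both contain the partial sequence read from the spectrum and match the measured precursor mass within a tolerance. The database entries, tag, and tolerance are hypothetical, and only unmodified monoisotopic residue masses are used.

# Minimal sketch of hybrid tag + precursor-mass filtering (illustrative only).
# Monoisotopic residue masses; water added for the intact peptide mass.
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "T": 101.04768, "L": 113.08406, "I": 113.08406,
           "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
           "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
           "R": 156.10111, "Y": 163.06333, "W": 186.07931, "C": 103.00919}
WATER = 18.01056

def peptide_mass(seq: str) -> float:
    return sum(RESIDUE[a] for a in seq) + WATER

def tag_filter(candidates, tag, precursor_mass, tol_da=0.02):
    """Keep candidates containing the sequence tag and matching the precursor mass."""
    return [p for p in candidates
            if tag in p and abs(peptide_mass(p) - precursor_mass) <= tol_da]

db = ["SIINFEKL", "GILGFVFTL", "NLVPMVATV"]          # toy database
print(tag_filter(db, tag="FVF", precursor_mass=peptide_mass("GILGFVFTL")))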
Beyond the direct identification of immunopeptides by mass spectrometry, a variety of predictive software exists to identify those genomic variants that would be presented to the immune system. Several tools process genomics and transcriptomics data to identify potential neoantigens, including, for example, pVACtools [80-82] and NeoPredPipe [83]. These tools can be used to establish which mutations are likely present in the immunopeptidome, as well as to narrow down those that have affinity for MHC. The best way to integrate the results of these tools, whose predictions are of course imperfect, with personalized and directly measured immunopeptidomes remains an open question.
Conclusions
The characterization of peptides presented as antigens on the cell surface is of vital interest to our understanding of immune-related diseases, including pathogen response, autoimmune diseases, and cancers. Cancers are of particular interest, as one of their hallmarks is the creation of an immunosuppressive environment. Ultimately, the direct detection of these peptides will be hugely important for our understanding of biology and for identifying candidates for immunotherapeutic intervention in this disease; it is essential because current neoantigen prediction methodologies based on genomics and transcriptomics are error-prone. Nevertheless, direct detection by mass spectrometry suffers from sensitivity issues and complicated sample preparation strategies, which hinder its broad adoption. Isolation and purification of the immunopeptides is one of the most important steps in their analysis. Further development of sample preparation protocols to improve the isolation of MHC peptides and reduce the number of preparation steps might minimize the loss of low-abundance peptides. Moreover, limited patient sample amounts affect the sensitivity and reproducibility of detection, which are major challenges in the MS discovery of immunopeptides. Chemical derivatization approaches might improve the chromatographic performance and ionization efficiency of the peptides, thus extending their sequence coverage and identification. The next decade will see tremendous advances in neoantigen characterization by mass spectrometry, fueled by increased instrument sensitivity and thus reduced tissue quantity requirements.
Author Contributions: Conceptualization, S.K., A.P., and I.D.; writing-original draft preparation, S.K., A.P., G.B., J.A., and I.D.; writing-review and editing, all authors. All authors have read and agreed to the published version of the manuscript.
Funding: This work was supported by the International Center for Cancer Vaccine Science, carried out within the International Research Agendas program of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.
Conflicts of Interest:
The authors declare no conflicts of interest.
Spatiotemporal effects of urban sprawl on habitat quality in the Pearl River Delta from 1990 to 2018
Since the implementation of the Chinese economic reforms, the habitat quality of coastal areas has gradually deteriorated with economic development, but the concept of "ecological construction" has slowed this negative trend. To quantitatively analyze the correlation between urban expansion in the Pearl River Delta and changes in habitat quality under the influence of policy, we first analyzed habitat quality change based on the InVEST model and then measured the impact of construction land expansion on habitat quality through the habitat quality change index (HQCI) and contribution index (CI). Finally, the correlation between urbanization level and habitat quality was evaluated using geographically weighted regression (GWR) and the self-organizing feature map (SOFM) neural network. The results indicated that: (1) during the study period from 2000 to 2020, habitat quality declined due to urban sprawl, indicating a deterioration of ecological structure and function, and the decrease was most significant from 2000 to 2010; (2) the urbanization indices had a negative effect on habitat quality, but this negative effect has lessened since 2000, reflecting the positive effect of policies such as "ecological civilization construction"; (3) the degree to which ecological civilization has been implemented varies greatly among cities in the study area: Shenzhen, Dongguan, Foshan, and Zhongshan have the best level of green development. These results reflect the positive role of policies in preventing damage to habitat quality caused by economic development and provide a reference for the formulation of sustainable urban development policies with spatial differences.
Habitat quality refers to the ability of an ecosystem to provide suitable living conditions to sustain a species, which can reflect the level of biodiversity and ecological services to a certain extent 1,2. Urbanization is the main driving factor that puts tremendous pressure on biodiversity conservation 3. Since the implementation of the reform and opening-up policy, China's urbanization rate has increased rapidly, from 17.9% in 1978 to 60.6% in 2019, and is expected to reach 65.5% by 2025. The rapid landscape patterns, and habitat quality, as well as analyzing the impact of natural factors such as DEM, temperature, precipitation, and slope aspect, or human factors such as degradation of habitat quality 3,22. Land-use cover change and expansion of construction
The land-use information graph is a geospatial analysis model combining attributes, processes, and spaces, which can reflect the spatial differences and temporal changes in land-use types. In its functional expression, let the state variable be

A = f(s, t),   (1)

where A represents the land-use characteristics as a function of space s and time t. (1) To realize the spatial description of land attributes, when t is constant, the functional relation of A changing with s is constructed. (2) To realize the process description of land attributes, when s is constant, the functional relation of A changing with t is constructed. The combination of these two functions forms a conceptual model of the land-use information graph and realizes a composite study of land space, process, and attributes.
Habitat quality
Habitat quality evaluation
We used InVEST-HQ to evaluate the habitat quality in the Pearl River Delta region. Based on land-use types, this module calculates the habitat degradation degree and the habitat quality index by using threat factors, the sensitivity of different habitat types to threat factors, and habitat suitability. Habitat degradation and habitat quality were calculated using the following formulas:

D_{xj} = \sum_{r=1}^{R} \sum_{y=1}^{Y_r} ( w_r / \sum_{r=1}^{R} w_r ) r_y i_{rxy} \beta_x S_{jr}

Q_{xj} = H_j [ 1 - D_{xj}^z / ( D_{xj}^z + k^z ) ]

where Q_{xj} is the habitat quality of grid x in land-use type j, H_j is the habitat suitability of land-use type j, D_{xj} is the habitat degradation degree of grid x in land-use type j, k is the half-saturation constant, r indexes the threat factors, and y indexes the grid cells of threat source r. r_y, w_r, and i_{rxy} are, respectively, the interference intensity and weight of the grid where threat factor r is located, and the interference it generates on the habitat. \beta_x and S_{jr} are the anti-disturbance ability of habitat type x and its relative sensitivity to the various threat sources, respectively.

The value range of the habitat degradation degree is [0, 1]; the larger the value, the more serious the habitat degradation. The value of habitat quality is also between 0 and 1, and the higher the value, the better the habitat quality.

The CI was used to analyze the causes of the changes in habitat quality, and the following formula was used to quantitatively represent the contribution of land-use conversion to habitat quality change. In this study, the total value of habitat quality loss

The research unit is the river basin, which has both natural and social attributes. It is a relatively independent and complete system, which can connect and explain the coupling phenomenon of society, economy, and nature 45. The hydrological analysis module in ArcGIS was used to divide the research area into 374 small basins. When calculating the cumulative flow of the grid, 100,000 was used as the threshold value, and basins smaller than 5 km² were combined with adjacent basins.
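A small numerical illustration of the habitat quality formula above may help fix ideas; the parameter values below are invented and are not those of the study, and z = 2.5 is only a commonly used default of the InVEST model. With degradation equal to the half-saturation constant, a cell retains half of its suitability.

# Illustrative evaluation of the InVEST habitat quality formula (toy values only).
def habitat_quality(h_j, d_xj, k=0.5, z=2.5):
    """Q_xj = H_j * (1 - D^z / (D^z + k^z)); all inputs assumed to lie in [0, 1]."""
    return h_j * (1.0 - d_xj**z / (d_xj**z + k**z))

print(habitat_quality(h_j=1.0, d_xj=0.5))   # degradation at half-saturation -> 0.5
print(habitat_quality(h_j=0.8, d_xj=0.1))   # lightly degraded, highly suitable cell
print(habitat_quality(h_j=0.2, d_xj=0.6))   # construction-dominated cell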
The SOFM neural network was proposed by Kohonen, a Finnish scholar, and constructed by simulating the "lateral inhibition" phenomenon in the human cerebral cortex. It has been widely applied in classification research in geographic and land system science 39,40. The advantages of this method in classifying the coupling relationship between urbanization and habitat quality are as follows: (1) it simulates human brain neurons through unsupervised learning, which is objective and reliable; (2) it maintains the data topology during self-learning, training, and simulation to obtain reasonable partition results and identify the differences between different basins; (3) for massive data, the SOFM network has a good clustering function while maintaining its characteristics and uses the weight vector of the output node to represent the original input. This method can compress the data while maintaining a high similarity between the compression results and the original input data 46. We exported the data from ArcGIS and conducted cluster analysis on the four factors of NTL, POP, LUR, and habitat quality using SOFM. Finally, the analysis results were imported into ArcGIS for display.
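A minimal sketch of such a clustering step is given below. It uses the third-party MiniSom package rather than whatever toolbox the authors used, and the 2x2 output grid, the random placeholder data, and all hyperparameters are assumptions for illustration only.

# Sketch of SOFM clustering of basin-level indicators (illustrative only).
import numpy as np
from minisom import MiniSom  # third-party package, not necessarily the authors' tool

# Rows = basins, columns = [NTL, POP, LUR, habitat quality]; random placeholder data.
rng = np.random.default_rng(0)
data = rng.random((374, 4))

som = MiniSom(2, 2, input_len=4, sigma=0.8, learning_rate=0.5, random_seed=0)
som.random_weights_init(data)
som.train_random(data, 5000)

# Assign each basin to one of the four output nodes (i.e., four zones).
zones = [som.winner(row) for row in data]
print(zones[:5])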
Therefore, before analyzing the spatiotemporal change of habitat quality and its degree of coupling with urbanization level, the urban expansion in the Pearl River Delta region from 1990 to 2018 was examined, as reflected in Figures 3 and 4. To reflect the spatial and temporal changes in habitat quality in the Pearl River Delta region during the study period, an interannual map of habitat quality was drawn. Shenzhen, and Zhongshan, while there are many areas with increased habitat quality in Zhaoqing, Jiangmen, and Huizhou, which preliminarily indicates that these areas attach greater importance to ecological protection.

The above results can be further verified by analyzing Table 5. During the three periods, the change in habitat quality was most prominent from 2000 to 2010. The increase in the area of low-grade habitat quality was as high as 2893.47 km², and the decrease in the area of high- and higher-grade habitat quality was more than 500 km², which highlights the negative impact of rapid economic development on ecology during that decade. In the past 30 years, the area of low-grade habitat quality increased by 4911.07 km², while the sum of the area reduced in grade IV and V habitat quality was approximately 1500 km². Substantial degradation in habitat quality has become an urgent concern. These results are consistent with the situation of urban sprawl discussed in Section 3.1.1, from which the correlation between urban sprawl and habitat quality can be preliminarily inferred.

According to the HQCI (Table 3), all land conversions to construction land lead to habitat quality degradation. The HQCI was negative with an absolute value greater than 0.10, and the effect of grassland transfer on habitat quality was the most obvious, with an HQCI value of −0.30. It can be observed from the CI values that the conversion of cultivated land leads to the greatest degradation of habitat quality, with a CI value of −386.02, followed by woodland and water areas. The reason why the HQCI values of these land transfers are smaller while their CI values are larger is that they cover a larger area. Grassland conversion had the greatest impact on habitat quality per unit area, but the total loss of habitat quality was not obvious because of the small area of grassland transfer.
Impact of socioeconomic factors on habitat quality
The changes in urban expansion and habitat quality reflect the important influence of human activities on the ecological space. In this study, habitat quality was taken as the dependent variable, and NTL, POP, and LUR were selected as independent variables by referring to existing studies 20,35. OLS and GWR were used for the analysis, and it was found that the explanatory power of the GWR model at the four time points was superior to that of the OLS model, and the Sigma and AICc values of the former were lower. Therefore, the GWR model was selected to obtain a better fitting effect and higher accuracy. During the study period, there was a negative correlation between habitat quality and NTL, POP, and LUR in most areas. With the passage of time, this effect first intensified and then gradually improved (Figures 7, 8, and 9). From 1990 to 2010, these three urbanization factors were negatively correlated with habitat quality in more than half

Second, the SOFM classification results show that these areas have a low level of urbanization and their habitat quality is not considerably higher than that of the Pearl River Delta core areas, so they have high development potential.
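A hedged sketch of a GWR fit with the open-source mgwr package is shown below; the coordinates, variables, and bandwidth search are placeholders and do not reproduce the study's setup or data.

# Illustrative GWR fit relating habitat quality to urbanization indicators.
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(1)
n = 374                                             # e.g., one observation per basin
coords = list(zip(rng.random(n), rng.random(n)))    # placeholder basin centroids
X = rng.random((n, 3))                              # columns: NTL, POP, LUR (placeholders)
y = (0.8 - 0.3 * X[:, 0] + 0.05 * rng.standard_normal(n)).reshape(-1, 1)

bw = Sel_BW(coords, y, X).search()                  # bandwidth selection
results = GWR(coords, y, X, bw).fit()
print(results.aicc)                                 # compare against an OLS baseline
print(results.params[:3])                           # local coefficients per observation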
Based on existing research, the following innovations were made in this study: not only quantifying the harm of urban sprawl to habitat quality, but also identifying the regional differences in three types of socioeconomic factors, and reflecting the importance of policy in urban ecological protection through comparison between different time periods. First, in order to examine the impact of policies on habitat quality in different periods, the HQCI and CI indices were used to quantitatively analyze the specific impact of land transfer on habitat quality, dating back to the time when the Pearl River Delta was first established as a "coastal economic development zone." Then, considering the construction of ecological civilization as the starting point, the GWR analysis results reflect the regions that paid attention to economic development but ignored the protection of ecological resources during the research period. Finally, based on the clustering results of the SOFM neural network, the differences in green development in different regions over the past 30 years were discussed. Consistent with previous studies, the results of this study also show that urban expansion and human activities have a serious negative impact on habitat quality. The difference is that some studies do not consider the spatial and temporal differences of various influencing factors and the impact of ecological-protection-related policies in different regions 24,35. In this study, from the perspective of policy changes, the spatial heterogeneity of social and economic factors is included to provide a reference for the social and economic development process of coastal urban agglomerations and the relationship between urbanization and the ecosystem.

Although this study has effectively supplemented and expanded work on the sustainable development of urban ecology in the Pearl River Delta region, it still has some limitations. First, the assessment of habitat quality is a complex task. Although the InVEST-HQ model has been applied by many scholars to calculate the habitat quality index, it needs to be improved in terms of pertinence and reliability because it is based on land-use type. In the future, multi-source data will need to be considered to reflect habitat quality. Second, because of the limitation of data, and considering that both population and night light are social and economic factors with stable growth in the short term, the population density in 2018 and the night light grid layer in 1990 in this study were obtained by linear fitting of the data of the most recent year, so as to maintain consistency; in the future, original data will be sought to avoid these errors. Finally, the spatial scale effect plays an important role in studies related to geography and ecology. To reflect the natural and social attributes at the same time, this study adopts the watershed as the research unit when analyzing the correlation by GWR and classifying by SOFM, to solve the problem of inconsistent resolution of the original data. We will then consider changing the size of the research unit to explore the impact of urban development on habitat quality at different spatial scales.
Rapid urban expansion and high-intensity human activities have greatly affected habitat quality in the Pearl River Delta. Based on the analysis of the spatiotemporal evolution characteristics of habitat quality, the GWR model was used to explore the impact of urbanization on habitat quality, and the SOFM neural network was used to cluster each river basin into four zones according to green development status. The habitat quality index was calculated based on the InVEST-HQ model, and the urbanization indices included NTL, POP, and LUR. The main results are as follows: (1) Urban expansion was fastest from 2000 to 2010, which coincided with the period of decreasing habitat quality, and the area of urban expansion was mainly concentrated in the center of the Pearl River Delta. Among the types of land transferred to construction land, arable land accounts for the largest area, causing irreversible harm to the development of agriculture. (2) The area of low-grade habitat quality increased by 4911.07 km² during the study period, and the sum of the reduction in areas of grade IV and V habitat quality was about 1500 km². The conversion of grassland to construction land had the most obvious effect on habitat quality per unit area, while the conversion of cultivated land caused the greatest total loss of habitat quality. Considerable degradation of habitat quality has become a matter of urgent concern. (3) There were considerable negative correlations between habitat quality and NTL, POP, and LUR in most areas during the study period. Before 2000, this negative impact worsened, but it gradually improved from 2000 to 2018, which is closely related to the large number of policies related to "ecological civilization construction" since the start of the 21st century. (4) Different cities in the Pearl River Delta differ greatly in the importance they attach to the construction of ecological civilization and green development. The level of green development in Shenzhen, Foshan, and Zhongshan was the highest, while the levels of urbanization and habitat quality in most areas of the other cities were relatively low.
Improving Prone Positioning for Severe Acute Respiratory Distress Syndrome during the COVID-19 Pandemic. An Implementation-Mapping Approach
Rationale: Prone positioning reduces mortality in patients with severe acute respiratory distress syndrome (ARDS), a feature of severe coronavirus disease 2019 (COVID-19). Despite this, most patients with ARDS do not receive this lifesaving therapy. Objectives: To identify determinants of prone-positioning use, to develop specific implementation strategies, and to incorporate strategies into an overarching response to the COVID-19 crisis. Methods: We used an implementation-mapping approach guided by implementation-science frameworks. We conducted semistructured interviews with 30 intensive care unit (ICU) clinicians who staffed 12 ICUs within the Penn Medicine Health System and the University of Michigan Medical Center. We performed thematic analysis using the Consolidated Framework for Implementation Research. We then conducted three focus groups with a task force of ICU leaders to develop an implementation menu, using the Expert Recommendations for Implementing Change framework. The implementation strategies were adapted as part of the Penn Medicine COVID-19 pandemic response. Results: We identified five broad themes of determinants of prone positioning, including knowledge, resources, alternative therapies, team culture, and patient factors, which collectively spanned all five Consolidated Framework for Implementation Research domains. The task force developed five specific implementation strategies, including educational outreach, learning collaborative, clinical protocol, prone-positioning team, and automated alerting, elements of which were rapidly implemented at Penn Medicine. Conclusions: We identified five broad themes of determinants of evidence-based use of prone positioning for severe ARDS and several specific strategies to address these themes. These strategies may be feasible for rapid implementation to increase use of prone positioning for severe ARDS with COVID-19.
An estimated 10-15% of patients admitted to intensive care units (ICUs) around the world suffer from acute respiratory distress syndrome (ARDS) (1-4). Mortality in ARDS is estimated to be as high as 40% (5,6), and those who survive commonly experience long-term cognitive, emotional, and physical impairments (7,8). Despite the large body of literature on interventions to treat ARDS, only three therapies have been proven in multicenter randomized trials to reduce mortality (6,9,10), one of which is prone positioning. In 2013, a multicenter randomized trial of patients with severe ARDS demonstrated that prone positioning early during the course of illness reduced mortality from 33% to 16% (10), and prone positioning for patients with severe ARDS is now included as a strong recommendation in a multisociety international practice guideline (11). Nonetheless, multiple international studies have recently demonstrated that up to 85% of patients with ARDS do not receive this lifesaving therapy (12-14).
The barriers to and facilitators of evidence-based use of prone positioning, and implementation strategies to address these determinants, are unclear. A few studies have described patient-level factors that influence the use of prone positioning, such as severity of hypoxemia, obesity, and hemodynamic instability (1,12). However, these factors are largely unmodifiable, and may not explain the entirety of the gap between evidence and practice. Therefore, our primary objectives were to identify the determinants of prone-positioning use and to develop a menu of strategies to improve evidence-based use of prone positioning, using an implementation-mapping approach.
This work is particularly salient as clinicians face the challenges of responding to coronavirus disease 2019 (COVID-19), which is associated with high rates of severe ARDS. International societies and expert panels recommend prone positioning in patients with COVID-19 and severe ARDS (15,16), and early experience has demonstrated that prone positioning may be a standard of care (17,18).
Methods
In this qualitative study, we used an implementation-mapping approach, an evidence-based approach for developing strategies to improve evidence uptake (19). Implementation mapping involves five discrete, sequential tasks, as summarized in Table 1. For this project, we completed the first four of these tasks. Specifically, we first conducted a needs assessment and identified adopters of prone positioning through informal interviews of local ICU leaders. We then identified barriers and facilitators to implementation of prone positioning through qualitative interviews. Next, we convened a task force of ICU leaders to 1) identify institutional change objectives to improve implementation on the basis of the identified barriers and facilitators and 2) develop a menu of specific implementation strategies. Last, the task force members, in their roles as hospital and ICU leaders during the COVID-19 response, rapidly developed implementation materials and established implementation plans.
Study Setting and Participants
The study included bedside clinicians and leaders of nine ICUs within four hospitals of the Penn Medicine Health System (including two tertiary care academic hospitals and two affiliated communitybased hospitals) and three ICUs of the University of Michigan Medical Center, a tertiary care and regional referral hospital. We selected a range of types of hospitals and ICUs to promote diversity of experience with and perspectives about prone positioning. All tasks described above were conducted at Penn Medicine. Qualitative interviews regarding barriers and facilitators (task 2) additionally included participants from the University of Michigan, as detailed below.
Interviews
We developed a semistructured interview script to elicit perspectives on use of prone positioning for patients with ARDS, including potential barriers and facilitators to implementation (see the online supplement). We used the Consolidated Framework for Implementation Research (CFIR) as a guide (20). The CFIR organizes 37 constructs relevant to implementation of evidence-based processes into 5 broad domains (see Table E1 in the online supplement) and has been used extensively in qualitative research related to implementation practices (21). We invited ICU leaders, bedside nurses, respiratory therapists, clinicians who place orders (including hospitalists, advanced-practice providers, and trainees), and critical care specialists (including fellow and attending physicians) to participate on a voluntary basis. We identified eligible participants through ICU leaders, approached them individually via e-mail, and compensated participants with a $50 gift card. We purposively sampled clinicians of different backgrounds from all study ICUs to facilitate representation of varied viewpoints (22). We first interviewed clinicians at Penn Medicine hospitals and continued interviews until we achieved thematic saturation and represented the key groups, as appropriate for the sampling methods we used. We then invited clinicians at University of Michigan hospitals to assess whether themes were consistent at a geographically and organizationally distinct hospital. We again used purposive sampling to ensure diversity of clinician groups and perspectives and continued interviews until all groups were represented and no new themes were identified.
We conducted three pilot interviews with ICU physicians, followed by further refinement of the interview script. All interviews, including pilot interviews, were conducted by one member of the research team (T.S.) trained in qualitative interviewing. Each interview lasted approximately 30 minutes and was recorded and professionally transcribed, and each transcript was deidentified before analysis. The study was deemed exempt from review by the institutional review board of the University of Pennsylvania.
Focus Groups
After analyzing qualitative interview data, we convened a task force of one ICU director, three critical care nursing leaders, and two respiratory therapists from two hospitals of Penn Medicine, with whom we conducted three 1-hour focus groups led by three members of the research team (T.K., M.B.L.-F., and M.P.K.). We presented the findings of the interviews by broad theme and provided preliminary objectives of an implementation program, which the research team developed on the basis of the CFIR constructs underlying each theme. The task force then refined these objectives through discussion.
Next, to facilitate the development of specific implementation strategies based on these themes and objectives, we provided the task force with a list of general strategies from the Expert Recommendations for Implementing Change (ERIC) framework. In the ERIC project, an expert panel produced a list of 73 implementation strategies by using a modified Delphi process (23). These strategies were subsequently mapped to CFIR constructs by another expert panel, according to the likelihood that a given strategy could address an issue within a specific implementation construct (24). Using the CFIR-ERIC mapping tool, we selected those strategies that were mapped to the relevant CFIR constructs by at least 25% of experts (24). The task force then further developed and refined these general strategies into a list of specific strategies through discussion.
All focus-group conversations were documented by audio recording. Two study staff (J.A.S. and T.T.) additionally took notes during each of the focus-group sessions.
Implementation Planning
After development of the menu of specific implementation strategies, the Critical Care Alliance of Penn Medicine developed materials and plans for rapid implementation in the setting of the COVID-19 pandemic. Several members of the task force are members of the Critical Care Alliance, and in their health-system leadership roles, they guided implementation planning using the specific strategies as a foundation.
Analysis
We performed thematic analysis of qualitative interviews in two stages. We first coded all interview transcripts in NVivo 11 (QSR International) using CFIR as the codebook. Two research coordinators (T.S. and S.S.) coded the first three interviews by consensus and then coded the next three interviews independently and reviewed them together to ensure consensus. Thereafter, all interview transcripts were coded by research coordinators independently, with double-coding of 20% of interviews to ensure consistency. Outstanding coding questions and disagreements were resolved by consensus of four members of the team. Next, we identified broad themes of determinants and mapped them across CFIR domains. Three team members (T.T., J.A.S., and T.K.) developed a list of broad themes, which were then discussed and refined by the entire team. All coding and analysis were supervised by an experienced qualitative researcher (T.K.).
The team debriefed after each focus group to discuss identified interventions. Summaries were shared with participants to confirm agreement on the content.
Determinants of Prone-Positioning Use
The qualitative interviews included 30 participants, as detailed in Table 2. The types of ICUs staffed by participants included both general and specialty ICUs (medical, surgical, cardiovascular, and neurological ICUs). Some participants worked in only one ICU, whereas others worked in more than one ICU.
When asked about the use of prone positioning, 8 respondents (27%) reported that their primary ICU used it frequently, 14 (46%) reported that it was sometimes used, and 8 (27%) reported that it was rarely or never used. Perceived determinants of prone-positioning use mapped to constructs from all five CFIR domains. We identified five broad themes, as summarized in Table E2: knowledge, resources, alternative therapies, team culture, and patient factors. Two domains were relevant across all themes: 1) "intervention characteristics," which refers to characteristics of the evidence-based intervention, such as the quality evidence about the intervention, the relative advantage of the intervention over other treatments, the complexity of the intervention, and how easily it can be adapted locally and 2) "inner setting," which refers to the setting immediately surrounding the intervention (in this case, the ICU environment) and includes constructs such as the structure of the setting, the culture within the setting, the climate for implementing new practices, and readiness for implementation ( Figure E1).
Knowledge about prone positioning-including patient eligibility, its therapeutic value, and the procedure of actually putting a patient into a prone position-was consistently identified as integral to its uptake in the treatment of patients with ARDS. Many respondents were knowledgeable about the evidence that prone positioning is an effective intervention, particularly early in the course of ARDS; however, others perceived that prone positioning was often used as a rescue therapy, or a "last ditch" effort, and expressed uncertainty about the right timing during the course of illness. Respondents also perceived that processes and protocols that describe indications for the intervention and outline staff roles and responsibilities could address lack of knowledge or variability in interpretation of the evidence for prone positioning. Respondents suggested that education and practice using simulations, videos, and photo cards could be useful. Furthermore, availability of ICU staff with knowledge of and experience with prone positioning-commonly nurses and nurse leaders-could facilitate administration of prone positioning to patients.

Availability of appropriate resources was identified consistently as necessary for effective implementation of prone positioning. Prone positioning requires physical labor by a number of staff members, and lack of adequate staffing was commonly identified as a barrier, particularly during nighttime hours. Availability of a dedicated team to provide supplemental staff members when needed was described as a facilitator. Several participants mentioned that the availability of clinical protocols could serve as a resource to delineate roles and procedures. A few participants also mentioned that equipment designed to help turn patients is available, but opinions regarding the need to have any special turning equipment were mixed. Some also described a need for supplies to support the patient once turned, such as eye shields and foam pillows to prevent decubitus ulcers.
The culture of the team was believed to contribute substantially, although in somewhat less-concrete ways, to use of prone positioning. ICU leadership was considered influential; for example, an ICU director or nursing leader with belief in, and experience with, prone positioning could facilitate changing culture among the ICU staff. Attending physicians were commonly considered the leaders in decisions of whether or not to put patients into the prone position; however, team discomfort or inexperience with prone positioning was described as a barrier that could overcome an attending physician's clinical decision. Team dynamics were also considered an important factor. Teams that communicated well and allowed all members to voice their opinions and concerns, that incorporated mentorship, and that integrated education into their work were believed to be more effective in using prone positioning. Those who had prior experience were champions of implementation. Although changing culture while implementing a new intervention was challenging, seeing the intervention successfully used increased uptake and buy-in from clinicians. On the other hand, prior negative experiences or adverse outcomes with prone positioning could be a significant barrier. As clinicians gained additional experience with prone positioning, organizational culture changed and became more supportive of the intervention.
Patient factors such as comorbidities and potential contraindications influenced use of prone positioning. For example, higher severity of hypoxia prompted clinicians to consider administering prone positioning. Commonly mentioned patient factors that served as barriers were obesity and hemodynamic instability. Some providers believed that exposure to a higher volume of eligible patients increased use of prone positioning.
Availability of alternative therapies for ARDS were generally considered to be barriers to prone-positioning use. Although many respondents acknowledged that available evidence suggests that prone positioning is efficacious as an early intervention for ARDS, it was often administered as a last resort, after other therapies had been tried. Many participants had uncertainty about the order in which to administer the different therapies and expressed variability in the practice of different attending physicians. Furthermore, when staff were uncomfortable with prone positioning, they defaulted to interventions that were more familiar and required less effort, even those that had not been proven effective. In some ICUs, extracorporeal membrane oxygenation was often an initial intervention implemented to treat ARDS, in part because of the immediate availability of a proactive extracorporeal membrane oxygenation team and an institutional culture supporting its use.
Development of Implementation Strategies
On the basis of the main points of each broad theme, the ICU leadership task force specified program objectives (Table 3). We mapped the general strategies based on the ERIC framework to the CFIR constructs relevant to each theme (Table E3). Prompted by these main points and frameworks, the task force first refined the program objectives for each theme and then generated a list of specific strategies to improve prone-positioning use. The final output of this project phase was a multifaceted implementation menu of strategies that individually and collectively could address all the perceived determinants and achieve the program objectives (Table 4). For example, an interprofessional educational outreach program could improve knowledge of individual clinicians and ICUs, and it could also promote change in team culture through educating and obtaining buy-in from ICU leadership. Learning collaboratives could facilitate changes in team culture and belief in the value of prone positioning. Written clinical protocols and automated electronic health record-based alert systems could enhance knowledge about patient eligibility for prone positioning and provide prompts to clinicians to consider prone positioning. Hospital-wide prone-positioning teams could support inexperienced ICUs in patient identification, education, and staffing resources.
Development of Implementation Plan and Materials
In response to the COVID-19 pandemic, in anticipation of high rates of severe ARDS, the Penn Medicine Health System rapidly produced implementation plans and materials for all of these strategies ( Table 5). The Critical Care Alliance, composed of a team of interprofessional ICU leaders across all health-system hospitals, which includes members of the task force that developed the implementation strategies, led the implementation planning and efforts. The alliance serves as a learning collaborative, in which ICU leaders meet monthly and share experiences and implementation plans of practices common to multiple ICUs.
Experienced clinical nursing specialists serve as consultants for ICUs without prior experience with prone positioning. Leaders from the medical and surgical ICUs across the health system collaborated to create educational materials for widespread just-in-time training, including 1) an educational video about procedures for placing patients into the prone position, 2) a one-page clinical infographic card summarizing patient eligibility and procedures (see online supplement), and 3) written guidelines for skin care, developed in conjunction with the Wound, Ostomy, and Continence Nursing team. The committee approved and disseminated a clinical protocol that had been developed previously in one medical ICU with historically high adherence to evidence-based prone positioning, thereby leveraging institutional experience and success and promoting leadership buy-in. An existing information technology program, the ARDS Finder (University of Pennsylvania), leverages the electronic health-system record to continuously screen patients for ARDS and display relevant ventilator data on an electronic dashboard. This system was enhanced to identify and alert clinicians when patients with ARDS meet criteria for prone positioning (see Figure E2). This alert also prompts ICU nursing leaders and ICU telemedicine staff, who can then provide validation and expertise regarding patient eligibility and procedures for prone positioning. Finally, four of the six hospitals have created prone-positioning teams, whose members provide consultation regarding eligibility for prone positioning, as well as staffing and expertise to safely implement prone positioning in sites with little prior experience and/or inadequate staffing. The committee has developed a template for the prone-positioning team, describing the staffing, the equipment needed, and the roles and responsibilities, to support the other hospitals as they build local prone-positioning teams.
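As an illustration of what such an automated screen might encode, the sketch below uses thresholds along the lines of the commonly cited PROSEVA-style eligibility criteria; the actual ARDS Finder logic is not described in detail here, so this should be read as a hypothetical rule, not the system's implementation.

# Hypothetical screening rule for prone-positioning eligibility (illustration only;
# thresholds follow commonly cited PROSEVA-style criteria, not the ARDS Finder source).
def flag_for_prone_positioning(pao2_fio2: float, fio2: float, peep: float,
                               intubated: bool) -> bool:
    """Return True if a ventilated patient meets a severe-ARDS prone screen."""
    return intubated and pao2_fio2 < 150 and fio2 >= 0.6 and peep >= 5

# Example: PaO2/FiO2 of 110 mmHg on FiO2 0.8 and PEEP 10 cmH2O triggers an alert.
print(flag_for_prone_positioning(110, 0.8, 10, intubated=True))  # True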
Discussion
Using qualitative research methods and implementation-research frameworks, this study identified five broad themes of determinants of evidence-based use of prone positioning for patients with ARDS, including knowledge, resources, team culture, patient factors, and availability of alternative therapies, thus adding to an existing literature limited to an understanding of patient factors as determinants (1,12). Determinants of implementation most consistently mapped to constructs within the two domains of intervention characteristics and inner setting. Therefore, strategies to improve use of these complex, team-based interventions for critically ill patients may be similarly complex and multifaceted to address multiple domains of implementation. Indeed, several of the specific strategies developed by the task force address several determinants. Knowledgeable and experienced clinicians can serve as educators and champions to improve awareness and change culture in their local environments. Educational programs, including simulation training and informational brochures, can also help inexperienced clinicians to acquire knowledge and comfort with a complex and unfamiliar intervention. Establishing clinical protocols or guidelines can serve a similar educational purpose, and they can also support the necessary team coordination. Finally, a culture of collaboration and teamwork, in which all clinician groups believe their concerns and suggestions are valued, can help to promote this team-based intervention. These findings are similar to studies of barriers and facilitators of other complex, team-based practices in critical care, such as low tidal volume ventilation for ARDS (25)(26)(27)(28)(29) and implementation of evidence-based mechanicalventilation bundles (30). Lack of or erroneous knowledge about the interventions and team culture have been identified as important determinants, and clinical protocols can be important facilitators (31). Although this study was initiated before the COVID-19 pandemic, the lessons learned are directly relevant to prone-positioning implementation in response to the current emergency. In fact, physicians in Wuhan found that prone positioning of patients with COVID-19 was widely used for critically ill patients (32) and appeared to improve hypoxia, protect organ function (33), and increase lung recruitability (34). We anticipate prone positioning will and should be implemented more broadly throughout the course of the pandemic. Our health system has taken a multifaceted approach to rapidly adapt and implement strategies to facilitate increased use of prone positioning in inexperienced and novel ICUs, selecting those developed by the task force that were perceived to be most readily implemented (e.g., the existing clinical protocol and an automated alert from one ICU were refined and disseminated broadly) and that would be most effective (e.g., the development of hospital prone-positioning teams that could bring knowledge, experience, and resources to multiple ICUs).
Strengths and Limitations
Our study has several strengths. To our knowledge, it is the first study to identify ICU- and clinician-level determinants of evidence-based use of prone positioning, expanding the existing literature on patient-level determinants. We included broad groups of clinicians, and we conducted interviews in hospitals of various sizes and organizational models and focus groups with a diverse set of clinicians. We used implementation frameworks to code the data from interviews and to develop specific strategies to ensure that we considered evidence-based domains of implementation. Our study also had a few notable limitations. First, all the interviews were conducted among clinicians of two large academic hospital systems; therefore, they may not be representative of all possible perspectives. We did, however, include several different hospitals and ICU types to elicit perspectives from clinicians who care for a broad array of patients under different organizational models and have variable exposure to prone positioning, with some clinicians practicing in settings where prone positioning was not performed at all. Second, we also used a convenience sample of those who agreed to participate, so the sample may be subject to self-selection bias; however, we heard a variety of perspectives from people who represent varied positions, and we believe we reached thematic saturation on this topic. Third, our needs assessment was performed before the COVID-19 pandemic began, so we could not capture factors that may be particularly relevant to immediate practice, such as concerns over personal protective equipment conservation with a staff-intensive intervention. However, the lessons learned were immediately relevant to our local practice during the pandemic and may remain relevant beyond this crisis. Lastly, the final task of implementation mapping involves evaluation of implementation outcomes, which we have not yet performed; therefore, we cannot report on the success of these strategies. Importantly, we did not attempt to estimate the costs or resources required to develop and implement the strategies as described. This implementation outcome may be of particular importance in the context of a pandemic, when both hospital finances and the time ICU and hospital leaders have to dedicate to implementation efforts may be strained. Future work should assess how these strategies impact implementation outcomes in the different contexts in which they will be applied.
Conclusions
In summary, our study identified several broad themes of barriers to and facilitators of evidence-based implementation of prone positioning for severe ARDS, a lifesaving, proven-effective treatment that is administered to a minority of eligible patients. We identified specific methods for implementation in the areas of infrastructure, personnel, guidelines/ protocols, and leadership buy-in. We have developed implementation plans for some of these strategies in our own institution and believe they can inform the increased uptake of prone positioning in response to the COVID-19 crisis. n Author disclosures are available with the text of this article at www.atsjournals.org.
Low-frequency acoustics in solid $^4$He at low temperature
The elastic properties of hcp $^4$He samples have been investigated using low frequency (20 Hz to 20 kHz) high sensitivity sound transducers. In agreement with the findings of other workers, most samples studied grew very significantly stiffer at low temperature; Poisson's ratio $\nu$ was observed to increase from 0.28 below 20 mK to $\sim 0.35$ at 0.7 K. The span of the variation of $\nu$ varies from sample to sample according to their thermal and mechanical history. Crystals carefully grown at the melting curve show a different behavior, the change in $\nu$ taking place at lower $T$ and being more abrupt.
The prospect that solid 4He may exhibit a new state of matter, the supersolid state, with both mechanical rigidity and superflow behavior, is arousing strong renewed interest in its mechanical properties below 0.2 Kelvin 1. The supersolidity features are seen primarily in torsional oscillator (TO) measurements of the moment of inertia, which decreases below ∼ 0.2 K or so in a way that cannot be accounted for in the framework of classical mechanics, the so-called non-classical rotational inertia (NCRI) effect 2.
In this work, we are concerned primarily with the elastic properties of hcp 4He below 0.8 K. The velocities and attenuation of sound in solid 4He have long been known to display anomalies at low temperature. Early studies 3,4,5,6,7,8 have attributed these anomalies to the presence of dislocation lines, even in good quality crystals, with typical line densities of Λ ∼ 10^6 cm^-2 and internodal lengths of the dislocation network L_N ∼ 5 × 10^-4 cm. Already in these early studies, it was noted that sample preparation greatly influences these properties 5 and could lead to anomalous values of Λ. Nonetheless, these anomalies have been interpreted in the framework of the usual theory of elasticity 7,8; they find a natural explanation in terms of dislocation motion in the framework of the Granato-Lücke theory, as discussed by many authors over the years (see, e.g., 9,10,11 and references therein). The more recent work was directed toward the search for clear-cut quantum effects and, possibly, a direct manifestation of supersolidity 12, which is now thought to be found in TO measurements 13. The anomalies in the shear compressibility are possibly related to those in TO experiments, a possible linkage that is the subject of an ongoing debate 9,14.
Here we probe directly the low frequency elastic properties of solid 4He at unprecedentedly low levels of strain in the solid (ε_zz < 10^-10) down to below 20 mK, in an attempt to test the stress-strain relation for extremely small perturbations and at very low temperature. The gap between the flexible diaphragm and the backplate, in which the superconducting sensing coil is embedded, is quite narrow (∼ 300 µm). If a pressure P_el is applied by the diaphragm (here an electrostatic force) to the solid in the gap, the response is expressed by the following stress-strain relation 15:

ε_zz = P_el / M ,  with  M = 3K(1 − ν)/(1 + ν) ,   (1)

where K is the bulk modulus, ν Poisson's ratio, and M the uniaxial compression modulus 19, with only a weak dependence on ν around ν = 0.3 18. A direct numerical simulation using the exact cavity geometry yields the same figure to better than 1%.
Thus, the acoustic cell can be operated in three different ways: (1) the variation of uniaxial compression in the displacement sensor gap; (2) that of the transverse velocity (that is, of the shear modulus) in the acoustic cavity through the resonance frequency of the fundamental mode; (3) a direct, albeit less precise, check on the variation of ν in the cavity by the diaphragm response to a quasi-static (70 Hz) actuation of the piezo. Typical results for measurements (1) and (2) are shown in Figs. 2 and 3 for several samples. These results depend, apart from the cavity geometry, on two quantities, the bulk modulus divided by the density K/ρ and Poisson's ratio ν. Since no pressure variation is recorded when the sample is cooled at constant volume, the bulk modulus K remains constant. Its value is known from the phase diagram and from sound velocity measurements 20.
The solid $^4$He samples have been obtained by the two conventional methods. With the blocked capillary (BC) method, helium is solidified by filling the cell to a given density (or a given pressure, usually 56 bar here), cooled through complete freezing, and then further down in temperature, the quantity of helium in the cell remaining constant because the solid formed in the filling capillary makes an immovable plug at low temperature. This method is used to make samples with densities higher than the density close to melting, but subsequent cooling induces thermal stresses that usually damage the crystal. To obtain crystals with lower internal stress, the solid can be grown at low temperature by feeding liquid into the cell at constant temperature (CT), producing samples with a density close to that at melting. When the drive frequency is scanned, using the piezo as a transmitter and the diaphragm as a microphone, the fundamental resonance frequency yields, in a first approximation as discussed above, the change in the shear modulus $G_{eff} = 3K(1 - 2\nu)/[2(1 + \nu)] = \rho c_\perp^2$ with temperature. An example of such a scan is given in Fig. 3; the frequency of the resonance peak decreases by 20 % from 17 to 717 mK, which implies that the effective shear modulus $G_{eff}$ of this particular sample is divided by 1.58 upon warming.
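As a quick consistency check on the numbers quoted in this paragraph, one can assume that the fundamental resonance frequency scales as the transverse sound velocity, i.e. as $\sqrt{G_{eff}}$, with no other temperature dependence entering at this level; a minimal sketch with the values taken from the text:

```python
# If f ~ sqrt(G_eff), a 20% frequency drop on warming implies
# G_eff(cold)/G_eff(warm) = 1/0.8^2.
f_ratio = 0.80                    # f(717 mK) / f(17 mK), from the ~20% drop quoted above
g_ratio = 1.0 / f_ratio**2        # G_eff(cold) / G_eff(warm)
print(f"G_eff(cold)/G_eff(warm) = {g_ratio:.2f}")   # ~1.56, consistent with the quoted 1.58
```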
The resonance frequencies in different samples are shown as a function of temperature; we checked whether these results depended on the applied strain, which they did not.
In most samples, two different regimes appear below and above the temperature T* around which the frequency shifts at the fastest rate (∼ 200 mK in Fig. 3), which is also the temperature at which the resonance amplitude goes through a minimum. Contrary to other work 9 , we do not observe a narrowing of the resonance line, but a splitting into several resonances, or wiggles, below T*. This splitting is more pronounced at the lowest temperature, where the crystal is stiffest and intrinsic damping lowest. On occasion, distinct, weak resonance peaks also show up several hundreds of Hz above or below the main resonance line.
Both features, which can be seen in Fig. 3, appear in the linear response regime. Their origin is unknown but they reveal that the resonance line is inhomogeneously broadened at T ≳ T* and partially resolved below, presumably because damping decreases 21 . The fundamental mode depends on some coarse-grained average of the density and elastic properties of the medium in the acoustic cavity and does not probe inhomogeneities or localized defects. We believe that the splitting and the satellite lines arise from the existence of additional, macroscopic, degrees of freedom within the solid sample as, for instance, a condensate fraction, or a dislocation network collective motion.
At higher drive levels, non-linear effects appear, notably frequency pulling that tends to push the resonance frequency back to its high temperature value. The drive amplitude at which this effect sets in depends on temperature; it goes through a minimum around 100 to 200 mK. The value of the strain is $10^{-9}$, an order of magnitude lower than the threshold reported in Ref. [11]. It should be noted that the corresponding displacement at its maximum, at the center of the cavity, is exceedingly small, less than one tenth of the width of the Peierls valley.
From the low T limit of the resonance frequency in Fig. 3, ω/2π = 17.5 kHz, and α = 3.07, we find, using for the value of the bulk compression modulus at a molar volume of 20.65 cm³, K = 275 bar 22 , a value for Poisson's ratio of 0.28 at 0.02 K, not much different from the value obtained by high frequency sound measurements 19 , climbing back to 0.36 at 0.7 K with the appropriate value of α (3.20). A similar variation is obtained from the change in uniaxial compression modulus in Fig. 2.
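The values of ν quoted here can be reproduced approximately by inverting $G_{eff} = 3K(1-2\nu)/[2(1+\nu)]$ with K = 275 bar; the sketch below starts from ν = 0.28 at low temperature and divides $G_{eff}$ by the observed factor of 1.58, neglecting the small change of the geometric factor α with ν, so it is only a rough cross-check rather than the authors' exact procedure.

```python
# Cross-check of Poisson's ratio from the softening of the effective shear modulus.
def geff(nu, K):
    """Effective shear modulus G_eff = 3K(1-2nu)/(2(1+nu))."""
    return 3.0 * K * (1.0 - 2.0 * nu) / (2.0 * (1.0 + nu))

def nu_from_geff(G, K):
    """Invert G = 3K(1-2nu)/(2(1+nu))  =>  nu = (3K - 2G)/(6K + 2G)."""
    return (3.0 * K - 2.0 * G) / (6.0 * K + 2.0 * G)

K = 275.0                      # bulk modulus in bar (from the text)
G_cold = geff(0.28, K)         # stiff, low-temperature value
G_warm = G_cold / 1.58         # observed softening factor upon warming
print(round(nu_from_geff(G_warm, K), 2))   # ~0.35, close to the quoted 0.35-0.36 at 0.7 K
```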
Most of the features reported here find a natural explanation in terms of dislocation motion in the framework of the Granato-Lücke theory as used by many authors over the years (see, e.g. 3,4,5,6,7,9,11 and references therein) and may also be related to the anomalies in the TO experiments as discussed in Refs. 9,14 . Other features do not. The behavior of the CT samples, showing a more abrupt (or else, very little) change in $G_{eff}$ at lower T, requires a similarly sudden change (or lack thereof) in dislocation pinning. Such behaviors have also been observed in TO experiments on the NCRI fraction in good quality samples 23 . Sometimes, the temperature dependence of $G_{eff}$ shows a kink, as can be seen in Fig. 2 for the fast-cooled BC sample. This is not really expected in the dislocation line model, which relies on the condensation of $^3$He impurities on dislocation cores and should lead to a more gradual variation, like for the curve obtained after warming the sample to 1.05 K.
Let us nonetheless interpret the drastic change in elastic properties observed here in terms of the strain induced by dislocations, following Paalanen et al. 7 and using their expression, with R depending on the orientation between the local displacement vector and the c-axis of the crystal. Assuming equal weight between longitudinal and transverse motions, R ≈ 1/4, and using our result $G/G_{eff}$ = 1.58 and ν = 0.3, we find $\Lambda L^2 \approx 4.2$. This value greatly exceeds that found experimentally by Iwasa et al. 5
Precarious employment in occupational health – an OMEGA-NET working group position paper
Despite the growing use of the term precarious employment, there is no consensus on a theoretical framework or definition. This hampers the study of the subject, especially in public and occupational health. We propose a theoretical framework for understanding precarious employment as a multidimensional construct where unfavourable features of employment quality accumulate in the same job. Future research should apply an intersectional and multi-level approach to analysis, with a focus on improving exposure assessment and investigating mechanisms. Objectives The aims of this position paper are to (i) summarize research on precarious employment (PE) in the context of occupational health; (ii) develop a theoretical framework that distinguishes PE from related concepts and delineates important contextual factors; and (iii) identify key methodological challenges and directions for future research on PE and health. Results Research to date has often been hampered by inadequate study designs and biased assessments of exposure and outcomes. PE is highly dependent on contextual factors and cross-country comparison has proven very difficult. We also point to the uneven social distribution of PE, ie, higher prevalence among women, immigrants, the young and the low educated. We propose a theoretical framework for understanding precarious employment as a multidimensional construct. Conclusions A generally accepted multidimensional definition of PE should be the highest priority. Future studies would benefit from improved exposure assessment, temporal resolution, and accounting for confounders, as well as testing possible mechanisms, eg, by adopting multi-level and intersectional analytical approaches in order to understand the complexity of PE and its relation to health.
Precarious employment in occupational health
There is a growing recognition of the myriad ways in which employment and work contribute to the health of populations. In addition to traditional occupational hazards, such as dusts, chemicals, injury risks and psychosocial stressors, there is an increasing appreciation of how aspects of the employment relationship, referring to the terms and conditions by which an organization (business, public, or non-profit entity) pays someone to work for them, can be a social determinant of health. Various lines of evidence suggest that the overall quality of employment relationships in developed economies has significantly degraded in the last decades (1,2). Moreover, because employment quality is commonly associated with sociodemographic profiles, these trends have led to mounting concern about the role of employment in contributing to health disparities across working populations (3). These concerns have been incorporated into a broad conception of an array of employment conditions through the term 'precarious employment' (PE). However, despite the growing use of the term, and the multiple efforts to define this concept, no clear consensus has emerged. Thus, research of the health implications of PE has been hampered.
The EU-funded Network on the Coordination and Harmonisation of European Occupational Cohorts, OMEGA-NET (omeganetcohorts.eu), recognizes the importance of the issue and supports the development of epidemiological research on PE. This position paper is the result of a working group consisting of researchers from the EU, Turkey and the USA, who have discussed the issue over the course of six months from October 2018 to April 2019. It is hoped that through refinement of the PE concept, a clearer consensus on its operationalization can be adopted for its integration into research on the health of working populations of Europe and beyond. Further, by addressing issues of PE in the context of these large cohort studies, a research agenda can be developed to focus research on questions with the greatest potential for improving the health and well-being of a large number of workers.
Specifically, the aims of this paper are to (i) summarize research on PE in the context of occupational health; (ii) develop a theoretical framework that distinguishes PE from related concepts and delineates important contextual factors; and (iii) identify key methodological challenges and directions for future research on PE and health.
Lack of a common definition of precarious employment
Although PE is thriving as a research topic in fields such as economics and sociology, there are still barriers preventing it from becoming an established part of occupational and public health research, some of which have recently been pointed out in an editorial in this journal (4).
There is substantial confusion when it comes to the concept of PE, as many related terms are used interchangeably: 'the precariat', 'precarious work' or simply 'precarity' or 'precariousness'. In the EU, 'atypical' or 'nonstandard' forms of employment have been used widely instead of, or as synonyms for, PE, whereas in the US the term 'contingent work' is more common (5). Another concept used in the occupational health literature is 'employment quality', which refers to the employment conditions and employment relations together (6). Employment quality can be conceived as a continuum, where PE is at the disadvantaged end due to an accumulation of unfavorable facets of employment quality.
Fundamentally, there is no universally accepted definition of PE that can transcend sociopolitical and historical context. Starting with the foundational work of Rodgers & Rodgers in the 1980s (7), four main dimensions of PE were identified: (i) employment instability, (ii) employment insecurity (limited control, collective or individual, over working conditions, wages and place of work), (iii) erosion of workers' protection and (iv) low material rewards. Building on this work, several researchers and institutions have adopted and developed their own definitions and operationalizations and studied these within a public health context (7)(8)(9)(10)(11)(12). However, none of these have gained enough traction from researchers and practitioners to become the "standard definition". Therefore, PE has not been integrated into routine surveillance instruments, such as labor force or working conditions surveys, making longitudinal, population-based studies on the topic rare and international comparison infeasible. From a public health perspective, compared to research on psychosocial work environment launched by Karasek in 1979 (13), research on PE has achieved far less attention as an "occupational health threat" from practitioners and policy-makers. In public health research, PE has often been reduced to 'single dimensions' (of employment quality), such as temporary or part-time employment. Too often, it has been used synonymously for self-perceived job insecurity, a psychological (cognitive and/or affective) phenomenon (14), which we consider a consequence rather than an objective measure of PE.
Previous operationalization and measurement
To date, only two validated questionnaires have been developed for the sole purpose of measuring PE in a public health context. Researchers affiliated with the GREDS-EMCONET (Research Group on Health Inequalities - Employment Conditions Network) in Barcelona developed the Employment Precariousness Scale (EPRES), which includes six dimensions: "temporariness" (contract duration), "disempowerment" (level of negotiation of employment conditions), "vulnerability" (defenselessness to authoritarian treatment), "wages" (low or insufficient; possible economic deprivation), "rights" (such as paid vacations, parental leave, sick-leave benefits and pensions), and "exercise rights" (powerlessness, in practice, not being able to exercise the workplace rights listed previously without obstacles) (11). So far, the EPRES scale has only been employed in studies of working populations in Spain (15,16), Chile (17), and Sweden (18). The only other purpose-specific survey instrument to measure PE is from Canada. In the longitudinal survey Poverty and Employment Precarity in Southern Ontario (PEPSO) 2011-2014, the Employment Precarious Index was created based on ten questions covering: income level, income security, employment security, schedule predictability, contract type, employment-related benefits, fear of raising concerns at the workplace (lack of rights/vulnerability), and receiving salary in cash (risk factor for undeclared salary in the Canadian context) (19).
Due to the challenges and costs of creating and validating purpose-specific surveys, many studies have attempted to exploit existing data to characterize PE using proxy indicators within labor, economic, health, or social surveys. The list of surveys used to study PE (or related constructs) and health is long; some notable examples include the European Working Conditions Survey (6, 20-23), European Labour Force Survey (24), Gender and Generations Study in Belgium (25), Catalan Working Conditions Survey in Catalonia (16), the US General Social Survey (26), and Canadian Survey of Labor and Income Dynamics (27). All but the last of these are cross-sectional studies, limiting the scientific value of resulting analyses to hypothesis generation and theory development. The few extant examples of longitudinal analyses provide more support for the causal relationship -for example, by controlling for baseline health or examining employment trajectories (27, 28) -but have limited generalizability and comparability to other studies using other sets of questions (29). Although studies employing secondary data analysis are inherently limited, the use of proxies facilitates the development of large-scale and cross-national evidence using existing data sources (20).
Records from government agencies, hospitals, large employers, insurance firms or other organizations can contain employment-related variables, which could be used to operationalize PE. A few studies have used register data to study employees who have a large number of contracts (30), frequent job changes (31), or fixed-term contracts (32)(33)(34). Ongoing work to operationalize a multi-dimensional construct of PE in routine register data is under way in Sweden and Denmark, and attempts have also been made in Belgium.
In addition to choice of indicators, several approaches to operationalizing a multidimensional construct of PE are present in the health literature. One approach is to include multiple indicators of PE within multivariable regression analysis (27). This approach thus examines associations between individual indicators and health, while controlling for all others. A second, and the most common approach is to create a composite or summed scale variable, measuring a worker's relative position along a continuum of low to high precariousness (35,36). This approach can be used to examine whether health is associated with an accumulation of poor employment conditions. A third approach is to construct a typology of employment arrangements, conceptualizing jobs as packages of employment features and thus allowing for examination whether specific patterns of exposure are associated with health. Studies using a typological approach have most commonly used latent class analysis to model types of employment arrangements (6,25,26). We do not recommend a specific approach, but encourage researchers to make careful considerations when designing a study.
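To make the second (composite scale) approach concrete, the sketch below sums binary indicators of unfavourable employment features into a single precariousness score; the indicator names and the simple unweighted sum are illustrative assumptions, not items or scoring rules from EPRES, the PEPSO index or any specific survey.

```python
from dataclasses import dataclass

@dataclass
class EmploymentRecord:
    # Hypothetical binary indicators of unfavourable employment-quality features
    temporary_contract: bool            # fixed-term or otherwise time-limited contract
    low_wage: bool                      # pay below a chosen threshold
    no_collective_representation: bool
    unpredictable_schedule: bool
    no_sickness_or_pension_benefits: bool

def precariousness_score(rec: EmploymentRecord) -> int:
    """Composite-scale approach: count how many unfavourable features accumulate in one job."""
    return sum([
        rec.temporary_contract,
        rec.low_wage,
        rec.no_collective_representation,
        rec.unpredictable_schedule,
        rec.no_sickness_or_pension_benefits,
    ])

# Example: three unfavourable features accumulate in the same job -> score 3 of 5.
print(precariousness_score(EmploymentRecord(True, True, False, True, False)))
```

In an actual analysis, such a score would be built from validated items and related to health outcomes in regression models, or replaced by latent class analysis when the typological approach is preferred.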
Precarious employment and health
Despite the limited agreement on definition and operationalization of PE, a growing number of studies have focused on the health effects of PE during the last decades. Recent systematic reviews show that multidimensional indicators as well as various separate dimensions of PE may be linked to an array of health issues including mental and physical ill-health (3,29) and occupational injuries (37), as well as health-related behaviors such as higher levels of smoking (38) and lower access to healthcare (39). There are also studies showing associations between PE and higher risk of childlessness/postponed parenthood (40) and risk of disability pension (41). One study investigated relations with satisfaction with working conditions (6).
The mechanisms linking PE and health are not yet fully understood (20,42,43). Three main pathways have been suggested: (i) working conditions with harmful health consequences are more frequently experienced by workers with PE; (ii) poor employment conditions associated with PE may lead to adverse health outcomes by limiting workers' control over their professional and personal lives; and (iii) PE may produce incomes below the subsistence level, which may consequently affect various social determinants of health (eg, housing quality, adverse lifestyle etc.) (20).
Investigating the relation between PE and health is complicated because of bidirectional or reverse causation and health selection effects (44). In a recent review of longitudinal studies on PE and mental health, many studies had serious limitations in design, including measurement of exposure and outcomes at the same time-point and lack of appropriate baseline adjustments (29). Because employees may change employment status throughout life - and precarious employees even more so - using only one time-point for exposure measurement may result in misclassification of exposure, a problem which has only been handled in very few studies (28). Further, many survey-based studies suffer from the use of subjective measures of both exposure and outcomes, adding to the risk of common method bias (occurring, eg, when both exposures and outcomes are self-reported).
Applying a work-life trajectory approach would address several of the limitations of previous studies and has the potential to create a bridge to sociology, ethnography, and economics, where the process of precarisation is a major focal point of interest.
Although several studies have applied multidimensional constructs, most studies with stated aims to investigate the association between PE and health depend on single aspects of employment arrangements, such as non-standard or temporary employment contracts. Recent reviews have found no overall clear direction of associations between temporary contracts and mental health (29,45) or occupational injuries (37). Yet, longitudinal studies applying various multidimensional exposure constructs have found stronger mental health effects of PE than those seen when studying parts of the phenomenon as single-item variables (29). This adds to the already strong case for applying a multi-dimensional approach with objective measures (46).
Labor market context and welfare regimes
Due to the lack of a common definition, relevant comparisons between countries and continents are extremely difficult. Based on multidimensional operationalizations in surveys in the EU, the prevalence of PE has been found to be higher in Southern and Eastern European countries and lower in the Nordic countries, although the detailed picture is more nuanced (47,48).
In Anglo-Saxon or "liberal" welfare regimes such as the USA, the term "contingent work" is commonly used instead of PE. However, very few workers fall within this category (49), highlighting the fact that employers in these countries have little need or incentive to create explicitly temporary jobs, since open-ended contracts can easily be terminated. On the contrary, temporary employment is more common in East-Asian welfare regimes such as Korea and Japan (50), which are generally characterized by low levels of governmental intervention and investment in social welfare, less developed public service provision, and a strong reliance on family and the voluntary sector in welfare provision (51). A strong welfare system can alleviate some of the effects of PE. However, in some countries with strong welfare systems, PE might increase the risk of not qualifying for social security schemes, as these were built to cater for workers in a "standard employment relation". The lack of social (and OHS) protection for precarious employees could further increase the differences in health and well-being. Strong welfare regimes have also developed in countries with strong unions and labor laws, which further confounds and complicates the study of these issues. Others have called for a systems approach (42), stating this is necessary to understand the interaction between social and institutional regulatory protection and PE.
Axes of inequality
Exploring the intersecting axes of inequality among those in PE is crucial to understanding both the social distribution of PE and its differential effects (52). Several studies have found a higher prevalence of PE among women, young workers, manual workers, and immigrants (22,53). The more of these inequality characteristics that accumulate in the same person, the higher the prevalence of employment precariousness (36). One publication from Chile compared the association between psychological distress and PE and found it to be significantly higher in women (54).
Research gaps persist concerning whether these inequalities in employment relationships contribute to gender differences in adverse health outcomes.
Standing (8) underlines how the migrant population covers a large share of the world's precariat. Studies have further shown how recently arrived migrants are more likely to be engaged in temporary and agency work and to have insecure and poor working conditions (55)(56)(57). Recently arrived migrants often have limited access to legal expertise, collective bargaining, and union representation, and consequently end up accepting the most precarious labor contracts, sometimes also informal employment (57)(58)(59).
Research agenda
Theoretical framework for a common definition. Despite a rapidly growing empirical and theoretical literature, a generally accepted multidimensional definition of PE is lacking - an issue of the highest priority. A critical first step to advancing research on PE and health is conceptual clarity. We propose a theoretical framework for understanding PE as a multidimensional construct where unfavorable features of employment quality accumulate in the same job (figure 1). Thus, conceptualization and measurement of PE for occupational health research occur at the level of the employment relationship. Examples of important employment conditions that constitute this relationship are level of pay and other non-wage benefits, workplace rights and representation, and length or type of contract. We do not aim to list all possible aspects of employment relations that could be included; rather, we encourage an open discussion on that matter.
The types of employment found in a labor market are influenced by global economic, technological, social, and political trends, such as globalization processes and weakening labor representation. These macro-level trends and factors are upstream and antecedent of PE, for example, having contributed to a general increase in the prevalence of sub-contracting, outsourcing, consulting, and newer forms of gig and platform work. Macro-level trends also interact with international and national labor rights legislation, regulations, and collective agreements.
From our theoretical framework, it follows that hazardous, boring or dissatisfying work should be seen as possible consequences of PE, rather than characteristics of precarious work itself. The same holds for psychological (cognitive and/or affective) perceptions of job insecurity, social precarisation (poverty, "life" insecurity, etc.) and adverse health outcomes, which we believe are conceptually on another level, downstream from the PE. Furthermore, the component of contractual instability, which is more related to job insecurity, could be addressed by measuring type of contract, objective threats to the continuity of employment.
We further highlight the importance of policy and social contexts, which may be key modifying (or moderating) factors that influence the nature of PE in specific contexts, as well as the PE-health relationship but that are not included in measurement of the construct. In other words, the nature and consequences of a given employment relationship may differ depending on contextual factors like regulatory protections or availability of social insurance, as well as across different sociodemographic groups and workers' social context.
Without a common definition of PE to guide operationalization of the concept, cross-study and crosscountry comparisons and meta-analyses will continue to be elusive.
Future research on PE and health would benefit from (i) an open and interdisciplinary process to reach consensus on a definition of PE and/or employment quality; (ii) guidelines for standardized reporting of data in order to increase the comparability between studies; and (iii) inclusion of a core set of questions into different panel surveys such as the European Working Conditions and Labour Force surveys, etc.
Intersectional and multi-level analysis
Using our proposed theoretical framework, we aim to clarify the various relevant levels of analyses, beginning with the level of the employment relationship. Moving upstream, analyses should continue to clarify the antecedents of different employment forms -both across jurisdictions and over time -to inform potential interventions and policy aimed at reducing the prevalence of PE. However, most research in occupational and public health in this area will be oriented downstream of the employment relationship. For instance, further analyses are needed to clarify mechanistic pathways leading to ill health, such as characteristics of the work environment, adequacy of job-related material rewards, and direct psychological impacts from PE relations. Further, it is important to account for policy and social contexts that are likely to modify/moderate these pathways. In particular, theory and evidence suggest that individual experiences of precariousness are heavily influenced by these contexts (52,60). Ultimately, the goal of occupational health epidemiology is to identify job-related determinants of worker health; thus, distinguishing between the employment relationship and other factors is important to guide workplace and regulatory policy interventions. However, because of the complex and embedded nature of PE within multiple layers of context, we argue that a deeper understanding of the role of PE in producing health will occur when we disentangle the relationships across all of these levels as suggested in previous work (61). This approach differs from other research focused on the level of precarious workers or a precariat social class.
Better longitudinal studies for health
Research on PE and health lacks high-quality longitudinal studies, ie, studies with sound designs, objective exposure and outcome measurement, and standardized reporting of results. Thus, future research on PE and health would benefit from:

1. Improved exposure assessment, temporal resolution, and accounting for confounding:
• More longitudinal studies on PE (2, 4, 6) and health.
• Use of a combination of data sources to minimize reporting bias.
• Better resolution of timing of the exposure, eg, by examining employment trajectories. Studying employment trajectories can provide knowledge on how effects of PE accumulate across time, the transient or chronic nature of possible effects, and if these effects are modified by other factors (4).
• Better use of register data, providing information that is both objective and repeatedly measured, highly useful for operationalization.
• Careful considerations when selecting the sources of data on precarious employees. Precarious employees may be on hourly contracts, which can affect the registration of the outcome, eg, for occupational injuries, when the outcome is registered as days of absence due to injury (6).
2. Mechanisms and mediator studies:
• Detailed study on the mechanisms/pathways relating precariousness to (specific) health and well-being outcomes.
• Clinical studies with biological sampling should be seriously considered.

3. Studies on other outcomes, such as:
• Cardiovascular and respiratory diseases.
• Associations with biomarkers whose relationship with stress-related diseases has been demonstrated.
• There are very few studies on the relation/coexistence of PE and a hazardous work environment, a pathway that should be explored.
Concluding remarks
A commonly accepted multidimensional definition of PE should be one of the highest priorities in the occupational safety and health field. Adopting a multi-level and intersectional analytical approach in future studies is key to understanding the complex processes of PE and their relation to health.
Ethics statement
This position paper does not require ethical approval
Conflicts of interest
The authors declare no conflicts of interest.
Study the Corrosion and Corrosion Protection of Brass Sculpture by Atmospheric Pollutants in Winter Season
Brass is an important alloy used in the construction of sculptures. It has been noticed that brass sculptures corrode due to interaction with pollutants. The pollutants develop chemical and electrochemical reactions on the surface of the base material. The concentrations of corrosive pollutants increase in the winter season, when air quality becomes very poor. Different forms of corrosion are observed in sculptures, such as galvanic, pitting, stress and crevice corrosion. The major components of the pollutants are oxides of carbon, oxides of nitrogen, oxides of sulphur, ammonia, ozone and particulates. Among these pollutants, oxides of sulphur and ammonia are the major corroders of brass. Ammonia dissolves in moist air to form ammonium hydroxide. It reacts chemically with the brass metal to form complex compounds like [Zn(NH3)4](OH)2, [Zn(NH3)4]SO4, [Zn(NH3)]CO3, [Cu(NH3)4](OH)2, [Cu(NH3)4]SO4 and [Cu(NH3)]CO3. Oxides of sulphur react with moist air to form sulphurous and sulphuric acids. These interact with brass to develop corrosion cells; zinc metal is oxidized into Zn2+ ions, which react with humidity and carbon dioxide to yield Zn(OH)2.ZnCO3.2H2O. Copper is converted into Cu2+, which reacts with moist air and carbon dioxide to produce Cu(OH)2.Cu(CO3)2, and these complex compounds are detached from the surface of the brass metal by rain water. These pollutants change the physical, chemical and mechanical properties of brass and also tarnish its surface appearance. Brass sculpture is affected by uniform corrosion. This type of corrosion can be controlled by nanocoating and electrospray techniques. For this work, (6Z)-5,8-dihydrazono-5,8-dibenzo[a,c][8]annulene and TiO2 are used as the nanocoating and electrospray materials. The corrosion rate of the material was determined by gravimetric and potentiostat techniques. The nanocoating and electrospray compounds form a composite layer on the surface of the base metal. The formation of the composite layer is analyzed through thermal parameters like activation energy, heat of adsorption, free energy, enthalpy and entropy. These thermal parameters were calculated by the Arrhenius, Langmuir isotherm and transition state equations. The results for the thermal parameters indicate that both materials adhere to the sculpture through chemical bonding. The surface coverage area and coating efficiency indicate that the nanocoating and electrospray produce a protective barrier in ammonia and sulphur dioxide atmospheres.
Introduction
The sculpture of brass comes into contact with contaminated air and thus its deterioration starts; for its protection, various types of methods can be applied [1]. The major components of brass [2] are copper and zinc. Zn reacts with hot air to produce ZnO, which is active in humidity and, with ammonia and sulphur dioxide in the air, forms [Zn(NH3)4]SO4; that complex layer is eroded by rain water. In acidic medium, the outer face of brass develops CuSO4 and ZnSO4; when dust particulates [13] containing Fe are deposited on the surface, they remove Cu and Zn from the outer surface. Dust particulates contain oxides of alkali metals which, in the presence of moisture, produce NaOH or KOH [14]; these create a hostile environment for Zn, and it forms complex compounds [15] such as Na2[Zn(OH)4], Na[Zn(OH)3.H2O] or Na[Zn(OH)3.(H2O)3]. NO2 reacts with moist air to give HNO3; this acid reacts chemically with Cu, converting it into Cu(NO3)2. Some organic acids [16] present in the air, such as acetic acid, develop a corrosive environment for Cu and Zn, converting Cu into Cu2(CH3COO)4.H2O and Zn into (CH3COO)6Zn4O complex compounds [17]. These are eroded by rain water from the surface of the brass. Organic compounds [18] containing amino and sulphur groups are increasing day by day in the atmosphere. They develop a hostile environment for brass and corrode it. The concentrations of corrosive pollutants [19] like oxides of carbon, oxides of nitrogen, oxides of sulphur, hydrides of sulphur and nitrogen, ozone and particulates are enhanced due to industrial wastes, effluents, flues and other factors like the burning of coal, wood and cow dung cakes.
Harmful pollutants [20] come into the atmosphere through agricultural wastes, human wastes, pharmaceutical wastes, household wastes, food wastes and the decomposition of living things. Various types of transport, such as road, water and air, evolve CO, NO2 and SO2 gases, which produce acidic environments for brass. Several types of techniques are used to control the corrosion of brass, like metallic coating, polymeric coating, paint coating, and organic and inorganic coatings of materials, but these did not give satisfactory results in corrosive media. Some organic and inorganic inhibitors are applied to protect materials from corrosion in acidic media, and they provide good results. Hot dipping, electroplating and galvanization techniques are used as protective tools against brass corrosion in acidic medium, but these methods do not save the base metal. In this work, the aim is to mitigate the corrosion of brass by nanocoating and filler techniques. These materials form a composite barrier on the surface of the base metal, block porosities and stop the diffusion or osmosis of pollutants.
Experimental
Brass coupons of 15 cm² were taken for experimental analysis.
Results and Discussion
Brass metal was exposed to moist SO2 (Table 1). The corrosion rate of the brass metal was recorded in the months of November, December, January and February; the results (Table 1) show that the corrosion rate of the metal increased in January (Table 1). The addition of the nanocoating and electrospray reduced the corrosion rates as the temperature varied, as noticed in the plot of K versus T in (Figure 7). The percentage coating efficiency was calculated as %C = (1 − K/Ko) × 100 (where Ko is the corrosion rate without coating and K is the corrosion rate with coating), and the values are given in (Table 1).
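A minimal sketch of the efficiency and coverage calculations used in this section, based on the stated formulas %C = (1 − K/Ko) × 100 and θ = 1 − K/Ko; the corrosion-rate values below are placeholders for illustration, not the data of Table 1.

```python
def coating_efficiency_percent(K_coated, K_uncoated):
    """Percentage coating efficiency, %C = (1 - K/Ko) * 100."""
    return (1.0 - K_coated / K_uncoated) * 100.0

def surface_coverage(K_coated, K_uncoated):
    """Surface coverage, theta = 1 - K/Ko."""
    return 1.0 - K_coated / K_uncoated

# Placeholder corrosion rates in consistent units (e.g. mg cm^-2 h^-1), not values from Table 1.
Ko, K = 0.50, 0.08
print(coating_efficiency_percent(K, Ko))   # 84.0
print(surface_coverage(K, Ko))             # 0.84
```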
(Figure 9) shows the plot of %C (percentage coating efficiency) versus T (temperature in K). This figure indicates that the percentage coating efficiency was enhanced as the temperature varied over the November to February months; the values are recorded in (Table 1). Figure 6 is plotted between θ (surface coverage area) and C (concentration in mM); the coverage areas produced by (6Z)-5,8-dihydrazono-5,8-dibenzo[a,c][8]annulene and TiO2 are mentioned in (Table 1). The results show that the nanocoating compound occupied less surface area than the electrospray. The surface coverage area developed by the nanocoating and electrospray compounds was calculated by the formula θ = (1 − K/Ko). (Figure 10), plotted between θ (surface coverage area) and T (temperature), shows that as the temperature varied from November to December the surface coverage values of the nanocoating and electrospray increased; the values are written in (Table 1).
Figure 9: %C vs C, nanocoating and electrospray.

The corresponding values were given in (Table 2). The plot between log K and 1/T was found to be a straight line, as shown in (Figure 11). It was observed that the activation energy was high before coating but decreased after coating.
Table 2 (column headings): Month, C (mM), Temp (K), SO2 (ppm), Ea° (kJ/mol), Ea (kJ/mol), q (kJ/mol), ΔG (kJ/mol), ΔH (kJ/mol), ΔS (kJ/K).
These trends indicated that the nanocoating compound adhered to the surface of the base metal. The heat of adsorption was calculated by the Langmuir isotherm, log(θ/(1 − θ)) = log(AC) − q/(2.303RT), and the values are mentioned in (Table 2). These values were found to be negative, which indicated that the nanocoating compound formed a chemical bond with the base metal. (Figure 12) shows the corresponding plot, and the values were recorded in (Table 2). These values were found to be negative, which indicated that these compounds adhered to the surface of the metal.
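For reference, the thermal parameters discussed in this section are conventionally obtained from the following relations (Arrhenius equation, Langmuir adsorption isotherm, transition-state equation and the Gibbs relation); they are given here in the textbook form commonly used in corrosion-inhibition studies, and the exact variants applied by the authors may differ in detail.

$$ \log K_{corr} = \log A - \frac{E_a}{2.303\,RT}, \qquad \log\!\left(\frac{\theta}{1-\theta}\right) = \log(AC) - \frac{q}{2.303\,RT}, $$
$$ K_{corr} = \frac{RT}{Nh}\exp\!\left(\frac{\Delta S}{R}\right)\exp\!\left(-\frac{\Delta H}{RT}\right), \qquad \Delta G = \Delta H - T\,\Delta S . $$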
All the thermal parameters versus T (temperature) are plotted in (Figure 13), which indicates that a composite barrier formed on the surface of the base metal. The thermal parameter values for the TiO2 electrospray (activation energy, heat of adsorption, free energy, enthalpy and entropy) are written in (Table 3), and their plot against T (temperature) is shown in (Figure 14). (Figure 15) plots ΔE (corrosion potential) versus I (corrosion current density), and the corresponding values are given in (Table 4).
The results of (
THE ORAL EFFECTS OF E-CIGARETTES – A LITERATURE REVIEW
Background Because of regulations made against smoking, and the rising popularity of a healthy lifestyle, there has been a visible change in the smoking habits of the population in the last 15 years. The negative impact on the attitude toward smoking forced the industry to develop new ways to satisfy the consumer's nicotine need. That is how heated tobacco products and a variety of ENDS (Electronic Nicotine Delivery Systems), such as electronic cigarettes, have been invented. Objective This literature review aims to summarise the oral effects of consuming e-cigarettes which have been proven and publicised. Data Sources The main source of the study has been the publications found through PubMed and NCBI (National Center for Biotechnology Information). Study Selection Articles have been selected from the international literature if they had any information on the oral effects of e-cigarettes. Data Extraction The information from the articles has been categorised based on the tissue affected and how long the effects last. Data Synthesis Electronic cigarettes cause a change in saliva flow and its composition, a decrease in the blood supply of soft tissues and an immunosuppressed state in the said area, therefore the incidence of some diseases is higher among the users. Components of the e-liquid may cause damage to both soft and hard tissues, such as cancerous lesions, inflammation, chronic periodontitis and neurodegeneration. Nicotine may be absorbed by the surface of the teeth, causing patches, and some ingredients may be beneficial to the bacterial flora of the oral cavity.
INTRODUCTION
Over the last few years, electronic cigarettes (e-cigarettes) have gained greater and greater popularity. According to a 2011 survey by the WHO (World Health Organisation), 7 million people used e-cigarettes regularly worldwide at the time, and this number increased to 41 million by 2018 [1]. Some forecasts indicate that the popularity of e-cigarettes will not change; furthermore, as of 2021, there are an estimated 55 million daily users [1]. Concerning the health effects of e-cigarettes, they are thought to be a healthy alternative to smoking, a notion rooted in the marketing strategy and other factors of the tobacco industry. Another problem is that these products are accessible to a younger demographic: in 2020 alone in the USA, 19.6% of surveyed secondary school students used e-cigarettes, and 22.5% used them as a daily routine [2]. The harmful effects of traditional cigarettes are already established knowledge in people's awareness, thanks to widespread effort to combat the habit of smoking in the population; however, when it comes to e-cigarettes, no similar action has been underway, allowing their popularity to increase on and on. Over the last years worldwide, countless research studies have aimed to describe the health effects of electronic cigarettes, which indicates that there are many questions without answers. For this reason, this article aims to summarize present knowledge concerning the oral effects of using electronic cigarettes.
CHANGES IN THE SALIVA FLOW AND ITS CONSEQUENCES
Traditional cigarettes burn at almost 1000 degrees Celsius and produce quite toxic by-products at this high temperature, for instance, tar. In contrast, during the use of electronic cigarettes, there is no burning; instead, they vaporize liquid content at a much lower temperature, so the exhaled vapour is assumed to contain fewer toxic components [3]. The temperature of the vapour emitted by the e-cigarette depends on several factors such as battery voltage, resistance, atomiser condition, mouthpiece size, and e-liquid composition (mainly the level of propylene glycol (PG) and glycerine content); furthermore, some devices have adjustable voltage, so the temperature can vary quite widely. Generally, it can be stated that the average temperature of vaporizing is about 157 to 266 degrees Celsius [3]. The result of the extremely high-temperature vapour (mainly of PG-based liquids) may be the formation of substances containing a carbonyl group such as formaldehyde or acetaldehyde, which cause inflammation in the oral mucosa [4,5]. The most common and important condition that the vapour-induced temperature increase causes is xerostomia. The change in the saliva flow is assessed by measuring the resting and stimulated saliva flow. Symptoms of xerostomia may be a sticky or burning sensation in the oral cavity, increased thirst, difficulties in talking, swallowing, and tasting, and halitosis. Furthermore, the dried-out mucosa has a greater risk to develop oral infections such as oral candidiasis, which can recur from time to time. Besides these, the user's oral hygiene deteriorates [6], and the saliva's washing effect is compromised, therefore the incidence of caries rises [7]. Using an electronic cigarette changes not just the quantity of saliva but the quality as well. Oral pH is driven into the acidic range by nicotine; however, nicotine-free liquids move oral pH into the basic range and the saliva's buffer capacity is not affected [8]. Changes to the saliva's composition are also notable: the amount of secretory IgA, lysozyme, and lactoferrin is different from the physiologic level. Secretory IgA is a specialized antibody for the oral cavity containing saliva. The lysozyme content of saliva is detrimental to the immune processes: because of its proteolytic function, it breaks down antibodies. B-lymphocytes that have met antibodies migrate to one of the salivary glands and transform into plasma cells. In addition to monomer IgA, these cells produce a protein called J-protein, which connects IgA molecules by their Fc regions to form a dimer; this way, the lysozyme recognising the Fc regions becomes ineffective against these IgA dimers [9]. After the use of an electronic cigarette, it is proved by ELISA testing that the amount of IgA is decreased, which leads to a weakened oral immune response [10]. Lysozyme is responsible for breaking the bond between N-Acetylglucosamine and N-Acetylmuramic acid; these being the components of the bacterial cell walls, the action causes the lysis of bacteria. The substance also has antiviral and antifungal functions. As an effect of using e-cigarettes, the amount of lysozyme decreases, causing a downturn in oral protection [10]. Lactoferrin is an iron-binding glycoprotein, a multifunctional molecule participating in numerous physiological processes. Concerning the oral cavity, it is produced by the serous cells of the salivary glands; like lysozyme, it has antibacterial, antiviral, and antifungal effects, and is also an important immunomodulator. As an effect of using e-cigarettes, its level increases; the extra quantity may be considered an indicator of oral inflammation [10].
CHANGES CAUSED BY VACUUM
During the use of an electronic cigarette, there is suction in the oral cavity, the size of which depends on the type of equipment, and its duration on the user's habits. The suction power is produced by the mimic muscles, and its consequence is a relative vacuum in the oral cavity. In a University of California 2010 study, the researchers ran tests on the most popular e-cigarettes of that time, assessing the level of effort needed to use an e-cigarette and the health effects of the arising vacuum. They measured the pressure by a manometer attached to a machine mimicking smoking and found that the user needed to generate greater suction power with any type of e-cigarette than with traditional ones. In the first ten suction cycles, the density of the vapour did not change; after the tenth suction cycle, however, it started to decrease continuously. The longer the e-cigarette had been used the greater effort was needed for the same amount of vapour. The generated vacuum, the density of the vapour and the required effort varied across device types [11]. There is no unified medical position yet on the health effects caused by the vacuum; further studies are needed, but the presumed consequences are the overload of the tongue and mimic muscles. The result of the greater suction power is that the vapour travels to the distal parts of the lung, reaching deeper regions, with all associated disadvantageous consequences [12].
Soft Tissues: Acute Changes
As an effect of e-cigarette vapour, the blood supply of the soft tissues decreases, which can be traced back to two reasons. On the one hand, the chronic nicotine supply has a vasoconstrictor effect; on the other, the use of nicotine-free e-cigarettes decreases the blood supply as well. This is explained by glycerine and propylene-glycol inducing endothelial inflammation, which decreases the ability of veins to dilate so they stay constricted [13]. A decrease in blood flow can have numerous consequences such as decreased tissue defence and a hypoxic milieu, which can cause changes in the bacterial microbiome, leading to the proliferation of anaerobic species. Periodontal index values become higher, and the tendency to regenerate decreases, which leads to slow or failed wound healing [8].
Propionaldehyde, which appears in the aerosol during the decomposition of propylene-glycol, leads to the irritation of the oral cavity and the throat, with sensitivity, redness, and dry cough [14]. The symptoms of irritation tend to alleviate with continued use. Nicotine can cause a burning sensation by the activation of the TRPA1 (transient receptor potential ankyrin) channel [15].
As a result of the immunosuppressed state in the oral cavity, the incidence of infectious mucosal diseases, mainly candidiasis caused by Candida albicans, increases among e-cigarette users. It has been proved by in vitro experiments that in candida cells exposed to e-cigarette vapour, the expression of chitin and SAP-2,3,9 (secreted aspartic protease) increases, and a change takes place in the phenotype: the hyphae become longer. These changes help the adhesion of candida to the oral mucosa [16]. Upon exposure to some components of the vapour such as carbonyls, reactive oxygen radicals, and different types of aldehydes, the cytokine secretion of epithelial cells increases, which causes inflammation [17]. The components of e-liquid may cause an allergic reaction depending on one's immune system. The prevalence of some mucosal diseases is higher among e-cigarette users. One such disease is stomatitis nicotina palati; induced by nicotine, the lesion mainly occurs on the hard palate, appearing as hyperkeratotic patches. Another disease occurring more often is lingua villosa nigra (black hairy tongue), causing the enlargement of the tongue's papillae and a change in colour to black. The risk of the disease cheilitis angularis also increases; this is a state associated with the bilateral drying and cracking of the anguli oris, which can be superinfected by some candida species [18].
Soft Tissues: Chronic Changes
Several components of e-liquids may damage the epithelial cells, which can cause the death of these cells, leading to ulcerative areas and wounds. During the use of an e-cigarette, metal particles may get into the aerosol from the atomiser or the cotton wool, including cadmium, nickel, and arsenic. Metal particles cause cancerous lesions, inflammation, chronic periodontitis, and neurodegeneration. As a result of heat and atomizing, flavouring substances decompose to carbonyls such as diacetyl. The main ingredients (propylene glycol and glycerine) decompose because of the heat, and among the decomposition products there are molecules containing a carbonyl group (formaldehyde, acetaldehyde) and reactive oxygen species (ROS). These molecules are cytotoxic regardless of the nicotine content; they induce DNA damage in the oral epithelium. They decrease the cells' defence via antioxidants, which would protect the cell against the reactive radicals; apoptosis and inflammation are induced. Carbonyl compounds cause protein carbonylation and oxidative stress. The consequence of all the above is a decrease in proliferative capacity and viability [15]. The volatile organic compounds with the potential for carcinogenicity in the e-cigarettes' vapour have a genotoxic effect [19]. Compared to traditional cigarettes, e-cigarettes have fewer carcinogenic substances at lower levels, but further studies are needed to find out if e-cigarettes can cause malignant transformations. The difficulties to judge the situation include the lack of long-term experiments and participant recruitment challenges since the ideal subject would be a person who has never smoked and does not have any risk factors. Despite the shortage of clinical proof, current medical opinion states that because there are carcinogenic components and adverse changes induced in cells, using an e-cigarette could have a carcinogenic effect [20].
Hard Tissues: Teeth
Nicotine can be absorbed by the surface of the teeth, causing yellowish-brown patches [21]. During the degradation of propylene-glycol, which is one of the components of the e-liquid, acidic substances such as acetic acid or lactic acid are produced, which can directly damage the enamel [22]. Furthermore, propylene-glycol is a quite hygroscopic substance: it binds the water from saliva and soft tissues, further increasing the rate of xerostomia already developed, along with all the adverse consequences [22]. In a 2018 study, enamel was incubated in flavoured and non-flavoured e-cigarette vapour; measuring their hardness, the researchers found that the enamel treated with flavoured vapour was 27% softer than the other preparation. Considering these results, flavours may promote the demineralisation of enamel. The pathomechanism of this process is that triacetin (traditional tobacco flavour), hexyl acetate (apple flavour), and ethyl butyrate (pineapple flavour) are all esters, sources of nutrients for cariogenic bacteria, mainly for Streptococcus mutans. They facilitate the extracellular polymer formation of bacteria, which is the main process of biofilm generation, and they promote bacterial growth. During the degradation of carbohydrates, acids are generated, mainly lactic acid, decreasing pH and causing the demineralisation of teeth [23]. The metal content of e-cigarette vapour is beneficial for the bacteria as well because it contains iron, copper, and magnesium ions, which are the cofactors of some of the essential enzymes in Streptococcus mutans, and help the bacteria survive the attacks of the immune system. Considering all of the above, researchers liken e-liquid flavours to fizzy drinks because of their cariogenic potency [23]. E-liquids usually contain glycerine as well, which is a desiccant like propylene-glycol. In the food industry, it is used as a sweetener, but cariogenic bacteria cannot break it down, meaning it does not facilitate the development of caries this way. On the other hand, through its viscosity, it helps bacterial adhesion to the surface of teeth; with the flavours in the e-liquid, it quadruples microbial adhesion to enamel, and the bacterial biofilm's size becomes twice as large [23]. The result of dental tissue weakening may be the fracture of enamel or fracture of an entire tooth. In a 2016 cross-sectional study, 11.4% of young respondents reported such damage to their teeth, supporting the scientific position that with the use of e-cigarettes, the number of cracks and fractures of teeth increases [24].
Hard Tissues: Bone
Some ingredients of e-liquids are harmful to bone cells, affecting the cells' viability, differentiation, proliferative capacity, and matrix production. Cadmium found in e-liquid causes a decrease in the lifetime of osteoblasts even at a small concentration; furthermore, it increases the risk of certain musculoskeletal diseases such as rheumatoid arthritis and osteoarthritis [25,26]. In a 2019 experiment series, the effect of different flavouring substances of e-liquids on bone cells was assessed: osteoblasts were exposed to the most popular flavoured e-liquids, both nicotine-containing and nicotine-free, for 48 hours, then the cells' viability and their main osteoblast markers were evaluated. Results showed an increase in the expression of type I collagen, and the conclusion was that e-cigarettes are osteotoxic: all e-liquids decreased the viability of the cells, which was explained by oxidative stress and a higher level of reactive oxygen radicals. The rate of osteotoxicity was determined by the dosage and the flavour but was unrelated to nicotine content. Considering these findings, flavourless e-liquid is the least harmful, and cinnamon flavoured is the most cytotoxic [25]. The negative effects of e-cigarettes cause a change in the bone's features, and a decrease in its density and mineral content, which is dangerous mainly in childhood because this period is crucial in proper bone growth and development, this being the time when 90% of the bone mass develops. A long-time consequence of the change in the bones' condition might be osteoporosis [25], thus bone fractures might occur more frequently; furthermore, as an effect of nicotine, the regeneration of bone fractures is disturbed as well [27]. The changes in the bone's condition start in about two months of e-cigarette use; upon quitting, the alveolar bone recovers to its original, healthy state [28].
Effects on the Periodontium
The role of the periodontium is to anchor and fix the teeth in the alveolar ridge. Inflammation of the periodontal tissues may lead to the loss of all teeth, meaning that a healthy periodontium is necessary to keep and maintain one's teeth. Traditional cigarettes are well known to lead to periodontitis; the question here is whether e-cigarettes have this consequence, too. Because of the vasoconstrictor effect of nicotine, the gingiva's oxygen and nutrient supply decreases. The consequence is a decrease in local white blood cell count, followed by these cells' inability to fulfil their defensive role, reinforced by low levels of lysozyme as a result of reduced saliva flow. On the other hand, there are consequential changes to the microbiome of the oral cavity, creating perfect circumstances for anaerobic periodonto-pathogenic bacteria to multiply, such as Porphyromonas gingivalis, Aggregatibacter actinomycetemcomitans and Prevotella intermedia. As a result of weakened defence and bacterial colonisation, inflammation, gingivitis, and periodontitis may develop [4]. The symptoms of such gingivitis include pain, redness, and bleeding of the gingiva while brushing [17]. Interestingly, according to research on the salivary microbiome performed on 119 participants, some Gram-negative bacteria such as the periodontopathogenic Porphyromonas and Veillonella occur in greater quantity in e-cigarette users' saliva than in that of traditional smokers, indicating e-cigarettes' potential to promote periodontitis [29]. Furthermore, nicotine is antiproliferative to fibroblasts: as a result of prostaglandins and matrix metalloproteases released upon exposure to nicotine, myofibroblast and mesenchymal stem cell differentiation are blocked, holding back wound healing. Osteoblast functions and new vessel growth are similarly suppressed, negatively impacting the success rate of implant dentures, osseointegration, and the regeneration of papillae. Decreased osseointegration has been confirmed by animal experiments: around implants in rats given nicotine injections, the BIC (Bone Implant Contact), i.e., the contact surface between bone and implant, was smaller than in the control group [27]. The components of the periodontium - gingival fibroblasts, periodontal ligaments, and epithelial cells - develop and maintain inflammation as a response to specific stimuli or stress, mediated mainly by cytokines. Some components of e-cigarette vapour, such as reactive oxygen species, aldehydes, and compounds containing a carbonyl group, are among the triggers of inflammation. The potential inflammatory effects of carbonyl compounds include carbonylation of proteins, leading to autoantibody production and periodontal destruction [30]. Furthermore, the stress caused by these compounds gives rise to DNA damage, which translates to early cell ageing. In vitro experiments have shown that gingival fibroblasts exposed to e-cigarette vapour face a greater risk of necrosis and apoptosis [30]. Periodontitis is a multifactorial disease in which the presence of bacteria is a necessary but not sufficient condition; the effects of e-cigarette vapour provide a favourable medium for the development of this disease. The depth of an e-cigarette user's periodontal pockets increases, the developed gingivitis may cause sensitivity and bleeding, and the plaque index increases.
The risk of periodontitis increases [17], which leads to tissue and bone destruction, tooth mobility and, in the worst case, tooth loss.
CONCLUSION
The main purpose of inventing e-cigarettes was to find a less harmful alternative to traditional cigarettes; it is therefore useful to compare the health effects of these two harmful habits. In numerous cases, the e-cigarettes' harmful effects on the oral cavity are milder than those of traditional cigarettes; however, they have several adverse effects and may cause severe diseases. A further danger of e-cigarettes is that there are various types of devices and e-liquids, making the uniformity of regulations and medical research more difficult. Even though much research has been done in this area, there are still numerous unanswered questions and statements awaiting proof. Despite our lack of knowledge, the opinion on e-cigarettes' health effects is clear: they may be a useful assistive device while quitting traditional cigarettes, but their use is not advised in other cases due to their negative effects on oral tissues.
AUTHOR CONTRIBUTIONS
All authors agree to be accountable for the content of the work.
Phase Diverse Phase Retrieval for Microscopy: Comparison of Gaussian and Poisson Approaches
Phase diversity is a widefield aberration correction method that uses multiple images to estimate the phase aberration at the pupil plane of an imaging system by solving an optimization problem. This estimated aberration can then be used to deconvolve the aberrated image or to reacquire it with aberration corrections applied to a deformable mirror. The optimization problem for aberration estimation has been formulated for both Gaussian and Poisson noise models but the Poisson model has never been studied in microscopy nor compared with the Gaussian model. Here, the Gaussian- and Poisson-based estimation algorithms are implemented and compared for widefield microscopy in simulation. The Poisson algorithm is found to match or outperform the Gaussian algorithm in a variety of situations, and converges in a similar or decreased amount of time. The Gaussian algorithm does perform better in low-light regimes when image noise is dominated by additive Gaussian noise. The Poisson algorithm is also found to be more robust to the effects of spatially variant aberration and phase noise. Finally, the relative advantages of re-acquisition with aberration correction and deconvolution with aberrated point spread functions are compared.
Introduction
Images acquired in microscopes and other imaging systems are often degraded by phase aberrations. Phase aberrations occur when the spherical wavefronts emitted from a sample become distorted by misaligned optics or refractive index (RI) mismatches and variations [1]. These aberrations cause the spherical wavefronts to converge into spots much larger than the diffraction limit on the imaging sensor, decreasing the sharpness and contrast of images.
In order to restore diffraction-limited performance of optical systems, aberrations need to be corrected, either using hardware like a deformable mirror (DM) or with software deconvolution. For this to be achieved, the aberrations must first be detected and measured, so they can be properly compensated. Phase aberrations can be detected with hardware-based methods, like a Shack-Hartmann wavefront sensor (SHWFS), or with software-based methods. Software-based methods include optimizing a predefined image metric over DM configurations and phase retrieval (PR) [2] [3].
In this paper, phase diverse phase retrieval, also known as phase diversity (PD), will be used to estimate aberrations occurring in widefield images. In PD, multiple images are acquired, each with a different but known phase aberration purposefully introduced. With this set of images, the unknown phase aberration can be estimated by optimizing an objective function appropriate for the assumed image noise model. This technique was first derived for extended objects by Gonsalves [4] [5] in the case of a single additional diversity image assuming Gaussian noise. Paxman et al. later generalized the derivations to multiple images with both Gaussian and Poisson noise models, but presented no results or implementation [6].
There have been very few studies of PD in microscopy and all have used the Gaussian approach [7] [8] [9]. Here, the Poisson-based approach proposed by Paxman is implemented and compared to the Gaussian one. The Poisson estimator may offer improved aberration estimation over the Gaussian likelihood function according to Cramer-Rao lower bound estimates for extended objects [10]. PD with the Poisson estimator has had some limited use in astronomy in deconvolution [11] [12] and wavefront estimation [13] [14], but with little evaluation of aberration estimation performance and no comparison to the Gaussian estimator.
Image Formation
In incoherent fluorescence microscopy, the noiseless image formation process within the scalar diffraction approximation is modeled as

g(x) = (s * f)(x),    s(x) = |F{H(u)}(x)|^2,

where g is the image, s is the point spread function (PSF), f is the object, and H is the generalized pupil function (GPF). Coordinates x and u are 2d spatial coordinates in the image/object plane and pupil plane respectively. F denotes a Fourier transform and * denotes convolution. Phase aberrations ϕ are described in the GPF as

H(u) = P(u) exp(i ϕ(u)),

where P is the pupil function. No apodization is used: P(u) = 1 for |u| ≤ NA/λ and P(u) = 0 otherwise. To reduce the dimensionality of the problem, the phase aberrations are expressed using a basis of Zernike polynomials:

ϕ(u) = Σ_j c_j Z_j(u λ/NA).

Zernike polynomials have both a double-index representation (Z_n^m) and a single-index representation (Z_j). We use a single-index representation following the Noll convention [15]. The factor of λ/NA is to align the domain over which the Zernike polynomials are orthogonal with the nonzero domain of the pupil function P.
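As a minimal, self-contained sketch of this forward model (not taken from the paper; the numerical aperture, wavelength, grid size, and pixel size below are illustrative assumptions), the following Python snippet builds a hard-edged, unapodized pupil, applies a Zernike phase, and computes the incoherent PSF as the squared modulus of the Fourier transform of the GPF.

```python
import numpy as np

def psf_from_pupil(n_px=256, na=1.2, wavelength=0.52, px_size=0.05, phase_fn=None):
    """Incoherent PSF |F{P(u) exp(i*phi(u))}|^2 on an n_px x n_px grid.

    wavelength and px_size are in micrometres; phase_fn maps normalized
    pupil coordinates (rho in [0, 1], theta) to a phase in radians.
    """
    # Pupil-plane spatial-frequency coordinates (cycles per micrometre).
    f = np.fft.fftfreq(n_px, d=px_size)
    fx, fy = np.meshgrid(f, f)
    rho = np.sqrt(fx**2 + fy**2) / (na / wavelength)   # normalized pupil radius
    theta = np.arctan2(fy, fx)
    pupil = (rho <= 1.0).astype(complex)               # hard-edged, unapodized pupil
    if phase_fn is not None:
        pupil *= np.exp(1j * phase_fn(rho, theta))
    amplitude = np.fft.ifft2(pupil)                     # coherent amplitude spread function
    psf = np.abs(amplitude) ** 2
    return psf / psf.sum()                              # normalize to unit energy

# Example aberration: one wave (RMS) of astigmatism, Noll j = 5,
# Z_5 = sqrt(6) * rho^2 * sin(2*theta), expressed in radians of phase.
astig = lambda rho, th: 2 * np.pi * 1.0 * np.sqrt(6) * rho**2 * np.sin(2 * th)
psf_aberrated = psf_from_pupil(phase_fn=astig)
```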
Phase Diversity
PD requires multiple images with known phase aberrations (referred to as diversity phases and diversity images) that are purposefully introduced:

H_k(u) = P(u) exp(i [ϕ(u) + θ_k(u)]),

where θ_k are the diversity phases. While there are no restrictions on diversity phase selection, a 'good' choice of diversity phase is essential for creating a likelihood function that leads to an accurate phase estimation. For convenience, we only consider defocus for the diversity phase, as it is rotationally symmetric and easily introduced in most optical systems by moving the sample relative to the lens, which allows for use of PD even in systems without a DM. Note that the term defocus is used in the literature to refer to both the Zernike polynomial Z_2^0 and to the defocus function γ(u; z_k, n) [8], where n is the RI of the medium and z_k is the distance from the focal plane for the k-th diversity phase. The Zernike polynomial defocus is an approximation of the defocus function as described in [16]. Here, we use the defocus function, not its Zernike approximation, to model the use of defocus as a diversity phase. Using three diversity phases, with z_0 set to 3λ, z_1 set to 0λ, and z_2 set to −3λ, gave relatively accurate estimations in a wide range of conditions. On a case-by-case basis, a more optimal set of diversity phases can be chosen, but for simplicity, this diversity phase scheme is used throughout the paper.
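For illustration, a defocus diversity phase can be generated with the common angular-spectrum expression 2πz·sqrt((n/λ)² − |u|²); this particular form and all grid parameters are assumptions on our part, since the exact defocus function of reference [16] is not reproduced here.

```python
import numpy as np

def defocus_phase(fx, fy, z, wavelength=0.52, n_medium=1.33):
    """Angular-spectrum defocus phase 2*pi*z*sqrt((n/lambda)^2 - |u|^2).

    fx, fy are pupil-plane spatial frequencies (cycles per micrometre),
    z is the defocus distance in micrometres.  The square root is clipped
    at zero so evanescent frequencies contribute no phase.
    """
    kz_sq = (n_medium / wavelength) ** 2 - (fx ** 2 + fy ** 2)
    return 2 * np.pi * z * np.sqrt(np.clip(kz_sq, 0.0, None))

# Three diversity phases at z = +3*lambda, 0, -3*lambda, as in the text.
wavelength = 0.52
f = np.fft.fftfreq(256, d=0.05)
fx, fy = np.meshgrid(f, f)
diversity_phases = [defocus_phase(fx, fy, z * wavelength) for z in (3, 0, -3)]
```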
Phase Estimation
To estimate the unknown aberration ϕ, maximum-likelihood expectation maximization (MLEM) is used. Paxman et al. derive likelihood functions (LF) in [6] for additive Gaussian noise and Poisson noise. In the present paper, these LF's are both maximized and compared in a variety of situations. Besides the noise type, the LF's differ significantly in implementation. In the Gaussian LF, the maximal likelihood object has a closed-form solution in terms of the acquired images and aberration parameters. The Gaussian LF still estimates the object, just not explicitly. In the Poisson LF, the object needs to be explicitly estimated along with the aberration parameters.
Gaussian
The Gaussian LF to be maximized is derived in [6], equation 13, where d_k is the measured noisy image and the argument c⃗ in s_k(c⃗) is used to indicate that the quantity depends on the aberration parameters. The LF is maximized using the strategy of Vogel et al. in [17], which develops a Gauss-Newton approach and calculates both the gradient and the pseudo-Hessian of the Gaussian likelihood function. Details of our implementation are given in our supplement.
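For the Gaussian case, eliminating the object analytically leads to the well-known phase-diversity error metric of Gonsalves and Paxman; the sketch below evaluates that metric for a set of diversity images and trial PSFs. Whether it matches equation 13 of [6] term for term (signs, regularization) is not verified here, so treat it as an illustrative assumption.

```python
import numpy as np

def gaussian_pd_objective(images, psfs, eps=1e-10):
    """Evaluate the classic Gaussian phase-diversity error metric (to be minimized).

    After analytic elimination of the object, the metric is
      sum_u [ sum_k |D_k|^2 - |sum_k D_k conj(S_k)|^2 / sum_k |S_k|^2 ],
    where D_k and S_k are Fourier transforms of the diversity images and of
    the PSFs computed for a trial aberration.  eps is a small regularizer.
    """
    D = [np.fft.fft2(d) for d in images]
    S = [np.fft.fft2(s) for s in psfs]
    num = np.abs(sum(Dk * np.conj(Sk) for Dk, Sk in zip(D, S))) ** 2
    den = sum(np.abs(Sk) ** 2 for Sk in S) + eps
    total_power = sum(np.abs(Dk) ** 2 for Dk in D)
    return float(np.sum(total_power - num / den))
```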
Poisson
The Poisson LF to be maximized is also derived in [6], equation 29. This LF is maximized using the methods from [6], which involve alternating updates of the aberration coefficients and the object. The algorithm is initialized with a uniform object estimate and a small non-zero phase estimate (e.g. 10^−10 for each Zernike amplitude). To begin, the aberration coefficients are updated by performing a line search over the gradient of the likelihood with respect to each Zernike coefficient. This gradient, ∂L(c⃗)/∂c_j, specified in equation 42 from [6], is reproduced as equation (13); the tilde appearing in it denotes coordinate flipping, f̃(x) = f(−x). Next, equation 62 from [6] is used to update the object estimate (equation (15)). These two steps are repeated until stopping conditions are met. Using specifically (13) and (15) is essential for fast convergence of the Poisson LF. We used a threshold on the norm of the gradient ∂L(c⃗)/∂c_j for the stopping condition.
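The alternating structure of the Poisson algorithm can be illustrated with the following schematic sketch. The object update is written as a multi-image Richardson-Lucy-style step, which the update of equation 62 in [6] closely resembles but may not match exactly; the analytic coefficient gradient of equation 42 is not implemented here, and a line search over it would be layered on top of the negative log-likelihood shown.

```python
import numpy as np

def forward(obj, psf):
    """Cyclic convolution of the object with a PSF via FFTs."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

def poisson_object_update(obj, images, psfs, eps=1e-12):
    """One multi-image Richardson-Lucy-style object update.

    For fixed aberration coefficients (i.e. fixed psfs), the object estimate
    is multiplied by the back-projected ratio of measured to predicted images.
    """
    ratio_sum = np.zeros_like(obj)
    for d, s in zip(images, psfs):
        pred = forward(obj, s) + eps
        # correlate with the (flipped) PSF, i.e. convolve with s(-x)
        ratio_sum += forward(d / pred, s[::-1, ::-1])
    return obj * ratio_sum / len(images)

def negative_log_likelihood(obj, images, psfs, eps=1e-12):
    """Poisson negative log-likelihood, up to terms independent of obj and psfs."""
    nll = 0.0
    for d, s in zip(images, psfs):
        pred = forward(obj, s) + eps
        nll += np.sum(pred - d * np.log(pred))
    return nll
```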
Other Algorithm Details
In order to improve algorithm runtime, images are downscaled by a factor of two. Performing this downscaling gives a substantial runtime improvement, with minimal loss of estimation accuracy for both algorithms in most situations (see section 5.3 and Fig.7). Note that this technique only works if the image sensor is sampling at the Nyquist rate or higher.
Regularization was found to have no significant impact on phase estimation with either algorithm for both the object estimate and the aberration estimate. For the Gaussian algorithm, a small non-zero value (e.g. 10 −10 ) was used for regularizing the object estimation. For the Poisson algorithm, the object estimation was normalized after each iteration.
Image Simulation
The process used to simulate an aberrated image is shown in Fig. 1. It involves (1) creating a synthetic object, (2) generating a phase aberration of specified magnitude and its corresponding PSF, (3) convolving the PSF with the synthetic object, and (4) applying noise; the final image is normalized so that the minimum value is 0 and the maximum value is 1. These steps are introduced in greater detail in the following sections.
Object Simulation
Objects are approximated as 2D distributions and are created similar to the methods described in [18]. Using a synthetic object allows extensive control over the object properties (such as size of features, number of cells, cell texture, etc.). The object properties can have strong influence on the estimation accuracy of the PD algorithms, and fine control over those properties allows rapid testing of the algorithms in a variety of situations. The four simulated objects used to test the PD algorithms are shown in Fig.2.
Aberration
To set the amount of each Zernike component present in the simulated wavefronts, uniform random values between 1 and -1 are selected for the coefficients. 42 coefficients are used, corresponding to Zernike polynomials of order 2 through 7 (j = 4 through j = 45). Polynomials of orders lower than 2 are not considered here, as they do not cause image blurring and cause no change in the LFs.
The first 12 coefficients (n = 2 through n = 4) are scaled to a specific aberration magnitude, which is specified in units of wavelength. Aberration magnitude (also called wavefront RMS) is defined from the coefficients c_j and the normalization factors A_j = ∫ Z_j(u λ/NA)^2 du.
The rest of the coefficients are scaled to half of that aberration magnitude, as they are usually present in lower amplitude [19] [20] [21]. Not all coefficients need to be estimated; the large number of coefficients for image simulation is to verify that lower-order coefficients can be estimated in the presence of higher-order coefficients. Unless otherwise indicated, every simulated aberration had an aberration magnitude of 2λ.
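A possible implementation of this coefficient-drawing and scaling scheme is sketched below; it assumes Noll-normalized Zernike polynomials so that the wavefront RMS is the root-sum-square of the coefficients, which may differ in detail from the normalization factors A_j used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_zernike_coeffs(target_rms=2.0, n_low=12, n_total=42):
    """Draw uniform random Zernike coefficients (Noll j = 4..45) and scale them.

    The first n_low coefficients are scaled so that their RMS contribution
    equals target_rms (in waves); the remaining coefficients are scaled to
    half that magnitude, mimicking the simulation scheme described in the text.
    Assumes Noll-normalized polynomials, i.e. wavefront RMS^2 = sum(c_j^2).
    """
    c = rng.uniform(-1.0, 1.0, size=n_total)
    low, high = c[:n_low], c[n_low:]
    low *= target_rms / np.sqrt(np.sum(low ** 2))
    high *= 0.5 * target_rms / np.sqrt(np.sum(high ** 2))
    return np.concatenate([low, high])

coeffs = random_zernike_coeffs(target_rms=2.0)
```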
Convolution
After selecting the coefficients, the phase of the aberrated wavefront is calculated using (5), along with the GPF and PSF using (3) and (2) respectively. Next, a pristine object image is convolved with the PSF. When standard Fourier-based convolution is performed, the output image has unrealistic artifacts near the edges from the cyclic convolution. The convolution process used here still uses Fourier transforms, but the output images are cropped to more accurately simulate the infinite extent of the PSF and object. See the supplement for more details on the convolution process. Examples of noiseless, convolved images at four different aberration magnitudes are shown in Fig.3.
Noise
To simulate the noisy image, three noise processes are used, as in [22], where QE (which has a value of 0.6 in all simulations) is the quantum efficiency of the detector, P(λ) is a Poisson-distributed random variable with mean λ, and N(µ, σ²) is a Gaussian-distributed random variable with mean µ and variance σ². I_k^ph represents the expected number of photons at each pixel from the convolved object and k-th PSF. To specify the number of expected photons per pixel (p/px, i.e. the amount of shot noise), the noiseless convolved image from the convolution step is divided by its mean, and subsequently multiplied by the desired p/px. I_dc represents the expected number of electrons from an additive Poisson-distributed noise process, such as dark noise. σ_r² represents the variance of the number of electrons from an additive Gaussian-distributed noise process, such as read noise. I_dc and σ_r² are uniform at each pixel. No ADC noise is simulated. Examples of images with different levels and combinations of noise are shown in Fig. 4. Unless otherwise indicated, every image was simulated with an average of 500 p/px and low additive noise (I_dc = 1, σ_r = 2).
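The following sketch applies the three noise processes to a noiseless convolved image; the exact way the paper combines the terms (for example, whether dark counts pass through the quantum efficiency) is not spelled out above, so the combination used here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_noisy_image(ideal, photons_per_px=500, qe=0.6, dark=1.0, read_sigma=2.0):
    """Apply shot, dark, and read noise to a noiseless convolved image.

    The noiseless image is rescaled so its mean equals photons_per_px, converted
    to photoelectrons through the quantum efficiency with Poisson shot noise,
    Poisson dark counts are added, and Gaussian read noise is applied.
    """
    expected_photons = ideal / ideal.mean() * photons_per_px
    shot = rng.poisson(qe * expected_photons)          # photoelectrons from the signal
    dark_counts = rng.poisson(dark, size=ideal.shape)  # additive Poisson (dark) noise
    read = rng.normal(0.0, read_sigma, size=ideal.shape)
    return shot + dark_counts + read
```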
Simulation Results
First, the performance of the phase estimation algorithms will be analyzed with a variety of simulation settings. Estimation accuracy of the algorithms is quantified by the residual wavefront error (RWE), which is the RMSE of the residual wavefront, computed from the estimated coefficients c_j and the ground-truth Zernike coefficients c̃_j. All parameter sweeps are performed by simulating 100 image sets at each point in the sweep with randomly chosen aberrations. Error bars show standard error.
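For reference, the RWE can be computed directly from the coefficient vectors; the sketch below again assumes Noll-normalized polynomials so that the residual wavefront RMS reduces to a root-sum-square of coefficient differences.

```python
import numpy as np

def residual_wavefront_error(c_est, c_true):
    """RMSE of the residual wavefront from estimated and true Zernike coefficients.

    Assumes Noll-normalized polynomials, for which the wavefront RMS is the
    root-sum-square of the coefficient differences (in waves).
    """
    c_est, c_true = np.asarray(c_est, float), np.asarray(c_true, float)
    return float(np.sqrt(np.sum((c_est - c_true) ** 2)))
```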
Aberration Magnitude
First, the accuracy of estimation is evaluated at different levels of aberration (Fig.5). Each plot shows the RWE as a function of initial aberration magnitude (wavefront RMSE). The Poisson estimation algorithm matches or outperforms the Gaussian algorithm in most objects and aberration levels.
Image Noise
Next, the algorithms are tested with different amounts of noise and different objects (Fig.6). The p/px are varied, and two different additive noise strengths are used. Note that each image is simulated with the same total number of photons in the noise process, causing the sparser objects to have higher peak p/px. In the high-count regime, the Poisson algorithm matches or outperforms the Gaussian model. At lower count levels, when additive Gaussian noise dominates, the Gaussian model generally performs better.
Image Size
Image size variation is explored in two cases: cropping at constant magnification (Fig.7, upper left), and cropping with variable magnification (Fig.7, lower left). The first case reduces the amount of object in the FOV, but keeps it at the same resolution, while the second case keeps the same amount of object in the FOV, but at a lower resolution.
Regardless of estimation algorithm, both size-changing paradigms lead to worsening aberration estimates for smaller images. Runtimes of the estimation algorithms are shown in Fig. 7 (upper right), with the algorithms being run on a desktop PC with an Intel Core i7-7700K. One additional benefit of increased image size (not explored here) is that if the FOV is large enough to sample the entire object, the cyclic convolution model assumed by both estimation algorithms is accurate, giving a better aberration estimation.
With smaller scaled images, the Gaussian algorithm outperforms the Poisson algorithm. In all other cases, the Poisson algorithm matches or outperforms the Gaussian algorithm. Downscaling the images prior to running the algorithms sacrifices very little estimation accuracy, while substantially decreasing algorithm runtime.
Spatial Variance and Phase Noise
Biological samples have varying RIs, leading to interfaces where light is refracted, causing aberrations. However, the RI variations are different across the sample, leading to aberrations that vary spatially. The PD techniques here do not account for spatial variation, but can still find an overall 'mean' aberration (Fig.8, left) that can be corrected over the entire image with a single DM at the pupil plane.
To simulate spatially varying aberration, the isoplanatic wavefront has a random component added to it at each point in the image. Over the entire image, these random components have zero mean, so the isoplanatic wavefront remains unchanged. The magnitude of the spatial variance (SV) was varied by scaling the random components. Images were simulated with random wavefront components that varied slowly over the image (low frequency SV) and rapidly over the image (high frequency SV). Due to the long times required to simulate images with spatially varying wavefronts, only a single aberration pattern is evaluated, albeit under these four different combinations of frequency and magnitude. The results show that both algorithms perform well in the face of most of the variation studied, although at high magnitude and high frequency the Poisson approach significantly outperforms the Gaussian approach. Also, some additional unknown aberration may be generated when adjusting the imaging system (with either a DM or by moving the objective lens) for acquiring the diversity images. This aberration, called phase noise, is unique to each diversity image. The mean phase noise between all the diversity images simply combines with the main aberration ϕ being estimated and does not violate any assumptions of the image formation. However, the residual phase noise (with zero mean) is not modelled by the image formation process, and degrades accuracy of the estimation. To measure the estimation degradation of phase noise, random vectors of Zernike components (with zero mean) are added to each individual wavefront before running the estimation algorithms (Fig.8, right). See the supplement for more details on SV and phase noise.
Deconvolution vs Re-acquisition
Having estimated the phase aberration, it can be used either to generate a PSF for deconvolution of the original image or to shape a deformable mirror for re-acquisition of a less aberrated image. To compare these two approaches, structural similarity (SSIM) [23] is used as an image-quality metric. The ground truth image in the SSIM comparison is an image with blur from only the diffraction process and high-order aberrations. This represents a perfectly AO-corrected image.
Deconvolution was performed with Richardson-Lucy deconvolution using all the diversity images, similar to the approach in [24]. Images are slightly cropped before measuring the SSIM to remove ringing artifacts (commonly occurring in Richardson-Lucy deconvolution) from the comparison. Re-acquisition with a DM was simulated by applying the estimated low-order aberrations as correction phases in the pupil plane, then re-simulating the image.
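A minimal sketch of multi-image Richardson-Lucy deconvolution with estimated aberrated PSFs, together with an SSIM comparison, is given below; the iteration count, initialization, and cropping details of the paper's implementation are not reproduced, and the use of scikit-image's structural_similarity is our choice rather than something stated in the text.

```python
import numpy as np
from skimage.metrics import structural_similarity

def multi_image_rl(images, psfs, n_iter=50, eps=1e-12):
    """Multi-image Richardson-Lucy deconvolution with per-image (aberrated) PSFs."""
    conv = lambda a, b: np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))
    est = np.full_like(images[0], images[0].mean(), dtype=float)
    for _ in range(n_iter):
        update = np.zeros_like(est)
        for d, s in zip(images, psfs):
            update += conv(d / (conv(est, s) + eps), s[::-1, ::-1])
        est *= update / len(images)
    return est

# Image-quality comparison against an ideal (diffraction-limited) reference:
# ssim = structural_similarity(reference, multi_image_rl(images, psfs),
#                              data_range=reference.ptp())
```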
The results are displayed in Fig.9 A, which shows SSIM as a function of p/px. The p/px in Fig.9 A were multiplied by 4/3 for the estimated corrected images to account for the extra photons needed to acquire the corrected image. The RWEs of the aberration estimations are also plotted, with values given on the right-hand scale.
It can be seen that both correction approaches substantially improve over the uncorrected image at high SNR. The DM-based re-acquisition provides a small but clear improvement over deconvolution at high SNR and nearly approaches the image quality provided by the hypothetical ideal correction. However, at lower SNR the extra photon dose combined with the poor estimation accuracy substantially decreases the quality of both correction approaches and leads to deconvolution outperforming the DM-corrected image, at least when dose matching as we have done here.
Discussion
The PD algorithms presented here estimate phase aberrations accurately enough for image improvement, even in the many detrimental settings tested in simulation. Unlike the use of a SHWFS, PD requires no specialized hardware (even the DM is only required for correction), accounts for extended objects, and measures aberrations in the path to the image sensor. Consequently, PD algorithms can be fast, simple, and robust methods to measure phase aberrations. The PD algorithms can be run with uploaded images at https://share.streamlit.io/nikolajreiser/PoissonPhaseDiversity, and the code and usage instructions are available at https://github.com/nikolajreiser/PoissonPhaseDiversity.
The results here also show that aberration correction can be achieved with no specialized hardware at all. If defocus is used as a diversity phase by physically moving the lens, diversity images can be acquired without a DM, and the estimated aberrated PSFs then used in a multi-image deconvolution. At high SNR, the dose-matched performance of this strategy only slightly lags one that uses a DM to reacquire images corrected for the aberrations, and at low SNR it slightly outperformed the DM. While the process of wavefront estimation and deconvolution is technically equivalent to blind deconvolution, explicitly separating the wavefront estimation from deconvolution permits implementations tailored for each step.
While both PD algorithms achieve accurate phase estimations, the Poisson algorithm estimates the phase more accurately than the Gaussian algorithm in most cases. The Poisson algorithm is also less sensitive to effects not modeled in the LFs, such as spatial variance, phase noise, and out-of-focus-plane contamination (see supplement for results on the latter).
There are many areas where further research could improve these algorithms. The optimization process could be improved for both algorithms, in terms of runtime and avoiding local minima. Also, diversity phases besides defocus were not explored. An improved method for diversity phase selection could enhance the algorithms, although it would likely depend strongly on specifics of the imaging system and the imaged object.
The PD algorithms should only be used for widefield detection. Imaging systems that are point scanning do not have to consider the object in phase retrieval, simplifying aberration estimation. Furthermore, the widefield PD algorithms are incapable of measuring spatial variation without subdividing the FOV. Smaller FOVs are likely to lead to poor aberration estimates, as seen in section 5.3, and re-acquisition would require multiple acquisitions with different DM settings tailored to the different FOVs.
Conclusion
Poisson and Gaussian phase estimation algorithms were implemented and compared in the context of widefield microscopy. The Poisson algorithm is found to match or outperform the Gaussian algorithm for a variety of objects as well as for a range of illumination intensity and aberration levels. The Gaussian algorithm performs better in low-light regimes when image noise is dominated by additive Gaussian noise. The Poisson algorithm is also found to be more robust to the effects of spatially variant aberration and phase noise. Finally, the advantage of re-acquisition with aberration correction over deconvolution using the estimated aberrated PSFs is demonstrated. However, the performance of deconvolution is strong enough that it could be used in situations where no deformable mirror is available.
Pathophysiological and diagnostic importance of fatty acid-binding protein 1 in heart failure with preserved ejection fraction
Elevated intracardiac pressure at rest and/or exercise is a fundamental abnormality in heart failure with preserved ejection fraction (HFpEF). Fatty acid-binding protein 1 (FABP1) is proposed to be a sensitive biomarker for liver injury. We sought to determine whether FABP1 at rest would be elevated in HFpEF and would correlate with echocardiographic markers of intracardiac pressures at rest and during exercise. In this prospective study, subjects with HFpEF (n = 22) and control subjects without HF (n = 23) underwent resting FABP1 measurements and supine bicycle exercise echocardiography. Although levels of conventional hepatic enzymes were similar between groups, FABP1 levels were elevated in HFpEF compared to controls (45 [25–68] vs. 18 [14–24] ng/mL, p = 0.0008). FABP1 levels were correlated with radiographic and blood-based markers of congestion, hemodynamic derangements during peak exercise (E/e’, r = 0.50; right atrial pressure, r = 0.35; pulmonary artery systolic pressure, r = 0.46), reduced exercise cardiac output (r = − 0.49), and poor exercise workload achieved (r = − 0.40, all p < 0.05). FABP1 distinguished HFpEF from controls with an area under the curve of 0.79 (p = 0.003) and had an incremental diagnostic value over the H2FPEF score (p = 0.007). In conclusion, FABP1 could be a novel hepatic biomarker that associates with hemodynamic derangements, reduced cardiac output, and poor exercise capacity in HFpEF.
Results
Baseline characteristics. We enrolled 23 control subjects and 22 HFpEF patients in this study. Compared to control subjects, patients with HFpEF were older and had radiographic and blood-based signs of congestion, evidenced by a higher prevalence of cardiomegaly and greater N-terminal pro-B-type natriuretic peptide (NT-proBNP) levels (Table 1). Sex, body mass index, and prevalence of comorbidities did not differ between HFpEF and control subjects. Pulmonary rales and peripheral edema were rare in both groups. The use of cardiovascular medications was similar between groups. As expected, the H2FPEF score was higher in patients with HFpEF than in controls. Hemoglobin and creatinine levels were similar between HFpEF patients and controls.
Conventional hepatobiliary markers were on average within the normal range in patients with HFpEF and were similar between groups. However, FABP1 levels were elevated in patients with HFpEF compared to controls (Fig. 1A). The difference remained significant after adjusting for either age (p = 0.04) or any hepatobiliary enzymes (all p < 0.05). FABP1 levels were not correlated with conventional hepatobiliary enzymes (all p > 0.15). In contrast, FABP1 levels were directly correlated with radiographic and blood biomarkers of congestion (cardiothoracic ratio, r = 0.33, p = 0.03; and NT-proBNP levels, r = 0.50, p = 0.0005, Fig. 1B).

Table 1. Baseline characteristics. Data are mean ± SD, median (interquartile range), or n (%). ACEI, angiotensin-converting enzyme inhibitors; ALP, alkaline phosphatase; ALT, alanine transaminase; ARB, angiotensin-receptor blockers; AST, aspartate transaminase; NT-proBNP, N-terminal pro-B-type natriuretic peptide; FABP, fatty acid-binding protein; HFpEF, heart failure with preserved ejection fraction; T-bilirubin, total bilirubin; and γGT, γ-glutamyl transferase.

(Fig. 2). Serum aspartate transaminase (AST) was also cor-

LV function, hemodynamics, and their relationships with FABP1 during exercise. With low-level (20 W) and peak exercise, heart rate, systolic BP, and oxygen saturation were similar between groups (Table 3). Compared to control subjects, mitral E velocity, E/e' ratio, PA pressures, and RA pressure were higher and mitral e' and s' velocities were again lower in HFpEF subjects during 20 W and peak exercise (Fig. 3). Subjects with HFpEF displayed lower CO during peak exercise than controls. Levels of FABP1 at rest were correlated with poor exercise capacity as reflected by lower peak watts achieved and shorter exercise duration (r = − 0.

Diagnostic performance of FABP1. As expected, the H2FPEF score and NT-proBNP levels demonstrated good discriminatory abilities for identifying HFpEF, with areas under the curve (AUCs) of 0.72 and 0.88 (p = 0.009 and p < 0.0001), respectively. FABP1 distinguished HFpEF from control subjects with an AUC of 0.79 (p = 0.003) whereas other hepatobiliary markers did not (Table 4). FABP1 had an incremental diagnostic value over the H2FPEF score (global chi-square 14.1 vs. 6.9, p = 0.007).
Sensitivity analyses.
Of the 45 participants, liver sonographic examinations performed within a year of exercise echocardiography were available in six patients. Of these six, two were found to have mild chronic hepatitis and one had mild fatty liver, while no evidence of acute or chronic hepatitis was observed in the others. Sensitivity analyses were then performed excluding the three patients with liver diseases. We found that (1) FABP1 levels remained significantly higher in HFpEF patients than controls (46 [24,70] ng/mL in HFpEF [n = 21] vs. 17 [13,23]
Discussion
We demonstrated, for the first time to our knowledge, the robust relationships between serum FABP1 and echocardiographic measures characterizing HFpEF. We found that FABP1 but not conventional hepatobiliary markers was significantly elevated at rest in patients with HFpEF compared to controls. Interestingly, FABP1 levels were associated with markers of congestion and alteration of parameters for systolic and diastolic reserve, biventricular filling pressures, pulmonary hypertension, and CO during exercise in HFpEF. Additionally, FABP1 levels were associated with the presence of HFpEF, with an incremental diagnostic value over the H2FPEF score. Given that circulating FABP1 is almost exclusively derived from the liver, our data suggest that FABP1 could be a novel hepatic biomarker that associates with hemodynamic derangements, lower cardiac output, and reduced exercise capacity in HFpEF.
Potential mechanisms for FABP1 elevation in HFpEF. Biomarkers provide valuable information to understand the specific pathophysiological pathways that relate to the disease 24 . FABPs are relatively small cytoplasmic proteins (14-15 kDa) abundantly expressed in a tissue-restricted manner; therefore, in response to tissue injury, FABPs diffuse more rapidly through the interstitial space and the endothelial clefts into the circulation than large proteins such as alanine transaminase (ALT) (96 kDa) or AST (90 kDa) 25 . As such, FABP1 could serve as a biomarker for an earlier phase of hepatic injury in which conventional liver markers are not released. In keeping with this notion, our data showed that circulating FABP1 levels were increased in the absence of elevation of AST and ALT. Accordingly, one of the most likely mechanisms for FABP1 elevation in HFpEF patients is a response to early or minimal hepatic injury. However, our simple correlation analyses revealed no significant correlation between FABP1 and AST, ALT, or the other conventional hepatic enzymes, arguing against hepatic injury as the sole mechanism of an increase in circulating FABP1 in HFpEF. We recently found that FABP1 levels were significantly increased during exercise and were significantly correlated with plasma norepinephrine levels in healthy volunteers (manuscript in preparation). These results suggest that exercise induces circulating FABP1 through mechanisms involving sympathetic nervous system activation. Thus, it is intriguing to speculate that an elevation of FABP1 in HFpEF is partly due to a hepatic activation of adrenergic signaling. Further studies are needed to determine the mechanisms underlying the elevation in FABP1 in patients with HFpEF.
A potential link between FABP1, hepatic injury, and hemodynamic derangements during exercise in HFpEF.
HFpEF is a clinical syndrome that can be characterized by reduced cardiovascular reserve which leads to an elevation in LV filling pressure and secondary pulmonary hypertension during exercise 2,26 . The current study demonstrated correlations of FABP1 levels with radiographic and blood markers of congestion and echocardiographic evidence of hemodynamic derangements (higher E/e' ratio, PASP, and eRAP during peak exercise). The cross-sectional design of our study cannot determine whether hemodynamic derangements caused hepatic injury to promote elevations in circulating FABP1 levels, or whether FABP1 directly worsened LV diastolic function and hemodynamics during exercise. It has been shown that FABP1 is an effective endogenous cytoprotectant, minimizing hepatocyte oxidative damage 27 . The elevation of circulating FABP1 may represent a compensatory mechanism to counteract oxidative stress and inflammation in the liver 21 . Based on these data and ours, we speculate that systemic venous congestion secondary to the elevation in right heart pressures may lead to hepatocyte injury to promote up-regulation of FABP1. Further studies are warranted to determine the mechanisms underlying hepatic injury in HFpEF.
Diagnostic implications. Diagnosis of HFpEF in people presenting with chronic dyspnea is challenging 6,7,10 .
Assessments of clinical characteristics, chest radiography, echocardiography, and blood biomarkers play an important role in the diagnostic evaluation of HFpEF. Natriuretic peptides are the most commonly used blood-based biomarker to facilitate the diagnosis of HFpEF, and a recent guideline statement from the ESC has proposed a scoring system based upon echocardiographic markers of diastolic function as well as natriuretic peptides to determine whether HFpEF is present 7 . However, there are well-known limitations of natriuretic peptides, such as an underestimation in obese patients [28][29][30] . This makes the identification of novel biomarkers that relate to greater elevations of cardiac filling pressures or the presence of HFpEF a high priority. It is therefore noteworthy that FABP1 levels were elevated in patients with HFpEF compared to controls, were well correlated with echocardiographic markers of elevated cardiac filling pressures and PA pressures during exercise, and predicted the presence of HFpEF. Although the diagnostic ability of FABP1 was lower than that of NT-proBNP, FABP1 had an incremental value for identifying HFpEF over the established H2FPEF score. The current data suggest that FABP1 could be a candidate biomarker to help identify HFpEF among patients with chronic dyspnea. Further large-scale studies are required to validate these findings and establish a cut-off for FABP1 levels to allow for incorporation into current diagnostic practice.
Limitations. This is a single-center study from a tertiary referral center. All participants were referred for exercise stress echocardiography for the evaluation of unexplained exertional dyspnea, introducing selection and referral bias. Although the current study and our previous one both focused on FABP1 levels in HF, the two studies are essentially different in three main respects: the aim, the study design, and the population. The primary aim of the previous study was to determine the prognostic value of FABP1 in HF 23 ; in other words, the previous study was a longitudinal outcome study in design. In contrast, the present study was a cross-sectional study to investigate whether FABP1 levels would correlate with echocardiographic markers of intracardiac pressures during supine bicycle exercise. Regarding the population, the previous study included HF patients regardless of EF and control subjects who were referred to coronary angiography, whereas the present study included HFpEF patients and control subjects who were referred to exercise stress echocardiography for the evaluation of unexplained dyspnea. The pathophysiologic role of FABP1 in HF with reduced EF was beyond our scope. Patients with liver disorders were excluded from the analysis based on liver enzymes. Liver sonographic data were available in six patients, of whom two had mild chronic hepatitis and one had mild fatty liver. We cannot exclude the possibility that some patients who did not undergo liver sonography might have had a liver disease that was not evident from liver enzymes 31 . The presence of hepatitis could have biased the results, although key results remained similar even after excluding the three patients. The control group was not normal, as these subjects were referred for exercise stress echocardiography in the evaluation of exertional dyspnea and had comorbidities such as hypertension and interstitial pneumonia and relatively higher FABP1 levels than healthy controls, which could also bias the results 22 . However, the fact that the control population was more diseased than a truly normal healthy control population only biases our data toward the null. Given the presence of exertional dyspnea and comorbidity burden, control subjects might be considered as pre-HFpEF, and the inclusion of controls might add greater insight into the continuous relationships between the magnitude of FABP1 elevations and cardiac abnormalities across the spectrum from risk to frank HFpEF. We cannot conclude whether these observations were specific to HFpEF or may also be observed in other disorders that raise right heart pressures, such as HF with reduced EF or non-Group II pulmonary artery hypertension. Further studies are required to address this question. The small sample size of this study does not allow us to simultaneously adjust for multiple factors when analyzing the diagnostic power of FABP1 to distinguish HFpEF from controls.
Conclusions. Serum FABP1 levels are elevated in early HFpEF and the magnitude of elevation is associated with echocardiographic markers of elevated LV filling pressure and PA pressures, systemic congestion, and lower workload. The present study suggests that FABP1 may serve as a potential hepatic biomarker that associates with hemodynamic perturbation, lower cardiac output, and reduced exercise capacity in HFpEF, and that FABP1 may help distinguish HFpEF among subjects with dyspnea. Further studies are required to confirm the current findings.
Material and methods
Study population. This was a cross-sectional study that assessed the association between serum FABP1 levels and Doppler echocardiographic hemodynamics at rest and during supine bicycle exercise. We prospectively enrolled consecutive subjects who were referred to our echocardiographic laboratory for exercise stress echocardiography for the evaluation of unexplained exertional dyspnea between November 2019 and June 2020. (2) no objective evidence of elevated left heart filling pressures at rest and with exercise (criteria above). Subjects with EF < 50%, significant left-sided valvular heart disease (> moderate regurgitation, > mild stenosis), infiltrative, restrictive, or hypertrophic cardiomyopathy, non-Group II pulmonary artery hypertension or exercise-induced pulmonary hypertension without elevation of E/e' (mPAP with exercise > 30 mmHg with a total pulmonary resistance [i.e., mPAP/CO] of > 3 mmHg min/L) 33,34 , and significant liver disorders (any acute or chronic liver diseases, defined by serum levels of transaminases more than three times the upper limit of normal) were excluded. There was no overlap with our previous study focusing on the prognostic value of FABP1 in HF patients regarding the study subjects 23 . The data underlying this article will be shared on reasonable request to the corresponding author.
Biomarker measurements. Venous blood samples were obtained just before the assessment of exercise stress echocardiography. Serum FABP1 levels were measured using a commercially available enzyme-linked immunosorbent assay (ELISA) kit (abcam, Cambridge, UK). As specified by the manufacturer, the lower limits of detection of serum FABP1 were 9.4 pg/mL. Serum NT-proBNP levels were also determined using another ELISA kit (abcam, Cambridge, UK). Serum hemoglobin, hepatobiliary enzymes, creatinine, glucose, and lipid profiles were measured by routine automated laboratory procedures.
Transthoracic echocardiography. Comprehensive resting echocardiography was performed by experienced sonographers using a commercially available ultrasound system (Vivid E95, GE Healthcare, Horten, Norway). LV volumes and EF were determined using apical 4-chamber views 35 . LV systolic function was assessed based on the EF and the systolic mitral annular tissue velocity at the septal annulus (mitral s'). LV diastolic function was assessed using the E, e', and the E/e' ratio. Left atrial volume was determined using the biplane method of disks. Stroke volume (SV) was determined from the LV outflow dimension and pulse wave Doppler profile. CO was calculated as the product of heart rate and SV. RV systolic function was assessed using TAPSE and TV s'. RA pressure was estimated from the diameter of the inferior vena cava and its respiratory change. PASP was calculated as 4 × (peak tricuspid regurgitation [TR] velocity)^2 + estimated RA pressure. The mPAP was calculated as 0.61 × PASP + 2 36 . Subjects underwent supine cycle ergometry echocardiography, starting at 20 W for five minutes, increasing in 20 W increments in three-minute stages to subject-reported exhaustion. Echocardiographic images were obtained at baseline and during all stages of exercise. All Doppler measures represent the mean of ≥ three beats. All studies were interpreted offline and in a blinded fashion by a single investigator (M.O.).
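To make the derived quantities concrete, the sketch below computes SV, CO, PASP, and mPAP from typical echo inputs. The LVOT-area-times-VTI formula for SV is the standard Doppler approach and is an assumption here, since the text does not spell out the exact formula; the PASP and mPAP relations follow the text.

```python
import numpy as np

def derived_hemodynamics(hr_bpm, lvot_diam_cm, lvot_vti_cm, tr_vmax_ms, ra_pressure_mmhg):
    """Derived echo hemodynamics: SV, CO, PASP, and mPAP.

    SV is taken as LVOT cross-sectional area times the LVOT velocity-time
    integral (standard Doppler formula; an assumption, not stated in the text).
    PASP = 4 * (peak TR velocity)^2 + RA pressure, and mPAP = 0.61 * PASP + 2,
    as described in the text.
    """
    lvot_area = np.pi * (lvot_diam_cm / 2.0) ** 2        # cm^2
    sv_ml = lvot_area * lvot_vti_cm                       # mL per beat
    co_lmin = sv_ml * hr_bpm / 1000.0                     # L/min
    pasp = 4.0 * tr_vmax_ms ** 2 + ra_pressure_mmhg       # mmHg
    mpap = 0.61 * pasp + 2.0                              # mmHg
    return {"SV_mL": sv_ml, "CO_L_min": co_lmin, "PASP_mmHg": pasp, "mPAP_mmHg": mpap}

# Example (hypothetical values): HR 70 bpm, LVOT 2.0 cm, VTI 20 cm, TR 2.8 m/s, RA 5 mmHg.
print(derived_hemodynamics(70, 2.0, 20, 2.8, 5))
```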
Redshifted methanol absorption tracing infall motions of high-mass star formation regions
Gravitational collapse is one of the most important processes in high-mass star formation. Compared with the classic blue-skewed profiles, redshifted absorption against continuum emission is a more reliable method to detect inward motions within high-mass star formation regions. We aim to test if methanol transitions can be used to trace infall motions within high-mass star formation regions. Using the Effelsberg-100 m, IRAM-30 m, and APEX-12 m telescopes, we carried out observations of 37 and 16 methanol transitions towards two well-known collapsing dense clumps, W31C (G10.6-0.4) and W3(OH), to search for redshifted absorption features or inverse P-Cygni profiles. Redshifted absorption is observed in 14 and 11 methanol transitions towards W31C and W3(OH), respectively. The infall velocities fitted from a simple two-layer model agree with previously reported values derived from other tracers, suggesting that redshifted methanol absorption is a reliable tracer of infall motions within high-mass star formation regions. Our observations indicate the presence of large-scale inward motions, and the mass infall rates are roughly estimated to be $\gtrsim$10$^{-3}$ $M_{\odot}$yr$^{-1}$, which supports the global hierarchical collapse and clump-fed scenario. With the aid of bright continuum sources and the overcooling of methanol transitions leading to enhanced absorption, redshifted methanol absorption can trace infall motions within high-mass star formation regions hosting bright HII regions.
Introduction
Infall motions provide direct evidence of mass accretion and their observation plays a crucial role in star formation research. The classic blue-skewed profile is commonly taken as evidence of infall motions (e.g. Leung & Brown 1977; Zhou et al. 1993). However, the interpretation of the blue-skewed profile as an infall signature can be impaired by kinematic peculiarities (e.g. outflow and rotation) and chemical abundance variations. This situation becomes even more severe when analysing spectra obtained towards complex high-mass star formation regions by single-dish observations with a resolution of about 10″ (corresponding to 0.24 pc at a distance of 5 kpc), because they incorporate emission which is statistically located at farther distances.
Detecting redshifted absorption or inverse P-Cygni profiles towards background continuum sources turns out to be a more straightforward and reliable method to identify infall motions associated with high-mass young stellar objects. Such redshifted absorption has already been detected in lines from a few molecules (e.g. NH3, Keto et al. 1987a, 1988; Ho & Young 1996; Zhang & Ho 1997; HCO+, Welch et al. 1987; CS, Zhang et al. 1998; H2CO, Di Francesco et al. 2001) against strong radio- and millimetre-wavelength continuum emission. However, to date,
such absorption has only been found in a couple of sources with interferometric observations (with typical angular resolutions of <10″). Redshifted absorption in sub-millimetre-wavelength rotational lines from NH3 and H2O against dust continuum emission has been detected in a number of objects (Wyrowski et al. 2012; van der Tak et al. 2019). However, these transitions are not easily observed using ground-based telescopes because their frequencies are largely blocked by the Earth's atmospheric absorption. Furthermore, these high frequency measurements by Herschel and SOFIA are rather expensive and only a very limited number of sources have been studied so far, leaving extensive searches of these redshifted absorption features towards a larger sample of sources unfeasible. Therefore, salient tracers that are more accessible by ground-based telescopes can help to gauge reliable infall motions in a much larger sample of high-mass star formation regions.
The methanol (CH3OH) molecule has numerous transitions, including many maser lines, within the centimetre- and millimetre-wavelength ranges and hence provides a powerful tool to investigate star-forming regions. Early studies divided CH3OH masers into two categories (Batrla et al. 1987; Menten 1991b) based on their different observational properties and pumping mechanisms. Class II CH3OH masers are found in close proximity to infrared sources, OH masers, and ultracompact Hii (UCHii) regions, and radiative pumping is believed to be responsible for their pumping (e.g. Menten 1991a; Cragg et al. 2005). In contrast, class I CH3OH masers are thought to trace shocked regions and are produced by collisional pumping (e.g. Leurini et al. 2016). Experimentally determined rate coefficients for collisions between CH3OH and H2 or helium (Lees & Haque 1974; Rabli & Flower 2010) have shown a tendency for the final state of a collisional excitation process of a CH3OH molecule to have the same k quantum number as the initial state, that is, a propensity for ∆k = 0 collisions. This leads, for E-type CH3OH (see footnote 1), to an overpopulation of the energy levels in the k = −1 ladder relative to the levels in the k = 0 (or the k = −2) ladder that they can radiatively decay to (see Fig. 1). For transitions originating in the k = −1 ladder that have their upper energy levels in this ladder, that is, those with J_upper > 4 (line numbers 3-8 in Table A.1), this results in maser emission, while for 3, 2 and 1, it leads to enhanced absorption (anti-inversion or overcooling), in particular for the 12.2 GHz 2_−1 − 3_0 E line (line number 2). An analogous situation applies for A-type CH3OH, for which the levels in the K = 0 ladder are overpopulated relative to those in the K = 1 ladder (see Fig. 1), causing maser action for levels with J_upper > 7 (line numbers 27 and 28) and enhanced absorption in the 6.7 GHz 5_1 − 6_0 A+ (line number 26) transition. In the presence of a strong mid-infrared field, radiative pumping to torsionally excited states disturbs this simple picture and causes class II methanol maser action (Sobolev & Deguchi 1994; Sobolev et al. 1997). We note that the two strongest class II CH3OH lines show, as explained above, absorption in regions that are conducive to class I maser excitation, which are often significantly removed from centres of activity marked by strong compact radio or IR sources (see, e.g. Menten et al. 1986a). The actual observational picture can be more complex, since in a single-dish telescope beam the signals from different regions can be merged, which is, among others, the case for the 12.2 GHz line in W31C, which shows absorption in addition to maser emission (see Sec. 3.1). These effects successfully explain the observed bright class I methanol masers (e.g. Leurini et al. 2016) and overcooling, which enhances the absorption lines' detectability, in particular when strong background continuum emission is present, although absorption against the cosmic microwave background radiation is observed even in the 12.2 GHz line and even in the 6.7 GHz line, whose lower energy level is 49 K above the ground (Pandian et al. 2008). Absorption in the 6.7 GHz and 12.2 GHz CH3OH transitions is indeed detected towards several sources, several of which host class I methanol masers (e.g. NGC2264 and DR21-W; Menten 1991a; Ortiz-León et al. 2021), dark clouds (TMC1 and L183; Walmsley et al. 1988), hot corinos (NGC1333; Pandian et al.
2008), molecular clouds and Hii region complexes (Peng & Whiteoak 1992), and the Galactic Centre regions (Sgr B2, Sgr A and G1.6−0.025; Whiteoak et al. 1988; Houghton & Whiteoak 1995). These facts suggest that methanol transitions that show absorption could have potential to aid in studying infall motions in high-mass star formation regions.

1 Methanol exists in two separate (non-interconverting) symmetry species: E-type and A-type CH3OH. Its energy levels are described by the total angular momentum quantum number J and its projection on the near symmetry (O-C) axis, which for the doubly degenerate E-type levels is lower case k (with −J ≤ k ≤ +J). This leads to ladders of levels that have the same k, but (bottom to top) an increasing value of J. For A-type CH3OH, upper case K is used for the latter quantum number (with 0 ≤ K ≤ J) and, except for the K = 0 ladder, the levels are split in pairs with opposite parity.

Two high-mass star formation regions, W31C and W3(OH), have been selected for this work. W31C, also known as G10.6−0.4, is an UCHii region located at a distance of 4.95 (+0.51/−0.43) kpc (Sanna et al. 2014). Based on previous dust spectral energy distribution (SED) fitting, this region is found to have a mass of ∼2×10^4 M_sun and a luminosity of ∼3×10^6 L_sun (e.g. Lin et al. 2016). W31C shows evidence for infall motions in both molecular gas (e.g. NH3, Ho & Haschick 1986; Keto et al. 1987a, 1988; HCO+, Klaassen & Wilson 2008; high-J CO, Wyrowski et al. 2006) and ionised gas (in the H66α radio recombination line, Keto 2002), which is well modelled by the gravitational collapse of a centrally condensed and centrally heated core (e.g. Keto et al. 1987a). A series of high resolution observations present the overall picture of a hierarchically collapsing system for W31C (Liu et al. 2010a,b, 2011, 2012; Liu 2017).
The W3(OH) complex is located at a distance of 1.95 ± 0.04 kpc (Xu et al. 2006), and consists of two high-mass objects, W3(OH) and W3(H2O). W3(OH) is associated with an UCHii region ionised by an O7 star (e.g. Dreher & Welch 1981). In contrast, W3(H2O), located 6″ (0.06 pc) east of W3(OH) (e.g. Zapata et al. 2011; Qin et al. 2015), is in an earlier stage of massive star formation and hosts a (double) hot molecular core (Wyrowski et al. 1999; Ahmadi et al. 2018). On larger (≤ 1 pc size) scales, both single-pointing spectral line observations and mapping reveal infall motions in the W3(OH) complex (e.g. Wu & Evans 2003; Sun & Gao 2009; Wu et al. 2010). Keto et al. (1987b) reported that the velocity, density and temperature structure in the W3(OH) complex is similar to that in W31C. That a blue-skewed profile is observed in the HCN (3−2) line over a region of 110″ × 110″ (1.1 × 1.1 pc; Qin et al. 2016) further supports the notion that the W3(OH) complex is overall undergoing large-scale collapse like W31C. These properties make W31C and W3(OH) excellent test beds to examine the suitability of other spectral lines for tracing infall.
In this work, we present observations of multiple methanol transitions towards W31C and W3(OH) to test whether methanol transitions can trace infall motions. These data were collected using the Effelsberg-100 m, the IRAM-30 m, and the Atacama Pathfinder Experiment (APEX) 12 metre telescopes. We note that both W3(OH) and W3(H 2 O) are covered within the same beam of our single-dish observations. Here we use W3(OH) to refer to the W3(OH) complex. Our observations and data reduction are described in Sec. 2. Sec. 3 presents the results, which are discussed in Sec. 4. A summary of this work is presented in Sec. 5.
Observations and data reduction
We observed 37 CH 3 OH transitions towards W31C with rest frequencies ranging from 6 GHz to 236 GHz. The observations were undertaken between 2011 and 2021 using the Effelsberg-100 m, IRAM-30 m, and APEX-12 m telescopes. Information on the observed transitions is given in Table A.1, and these transitions are denoted by arrows in the methanol energy level diagram (see Fig. 1). We adopt the most accurate available rest frequencies, most of which are taken from Müller et al. (2004). We note that a given error in frequency will lead to a larger error in velocity for transitions with a lower rest frequency. For all transitions in Table A.1, the corresponding errors in velocity range between 0.002 km s −1 (at 25.294 GHz) and 0.392 km s −1 (at 38.3 GHz). We adopt the prescription of Shirley (2015) to estimate the optically thin critical density for each methanol transition. The spectroscopic data for the CH 3 OH lines are taken from the Leiden Atomic and Molecular Database (LAMDA 2 ) with a data file version of 2021 April 12. Because the gas kinetic temperatures of W31C and W3(OH) are about 100 K (Mangum & Wootten 1993; Tang et al. 2018), the optically thin critical density is calculated by assuming a gas kinetic temperature of 100 K for this work. The optically thin critical densities are determined to be 2×10 4 -6×10 6 cm −3 , spanning two orders of magnitude. Observational parameters are given in Table A.2. Four CH 3 OH transitions, those at 84, 95, 108 and 111 GHz, were targeted towards an ATLASGAL source (AGAL010.624−00.384, at α J2000 = 18 h 10 m 28.6 s , δ J2000 = −19° 55′ 46″) near W31C; the other 33 transitions were observed towards the UCHii region G010.6234−0.3837 at α J2000 = 18 h 10 m 28.70 s , δ J2000 = −19° 55′ 49.8″ (Yang et al. 2019, 2021). The angular offset between the two positions is 4″, which is less than one fifth of the IRAM-30 m beam size at 3 mm. We carried out K-band (18-26 GHz) observations towards W3(OH) on 2021 May 13 using the Effelsberg-100 m. The telescope was pointed towards the position α J2000 = 02 h 27 m 04.38 s , δ J2000 = 61° 52′ 20.5″, used by Menten et al. (1986a), who reported absorption in the 25 GHz J 2 − J 1 E CH 3 OH lines. The methanol transitions at 95.914, 97.582 and 143.865 GHz were observed on 2019 June 11 using the IRAM-30 m. The telescope was pointed towards the position of the UCHii region W3(OH), that is α J2000 = 02 h 27 m 03.90 s , δ J2000 = 61° 52′ 24.0″, which is about 5″ offset (nearly one seventh of the Effelsberg-100 m beam size at 25 GHz) from the position that we used in the K-band observations.
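The velocity uncertainties quoted above follow directly from the radio Doppler relation, Δv = c·Δν/ν. The short Python sketch below illustrates the conversion; the 50 kHz frequency uncertainty used in the example is an assumed, illustrative value, not one taken from the line catalogues.

```python
# Velocity uncertainty implied by a rest-frequency uncertainty (radio Doppler relation).
C_KMS = 299792.458  # speed of light [km/s]

def dv_from_dnu(dnu_hz, nu_hz):
    """Delta-v [km/s] corresponding to a frequency uncertainty dnu at rest frequency nu."""
    return C_KMS * dnu_hz / nu_hz

# Example with an assumed 50 kHz uncertainty at 38.3 GHz (illustrative numbers only):
print(round(dv_from_dnu(50e3, 38.3e9), 3))  # ~0.391 km/s
```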
Given that the offsets between different positions are much smaller than the beam sizes of our observations, the position offsets are neglected in the following analysis. Details of the observations and data reduction for each telescope are described below.
Effelsberg-100m observations
Using the dual-polarisation S7mm Double Beam RX secondary focus receiver on the Effelsberg-100 m telescope 3 , we conducted Q-band observations towards W31C (α J2000 = 18 h 10 m 28.70 s , δ J2000 = −19° 55′ 49.8″) to observe methanol transitions ranging from 34.5 to 39.5 GHz on 2020 September 14 and from 40 to 45 GHz on 2020 May 30 (project id: 77-19). Towards W31C, the 6.7 GHz and 12.2 GHz methanol transitions were observed on 2021 January 5 using the S45mm and S20mm secondary focus receivers, while K-band observations (18-26 GHz) were performed towards both W3(OH) (α J2000 = 02 h 27 m 04.38 s , δ J2000 = 61° 52′ 20.5″) and W31C on 2021 May 13 and 14 (project id: 17-21). All these observations were made in the position switching mode with an off-position offset of 10′ in right ascension. Fast Fourier Transform Spectrometers (FFTSs) were used as the backend to record signals. For the 6.7 GHz and 12.2 GHz methanol transitions, we used high spectral resolution FFTSs which provide channel separations of 3.1 kHz and 4.6 kHz at 6.7 GHz and 12.2 GHz, corresponding to velocity spacings of 0.14 km s −1 and 0.11 km s −1 , respectively. For the other methanol transitions, we adopted broad-bandwidth FFTSs which cover an instantaneous bandwidth of 2.5 GHz with 65536 channels, yielding a channel separation of 38.1 kHz and a corresponding velocity spacing of 0.33 km s −1 at 35 GHz (multiply by 1.16 to convert to velocity resolution, see Klein et al. 2012).
NGC 7027 and 3C 286 were used for pointing, focus and flux density calibration, and the pointing uncertainty was better than 10″, 5″, 5″, and 10″ at 6.7 GHz, 12.2 GHz, 24 GHz, and 40 GHz, respectively. The observed methanol transitions, corresponding observation dates, full width at half maximum (FWHM) beam sizes, velocity resolutions, the 1σ rms noise levels at the corresponding velocity resolutions, main beam efficiencies, η mb , and the scaling factors to convert antenna temperature, T * A , to flux density, S ν , are listed in Table A.2. For the IRAM-30 m observations, see Csengeri et al. (2016) for details (project id: 181-10). The observational parameters including the main beam efficiency, the scaling factor, and the 1σ rms noise level of each transition are listed in Table A.2.
APEX-12 m observations
We observed methanol transitions ranging from 209.2 to 220.9 GHz and from 225 to 236.7 GHz towards W31C (α J2000 = 18 h 10 m 28.70 s , δ J2000 = −19° 55′ 49.8″) with the PI230 receiver at the APEX 4 telescope on 2019 June 1 and June 9 (project id: 0103.F-9516A). The PI230 receiver 5 is a dual-polarisation and sideband-separating heterodyne system that covers a frequency range of 200−270 GHz. The FFTS backend provides 65536 channels over a 4 GHz bandwidth, and an overlap of 0.2 GHz between the two backends results in a total coverage of 7.8 GHz for each sideband and polarisation per tuning. The wobbler switching mode with a throw of 120″ in azimuth was used for our observations. The pointing was accurate to 2″. The main beam efficiency, the scaling factor, and the rms noise level after an on-source integration time of 5 minutes are listed in Table A.2.
Data reduction
The CLASS programme that is part of the GILDAS package (Pety 2005; Gildas Team 2013) was used for processing all of our data. For the observations performed with the 7 mm dual-beam receiver, we only analysed the data from the beam that tracked the selected target positions. Because radio frequency interference caused a poor baseline in the 12.2 GHz spectrum of W31C, a fourth-order polynomial baseline was subtracted, with the baseline fitting windows determined over a velocity range of 60 km s −1 around the systemic velocity. A low-order (<4) polynomial baseline was subtracted from the other spectra.
To improve the signal-to-noise ratios (S/Ns), Hanning smoothing was performed on the spectra obtained with the APEX-12 m telescope. Velocities are given with respect to the local standard of rest (LSR) throughout this work.
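The actual baseline subtraction and smoothing were performed in GILDAS/CLASS; the sketch below only mimics the two operations described above in Python/numpy. The velocity window and polynomial order are placeholders chosen for illustration, not the exact CLASS settings.

```python
import numpy as np

def subtract_poly_baseline(vel, spec, exclude=(-33.0, 27.0), order=4):
    """Fit a polynomial baseline to line-free channels and subtract it.

    `exclude` marks an assumed velocity window around the line (here roughly
    +/-30 km/s about a systemic velocity of -3 km/s) that is masked out
    before fitting, mimicking the baseline windows described in the text.
    """
    vel, spec = np.asarray(vel, float), np.asarray(spec, float)
    mask = (vel < exclude[0]) | (vel > exclude[1])   # line-free channels only
    coeffs = np.polyfit(vel[mask], spec[mask], order)
    return spec - np.polyval(coeffs, vel)

def hanning_smooth(spec):
    """Three-channel Hanning smoothing (0.25, 0.5, 0.25 kernel)."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(spec, kernel, mode="same")
```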
Results
Our observations resulted in the detection of 37 methanol transitions in W31C and 16 methanol transitions in W3(OH). The observational results for the two sources are given separately in the following.
Detection of 14 redshifted methanol absorption lines towards W31C
Figures 2-4 present the spectra of the 37 CH 3 OH transitions observed towards W31C, showing diverse line shapes. Gaussian fitting was performed to characterise the spectral features of each transition. Single-peaked spectra observed in emission are fitted with a single Gaussian component. We used a single Gaussian component to fit the absorption signal for the transitions at 19.9, 20.1, 23.4, 37.7, 38.3, 38.5, 85.6, 86.6 and 86.9 GHz, in order to clearly characterise the absorption feature. For other spectra with multiple emission peaks and absorption dips, we employed multiple Gaussian components for the fitting. The results of the fitting are listed in Table A.3. Among the observed spectra, we find 12 methanol transitions showing single-peaked emission profiles. These are the 104, 107, 108, 111, 216, 218, 220, 229, 230, 231, 232, and 234 GHz lines. Although the 104, 107, 108, 216, 218, 229, 231 and 232 GHz lines have been found to show maser action towards other sources (e.g. Val'tts et al. 1999; Slysh et al. 2002; Salii & Sobolev 2006; Voronkov et al. 2006; Ellingsen et al. 2012; Hunter et al. 2014), the observed broad widths of the lines are similar to the widths of lines from other species, which indicates that they are dominated by thermally excited emission towards W31C. Therefore, we use the fitted velocities of these 12 transitions to derive the systemic velocity of W31C. Adopting inverse-variance weighting, we use the weighted mean velocity as the systemic velocity, and it is determined to be V LSR =−3.43±0.44 km s −1 . The systemic velocity we adopt is consistent with the value of ∼ −3 km s −1 used in previous studies (e.g. Keto 1990; Liu et al. 2010a; Kim et al. 2020). In Figure 5, it can be seen that the velocity centroids of these 12 methanol transitions become more redshifted with increasing energy level and critical density, which indicates that there is a velocity gradient from the outer envelope to the centre.
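The systemic velocity is obtained as an inverse-variance weighted mean of the 12 fitted centroids. A minimal sketch of that bookkeeping is given below; the centroid values are placeholders rather than the entries of Table A.3, and since the text does not state whether the quoted ±0.44 km s−1 is the formal error or the scatter of the centroids, the function returns both.

```python
import numpy as np

def weighted_mean_velocity(v, dv):
    """Inverse-variance weighted mean of line centroids.

    Returns (mean, formal error, weighted scatter of the centroids).
    """
    v, dv = np.asarray(v, float), np.asarray(dv, float)
    w = 1.0 / dv**2
    mean = np.sum(w * v) / np.sum(w)
    err_formal = np.sqrt(1.0 / np.sum(w))
    scatter = np.sqrt(np.sum(w * (v - mean) ** 2) / np.sum(w))
    return mean, err_formal, scatter

# Placeholder centroids (km/s) and 1-sigma errors for the 12 thermal lines:
v_fit = [-3.2, -3.7, -3.4, -3.5, -3.1, -3.6, -3.3, -3.8, -3.2, -3.5, -3.4, -3.6]
dv_fit = [0.2, 0.3, 0.1, 0.2, 0.4, 0.3, 0.2, 0.5, 0.3, 0.2, 0.3, 0.4]
print(weighted_mean_velocity(v_fit, dv_fit))
```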
Absorption features are detected in 14 transitions: the 6.7, 12.2, 19.9, 20.1, 23.1, 23.4, 25.541, 25.878, 37.7, 38.3, 38.5, 85.6, 86.6 and 86.9 GHz lines (see Column 3 in Table A.1). Absorption in five of these transitions (6.7, 12.2, 19.9, 23.1 and 23.4 GHz) towards W31C has been discussed in previous studies (Wilson et al. 1985; Menten 1991b; Peng & Whiteoak 1992; Caswell et al. 1995b; Breen et al. 2010, 2014). The remaining nine absorption profiles at 20.1, 25.541, 25.878, 37.7, 38.3, 38.5, 85.6, 86.6 and 86.9 GHz are reported here for the first time. Furthermore, the CH 3 OH transitions at 19.9, 25.541, 25.878 and 85.6 GHz show a typical inverse P-Cygni profile. The absorption features have a large scatter in their LSR velocities, ranging from −1.91 km s −1 to 0.43 km s −1 (see Table A.3). Both the 5 1 − 6 0 A + and 2 0 − 3 −1 E lines at 6.7 GHz (Fig. 3b) and 12.2 GHz (Fig. 2b) show broad absorption and maser features. The absorption features in these two transitions lie in the velocity range −2 to −1 km s −1 and are therefore redshifted with respect to the systemic velocity of the source. The line width of the absorption feature at 12.2 GHz is the broadest we have detected for absorption towards W31C. The 6.7 GHz spectral profile shows absorption sandwiched between two maser features, which is consistent with previous observations (Menten 1991b; Caswell et al. 1995b). Breen et al. (2010, 2014) reported the detection of a 12.2 GHz maser. In their spectra obtained in 2008, the intensity of the absorption feature is about −0.5 Jy, similar to our result. On the other hand, the peak flux density of the maser feature at 4.6 km s −1 is 1.4 Jy, which is nearly three times our value. This indicates that the 12.2 GHz maser flux density has declined over the past 13 years. The 19.9 GHz line (Fig. 4a) shows an inverse P-Cygni profile with an emission feature at about −6.6 km s −1 and an absorption feature at about −0.9 km s −1 . The 23.1 GHz line (Fig. 4c) exhibits one prominent absorption feature at about −0.74 km s −1 . These two spectra are consistent with previous studies (Wilson et al. 1985). We adopted a rest frequency of 23444.778 MHz (Mehrotra et al. 1985; Voronkov et al. 2011) for the 10 1 − 9 2 A − line, which is 42 kHz lower than the value used by Menten et al. (1986b). Thus, to compare with the velocity of the absorption feature observed by Menten et al. (1986b), a value of −0.54 km s −1 should be subtracted from the velocity they reported, which shifts the peak absorption to 0.26 km s −1 and reduces the difference with our result to 0.77 km s −1 . The remaining difference may be due to the noisy spectra and slightly different fitting methods. Despite the low S/N ratio in the 20.1 GHz line (Fig. 4b), its absorption feature has a similar velocity to those of the 19.9, 23.1 and 23.4 GHz lines, indicating that it is a reliable detection.
A series of nine J 2 − J 1 lines (J from 2 to 10) of E-type methanol near 25 GHz is detected by our observations. Both the 9 2 − 9 1 E line at 25.541 GHz (Fig. 4l) and the 10 2 − 10 1 E line at 25.878 GHz (Fig. 4m) exhibit typical inverse P-Cygni profiles with absorption dips at about 0.1 km s −1 . We note that such absorption dips are also present in the other J 2 − J 1 lines at a nearly identical velocity but blended with emission, which makes them deviate from the typical inverse P-Cygni profile. The dips appear to become more prominent with increasing J. The presence of narrow 25 GHz class I methanol maser emission also makes the profiles more complex for the 5 2 − 5 1 E line at 24.959 GHz, the 6 2 −6 1 E line at 25.018 GHz and the 7 2 −7 1 E line at 25.124 GHz (see Figs. 4h-4j). Menten et al. (1986a) reported the detection of five J 2 − J 1 E lines (J = 2, 3, 4, 5, 6) towards W31C, but the maser emission peaks at a location 9″ offset from our pointing centre. Nevertheless, the masers should be covered by our Effelsberg beam of ∼34″, and they give rise to the observed narrow spikes between −6.0 km s −1 and −8.0 km s −1 in our spectra of the 5 2 − 5 1 E, 6 2 − 6 1 E and 7 2 − 7 1 E transitions (see Figs. 4h-4j). On the other hand, likely influenced by our offset position, the flux densities are much lower than the values reported in Menten et al. (1986a). In addition to the position difference, our low spectral resolution may further reduce the measured peak flux densities.
An absorption feature with a velocity of −0.68 km s −1 and a flux density of −0.89 Jy was detected in the 7 −2 − 8 −1 E line at 37.7 GHz (see Fig. 2g); this is the strongest absorption feature among our detected absorption lines. Both 6 2 − 5 1 A ∓ lines (from levels of different parity) at 38.3 and 38.5 GHz (see Figs. 3e-3f) have a narrow emission spike with a velocity of −1.9 km s −1 that is superimposed on a broad absorption feature. Similar to the 9 2 − 9 1 E and the 6 2 − 6 1 E methanol transitions near 25 GHz in W3(OH) (Menten et al. 1986a), the narrow spike may arise from a weak maser. The velocities of the absorption features in the 38.3 and 38.5 GHz transitions match that of the 37.7 GHz line. The line widths of the absorption features in the 38.3 and 38.5 GHz transitions are ∼4.8 km s −1 , which is slightly narrower than the line width (∼5.35 km s −1 ) of the 37.7 GHz line. Ellingsen et al. (2011) also searched for the 37.7, 38.3 and 38.5 GHz CH 3 OH lines towards W31C but detected nothing, owing to their rms noise of ∼1.2 Jy at a channel width of 0.27 km s −1 . On the other hand, Ellingsen et al. (2018) discovered an absorption feature in the 37.7, 38.3 and 38.5 GHz transitions towards G337.705−0.053, but their velocities are not redshifted with respect to its systemic velocity as traced by NH 3 (1,1). Our observations are therefore the first detection of redshifted absorption in these three lines.
Absorption features in the methanol transitions at 85.6, 86.6 and 86.9 GHz are detected for the first time in this work. The CH 3 OH transition at 85.6 GHz (Fig. 2g) clearly shows an inverse P-Cygni profile. The emission feature peaks at about −4 km s −1 , and the absorption feature lies at a velocity of about 0.22 km s −1 . Despite the low S/N ratios in the CH 3 OH spectra at 86.6 and 86.9 GHz (Figs. 3g-3h), their shapes are indicative of an inverse P-Cygni profile. The velocities and line widths of the absorption features of the 86.6 and 86.9 GHz lines are similar to those of the 85.6 GHz line, but their absorption line strengths are much weaker than that of the 85.6 GHz line. This could be due to a lower excitation temperature of the 85.6 GHz line.
The most common class I CH 3 OH transitions at 36, 44, 84 and 95 GHz show strong and narrow maser features, and the velocities of these maser features range from −7 to −6 km s −1 (see Table A.3). Therefore, all the class I methanol masers are significantly blueshifted with respect to the systemic velocity (see Fig. 5). These masers have already been reported in previous studies. The peak velocities and flux densities of the 84 and 95 GHz CH 3 OH masers from our observations are consistent with previous observations (Chen et al. 2012; Breen et al. 2019). However, the flux densities of the 36 and 44 GHz CH 3 OH masers in our Effelsberg-100 m observations are about 2-3 times lower than previous single-dish measurements (Haschick & Baan 1989; Kang et al. 2016; Breen et al. 2019; Kim et al. 2019). The discrepancy might arise from the different beam sizes, spectral resolutions, and pointing positions. On the other hand, maser variability may also contribute to the observed difference. To our knowledge, there are few published variability studies of class I methanol masers in the Galaxy. Despite previous claims to the contrary, Menten et al. (1988), using the Effelsberg 100 m telescope, found no significant variability in the 25 GHz J 2 − J 1 E class I CH 3 OH maser lines (J = 2-8) in the Orion Kleinmann-Low Nebula, neither on a time baseline of 2 years nor over 13 years, comparing their data with those that Hills et al. (1975) obtained with the same telescope. Because class I methanol masers are often distributed over larger scales than the class II masers, it is more difficult to infer variability from observations using different telescopes with different spatial and spectral resolutions. For example, in Orion KL the 25 GHz masers arise from different locations spread over an area that is comparable to the Effelsberg beam (34″).
Detection of 11 redshifted methanol absorption lines towards W3(OH)
Figure 6 shows the spectra of 16 methanol transitions observed towards W3(OH). In this work, we simply take V LSR =−46.19±0.02 km s −1 from our single-dish observations as the systemic velocity for the subsequent analysis (cf. Keto et al. 1995; Zapata et al. 2011). The nine J 2 − J 1 lines (J from 2 to 10) of E-type methanol near 25 GHz show inverse P-Cygni profiles. The emission features peak at about −47.1 km s −1 for all J 2 − J 1 E lines, while the absorption features lie at about −44.6 km s −1 . In the nine transitions, the systemic velocity is nearly at the point where the emission feature transits to the absorption feature. The intensities of the absorption features show a descending trend with increasing J (see Fig. 6), which appears to be opposite to the case in W31C (see Fig. 4). Because the difference between the rest frequencies of these lines is small, the continuum levels should be almost identical. Hence, the trend strongly suggests that the excitation temperatures of these transitions decrease with increasing J in W3(OH). Our results are generally consistent with the line profiles of the J 2 − J 1 (J=2, 3, 4, 5, 6, 9) lines in Menten et al. (1986a), except that different line profiles are found in the 6 2 − 6 1 E and 9 2 − 9 1 E lines. Their observations revealed double absorption features in these two transitions, but only one single absorption component is seen in our observations. The double absorption features were interpreted as a weak maser at a velocity of −44 km s −1 that partially covers the absorption trough (Menten et al. 1986a). The non-detection of the double absorption features by our observations may indicate temporal variation of this weak maser.
The absorption feature in both the 20.1 and 23.4 GHz lines has an LSR velocity of about −44.4 km s −1 with a line width of ∼2 km s −1 , which is in good agreement with the results reported by Menten et al. (1986b). Despite the different intensities, which might be caused by different spectral resolutions and calibrations, the line profiles of the observed masers at 19.9 and 23.1 GHz are also consistent with the spectra obtained more than 30 years ago (Wilson et al. 1985). The 4 0 − 3 1 E line at 28.3 GHz and the 2 1 − 3 0 E maser line at 19.9 GHz belong to the same series, but absorption features were detected in the former line by Slysh et al. (1992) and Wilson et al. (1993), and these absorption features appear to be nearly at the same velocity as those at 20.1 and 23.4 GHz. Figure 7 shows the line velocity as a function of the upper energy level and critical density of each transition for W3(OH). We find that all detected absorption features have a velocity of about −44.6 km s −1 , which is redshifted with respect to the systemic velocity. The detected class II methanol masers at 19.9 and 23.1 GHz are even more redshifted than the absorption features. The three types of methanol line features correspond to three distinct groups in Fig. 7, and the velocities of the three groups seem to be independent of the energy levels and critical densities of the respective transitions.
Modelling redshifted methanol absorption
Our observations have shown that redshifted methanol absorption is more prominent in the lower frequency (radio wavelength) transitions than in those at higher frequencies. This is expected since the two sources host UCHii regions, whose free-free continuum emission becomes optically thin at higher frequencies, which results in a reduction of the continuum brightness temperature (e.g. Kurtz 2005; Yang et al. 2021). Furthermore, the observed absorption is mainly caused by cool gas that lies in the foreground of the free-free continuum emission. Therefore, low frequency lines that are absorbed from levels with low energies above the ground state tend to be more prominent. We also note that the absorption features of methanol lines from transitions at nearby frequencies can exhibit quite different intensities, indicating different excitation temperatures. This suggests that their populations deviate from local thermodynamic equilibrium (LTE). As discussed in Sec. 1, such non-LTE effects can be easily achieved for various methanol lines by collisional pumping (e.g. Leurini et al. 2016), and such effects could lead to very low excitation temperatures (even <2.73 K) in methanol transitions.
Our single-dish spectra are likely tracing large-scale inward motions, but the large-scale density, temperature, and velocity structures might not be well described by simple analytical solutions. Hence, we use a simple two-layer model for this study instead of more sophisticated radiative transfer models. In order to demonstrate the feasibility of using redshifted methanol absorption to quantify infall motions, we make use of a modified two-layer model, as discussed by Myers et al. (1996) and Di Francesco et al. (2001), to fit the observed redshifted methanol absorption profiles. In this model, the two layers lie along the line of sight. The "front" layer lies between the observer and the continuum source with an excitation temperature T f , while the "rear" layer lies behind the source with an excitation temperature T r . The continuum source is assumed to be optically thick with a blackbody temperature T c , and the cosmic background radiation T b is also included. Each layer has a peak optical depth, τ 0 , velocity dispersion, σ, and infall velocity, V in , towards the source. The Planck-corrected brightness temperature at the rest frequency ν is defined as J ≡ T 0 /[exp(T 0 /T ) − 1], where T 0 ≡ hν/k, h is Planck's constant, and k is Boltzmann's constant. J f , J r , J c , and J b are the Planck-corrected brightness temperatures of the "front" layer, "rear" layer, central continuum source, and the cosmic background radiation, respectively. The central continuum source fills a fraction Φ of the observing beam, and V LSR is the systemic LSR velocity of the source. According to this model (Di Francesco et al. 2001), the observed line brightness temperature can be expressed in terms of these quantities. In order to reduce the number of free parameters, we fixed V LSR =−3.43 km s −1 for W31C and V LSR =−46.19 km s −1 for W3(OH), while T b =2.73 K and Φ=0.9 are adopted for both sources. The fixed parameter Φ is arbitrarily selected. However, we tested the model with Φ=0.1-0.9, and the resulting V in , τ 0 and σ stayed nearly identical, while T f and T r changed dramatically. For each transition, the model inputs also include the corresponding rest frequency and the radiation from the central continuum source (T c ). The continuum flux densities contributed by the central UCHii region were obtained from interferometric radio continuum measurements (see Yang et al. 2021 for W31C and Wilson et al. 1991 for W3(OH)), and the corresponding T c values are thus calculated by adopting the respective frequencies and beam sizes. After fixing these parameters, there are still five free parameters (V in , τ 0 , σ, T f , T r ), which will be determined by comparison with our spectra.
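Since the model equations themselves are not reproduced above, the following is only a schematic implementation of a two-layer infall model in the spirit of Myers et al. (1996) and Di Francesco et al. (2001): Gaussian optical-depth profiles for a redshifted front layer and a blueshifted rear layer, a continuum source filling a fraction Φ of the beam, and Planck-corrected brightness temperatures as defined in the text. The exact expression fitted in the paper should be taken from the original references; the form below is an assumption made for illustration.

```python
import numpy as np

H = 6.62607e-27   # Planck constant [erg s]
K = 1.38065e-16   # Boltzmann constant [erg/K]

def J_nu(T, nu):
    """Planck-corrected brightness temperature J(T) = T0 / (exp(T0/T) - 1), with T0 = h*nu/k."""
    T0 = H * nu / K
    return T0 / np.expm1(T0 / T)

def two_layer_spectrum(v, v_in, tau0, sigma, T_f, T_r, T_c, nu,
                       v_lsr=-3.43, T_b=2.73, phi=0.9):
    """Schematic two-layer infall model (assumed form, cf. Di Francesco et al. 2001).

    The front layer (between observer and continuum source) falls towards the
    source, i.e. it is redshifted by +v_in; the rear layer (behind the source)
    approaches the observer and is blueshifted by -v_in. Both layers have
    Gaussian optical-depth profiles with peak tau0 and dispersion sigma (km/s).
    """
    tau_f = tau0 * np.exp(-(v - v_lsr - v_in) ** 2 / (2.0 * sigma ** 2))
    tau_r = tau0 * np.exp(-(v - v_lsr + v_in) ** 2 / (2.0 * sigma ** 2))
    Jf, Jr, Jc, Jb = (J_nu(T, nu) for T in (T_f, T_r, T_c, T_b))
    # Background seen through the front layer: continuum (filling factor phi) plus rear layer
    J_cr = phi * Jc + (1.0 - phi) * Jr
    dT = (Jf - J_cr) * (1.0 - np.exp(-tau_f)) \
        + (1.0 - phi) * (Jr - Jb) * (1.0 - np.exp(-(tau_f + tau_r)))
    return dT
```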
In order to derive the free parameters in the model together with their uncertainties, we used the emcee code (Foreman-Mackey et al. 2013) to perform Markov chain Monte Carlo (MCMC) calculations with the affine-invariant ensemble sampler (Goodman & Weare 2010). Uniform priors are assumed for the five free parameters. The likelihood function is assumed to be e −χ 2 /2 , where χ 2 = Σ i (T obs,i − T mod,i ) 2 /σ 2 obs,i , T obs,i and T mod,i are the observed and modelled brightness temperatures for each channel, and σ obs,i is the standard deviation of T obs,i . The posterior distribution of these parameters is estimated by the product of the prior and likelihood functions. For the MCMC calculations, we run 20 walkers and 10000 steps after the burn-in phase. The 1σ errors in the fitted parameters are estimated by the 16th and 84th percentiles of the posterior distribution. In order to help the convergence, the parameter spaces are set to be 0.5-10 km s −1 for V in , 0.05-30 for τ 0 , 0.5-3 km s −1 for σ, and 0-150 K for T f and T r . The MCMC fitting to the 37.7 GHz CH 3 OH line in W31C is shown in Fig. 8. The same method is applied to all the methanol transitions showing redshifted absorption; the fitted spectra are presented in Figs. 9-10 for W31C and W3(OH), respectively, and Table A.5 shows the fitting parameters.
Fig. 3. Spectra of observed A-type CH 3 OH transitions towards W31C, excluding those from K-band (18-26 GHz), which are collected in Fig. 4. The panels from (a) to (l) are arranged following the order in Table A.1.
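A minimal emcee setup matching the sampler configuration described above (uniform priors over the quoted ranges, likelihood proportional to e−χ²/2, 20 walkers, 10000 production steps) could look as follows. It reuses the `two_layer_spectrum` sketch from the previous block; the burn-in length and the random starting positions are assumptions, as the text does not specify them.

```python
import numpy as np
import emcee

# Uniform prior ranges quoted in the text: V_in, tau0, sigma, T_f, T_r
BOUNDS = [(0.5, 10.0), (0.05, 30.0), (0.5, 3.0), (0.0, 150.0), (0.0, 150.0)]

def log_prob(theta, v, T_obs, sigma_obs, T_c, nu):
    """Log-posterior: uniform priors within BOUNDS, likelihood exp(-chi^2/2)."""
    if any(not (lo < p < hi) for p, (lo, hi) in zip(theta, BOUNDS)):
        return -np.inf
    v_in, tau0, sig, T_f, T_r = theta
    T_mod = two_layer_spectrum(v, v_in, tau0, sig, T_f, T_r, T_c, nu)  # from previous sketch
    chi2 = np.sum(((T_obs - T_mod) / sigma_obs) ** 2)
    return -0.5 * chi2

def fit_infall(v, T_obs, sigma_obs, T_c, nu, nwalkers=20, nsteps=10000, nburn=1000):
    ndim = len(BOUNDS)
    rng = np.random.default_rng(42)
    # Start walkers at random positions drawn uniformly within the prior ranges (assumed choice)
    p0 = np.array([[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(nwalkers)])
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                    args=(v, T_obs, sigma_obs, T_c, nu))
    state = sampler.run_mcmc(p0, nburn)      # burn-in
    sampler.reset()
    sampler.run_mcmc(state, nsteps)          # production run
    flat = sampler.get_chain(flat=True)
    # 16th/50th/84th percentiles of the posterior for each parameter
    return np.percentile(flat, [16, 50, 84], axis=0)
```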
We note that the infall velocity is the key parameter in deriving the infall rates, which are of central interest for understanding mass accretion in star formation. We tested the model with different fixed parameters to study how V in is affected. We find that the derived infall velocity is nearly independent of the assumed parameters. This is generally in line with previous studies (Pineda et al. 2012) in which such behaviour was verified using a different method (i.e. minimised-χ 2 fitting). However, we also note that the presence of maser emission components in the redshifted velocity range might lead to underestimation of V in . Furthermore, the uncertainties in the assumed systemic velocity will propagate to V in , so there may be additional uncertainties of 0.44 km s −1 and 0.02 km s −1 in V in for W31C and W3(OH) (see Secs. 3.1 and 3.2). On the other hand, τ 0 , T f and T r are coupled (see Eqs. 1-4), so their values can be significantly affected by varying the fixed parameters.
The V in value of 1.07 +0.23 −0.21 km s −1 for the 25.541 GHz transition may be underestimated, because the observed spectrum is not well fitted by the model (see Fig. 9). The anomaly may be due to an inaccurate V LSR for the 25.541 GHz line. If we instead use a velocity of −2 km s −1 , around which the emission transits to absorption, we obtain a better fit, giving V in = 2.32 +0.40 −0.94 km s −1 . V in derived from the 86.6 and 86.9 GHz lines has a large uncertainty due to poor S/N ratios.
Except for the 25.541, 86.6 and 86.9 GHz transitions, the derived V in in W31C ranges from 2.0 to 3.5 km s −1 at corresponding linear scales of 0.3-1.0 pc, which is generally consistent with the infall velocities derived from other tracers at a linear scale of 0.2-0.3 pc (e.g. Liu et al. 2013; Liu 2017). Good agreement with other estimates of the infall velocity is also seen in W3(OH). Our derived V in in W3(OH) lies in the range 1.6-1.8 km s −1 at a linear scale of ∼0.4 pc, consistent with previously reported infall velocities (Keto et al. 1987b; Wilson et al. 1991). These results suggest that redshifted methanol absorption can be used to reliably estimate infall velocities. Further discussion of the relationship between the infall velocity and the upper energy level, critical density and velocity dispersion can be found in Appendix B.
The V in values we obtain are lower than the infall velocities (4.5-6.5 km s −1 ) measured on smaller scales of 0.02-0.05 pc in W31C (Keto et al. 1987a, 1988; Keto 2002). As already mentioned, our beams cover both W3(H 2 O) and W3(OH). High angular resolution HCO + (3-2) observations suggest that the infall velocity of W3(H 2 O) is 2.7±0.3 km s −1 at a linear scale of ∼0.02 pc (Qin et al. 2016), higher than our values of 1.5-1.8 km s −1 . Both cases suggest that the infall velocities are lower at larger scales. This is readily explained by gravity-dominated velocity fields that are commonly assumed to follow a power-law distribution (Keto et al. 1988; Li 2018); for instance, the free-fall velocity follows V in ∼ r −0.5 . Therefore, our observations are likely to trace the large-scale inward motions of the two sources. Our spectra of W31C indicate that inward motions may be present on scales larger than previously reported. However, our observations can only loosely constrain the linear scale (≲1 pc) with the beam size, so further mapping observations are needed to better constrain the scale of inward motions.
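For reference, the r^−0.5 scaling mentioned above amounts to a one-line conversion between scales; the numbers in the example call are illustrative, taken loosely from the small-scale W31C values quoted in the text, and only show the direction of the trend.

```python
def scale_infall_velocity(v_ref_kms, r_ref_pc, r_pc, index=-0.5):
    """Scale an infall velocity between radii assuming V_in ~ r**index (free fall: index = -0.5)."""
    return v_ref_kms * (r_pc / r_ref_pc) ** index

# E.g. ~6 km/s at 0.03 pc extrapolated out to 0.6 pc under pure free fall (illustrative values):
print(round(scale_infall_velocity(6.0, 0.03, 0.6), 2))  # ~1.34 km/s
```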
Assuming a uniform density sphere undergoing spherically symmetric free fall for both sources (e.g. Pineda et al. 2012), we estimate the mass infall rates to be (3-15)×10 −3 M ⊙ yr −1 for W31C and (1-2)×10 −3 M ⊙ yr −1 for W3(OH), similar to the values estimated for massive clumps in other studies (e.g. Schneider et al. 2010). Mass infall rates of this magnitude are able to overcome the radiation pressure due to the luminosity of the central star during the high-mass star formation process (McKee & Tan 2003). The large-scale inward motions and the high mass infall rates favour a global collapse for both sources, consistent with the global hierarchical collapse and clump-fed scenario (e.g. Motte et al. 2018; Vázquez-Semadeni et al. 2019), which is different from the low-mass star formation picture (see also Appendix B).
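One common way to turn an infall velocity into a mass infall rate is Ṁ = 4πR²ρV_in, the mass flux of gas of density ρ crossing radius R. Whether this is exactly the expression adopted in the paper is not stated, so the sketch below, with an assumed H2 number density, is only meant to show that infall rates of the quoted order of magnitude follow from parsec-scale radii and km s−1 velocities.

```python
import numpy as np

M_H_G = 1.6726e-24   # hydrogen atom mass [g]
MU_H2 = 2.8          # mean molecular weight per H2 molecule (includes He)
M_SUN_G = 1.989e33
PC_CM = 3.086e18
YR_S = 3.156e7

def mass_infall_rate(n_h2_cm3, radius_pc, v_in_kms):
    """Mdot = 4*pi*R^2 * rho * V_in for gas of density n(H2) crossing radius R at V_in.

    Returns the infall rate in Msun/yr.
    """
    rho = MU_H2 * M_H_G * n_h2_cm3            # mass density [g/cm^3]
    area = 4.0 * np.pi * (radius_pc * PC_CM) ** 2
    mdot_gs = area * rho * v_in_kms * 1.0e5   # [g/s]
    return mdot_gs / M_SUN_G * YR_S

# Illustration with assumed values: n(H2) = 1e4 cm^-3, R = 0.5 pc, V_in = 2.5 km/s
print(f"{mass_infall_rate(1e4, 0.5, 2.5):.1e} Msun/yr")  # ~6e-3, within the quoted range
```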
Methanol absorption in different environments
Our observations have revealed redshifted absorption in multiple methanol transitions towards both W31C and W3(OH). However, absorption features are not always observed in the same transitions for different sources. For example, redshifted absorption features are detected in the 2 1 − 3 0 E line at 19.9 GHz and the 9 2 − 10 1 A + line at 23.1 GHz towards W31C, but these two transitions show (class II) maser emission in W3(OH). Maser emission may likewise affect the absorption in W31C, towards which the 6.7 and 12.2 GHz transitions show maser features as well as absorption.
Furthermore, towards W3(OH), the nine J 2 − J 1 E lines all show (apparent) inverse P-Cygni profiles. These resemble very closely the profiles of the 24/25 GHz J, K inversion transitions of NH 3 observed towards this region (like us) with the Effelsberg 100 m telescope by Wilson et al. (1978) and Mauersberger et al. (1986). Mapping of the J, K = (5, 5) line by Mauersberger et al. (1986) (with the 38″ Effelsberg beam) showed that the emission component arises from a position ≈ 5″ east of the absorption, which arises from the UCHii region. Consequently, Mauersberger et al. (1986) persuasively argued that the emission arises from the hot core(s) discussed in Sec. 1, which actually was one of the very first hot molecular cores ever identified (by Turner & Welch 1984) at the site of H 2 O maser emission in the region (Forster et al. 1977).
Because of this source-by-source variation, it might not be straightforward to use a single methanol transition to systematically search for infall motions towards high-mass star formation regions. However, the abundant methanol transitions accessible to ground-based telescopes cover a broad range of energy levels and critical densities (see Table A.1). This allows us to investigate inward motions at different scales and in different environments. Therefore, observations of multiple methanol transitions are a suitable choice for such studies.
Based on K-band single-dish observations, NGC7538 IRS1 shows methanol maser emission and absorption features similar to W3(OH), but the absorption velocities with respect to their respective systemic velocities are different. In both sources, absorption features have been detected in the J 2 − J 1 E (J=2,3,4,5,6,9) lines near 25 GHz (Menten et al. 1986a), the 10 1 −9 2 A − line at 23.4 GHz, and the 11 1 −10 2 A + line at 20.1 GHz (Menten et al. 1986b), while maser emission has been detected in the 2 1 − 3 0 E line at 19.9 GHz (Wilson et al. 1985) and the 9 2 − 10 1 A + line at 23.1 GHz. The systemic velocity of NGC7538 IRS1 is about −57.4 km s −1 , derived from the 2 K − 1 K thermal methanol quartet lines near 96 GHz (Minier & Booth 2002). The methanol transitions at 20.1 and 23.4 GHz and six J 2 − J 1 E lines near 25 GHz show absorption, and the velocities of these absorption features range from −59.9 to −59.1 km s −1 , revealing that the absorption features are actually blueshifted with respect to the systemic velocity. This result is also in line with previous NH 3 (1,1), (2,2), (3,3) measurements (e.g. Wilson et al. 1983; Henkel et al. 1984; Keto 1991). These blueshifted absorption lines are indicative of expansion in NGC7538 IRS1, which is likely caused by its associated powerful outflows (e.g. Qiu et al. 2011). This is in stark contrast to our observed redshifted absorption in W3(OH). This indicates that these two sources have very different dominant large-scale motions on the clump scales probed by single-dish observations. Therefore, our single-dish methanol observations support that both W31C and W3(OH) host ongoing large-scale inward motions, although both sources also show outflows on smaller scales (Liu et al. 2010a; Zapata et al. 2011; Qin et al. 2016). On the other hand, higher angular resolution (∼0.2″) observations of the high-energy line HCN (4−3) v 2 =1 (E up = 1050 K) show redshifted absorption towards NGC7538 IRS1, indicating infall motions in its innermost regions (Beuther et al. 2013). Therefore, one might need high energy and high critical density methanol transitions to disentangle the infall motions and outflows in sources exhibiting complex velocity structures. Nevertheless, the single-dish redshifted methanol absorption measurements have been demonstrated to be a good tracer of the large-scale inward motions of high-mass star-formation regions, providing motivation for follow-up studies with higher angular resolution observations.
Fig. 6. Observed spectra of 16 CH 3 OH transitions towards W3(OH). The quantum numbers and rest frequencies of transitions are labelled in their respective panels. Class I and II CH 3 OH maser transitions are labelled in red and blue, respectively. Their Gaussian fitting results are plotted in the corresponding colour. The transitions that have no maser detection yet are labelled in black, and the Gaussian fitting results are plotted in green. The vertical magenta dashed line represents the systemic velocity of V LSR =−46.19 km s −1 .
It is important to note that CH 3 OH absorption in the 6.7 and 12.2 GHz lines has also been detected in low-mass star formation regions including TMC1, L183, and NGC 1333 (Walmsley et al. 1988; Pandian et al. 2008). These regions generally do not show centimetre continuum radiation. Instead, the absorption is produced by overcooling against the cosmic microwave background (see Sec. 1). In these sources the absorption dips are at the systemic velocity, that is, the absorption is neither redshifted nor blueshifted, although they are known to show infall motions from other studies (e.g. Di Francesco et al. 2001; Schnee et al. 2007; Kirk et al. 2009). There are two possible explanations for this. One is that the relatively compact size will cause a significant dilution in single-dish observations. Unlike the case of high-mass star formation regions, which can show large-scale collapse, low-mass star formation regions usually exhibit local collapse, that is, collapse only becomes prominent towards compact dense cores. For the NGC1333 IRAS4A observations, the beam size was about 40″ for the 6.7 GHz methanol absorption observations (Pandian et al. 2008), while the collapsing region is known to have an angular scale of ∼5-7″ from interferometric observations of H 2 CO 3 12 − 2 11 and N 2 H + (1−0) (Di Francesco et al. 2001). This implies that the filling factor was only 2%-3%. On the other hand, anti-inversion of the 6.7 GHz methanol transition is expected in a more extended region with densities of < 10 6 cm −3 (Pandian et al. 2008). Therefore, the redshifted methanol absorption might be largely diluted by the large beam of the single-dish observations. The second possible explanation is that the infall velocities of these low-mass star formation regions are ≲0.5 km s −1 (e.g. Di Francesco et al. 2001; Schnee et al. 2007), generally lower than those of their high-mass counterparts. The low infall velocities will result in small velocity shifts in the spectra, which are difficult to disentangle. For example, the velocity shift is as small as 0.15 km s −1 in TMC1 (Schnee et al. 2007). Interferometric observations of CH 3 OH absorption with high spectral resolution are therefore required to determine whether redshifted methanol absorption can trace infall motions towards low-mass star formation regions.
Is redshifted methanol absorption common in high-mass star formation regions?
As discussed in Sec. 4.1, the two studied objects contain UCHii regions with high free-free continuum flux densities. If bright continuum sources are required to be able to detect the absorption, then redshifted methanol absorption can only be observed in a limited number of sources at a late evolutionary stage where they host a bright UCHii region. In order to roughly estimate the number of potential sources, we have used the results from the Coordinated Radio "N" Infrared Survey for High-mass star formation (CORNISH) project, which covers 110 deg 2 of the northern sky (10° < l < 65°, |b| < 1°) using the VLA in B and BnA configurations at 5 GHz (Hoare et al. 2012; Purcell et al. 2013). Such a flux threshold appears to contradict the observations of Peng & Whiteoak (1992), who reported a detection rate of 66% (38/58) for methanol absorption at 12.2 GHz towards bright southern Hii regions and some selected dark clouds. This suggests that using the flux density of W31C as a criterion is an unnecessarily high threshold; however, the detection of methanol absorption towards sources without UCHii regions suggests that collisional pumping effects cannot be neglected in at least a handful of cases (e.g. Walmsley et al. 1988; Pandian et al. 2008). This is also supported by our observation that the absorption intensities differ between transitions with similar rest frequencies. The overcooling of methanol transitions due to collisional pumping can greatly aid in the detection of methanol absorption.
Based on our current work, we can only give a lower limit of ∼4% for the fraction of potential UCHII regions showing methanol absorption. For many regions the overcooling of methanol transitions may also enhance absorption, as is observed towards low-mass sources and dark clouds, but this requires specific physical conditions (e.g. Walmsley et al. 1988;Pandian et al. 2008;Leurini et al. 2016). The work undertaken to date suggests that redshifted methanol absorption may be most suitable for studying infall motions towards high-mass star formation regions hosting bright HII regions.
Appendix B: Infall velocity as a function of the upper energy level, critical density, and velocity dispersion
In this Appendix, we compare the derived infall velocity with the upper energy level and critical density, as well as with the velocity dispersion. Figures B.1 and B.2 show the infall velocity in W31C and W3(OH) as a function of the upper energy level and critical density, respectively. In W31C, apart from the CH 3 OH transitions at 25.541, 86.6 and 86.9 GHz (which may have an inaccurate V LSR or poor S/N ratio), the derived V in shows an increasing trend with increasing upper energy level and critical density. In W3(OH), by contrast, V in is distributed within a narrow velocity range and shows no clear trend with the upper energy level and critical density. Figure B.3 shows that there is a different behaviour between W31C and W3(OH) when comparing σ with the V in derived from the MCMC fitting. We find that V in is comparable to σ in W31C for V in ≲ 2.5 km s −1 , indicating that the collapse can sustain the turbulence in W31C at large scales. For V in ≳ 2.5 km s −1 , σ becomes lower than V in . Because the higher V in values are derived from lines with high energies and critical densities, this behaviour is likely to occur at small scales. This is also the case for W3(OH). At small scales, the dissipation rate should be higher because of the higher densities (e.g. Elmegreen & Scalo 2004). The more efficient dissipation may explain the observed behaviour that σ is lower than V in . If the observed σ follows the typical line width-size scaling relation (e.g. Larson 1981), the lower σ corresponds to the smaller scale. Hence, the trend that V in increases with decreasing σ might indicate a multilayer view of global collapse with hierarchical inflows in W31C.
|
2022-01-06T16:09:35.225Z
|
2022-01-05T00:00:00.000
|
{
"year": 2022,
"sha1": "177d65713bea1543615cf7ab19850472815f1300",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cf2b01e4acc30eeb997765698061dcabd38080ef",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
256253568
|
pes2o/s2orc
|
v3-fos-license
|
Urban ecology and biological studies in Brazilian cities: a systematic review
The number of papers focusing on ecological interactions in urban environments has increased in recent years. This review aimed to address the panorama of urban ecology and biological surveys in Brazil. A systematic search was carried out using the Web of Science and Scopus platforms for papers on urban ecology to understand which institutions, authors, themes, cities, biomes, states, and regions have addressed the theme in Brazil to date. A total of 932 articles were found, in 196 scientific journals. Most papers were published between 2010 and 2019. This involved 350 municipalities in the five Brazilian regions, with Curitiba, São Paulo, and Rio de Janeiro being the municipalities with the most papers. São Paulo was the state that presented the most papers, with 23.7% of the total, and the Southeast region was the most representative with 36.6%. The biome with the highest concentration of papers (61.2%) was the Atlantic Forest. A total of 2537 authors were registered, affiliated with a total of 413 institutions from 19 countries. The institutions with the most papers were the Federal University of Paraná and University of São Paulo. The most discussed topic was related to botany (69%), and the most used keyword was “urban afforestation”. The number of papers published was greater in municipalities with higher human development index, number of inhabitants and relative urbanized area. This review revealed the scarcity of studies in low-income areas, and the need for greater incorporation of the social aspect, landscape ecology and ecological interactions in urban ecology.
Introduction
The gradual increase in the human population culminates in greater urbanization, which, in turn, leads to biotic and abiotic transformations on the planet (Pickett et al. 2001). Such transformations, such as intensive land use, exploitation of natural resources, formation of heat islands and vegetation fragmentation, modify the urban landscape unevenly (Grise et al. 2016), leading cities towards a major political and sanitary crisis (Martinez-Alier 2018). Considering that both ecological and social aspects compose the urban environment, the study of urban ecology has emerged as a way to reduce such inequalities (Cadenasso et al. 2006; Niemelä et al. 2011).
As a multidisciplinary science, urban ecology seeks to integrate ecological and social knowledge with other fields of study (Grimm et al. 2000, 2008, 2013; Boone and Fragkias 2012; Childers et al. 2015), such as urban planning, natural resource management and economics, among others (McPhearson et al. 2016a, b), presenting cities as complex, highly interactive social ecosystems (Alberti 2008). Through it, the ecological patterns and processes that act in urban ecosystems are identified (McDonnell 2011). In addition, urban ecology studies the interactions between organisms, the built structures that make up cities, and social attributes such as lifestyle, economic, and political processes, which interact with and influence the physical environment (Cadenasso et al. 2006; Grimm et al. 2008; Collins et al. 2011; Forman 2014).
Urban ecology research has grown rapidly across the world over the past two decades, in both research and practice. Pioneering research has been conducted in Europe and Asia (Kowarik 2005; Sukopp 2008; Wu et al. 2014; Collins et al. 2011), and in Brazil there has been a clear trend of expansion (de Camargo Barbosa et al. 2021). In this sense, studies carried out in Brazilian cities can contribute to better planning of green areas by showing the wide use of exotic tree species (Moro and Castro 2015), their impact on native fauna (de Camargo Barbosa et al. 2021), the potential of afforestation as ecological corridors (Freitas et al. 2020), and the uneven distribution of green areas across cities (Sartori et al. 2019). Therefore, in view of the role of urban ecology in human well-being, the reduction of socio-environmental inequalities (Porto and Martinez-Alier 2007) and the conservation of biodiversity (Brun et al. 2007; Almeida et al. 2018), a diagnosis of how urban ecology has been approached is fundamental. To this end, it is necessary to survey the most studied fields, identify where the research is being carried out and the main results obtained and, consequently, determine whether there is a concentration of studies in specific regions. Such a diagnosis can contribute to a better distribution of public investments, seeking socio-environmental equality and biodiversity gains. Accordingly, this article proposes the first integrative review of urban ecology and related papers in Brazil and seeks to answer the following questions: i) What is the current panorama of urban ecology in Brazil? ii) Which are the most studied themes in this area? iii) Does the socio-economic aspect of regions directly affect the number of publications in the area of urban ecology? and iv) What are the traits of municipalities linked to the highest number of publications?
Literature search
We consider that the term "urban ecology" includes not only articles developed purely in the scope of ecology, but also everything that surrounds it, such as wildlife ecology, as well as economic and social research linked to green areas and landscape analysis. We performed a systematic search for articles as suggested by Page et al. (2021) (i.e., PRISMA statement) (Fig. 1). The searches were carried out in the Web of Science, SciELO, and Google Scholar databases. The bibliographic survey was carried out in March 2020 and did not limit publications by year.
From the initial survey of articles, we created a list of the journals with the greatest coverage of the topic and the most cited authors. Therefore, further searches were carried out for papers in the journals with most articles, considering articles accepted until 2019. Articles were also searched in the Curriculum Vitae of the most cited authors, to ensure that no article had been excluded.
After the removal of duplicate records, the remaining records were analyzed, and the following inclusion criteria were applied: (i) articles focused on urban ecology, (ii) studies were carried out in Brazil, and (iii) studies were located in urban and peri-urban areas.
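As an illustration of the screening bookkeeping only, the records exported from the three databases could be deduplicated and filtered along the lines below; the file name and column names are hypothetical, and in practice the three inclusion criteria were applied by the authors' reading of each record rather than by pre-existing flags.

```python
import pandas as pd

# Hypothetical merged export from the Web of Science, SciELO and Google Scholar searches.
records = pd.read_csv("urban_ecology_search_results.csv")  # assumed file name

# Remove duplicate records: by DOI where available, otherwise by title.
has_doi = records["doi"].notna()
deduped = pd.concat([
    records[has_doi].drop_duplicates(subset="doi"),
    records[~has_doi].drop_duplicates(subset="title"),
])

# Apply the three inclusion criteria, assumed here to be boolean columns filled during screening.
screened = deduped[
    deduped["focus_urban_ecology"]
    & deduped["study_in_brazil"]
    & deduped["urban_or_periurban_site"]
]
print(len(deduped), "unique records;", len(screened), "retained after screening")
```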
Data collection
From the articles obtained in the bibliographic survey, the following data were collected: A) Study Location; B) Central theme and subthemes; C) Paper information. Regional data (D) were obtained a posteriori from the list of cities using IBGE data (IBGE 2021). The data collected is described in detail in Table 1. To be accepted in the present review, the only guideline was that the paper had to be conducted within cities and in relation to living beings.
Most records in the biome category referred to urban field sites such as urban green areas. Urban green areas are defined as forest fragments, urban trees, and squares. The biome classification used here follows Coutinho (2006), who defines a biome as a "set of life (vegetable and animal) constituted by the grouping of contiguous and identifiable vegetation types on a regional scale, with similar geoclimatic conditions and a shared history of change, resulting in a biological diversity of its own." On this basis, six biomes are described for Brazil: Atlantic Forest, Cerrado, Amazon, Caatinga, Pampas, and Pantanal.
Data analysis
Data were compiled and analyzed using Microsoft Excel software. The R program (R Core Team 2021) was used through the base package for the construction of graphs, and the ggplot2 and ggspatial packages for the construction of the map. For the analysis of the relationship between the social data of the municipalities and the publications, a Spearman correlation was performed using the R base package.
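The original analysis was run in R; an equivalent sketch in Python (pandas + scipy) is shown below, with entirely hypothetical placeholder values standing in for the IBGE attributes of the municipalities with five or more papers.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical table: one row per municipality with >= 5 reviewed papers (placeholder values).
df = pd.DataFrame({
    "papers":        [50, 42, 35, 12, 9, 7, 6, 5],
    "hdi":           [0.82, 0.81, 0.80, 0.76, 0.75, 0.73, 0.72, 0.70],
    "inhabitants":   [12_300_000, 6_700_000, 1_900_000, 900_000, 650_000, 400_000, 300_000, 250_000],
    "urbanized_pct": [98, 99, 96, 90, 85, 80, 78, 75],
})

# Spearman rank correlation between the number of papers and each municipal attribute.
for var in ["hdi", "inhabitants", "urbanized_pct"]:
    rho, p = spearmanr(df["papers"], df[var])
    print(f"{var}: rho={rho:.2f}, p={p:.3f}")
```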
Results
A total of 1209 scientific articles were collected from the search engines. In the first selection, articles that did not fit were removed, and a further selection was made by the authors, leaving 932 articles belonging to 181 scientific journals (Fig. 1; Complementary Table S1). The journals with the most published articles were Revista Brasileira de Arborização Urbana with 311 articles (33.4%), followed by Biotemas (5.7%), Revista Árvore (4.3%), Revista Brasileira de Geografia Física (3.6%), Ciência Florestal (3%), Biota Neotropica (2.6%) and Urban Ecosystems (2.6%). Of the scientific journals, 15 presented more than 10 scientific articles, representing 68.2% of the total.
Most of the articles (75.2%) were published between 2010 and 2019. Between 2000 and 2009, 23.6% were published, and, finally, from 1990 to 1999, 1.1% were published (Fig. 2A). Only one article was found in digital format between 1980 and 1989. The years 2017 and 2019 had the highest numbers of published articles, representing 9.44% and 9.12% of the total, respectively. These years were followed by 2018, 2010, and 2016 (Fig. 2B).
The reviewed papers were carried out in 350 municipalities, representing 6.3% of Brazilian municipalities (Instituto Brasileiro De Geografia E Estatística 2021) (Complementary Table S2). Most of the publications (88%) were carried out in only one location. Papers in more than one location accounted for 5.2%, ranging from two to 50 municipalities. Theoretical papers (not related, therefore, to any specific location) corresponded to 6.8% of the total.
The regions with the highest number of municipalities with published papers were the Southeast, with 36.6% of the total, followed by the South (26.3%), Northeast (16.6%), Central-West (11.4%) and North (9.1%). Proportional to the total number of municipalities in each region, the Central-West had 8.6% of its municipalities represented, followed by the Southeast and South with 7.7% each, the North with 7.1% and the Northeast with 3.2%. As for the Brazilian regions, the Southeast represents 42.3% of the papers, followed by the South (25.8%), Northeast (14.5%), North (8.9%) and Central-West (8.5%) (Table 2). Considering that Brazil is a country of extreme inequality, and most of its population, GDP and academic institutions are in the Southeast, a higher concentration of papers in this region was expected. Regions with lower economic productivity are also the least researched when it comes to urban ecology.
Among the biomes, 61.2% of the reviewed papers were carried out in municipalities located in the Atlantic Forest, 20.8% were carried out in the Cerrado, 8.3% in the Amazon, 5.3% in the Caatinga, 4.3% in the Pampas, and only 0.2% in the Pantanal. Papers carried out in only one biome added up to 82.4% of the total, while papers carried out in more than one biome, meaning those also carried out in more than one municipality, added up to 17.6%. Of these, 17% were developed in two biomes, 0.34% in three biomes, and only one paper presented four biomes. Most of those focused on two biomes included the Atlantic Forest (Fig. 4a, b).
We found 2537 authors across the 932 reviewed papers (Complementary Table S3). The number of authors in each of the reviewed papers varies from one to 32, and the number of papers decreased as the number of authors increased. The most common number of authors per paper was three (24.5% of articles), followed by two (21.5%) and four authors (18.3%). Papers with 10 authors or more represented 3.6% of the total (Fig. 5a, b).
The authors are affiliated with 413 institutions located in 19 countries. Brazil's largest research partners were the United States, Germany, England, and Spain. Brazilian institutions were responsible for 95% of the registered institutions. The institutions with the highest numbers of published articles were the Federal University of Paraná (UFPR), the University of São Paulo (USP), the Federal University of Uberlândia (UFU) and ESALQ. Institutions with only one record accounted for 62.3% of the total, while institutions with two papers accounted for 12.8%. Only 7.7% of institutions had more than 10 records (Complementary Table S4). Among the universities that stood out, only one is not public.
We found 1915 keywords used in the reviewed articles. Words used only once accounted for 76% of the total. The words that appeared more often were: "Urban afforestation," "Urban planning," "Green areas," "Urban ecology," "Urban ecology," "Afforestation," "Urban trees," "Exotic species," "Inventory," and "Urban forestry". Table 4 shows the 20 most-used keywords.
Regarding the attributes of municipalities acquired through the IBGE data platform, when only municipalities with five or more reviewed papers were observed, the variables that showed a significant correlation with the number Fig. 4 Most representative biomes in terms of the percentage of published articles on urban ecology and the percentage of area that the biome occupies in the country of publications were HDI, number of inhabitants, and percentage of urbanized area (Table 5). The total area of the municipality and percentage of afforestation of public roads did not correlate with the number of published articles.
Discussion
Ecology is a branch of science that was initially dedicated to rural areas with little human influence (Grimm et al. 2008). However, with the increase in urban populations and the increasing alteration of natural landscapes, new studies have also begun to be conducted in urban environments (Ramalho and Hobbs 2012). Urban ecology was established in Brazil during the 1970s (Noyes and Progulske 1974; Sukopp 1998; Grimm et al. 2008), with an emphasis on purely biological research, ignoring the social aspect (Wu et al. 2014). In recent years, there has been interest in researching the relationship between the ecological, socioeconomic, and cultural attributes of urban centers (Ramalho and Hobbs 2012; Shackleton et al. 2021).
This increase may be related to the growing concern of societies about the environment, which is a global trend in countries such as China (Yao et al.).

Most of the articles found in this review focus on urban Botany. The attention given to urban afforestation can be explained by the "luxury effect" (Hope et al. 2003), that is, the tendency of neighborhoods and cities with high socioeconomic standards to be "greener" and to present a more biodiverse flora. Another important aspect is the immediate and measurable benefits of afforestation (Sartori et al. 2021), which could promote greater interest in this area of study. It is also relatively easier to study plants as a proxy for urban ecology benefits in biodiversity, socioeconomic well-being and ecosystem services provision, when compared to other study focuses. Davis et al. (2011), for example, demonstrated the possibility of reducing carbon emissions by 80% with afforestation of streets and backyards in the city of Leicester, England. Jones (2021) reported an increasing improvement in the quality of life of New York residents with each tree planted. In addition, health improvements in the population due to the increase in air quality (Jones and McDermott 2018) and climate mitigation (Martini et al. 2017) result from investment in biodiverse and abundant urban afforestation. The studies analyzed a combination of socioeconomic aspects, human well-being, and biodiversity. More such efforts are needed to achieve transdisciplinary research in urban ecology.

Brazilian journals focusing on botany have the highest number of records, as indicated by the Revista da Sociedade Brasileira de Arborização Urbana (REVSBAU), with one-third of the articles published. In addition, there is a significant number of records published in botanical journals such as Rodriguésia, Cerne, Árvore, and Revista da Sociedade Brasileira de Botânica, in addition to REVSBAU. Once again, this is due to the fact that papers focused on plant biology have a greater tradition, especially those focused on urban afforestation. Studies involving fauna and human health in the context of urban ecology are more recent and sporadic; therefore, journals that publish these themes were less frequent in the present review. Furthermore, we can see that 92% of journals published fewer than ten papers on urban ecology.
Despite this growth, areas such as landscape ecology, environment, and ecology are rarely addressed in urban ecology, and this is emphasized by the use of the keyword "urban ecology" in recent years. The importance of landscape ecology, for example, is expressed by the potential of the afforestation of streets and avenues to connect urban forest fragments (Brun et al. 2007). Understanding the relationship between the isolation of urban green areas, the behavioral characteristics of the fauna in these fragments, and the role of street afforestation as ecological corridors is essential to make cities a shelter for biodiversity (Soares and Lentini 2019). Landscape ecology allows an integrated look at spatial heterogeneity at the ecological scales of ecosystem, community, and population (Metzger 2001).
The context of the word "ecology" is broad and needs to be understood within the scope of species and the relationships between them. Thus, we consider that surveys and economic and social studies are equally important for the understanding of this subject. The papers incorporated in the theme of "environment" ("climate," "soil," "acoustics," and "water") and "ecology" ("ecosystem services," "restoration," "ecological interactions," and "conservation") proved to be little addressed within the urban ecology theme. As transdisciplinarity is one of the objectives of urban ecology (Young and Wolf 2006), it is essential to further study these themes, which directly affect the human population. The urgency of this inclusion is highlighted when we observe the key role played by the afforestation of urban roads in noise attenuation (Oliveira et al. 2018), the role of green areas in greater thermal comfort (Tejas et al. 2011), water regulation in cities, and carbon sequestration (Almeida et al. 2018).
The main universities are in the cities of Curitiba and São Paulo, both with a significant population and a high HDI (IBGE 2021). This relationship between socioeconomic factors and publications shows a pattern: municipalities with a higher HDI present more research in urban ecology. At the same time, this shows that the scarcity of studies, of access to green areas, and of ecosystem services mainly affects marginalized populations (Porto and Martinez-Alier 2007). These populations are often exposed to greater risks of flooding (Pontes et al. 2017) and live in precarious housing in "sacrifice zones" (Bullard 1994; Santos 2019). In addition, low-income populations, which are already in environmentally unstable areas, will be the most affected by climate change, according to the Intergovernmental Panel on Climate Change (IPCC) climate report (2021).
This review shows the spatial heterogeneity of urban ecology research in Brazil. It is clear that the municipalities with the highest HDI and urban population density are the most researched, while the others are overlooked. This is likely linked to the concentration of research institutes and universities in these municipalities. However, this result evidences the environmental injustice still predominant in the country (Costa 2018). Rich states and municipalities present numerous urban green areas and tree-lined streets, while poor communities are denied these spaces (Schwarz et al. 2015; Gould and Lewis 2016; Rigolon et al. 2018). Similarly, urban ecology research in Brazil focuses on already developed and biodiverse sites, disregarding marginalized communities; Schell et al. (2020) described this process as "environmental racism". Even though research in urban ecology is becoming increasingly popular, future research must focus on these underprivileged areas. Poor municipalities are also more vulnerable to environmental degradation, pollution, deforestation, and natural disasters, including landslides and flooding (de Loyola Hummell et al. 2016; Rasch 2017; Debortoli et al. 2017). Research on urban ecology is essential to assess the present state of degradation in these areas and propose better city planning (Pickett et al. 2014; Childers et al. 2015; McPhearson et al. 2016a, b). Only through detailed research can we uncover the ongoing issues in urban areas and create solutions for a better quality of life, for the people and for the biodiversity that coexist in the cities.
In addition to municipalities, the results showed a great disparity between states and regions in terms of the number of articles published. The South and Southeast regions were responsible for 63% of published papers, despite having an area of less than 20% of the national territory. These are also the most developed states with a longer tradition of urban research universities.
The Atlantic Forest, despite occupying only 15% of the national territory, is where 70% of the population and 60% of Brazilian municipalities are concentrated (IBGE 2021). This region houses the main universities and researchers, thus being the most studied region in the country in terms of urban ecology. At the same time, in the Cerrado and in the Amazon, the number of reviewed papers is related to the population that occupies these areas, and not to the territorial area. For example, the Amazon occupies about 50% of the national territory, and is responsible for only 8% of the papers, proportional to the population of the biome.
The studies reflect the reality in Brazil. Municipalities with more universities and researchers have more research in the area, even more so where there is a consolidated research line and a group of researchers in urban ecology. We were also able to show the need for research in biomes such as the Caatinga, the Amazon and the Pantanal. We believe that the results presented here point out bottlenecks for future research.
Final considerations
With this review, we present a panorama of publications focused on urban ecology in Brazil. Our study revealed the growing interest in and importance of urban ecology in Brazil, as demonstrated by the increase in publications on the topic over the last few decades. However, these papers are still concentrated in the major areas of biological sciences (i.e., botany and zoology), with an emphasis on research on urban afforestation and surveys of urban fauna, especially birds and insects. Thus, we emphasize the knowledge gap in studies that intrinsically relate ecology and society, approach landscape ecology in the urban environment, or focus on ecology as research into the interaction between fauna and flora in the urban environment, as well as essential knowledge for the conservation of these species.
We found that the universities that publish the most on this subject are in Curitiba and São Paulo, reflecting the association between densely populated, high-HDI cities and a greater number of publications related to urban ecology. Our results highlight the scarcity of studies on the subject in regions other than the southern and southeastern regions of the country. It is expected that the number of papers will increase over time, with a greater concentration of frequent authors currently starting their studies in this line of research.
Synthesis, crystal structure and thermal properties of tetrakis(3-methylpyridine-κN)bis(thiocyanato-κN)nickel(II)
In the crystal structure of the title compound, the nickel cations are octahedrally coordinated by two terminal N-bonded thiocyanate anions and four 3-methylpyridine ligands.
Chemical context
Thiocyanate anions are versatile ligands that can coordinate in many different ways to metal cations. The most common coordination is the terminal mode, in which these anionic ligands are only connected via the N or S atom, while the latter is only rarely observed. For several reasons, the μ-1,3 bridging coordination is more interesting and can lead to the formation of chains or layers (Näther et al., 2013). There are also a few compounds with more condensed thiocyanate networks that can form if these anionic ligands take up, for example, the μ-1,3,3 (N,S,S) bridging mode (Näther et al., 2013).
We have been interested in this class of compounds for several years targeting, for example, compounds that show interesting magnetic properties (Suckert et al., 2016; Werner et al., 2014, 2015a). In most cases, the neutral coligands used by us and others comprise pyridine derivatives and many such compounds have been reported in the literature (Mautner et al., 2018; Böhme et al., 2020; Rams et al., 2020). If less chalcophilic metal cations such as Mn II, Fe II, Co II or Ni II are used, compounds with the composition M(NCS)2(L)4 (M = Mn, Fe, Co, Ni and L = pyridine derivative) are frequently obtained, in which the metal cations are octahedrally coordinated by two terminal N-bonded thiocyanate anions and four coligands. Many of them have already been reported in the literature. If such compounds are heated, in several cases two of the coligands are removed, leading to a transformation to coligand-deficient compounds, in which the metal cations are linked by the anionic ligands, and this is the reason why we are also interested in such discrete complexes (Näther et al., 2013).
Throughout these investigations, we became interested in Ni compounds with 3-methylpyridine as coligand, for which some complexes have already been reported in the literature. However, all of these compounds consist of octahedral discrete complexes and the majority forms solvates with the composition Ni(NCS)2(3-methylpyridine)4·X with X = bis(trichloromethane) (LAYLOM; Pang et al., 1992), which crystallizes in space group P1, bis(dichloromethane) (LAYLIG; Pang et al., 1992), which crystallizes in space group C2/c, mono-tetrachloromethane, mono-dibromodichloromethane, mono-dichloromethane and mono-2,2-dichloropropane clathrates (JICMIR, LAYLAY, LAYLUS and LAYLEC; Pang et al., 1990, 1992) as well as mono-trichloromethane (CIVJEW and CIFJEW01; Nassimbeni et al., 1984, 1986), all of which crystallize in the orthorhombic space group Fddd. Surprisingly, for unknown reasons, the crystal structure of the ansolvate is unknown. What is common to all of the solvates mentioned above is the fact that they contain non-polar solvents, which cannot coordinate to metal cations. We used solvents with donor atoms able to coordinate when attempting to prepare compounds with the composition Ni(NCS)2(3-methylpyridine)2(solvent)2. Upon heating, these should lose their two solvent molecules, transforming into compounds with a bridging coordination. Surprisingly, even in this case, octahedral complexes with the composition Ni(NCS)2(3-methylpyridine)4·X (X = acetonitrile, ethanol, diethyl ether) were obtained. We have found that these solvates are unstable and have lost their solvents already at room temperature. X-ray powder diffraction (XRPD) proves that, independent of the crystal structure of the precursor, the same crystalline phase is always obtained (Fig. 1), which, according to IR spectroscopic data, bears only terminal N-bonded anionic ligands. Unfortunately no single crystals were obtained by this procedure, which means that the crystal structure of the ansolvate remained unknown. Starting from these observations, we tried to prepare crystals of the ansolvate using a variety of solvents and we eventually obtained crystals with the desired composition from H2O. The CN stretching vibration of the anions in the crystals is observed at 2072 cm−1, indicating the presence of terminal thiocyanate anions (Fig. S1). Single crystal X-ray diffraction proves that the hitherto missing ansolvate has formed and XRPD investigations reveal the formation of a phase-pure sample (Fig. S2). Comparison of the experimental powder pattern obtained by solvent removal from the acetonitrile, ethanol and diethyl ether solvates with that calculated for the ansolvate proves that all of these crystalline phases are identical (Fig. 1). TG-DTA measurements show that the title compound decomposes in three steps, which are all accompanied by an endothermic event in the DTA curve (Fig. S3). The calculated mass loss per coligand amounts to 17.0%, which means that the first step (33.3%) is in reasonable agreement with the loss of two ligands and the second (15.7%) and third (14.9%) steps with the loss of one ligand each, indicating the formation of additional compounds.
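The quoted 17.0% mass loss per coligand can be checked with a short back-of-the-envelope calculation; the molar masses below are rounded literature values and are not taken from the paper.

```python
# Expected TG mass loss per 3-methylpyridine coligand for Ni(NCS)2(3-methylpyridine)4.
M_Ni, M_NCS, M_pic = 58.69, 58.08, 93.13            # g/mol (pic = 3-methylpyridine)
M_complex = M_Ni + 2 * M_NCS + 4 * M_pic             # ~547.4 g/mol
loss_per_ligand = M_pic / M_complex * 100

print(f"expected loss per coligand: {loss_per_ligand:.1f} %")      # ~17.0 %
print(f"expected loss of two coligands: {2 * loss_per_ligand:.1f} %")  # ~34 %, cf. 33.3 % observed
```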
Structural commentary
The asymmetric unit of the title compound, Ni(NCS)2(3-methylpyridine)4, consists of one Ni II cation, two thiocyanate anions and four 3-methylpyridine coligands that occupy general positions. One of the 3-methylpyridine coligands is disordered and was refined using a split model (Fig. 2). In the crystal structure of the title compound, the nickel cations are sixfold coordinated by two terminal N-bonded thiocyanate anions and four 3-methylpyridine coligands, and from the bond lengths and angles it is obvious that the octahedra are slightly distorted (Table 1). This can also be seen from the octahedral angle variance (with a value of 11.2355 deg²) and the mean octahedral quadratic elongation (with a value of 1.0042) determined by the method of Robinson et al. (1971).
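For readers unfamiliar with the Robinson et al. (1971) descriptors, the sketch below shows how they are computed from the six bond lengths and twelve cis angles of an octahedron. The numerical values are placeholders, not the refined parameters of Table 1, and the ideal distance l0 is approximated here by the mean bond length rather than by the centre-to-vertex distance of a volume-equivalent regular octahedron.

```python
import numpy as np

# Placeholder geometry: 6 Ni-N bond lengths (Å) and 12 cis N-Ni-N angles (deg).
d = np.array([2.05, 2.06, 2.09, 2.10, 2.11, 2.13])
ang = np.array([87.5, 88.2, 88.9, 89.3, 89.7, 90.2,
                90.6, 91.0, 91.4, 91.9, 92.4, 93.0])

sigma2 = np.sum((ang - 90.0) ** 2) / 11.0   # octahedral angle variance (deg^2)
l0 = d.mean()                               # simple approximation of the ideal distance
lam = np.mean((d / l0) ** 2)                # mean quadratic elongation

print(f"angle variance      : {sigma2:.2f} deg^2")
print(f"quadratic elongation: {lam:.4f}")
```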
Supramolecular features
In the crystal structure of the title compound, the discrete complexes are arranged into layers that are located in the ab plane (Fig. 3: top). These layers are separated from neighbouring layers by pairs of 3-methylpyridine ligands that show a butterfly-like arrangement. There are no indications for π-stacking or intermolecular hydrogen bonding. There are only C-H···N and C-H···S contacts, but from the distances and angles it is obvious that these are not significant interactions. The arrangement of the complexes in the title compound is similar to that in the solvates Ni(NCS)2(3-methylpyridine)4·ethanol and the isotypic compound Ni(NCS)2(3-methylpyridine)4·acetonitrile, indicating some structural relationship (Fig. 3). However, the third solvate, Ni(NCS)2(3-methylpyridine)4·diethyl ether, is not isotypic to the ethanol and acetonitrile solvates, yet also transforms into the title compound upon solvent removal. Even in this compound, a similar arrangement of the complexes is formed, which strongly suggests that the same crystalline ansolvate phase is particularly stable.
Figure 2
Crystal structure of the title compound with atom labeling and displacement ellipsoids drawn at the 50% probability level using XP in SHELX-PC (Sheldrick, 1996). The disorder of one of the 3-methylpyridine ligands is shown as full and open bonds.
Experimental details
The data collection for single crystal structure analysis was performed using an XtaLAB Synergy, Dualflex, HyPix diffractometer from Rigaku with Cu Kα radiation.
The XRPD measurements were performed with a Stoe Transmission Powder Diffraction System (STADI P) equipped with a MYTHEN 1K detector and a Johansson-type Ge(111) monochromator using Cu Kα1 radiation (λ = 1.540598 Å).
The IR spectra were measured using an ATI Mattson Genesis Series FTIR Spectrometer, control software: WINFIRST, from ATI Mattson.
Thermogravimetry and differential thermoanalysis (TG-DTA) measurements were performed in a dynamic nitrogen atmosphere in Al 2 O 3 crucibles using a STA-PT 1000 thermobalance from Linseis. The instrument was calibrated using standard reference materials.
Refinement
All crystals are of poor quality and merohedrally twinned with at least two components that are difficult to separate, as is obvious from a view along the b* direction (Fig. S4). Therefore, a twin refinement using data in HKLF-5 format was performed, leading to a BASF parameter of 0.457 (5). Refinement using anisotropic displacement parameters leads to relatively large components of the anisotropic displacement parameters, indicating static or dynamic disordering. For one of the four crystallographically independent 3-methylpyridine coligands, the disorder was resolved and this ligand was refined using a split model with restraints. The C-bound H atoms were positioned with idealized geometry (methyl H atoms allowed to rotate but not to tip) and were refined isotropically with Uiso(H) = 1.5Ueq(C) for methyl H atoms and with Uiso(H) = 1.2Ueq(C) for all other H atoms using a riding model. Crystal data, data collection and structure refinement details are summarized in Table 2.
Special details
Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes. Refinement. Refined as a two-component twin.
Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å²)
New Network Polymer Electrolytes Based on Ionic Liquid and SiO2 Nanoparticles for Energy Storage Systems
Elementary processes of electro mass transfer in the nanocomposite polymer electrolyte system are examined by pulsed field gradient, spin echo NMR spectroscopy and the high-resolution NMR method together with electrochemical impedance spectroscopy. The new nanocomposite polymer gel electrolytes consisted of polyethylene glycol diacrylate (PEGDA), the salt LiBF4, 1-ethyl-3-methylimidazolium tetrafluoroborate (EMIBF4) and SiO2 nanoparticles. Kinetics of the PEGDA matrix formation was studied by isothermal calorimetry. The flexible polymer-ionic liquid films were studied by FTIR spectroscopy, differential scanning calorimetry and thermogravimetric analysis. The total conductivity in these systems was about 10−4 S cm−1 (−40 °C), 10−3 S cm−1 (25 °C) and 10−2 S cm−1 (100 °C). Quantum-chemical modeling of the interaction of SiO2 nanoparticles with ions showed the advantage of a mixed adsorption process, in which a negatively charged surface layer is formed on the silicon dioxide particles first from Li+ and BF4− ions and then from ions of the ionic liquid (EMI+, BF4−). These electrolytes are promising for use both in lithium power sources and in supercapacitors. The paper shows preliminary tests of a lithium cell with an organic electrode based on a pentaazapentacene derivative for 110 charge-discharge cycles.
Introduction
In recent years, ionic liquids (ILs) have been increasingly used as components of polymer electrolytes for energy storage systems [1,2]. ILs have a number of advantages, such as low flammability, low vapor pressure and a wide thermal, chemical and electrochemical stability window [3].
He et al. [14] found that ILs are utilized as synthesis and dispersion media for nanoparticles as well as for surface functionalization. Ionic liquid and nanoparticle hybrid systems are governed by the combined effect of several intermolecular interactions between their constituents, including van der Waals, electrostatic, steric and hydrogen bonding. Various self-organized structures based on nanoparticles in ionic liquids are generated as a result of a balance of these intermolecular interactions. These structures, including nanoparticle-stabilized ionic-liquid-containing emulsions, possess properties of both ionic liquids and nanoparticles, which render them useful as novel materials, especially in electrochemical applications.
Unlike traditional salts, ionic liquids usually consist of large asymmetric polyatomic ions with an ionic radius roughly 5 to 10 times that of monatomic ions such as Li+ or Na+. The large ionic size of the ionic liquids increases the average distance between the cation and anion charge centers, reducing the electrostatic interaction strength. The ions or ion clusters are attracted to the nanoparticle surface by electrostatic forces [14]. Metal nanoparticles show a particularly strong attraction. The ionic liquid cations are attracted to the surface of a negatively charged nanoparticle to form a positive ion layer, and then counter ions form a second layer on the nanoparticle surface by electrostatic attraction.
In addition, there are nanoparticles that interact less with IL ions. One such example is colloidal fumed silica SiO2. Lithium salt additives are used to stabilize them, which is a requirement for the use of such systems in lithium-ion batteries. Thus, Nordström et al. [15] investigated the stability and interactions in dispersions of colloidal fumed silica SiO2 (Aerosil 200, Evonik Resource Efficiency GmbH, Antwerp, Belgium) and the ionic liquid 1-butyl-3-methylimidazolium tetrafluoroborate (BMIBF4) as a function of the Li salt concentration (LiBF4). Raman spectroscopy showed increased stability with the addition of Li salt, which is explained by the formation of a more stable solvation layer, in which Li ions accumulate on the surface.
Electrolytes based on ionic liquids are mainly used in supercapacitors [16-18], where the ionic liquid ions are the charge carriers. There are also polymer electrolytes based on ILs for lithium current sources. Here, the competitive ion transport of lithium cations and IL cations strongly influences the electrochemical processes [19-23].
The addition of different ionic liquids has various effects on the properties of Li+ ion conductive polymer electrolytes. In addition, the physical properties of ILs (in particular, viscosity and dielectric constant) play an important role in the structure design and conducting properties of polymer electrolytes. Low viscosity leads to an increase in the segmental mobility of the polymer chains. On the other hand, a high dielectric constant of the ionic liquid increases ion pair dissociation and, therefore, increases the charge carrier concentration. Both of the above aspects contribute to the increase in the ionic conductivity of polymer gel electrolytes.
This study deals with the synthesis and study of electro mass transfer in new network polymer electrolytes based on the EMIBF4 ionic liquid with the introduction of various amounts of fumed silica SiO2 (Aerosil 380) nanoparticles (0, 2, 4, 6 wt.%). The SiO2 nanoparticles have a very small particle size of 7 nm with a narrow distribution and a highly developed specific surface area of 380 m2/g due to porosity. On such a surface, lithium salt molecules can be adsorbed and dissociated into free ions [26-29].
We have previously studied polymer gel electrolytes based on polyethylene glycol diacrylate (PEGDA) as a polymer network, which showed good properties as a three-dimensional network with ethylene oxide units that retains well a large amount of a liquid phase, consisting of both traditional aprotic solvents and ionic liquids such as EMIBF4 and 1-butyl-3-methylimidazolium tetrafluoroborate (BMIBF4) [4,30]. The mechanism of ionic and molecular transport in the new nanocomposite systems based on fumed SiO2 was investigated by the NMR method, electrochemical impedance spectroscopy and quantum-chemical modeling, which are the most informative methods for such complex systems.
The structures of PEGDA and EMIBF 4 are shown in Figure 1.
Numbers (1)-(5) in Figure 1 indicate the sites of the 1H and 13C atoms (for the description of the NMR spectra). The composition of the polymerizable mixture was as follows: PEGDA, LiBF4, EMIBF4, SiO2 and 1 wt.% PB for the whole sample. The compositions of the NPEs are listed in Table 1 in molar proportions of the components and in Table S1 (ESI) in mass percent, where the synthesis procedure is also given in detail.
PEGDA Kinetics of Radical Polymerization
The kinetics of radical polymerization of PEGDA in the presence of the ionic liquid, salt LiBF4, EC, SiO2 nanopowder and benzoyl peroxide as an initiator was studied by isothermal calorimetry on a DAK-1-1 differential automatic calorimeter (EZAN, Chernogolovka, Russia) at 60 °C. The reaction mixture was placed into glass ampoules for calorimetric measurements and sealed.
Synthesis of Nanocomposite Polymer Electrolytes
Nanocomposite polymer electrolytes (NPEs) were synthesized by the radical polymerization of PEGDA in the presence of the radical initiator PB without an inert solvent.
The SEM image of the initial SiO2 powder is shown in Figure 2a. To study samples by the NMR method, the NPEs were synthesized in closed glass capillaries with a diameter of d = 4 mm and a length of l = 50 mm. The sealed capillaries with NPEs were placed in standard 5 mm ampoules for NMR examination.
An optical photo of the final nanocomposite electrolyte film is shown in Figure 2b. It can be seen from Figure 2b that the film is transparent.
Differential Scanning Calorimetry (DSC) Method
The glass transition temperature in the temperature range from −150 to 50 °C and the homophase nature of the NPEs were determined from the differential scanning calorimetry (DSC) data obtained on a DSC 822e Mettler-Toledo instrument (Kutznacht an der Zürichsee, Switzerland) with the Star software at a scanning rate of 5 deg min−1.
Fourier Transform Infrared Spectroscopy (FTIR)
The FTIR spectra of thin-film electrolyte samples and initial components (PEGDA, IL and EC) were recorded on an IRTracer-100 FTIR spectrometer (Shimadzu, Germany) at room temperature in a wave number range of 400-4000 cm −1 with a spectral resolution of 2-4 cm −1 .
Thermogravimetric Analysis (TGA) Method
The TGA data for the samples were obtained on a TGA/SDTA851 Mettler-Toledo instrument (China) in the temperature range from 30 to 150 °C at a heating rate of 5 deg min−1.
High-Resolution NMR
High-resolution spectra for 1H, 7Li, 11B, 13C and 19F were recorded on a Bruker Avance III 500 MHz NMR spectrometer. The measurements at frequencies of 500, 194, 160, 126 and 471 MHz for 1H, 7Li, 11B, 13C and 19F, respectively, were carried out at room temperature (22 ± 1 °C). The chemical shift scale was calibrated with the DMSO-d6 signal in the capillary as an external standard (2.50 ppm for 1H). The 1H, 7Li and 19F NMR spectra were obtained using the standard sequence: π/2 pulse, FID. No signal accumulation was applied. To obtain the 13C NMR spectra, a standard sequence from the TopSpin (Bruker, Billerica, MA, USA) zgpg30 library was used. The sequence is an accumulation of signals from 30° pulses with the suppression of the 1H spin-spin interaction for the duration of the whole experiment. The number of repetitions was ns = 512, and the delay between the repetition sequence was d1 = 1.0 s. For the interpretation of 1H and 13C, two-dimensional 13C-1H HSQC correlation spectra were recorded (standard pulse sequence from the TopSpin library (Bruker)).
NMR with Pulsed Field Gradient
The NMR measurements on a Bruker Avance-III 400 MHz NMR spectrometer equipped with the diff60 gradient unit (the maximum field gradient amplitude was 30 T m−1) were carried out at a temperature of 22 ± 1 °C. The NMR measurements of 1H (diffusion of the solvent molecules EC and of the ionic liquid cation EMI+), 7Li (diffusion of lithium cations) and 19F (diffusion of BF4− anions) were carried out at operating frequencies of 400, 155.5 and 376.5 MHz, respectively. The stimulated spin echo sequence was applied. The details of the self-diffusion coefficient measurements are given in [31,32]. The experimental NMR parameters of the pulse sequences were the following: the π/2 pulse was 9 µs (1H), 9 µs (7Li) and 10 µs (19F); the gradient pulse duration δ was 1.0 (1H), 1.0 (7Li) and 3.0 (19F) ms; the diffusion time was 19.7 (1H), 19.7 (7Li) and 49.0 (19F) ms; the repetition time was 3 s; and 32 gradient steps were used with maximum field gradient amplitudes g of 3.5 (1H), 11.5 (7Li) and 4.0 (19F) T m−1. The strength of the gradient changed linearly. The measurement error of the self-diffusion coefficients was 5%. The temperature dependences of the diffusion coefficients were measured in the temperature range from 22 to 60 °C.
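As an illustration of how a self-diffusion coefficient is extracted from such a measurement, the sketch below assumes the standard Stejskal-Tanner form for a stimulated-echo decay, I(g) = I0 exp[-D (γ g δ)² (Δ - δ/3)]. The gradient parameters follow the 1H settings quoted above, but the intensities are synthetic and the fitting approach is only one common choice, not necessarily the procedure used in [31,32].

```python
import numpy as np

gamma = 2.675e8            # 1H gyromagnetic ratio, rad s^-1 T^-1
delta = 1.0e-3             # gradient pulse duration, s
Delta = 19.7e-3            # diffusion time, s
g = np.linspace(0.1, 3.5, 32)                      # 32 gradient steps, T/m

# Synthetic decay generated with D = 2e-10 m^2/s (placeholder data)
b = (gamma * g * delta) ** 2 * (Delta - delta / 3.0)
I = np.exp(-2.0e-10 * b)

D_fit = -np.polyfit(b, np.log(I), 1)[0]            # slope of ln(I) vs b equals -D
print(f"D = {D_fit:.2e} m^2/s")
```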
Electrochemical Methods
To measure the conductivity of NPE film samples by the electrochemical impedance method in symmetrical stainless steel (SS)//SS cells with an area equal to 0.2 cm2, a Z-2000 impedance meter (Elins, Chernogolovka, Russia) was used in the frequency range from 1 Hz to 2 MHz with a signal amplitude of 10 mV. The cell impedance was detected in the temperature range from −40 to 100 °C. Four measurements were carried out for each sample. The measurement error was not higher than 2%.
Symmetrical cells with Li metal and LiOTAP organic cathodes were assembled in coin-type CR2032 cells. To measure the resistance of the NPE/electrode boundary by the electrochemical impedance method in symmetrical Li/Li and LiOTAP/LiOTAP cells, a Z-2000 impedance meter was used in an analogous way.
The electrochemical performance of the Li//LiOTAP batteries was evaluated using a BTS-5 V 10 mA battery analyzer (Neware Technology Ltd., Shenzhen, China) by performing charge/discharge cycling at a current rate of C/2 in a voltage range of 0.7-3.5 V. LiOTAP was synthesized and characterized in [33]. The electrochemical performance of LiOTAP was evaluated in coin-type lithium batteries. The cathode composition comprised 45 wt.% of LiOTAP, 50 wt.% of conductive carbon black (Timical Super C65) and 5 wt.% of PVDF polymer binder (Kynar Flex HSV 900, Arkema, Colombes, France). The procedure for assembling cells with a polymer electrolyte differed from that mentioned in [33] in that an NPE film was placed instead of a separator with a liquid electrolyte.
Quantum-Chemical Modeling
The structure of complexes of different ions with solvent molecules and SiO2 was studied using the nonempirical Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [34,35]. The geometry of larger systems containing a counterion and additional solvent molecules was optimized using the effective Hamiltonian method [36] taking into account the van der Waals interaction. The Priroda package [37] was used for all the calculations carried out at the Joint Supercomputer Center of the Russian Academy of Sciences.
It follows from the kinetic curves (Figure 3a) that the main part of the polymerization of the studied compositions takes about 3 h. To accelerate the synthesis of the finished electrolyte films, it is necessary to carry out the polymerization in a stepwise mode (60, 70 and 80 °C), because an increase in temperature by 10° leads to a severalfold increase in the polymerization rate. Therefore, the post-polymerization time was reduced, and polymer electrolytes with a maximum conversion were obtained.
DSC of NPEs
All compositions of NPEs and the initial ionic liquid were studied by DSC. The ionic liquid had only one phase transition; the glass transition temperature Tg = −103 °C. All NPE samples had two Tg values that characterized EMIBF4 and PEGDA. The results are presented in Table 2. Figure 4 shows the DSC diagrams of EMIBF4, NPE1, NPE4 and NPE5 as an example.
Note. T01 is the onset of the relaxation transition; Tg1 is the temperature of the first relaxation transition (relaxation transition of the crosslinked polymer matrix); ∆T1 is the range of the first relaxation transition; Tg2 is the glass transition temperature of the second relaxation transition; ∆T2 is the range of the second relaxation transition.
FTIR Analysis of NPEs
The peak of the carbonyl group of PEGDA at 1721 cm −1 shifts to a range of 1733 cm −1 , which is caused by the three-dimensional crosslinking of diacrylate in a medium of a large amount of the liquid phase as shown by the quantum-chemical modeling of PEGDA crosslinking ( Figure S1, ESI) and confirmed experimentally in [38].
The peak of the carbonyl group of ethylene carbonate in the composition of PEGDA-LiBF 4 -3EC undergoes a strong shift. This is due to the formation of a solvate shell of the lithium cation by EC molecules. The theoretical IR spectra of the LiBF 4 -3EC solvate showed this effect ( Figure S2, ESI).
The peaks of the carbonyl group of EC return to their original position upon the addition of SiO2 nanoparticles (Figure 5). Most likely, the EC molecules left the solvate environment of the lithium cation. An enlarged view of the carbonyl region is shown in Figure S3 (ESI).
TGA of NPEs
The TGA dependences of all NPE compositions are shown in Figure 6. It is seen from Figure 6a that the polymer electrolytes are stable up to 100 °C. The mass loss of up to 1% is within the instrumental inaccuracy. A slight weight loss may indicate a loss of moisture, which could get into the sample during preparation for the study. The end of the first stage of the TGA diagram at 100 °C confirmed this. In addition, when examining samples by IR spectroscopy before and after heating up to 150 °C, it was shown that the main peaks of all components did not change (Figure S4, ESI). Figure 6b shows the TGA diagrams up to 600 °C of the NPE4 and NPE5 compositions with maximum conductivity. Figure 6b shows the multistage character of the sample weight loss, which reflects the gradual loss of each component. The loss of ethylene carbonate (bp = 248 °C) occurs first. The ionic liquid, apparently, decomposes together with the polymer matrix at 390 °C. Silicon dioxide remains in the residue. Under extreme conditions, it is possible that SiO2 will insulate between the electrodes (if they are stable up to these temperatures).
High-Resolution NMR
The 1H and 13C NMR spectra were recorded to check the purity and confirm the NPE compositions. The 1H and 13C NMR spectra for all NPEs compared to the ionic liquid are shown in Figures 7 and 8, respectively. The 1H and 13C spectra differ in integral signal intensities due to different molar ratios of EMIBF4 to solvent. The 7Li, 19F and 11B NMR spectra were also obtained (Figures S5-S7, ESI). Figure 7 shows that the signals in the 1H NMR spectra of the polymer electrolytes are significantly broader than those in the pure EMIBF4. The signal of ethylene carbonate is also broadened (~4 ppm). The signal broadening is caused by the formation of a branched network polymer structure formed by PEGDA, which considerably impedes the chaotic motion of EMIBF4 and EC [39]. The 1H NMR spectrum of the electrolyte exhibits a very broad signal from -O-CH2-CH2-O- of the polymer matrix with a maximum at ~3 ppm. This signal correlates in the 13C-1H HSQC spectrum with the 13C signal at 69.2 ppm in the 13C NMR spectrum (Figure S8, ESI).
A 2D DOSY spectrum to confirm signal assignment in the proton spectrum was also recorded (Figure 9).
In a DOSY spectrum, the signals of the molecules in a mixture are separated along the Y-axis according to their diffusion coefficients (Ds). The EC solvent molecule is smaller than the IL and therefore more mobile. Thus, the signal from the less mobile IL (Ds(IL)) and the signal from the EC (Ds(EC)) in Figure 9 are easy to separate.
Self-Diffusion Coefficients (SDCs) According to the PFG NMR Data
The SDCs on 1H, 7Li and 19F for all the NPE compositions were measured by PFG NMR. The diffusion decays on all nuclei of all compositions were exponential (Figure S9, ESI). The measurements of the self-diffusion coefficients Ds on 1H make it possible to determine the mobility of EMIBF4 and EC (the analysis of the diffusion decays of the signals from the ionic liquid or the ethylene carbonate solvent allows one to estimate their mobilities separately). The Ds on 7Li corresponds to the mobility of lithium cations, and that on 19F corresponds to the mobility of the BF4− anion.
Self-Diffusion Coefficients on 1 H Nucleus
The results of measuring Ds for the NPE1-5 compositions are given in Table 3. The Ds values for pure ionic liquid EMIBF4 are presented for comparison. Table 3 shows that the mobility of EC and IL molecules increases with an increase in the fraction of IL in the polymer electrolyte.
The Ds of EC increases by more than six times with an increase in the mass content of IL from 0 to 6% (compositions NPE1-4). In this case, the Ds of IL increases by four times. This is probably related to the "loosening" of the electrolyte polymer network upon the introduction of IL molecules. At the same time, the Ds of the pure IL is more than an order of magnitude higher than the Ds of IL in the polymer electrolyte (NPE2). An increase in the content of the SiO2 additive at the same mass content of IL (NPE4 and NPE5) does not lead to a significant change in the mobility of the NPE components. The temperature dependences of the self-diffusion coefficients Ds on 1H for EMIBF4 and EC molecules were measured in the range from 22 to 60 °C (Figure 10). The temperature dependences Ds(T) are approximated by the Arrhenius equation:

Ds = D0 exp(−Ea/RT), (1)

where D0 is a temperature-independent prefactor, R is the gas constant, T is the absolute temperature, and Ea is the self-diffusion activation energy.
The dependences are of the Arrhenius type. The activation energies of diffusion were calculated (Table 3). It is shown that an increase in the IL content in the composition of the polymer electrolyte leads to a decrease in the activation energy of diffusion of the EC and IL molecules from ~40 to ~30 kJ mol−1. The activation energy of diffusion of the pure ionic liquid was 21 kJ mol−1.
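A minimal sketch of the Arrhenius analysis of Eq. (1) is shown below: a linear fit of ln(Ds) versus 1/T yields the activation energy Ea. The diffusion coefficients in the array are illustrative placeholders, not the measured values reported in Table 3.

```python
import numpy as np

R = 8.314                                            # gas constant, J mol^-1 K^-1
T = np.array([295.0, 303.0, 313.0, 323.0, 333.0])    # 22-60 °C expressed in K
Ds = np.array([2.0e-11, 3.0e-11, 5.5e-11, 9.0e-11, 1.5e-10])  # m^2 s^-1, placeholders

slope, intercept = np.polyfit(1.0 / T, np.log(Ds), 1)
Ea = -slope * R / 1000.0                             # kJ mol^-1
D0 = np.exp(intercept)
print(f"Ea = {Ea:.0f} kJ/mol, D0 = {D0:.2e} m^2/s")
```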
Self-Diffusion Coefficients on 7 Li Nucleus
The results of measuring Ds ( 7 Li) for the NPE1-5 compositions are given in Table 4. An increase in the SDC for Li + cations with an increase in the IL content in the polymer electrolyte, as in the case of the molecular mobility of the IL and EC components, was observed. The Ds of lithium cations increases by more than an order of magnitude (with an increase in the mass content of IL from 0 to 6 wt.%).
An increase in the addition of SiO2 from 2 to 6 wt.% (transition from composition 4 to composition 5) leads to a slight increase in the Ds of lithium cations. Note that the SDC (Li+) is ten times lower than the SDC of EMIBF4 despite the small size of the lithium cation compared to the IL molecule. The temperature dependences of the self-diffusion coefficients Ds on 7Li were measured in the range from 22 to 60 °C (Figure 11a).

Figure 11. Temperature dependences of the diffusion coefficients on (a) 7Li and (b) 19F nuclei.
The dependences are of the Arrhenius type. The activation energies of diffusion were calculated (Table 4). Both Ea values for the 7Li and 19F nuclei increase upon passing from the NPE5 to the NPE1 composition (with a decrease in the IL content in the composite). It is shown that an increase in the IL content in the composition of the polymer electrolyte leads to a decrease in the activation energy of diffusion of Li+ from ~50 to ~35 kJ mol−1.
Self-Diffusion Coefficients on 19 F Nucleus
The 19F NMR spectrum (Figure S7, ESI) shows one singlet, which is a superposition of the signals from the BF4− anion of the lithium salt and the BF4− anion of the EMIBF4 ionic liquid. This is caused by the rapid chemical exchange of BF4−. Thus, the SDCs measured on the 19F nuclei are the weighted average of the mobilities of the BF4− anions from the lithium salt and from EMIBF4. Table 4 shows that the mobility of BF4− increases by an order of magnitude with an increase in the mass content of IL from 0 to 6% (compositions NPE1-4). An increase in the content of the SiO2 additive at the same mass content of IL (compositions 4 and 5) does not lead to a significant change in the mobility of the BF4− anions in the NPEs. The temperature dependences of the self-diffusion coefficients Ds on 19F were measured in the range from 22 to 60 °C (Figure 11b). The dependences are of the Arrhenius type. The diffusion activation energies calculated by Formula (1) are presented in Table 4. It is shown that an increase in the IL content in the composition of the polymer electrolyte leads to a decrease in the activation energy of diffusion of BF4− from ~37 to ~27 kJ mol−1. Thus, IL molecules contribute to an increase in the mobility of all components in the polymer matrix. The SDCs of Li+ and BF4− increase by an order of magnitude with an increase in the mass content of IL from 0 to 6% (compositions NPE1-4).
The Li+ cation has the lowest diffusion coefficient Ds. According to the obtained data, Ds(EMI+) > Ds(BF4−) >> Ds(Li+). Increasing the addition of SiO2 from 2 to 6 wt.% (transition from composition 4 to composition 5) leads to a slight increase in the Ds of lithium cations.
NPE Conductivity
The conductivity of the obtained NPE samples was measured by the electrochemical impedance method in symmetrical SS//SS cells in the temperature range from −40 °C to 100 °C. Typical Nyquist plots of the cells are shown in Figure S10 (ESI). The measurement results are given in Table S2 (ESI) and in Figure 12.
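The conversion from the measured impedance to conductivity can be sketched as below, assuming the usual relation sigma = d/(R·A) for a film between blocking electrodes. Only the electrode area (0.2 cm2) is taken from the text; the bulk resistance and film thickness are placeholders.

```python
# Minimal sketch: ionic conductivity of the NPE film from the SS//SS cell impedance.
R_bulk = 150.0      # Ohm, high-frequency intercept of the Nyquist plot (placeholder)
d = 0.05            # cm, electrolyte film thickness (placeholder)
A = 0.2             # cm^2, electrode area (from the text)

sigma = d / (R_bulk * A)        # S cm^-1
print(f"sigma = {sigma:.2e} S/cm")   # ~1.7e-3 S/cm for these placeholder values
```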
The Arrhenius temperature dependence of the conductivity for all compositions (Figure 12) had a break in the temperature range from 15 to 25 °C, and, hence, the effective activation energy of conductivity was calculated in two ranges (Table 5). The activation energies of conductivity and diffusion were compared. For the NPE1 composition, the activation energy of conductivity is noticeably lower than the activation energy of 7Li diffusion and comparable with the activation energy of 19F diffusion. With an increase in ionic liquid content, the activation energies follow the same tendency, but the Ea values for conductivity decrease most strongly, by 2.5 times. It is seen from Figure 12 and Table 5 that the compositions with 6 mol IL (NPE4 and NPE5) have the highest conductivity and the most favorable effective activation energy among all thin-film electrolytes. However, these values are higher for the NPE5 composition (with 6 wt.% SiO2). This indicates the positive contribution of the SiO2 nanoparticles.
Electrochemical Study of NPEs in Li//LiOTAP Cells
In this work, battery prototypes with a cathode based on the lithium salt of the tetraazapentacene derivative LiOTAP were assembled. The electrochemical reduction and oxidation of LiOTAP are shown in Scheme 1. Each molecule of LiOTAP can undergo eight-electron oxidation, also releasing eight Li + cations, which corresponds to a theoretical specific capacity of 468 mA h g −1 .
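The quoted theoretical capacity follows from Faraday's law, C th = n F/(3.6 M) in mA h g −1 , with n the number of electrons exchanged per molecule and M the molar mass. The check below uses a molar mass of roughly 458 g mol −1 , back-calculated from the quoted 468 mA h g −1 rather than taken from the paper, so it is only an illustrative assumption.

```python
F = 96485.0  # Faraday constant, C mol^-1

def theoretical_capacity_mAh_per_g(n_electrons, molar_mass_g_mol):
    """Theoretical specific capacity in mAh/g: n*F/(3.6*M)."""
    return n_electrons * F / (3.6 * molar_mass_g_mol)

# Assumed molar mass (back-calculated for illustration, not taken from the paper)
print(round(theoretical_capacity_mAh_per_g(8, 458.2)))  # ~468 mAh/g
```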
Scheme 1. Electrochemical reduction and oxidation of LiOTAP.
First, the compatibility of NPE3-5 with Li-anode and LiOTAP-cathode materials was investigated by the electrochemical impedance method. The method of "liquid-phase therapy" for LiOTAP//LiOTAP cells was applied. The liquid electrolyte 1M LiTFSI in DOL/DME (1:1 vol.) was used similar to [39]. The results are shown in Figure 13.
The dependence of the discharge capacity on the cycle number for the Li/NPE3/LiOTAP cells is presented in Figure 14a. Figure 14b shows the charge-discharge profiles of Li/NPE3/LiOTAP cells at C/2 current for the 1-20, 100 cycle numbers. Figure S12 (ESI) shows the Coulomb efficiency (CE) during the cycling of this cell. Figure 14 shows that the LiOTAP organic cathode material loses its capacity in the first cycles but then stabilizes at 150 mAh g −1 . This substance belongs to the class of "small molecules" and is able to dissolve in the process of charge-discharge. The effect of LiOTAP dissolution in a liquid electrolyte was observed in [33]. However, with the use of a polymer electrolyte, no LiOTAP dissolution was observed in this work. This was confirmed by IR spectroscopy in the study of NPE4 samples before and after charge-discharge cycling. The cell was opened after cycling in an Ar glove box; the lithium anode remained a shiny metal. No characteristic peaks of the IR spectra of the LiOTAP material in the film composition were observed (Figure S13, ESI).
A significant excess of the Coulomb efficiency above 100% can be seen in Figure S12, ESI. Obviously, this effect is due to the SiO2 nanoparticles, since in their absence the reversible operation of the Li/EMIBF4/LiOTAP cell was observed, although it was short-lived (Figure S14, ESI). We attribute this effect to the reduction reaction of the EMI + cation on the surface of the SiO2 nanoparticles. Quantum-chemical calculations show a significant binding energy (49.4 kcal mol −1 or 2.14 eV) of the two radicals formed during the reduction of EMI + (Figure S15a, ESI). An excess of the discharge capacity over the charge capacity (Figure 14b) occurs at low potentials of 0.7 V. Apparently, the interaction of the ionic liquid with the carbon material (50 wt.% of the cathode), similar to that in supercapacitors or dual-graphite batteries, manifests itself here [40-44]. The Coulomb efficiency would be 100% if EMIBF4 remained unchanged.
We suppose that during a long cycle (24 h), the following reaction has time to occur. An electron from the carbon material is transferred to the imidazolium cation, and the resulting di-radical (Figure S15a, ESI) binds to the surface of the SiO2 nanoparticle (Figure S15b, ESI). This radical then adds the next radical, with a significant gain in energy. This mechanism is supported by the changes in the IR spectra of the polymer electrolyte before and after charge-discharge cycling in the battery (Figure S16, ESI).
The discharge capacity in the first cycle is close to the theoretical one (8e − LiOTAP redox transition) ( Figure 14). Subsequently, it decreases to 150 mAh g −1 , which corresponds to a two-electron transition. Since we consider the active participation of the EMI + cation in these processes, it can be assumed that during the discharge, a partial insertion of the bulky EMI + cation instead of Li + occurs. These bulky cations shield the redox active groups of LiOTAP, and the capacitance efficiency drops in the first 40 charge-discharge cycles.
Quantum-Chemical Modeling
Using quantum-chemical calculations by means of the density functional theory, models of the LiBF 4 salt and the EMIBF 4 ionic liquid were constructed in the form of an associate of two ion pairs, as well as a (LiBF 4 ) 2 (EMIBF 4 ) 2 cluster containing both types of associates ( Figure 15).
As a model of silicon dioxide nanoparticles, we considered a cluster with a core of 17 bound SiO 2 molecules and a hydrated surface resulting from the addition of 6 H 2 O molecules (Figure 16). At the same time, the incorporation of the (EMIBF 4 ) 2 cluster into the surface layer of ions for Model I leads to a greater gain in energy (34.6 and 51.1 kcal mol −1 , respectively). This indicates the advantage of the mixed adsorption process, which results in the formation of a negatively charged surface layer of Li + and BF 4 − ions. This conclusion is consistent with the results of the experimental work [15] on the interaction of SiO 2 nanoparticles (Aerosil 200) with LiBF 4 and BMIBF 4 studied by Raman spectroscopy. In the case of our system with SiO 2 nanoparticles (Aerosil 380), the Raman spectra turned out to be uninformative due to their highly porous structure.
Conclusions
New nanocomposite polymer gel electrolytes based on polyethylene glycol diacrylate (PEGDA), the LiBF 4 salt, 1-ethyl-3-methylimidazolium tetrafluoroborate (EMIBF 4 ) and SiO 2 nanoparticles (Aerosil 380) were synthesized and studied. The SiO 2 nanoparticles with a highly porous surface are unique in the way they interact with the ions of the lithium salt and of the ionic liquid. In this work, the charge and mass transfer in such complex systems was investigated using high-resolution NMR and pulsed-field-gradient NMR. Together with the electrochemical impedance data, information was obtained on the high conductivity of these systems: the total electrical conductivity was about 10 −4 S cm −1 (−40 °C), 10 −3 S cm −1 (25 °C) and 10 −2 S cm −1 (100 °C). Based on the obtained data, quantum-chemical modeling of the adsorption of ionic complexes on the surface of a SiO 2 nanoparticle was carried out. It was shown that the most energy-efficient process is mixed adsorption, which forms a negatively charged surface layer of Li + BF 4 − ions and then EMI + BF 4 − ions. This is also confirmed by the pulsed-field-gradient NMR results. These electrolytes are promising for use both in lithium power sources and in supercapacitors. This paper presents preliminary tests of a lithium cell with an organic electrode based on a tetraazapentacene derivative over 110 charge-discharge cycles.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/membranes13060548/s1. Table of Contents; Table S1: Compositions of the nanocomposite polymer gel electrolytes; Synthesis of the nanocomposite polymer electrolytes, Figure S1: Calculated structure of (a) the initial PEGDA, consisting of 6 (-CH2CH2O-) units; (b) the simplest element of a polymer network, consisting of 4 connected PEGDA fragments, where the dotted line indicates the places of their crosslinking, in which broken C-C bonds are replaced by C-H bonds; and (c) the theoretical IR spectra of these models, Figure S2: The calculated structure of (a) the solvate complex of the Li+ cation with three EC molecules and the BF4counterion; and (b) the theoretical IR spectrum of this solvate complex, Figure S3: FTIR spectra of NPE4 and NPE5 versus EC solvent, PEGDA polymer and the polymer electrolyte without SiO2 and EMIBF4 in a range of 1650÷1900 cm−1, Figure S4 Figure S8: The 13C-1H HSQC spectrum of the polymer electrolyte NPE1, Figure S9: Diffusion decays of NPE1-5 on (a) 7Li and (b) 19F; 1H nuclei for (c) EMIBF4 and (d) EC, Figure S10: Nyquist plots of the SS/NPE/SS cell at room temperature (a) and equivalent scheme (b), where R1 is the resistance of the electrolyte, and R2 is the resistance at the SiO2/electrolyte interface; CPE1 is the capacity of electrical double layer, Table S2: Conductivity of the nanocomposite polymer gel electrolytes, Figure S11: (a) Nyquist plots of the Li/NPE4/Li cells after assembly; (b) the equivalent scheme, where R1 is the electrolyte resistance, and R2 is the resistance of the Li/NPE4 interface; CPE1 is the capacity of electrical double layer; and (c) the visualization of the equivalent circuit parameter calculation performed with the program ZView2, Figure S12: Dependence of the Coulomb efficiency on the cycle number for the Li/NPE3/LiOTAP cells at the C/2 current rate in a voltage range of 0.7-3.5 V, Figure S13: FTIR spectra of the polymer electrolytes NPE3 and NPE3 before and after cycling cells versus LiOTAP, Figure S14: (a) The charge-discharge profiles and (b) dependence of the discharge capacity on the cycle number for the Li/EMIBF4/LiOTAP cells at the C/10 current rate in a voltage range of 0.7-3.5 V, Figure S15: The models of di-radical from imidazolium cation (a) and the adsorption of EMI•-radical on the surface of SiO2 nanoparticle (b), Figure S16: The theoretical IR spectra of dimer EMIBF4 and di-radical EMI (a); the experimental spectra of NPE3 sample before and after cell cycling (b), Table S3: Attachment energy of various ionic complexes to the surface of a SiO 2 nanoparticle. References [45,46] are cited in there.
|
2023-05-27T15:10:40.890Z
|
2023-05-24T00:00:00.000
|
{
"year": 2023,
"sha1": "1230d0305fa4406890f751d8edeecd590620a7e3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/membranes13060548",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b80b39f2453f3dcf86f040bcac12cd1095c2aa32",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": []
}
|
52922388
|
pes2o/s2orc
|
v3-fos-license
|
Multi-view X-ray R-CNN
Motivated by the detection of prohibited objects in carry-on luggage as a part of avionic security screening, we develop a CNN-based object detection approach for multi-view X-ray image data. Our contributions are two-fold. First, we introduce a novel multi-view pooling layer to perform a 3D aggregation of 2D CNN-features extracted from each view. To that end, our pooling layer exploits the known geometry of the imaging system to ensure geometric consistency of the feature aggregation. Second, we introduce an end-to-end trainable multi-view detection pipeline based on Faster R-CNN, which derives the region proposals and performs the final classification in 3D using these aggregated multi-view features. Our approach shows significant accuracy gains compared to single-view detection while even being more efficient than performing single-view detection in each view.
Introduction
Baggage inspection using multi-view X-ray imaging machines is at the heart of most aviation security screening programs. Due to inherent shortcomings in human inspection arising from gradual fatigue, occasional erroneous judgments, and privacy concerns, computer-aided automatic detection of dangerous goods in baggage has long been a sought-after goal [19]. However, earlier approaches, mostly based on hand-engineered features and support vector machines, fell far short of providing detection accuracy comparable to human operators, which is critical due to the sensitive nature of the task [3,8]. Thanks to recent advances in object detection using deep convolutional neural networks [10,11,22,23] with stunning success in photographic images, the accuracy of single-view object detection in X-ray images has improved significantly [16]. Yet, most X-ray machines for baggage inspection provide multiple views (two or four views) of the screening tunnel. An example of this multi-view data is shown in Fig. 1. Multi-view approaches for these applications have been only used in the context of classical methods, but whether CNN-based detectors can benefit is unclear. In fact, [16] found that a naive approach feeding features extracted from multiple views simultaneously to a fully-connected layer for detection leads to a performance drop over the single-view case.
Fig. 1. An example of multi-view X-ray images of hand luggage containing a glass bottle.

Fueled by applications such as autonomous driving, 3D object detection has lately gained momentum [9]. These 3D detection algorithms, unlike their 2D counterparts, are not general purpose and rely on certain sensor combinations or employ heavy prior assumptions that make them not directly applicable to multi-view object detection in X-ray images. Some of these 3D detection algorithms assume that the shape of the desired object is known in the form of a 3D model [2,24]. Yet, 3D models of objects are more difficult to acquire compared to simple bounding box annotations, and a detector that relies on them for training may not generalize well on objects with highly variable shape such as handguns. Other methods use point clouds from laser range finders alone or in conjunction with RGB data from a camera [5,21]. Our setup, in contrast, provides multi-view images of objects in a two-channel (dual-energy) format, which is rather different from stereo or point cloud representations. Using 3D convolutions and directly extending existing 2D methods is one possibility, but the computational cost and memory requirements can be prohibitive when relying on very deep CNN backbone architectures such as ResNet [14].
In this work, we extend the well-known Faster R-CNN [23] to multi-view X-ray images by employing the idea of late fusion, which enables our use of very deep CNNs to extract powerful features from each individual view, while deriving region proposals and performing the classification in 3D.We introduce a novel multi-view pooling layer to enable this fusion of features from single views using the geometry of the imaging setup.This geometry is assumed fixed all through training and testing and needs to be calculated once.We do not assume further knowledge of the detected objects as long as we have sufficient bounding box annotations in 2D.We show that our method, termed MX-RCNN, is not only highly flexible in detecting various hazardous object categories with very little extra knowledge required, but is also considerably more accurate while even being more efficient than performing single-view detection in the four views.
Related Work
Inspired by the impressive results in 2D object detection, several recent works [20,30,32,33] build upon a 2D detection in an image and then attempt to estimate the 3D object pose.These methods, however, rely predominantly on a set of prior constraints such as objects being located on the ground for estimating the 3D position of the object.These prior constraints do not appear easily extensible to our problem case, since objects inside bags can take an arbitrary orientation and position.
Other approaches have tried to work directly with depth data [18,27,28], where most methods voxelize the space into a regular grid and apply 3D convolutions to the input.While this yields the most straightforward extension of well-proven 2D detectors, which are based on 2D convolution layers, increasing the dimensionality of convolution layers can only be done at very low resolution as denser voxelizations of the space result in unacceptably large memory consumption and processing time.Our method also uses the idea of 3D convolutions but defers them to very late stages in which we switch from 2D to 3D.In doing so, we enable the detector to leverage high image resolution in the input while extracting powerful features that leverage view-consistency.
A number of methods use the geometry of the objects as prior knowledge to infer a 6-DoF object pose.These methods mainly rely on CAD models or ground truth 3D object models and match either keypoints between these models and 2D images [2,24,34] or entire reconstructed objects [25].Borrowing ideas from robotics mapping, the estimated pose of an object produced by a CNN can also be aligned to an existing 3D model using the iterative closest points (ICP) algorithm [13].
Hang et al. [29] proposed a method most closely related to ours, which aggregates features extracted from multiple views of a scene using an elementwise max operation among all views.Yet, unlike our multi-view pooling layer this aggregation is not geometry-aware.
Despite the recent focus on the problem of visual object detection, its application to X-ray images has not received as much attention.Some older methods [3,8] exist for this application, but they perform considerably weaker than deep learning-based methods [1].However, the use of CNNs on X-ray images for baggage inspection has been limited to the direct application of basic 2D detection algorithms, either with pretraining on photographic images or training from scratch.Jaccard et al. [16] propose a black-box approach to multi-view detection by extracting CNN features from all views, concatenating them, and feeding them to fully-connected layers.Yet, the accuracy fell short of that of the original single-view detection.To the best of our knowledge, there exists no previous end-to-end learning approach that successfully uses the geometry of the X-ray machine to perform a fusion of features from various views.
Multi-view X-ray R-CNN
We build on the standard Faster R-CNN object detection method [23], which works on single-view 2D images and is composed of two stages.They share a common feature extractor, which outputs a feature map.The first stage consists of a Region Proposal Network (RPN) that proposes regions of interest to the second stage.Those regions are then cut from the feature map and individually classified.The RPN uses a fixed set of 9 standard axis-aligned bounding boxes in 3 different aspect ratios and 3 scales, so-called anchor boxes.Those anchor boxes are defined at every location of the feature map.The RPN then alters their position and shape by learning regression parameters in training.Additionally, a score to distinguish between objects and background is learned in a classagnostic way, which is used for non-maximum suppression and to pass only a subset of top-scoring region proposals to the second stage.The second stage classifies the proposals and outputs additional regression values to fine-tune the bounding boxes at the end of the detection process.
MX-RCNN.
The basic concept of our Multi-view X-ray R-CNN (MX-RCNN) approach is to perform feature extraction on 2D images to be able to utilize standard CNN backbones including ImageNet [26] pretraining.This addresses the fact that the amount of annotated X-ray data is significantly lower than that of photographic images.We then combine the extracted feature maps of different projections, or views, into a common 3D feature space in which object detection takes place, identifying 3D locations of the detected objects.
Our MX-RCNN uses a ResNet-50 [15] architecture, where the first 4 out of its 5 stages are used for feature extraction on the 2D images.Then a novel multi-view pooling layer, provided with the fixed geometry of the imaging setup, combines the feature maps into a common 3D feature volume.
The combined feature space is passed to a RPN, which has a structure similar to the RPN in Faster R-CNN [23], but with 3D convolutional layers instead of 2D ones. Further, it has 6 × A regression parameter outputs per feature volume position, where A is the number of anchor boxes, because for 3D bounding boxes 6 regression parameters are needed. Following Faster R-CNN, we define those regression parameters as t_x = (x − x_a)/w_a, t_y = (y − y_a)/h_a, t_z = (z − z_a)/d_a, t_w = log(w/w_a), t_h = log(h/h_a), t_d = log(d/d_a), where the index a denotes the parameters of the anchor box and (x, y, z) is the bounding box center with width w, height h, and depth d.
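To make the parameterization concrete, the sketch below encodes a 3D box relative to an anchor and inverts the mapping. It is our own illustrative implementation (the box layout and function names are assumptions), not the authors' code.

```python
import numpy as np

def encode(box, anchor):
    """3D box -> regression targets (tx, ty, tz, tw, th, td) w.r.t. an anchor box."""
    x, y, z, w, h, d = box
    xa, ya, za, wa, ha, da = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha, (z - za) / da,
                     np.log(w / wa), np.log(h / ha), np.log(d / da)])

def decode(t, anchor):
    """Regression targets -> 3D box, the inverse of encode()."""
    tx, ty, tz, tw, th, td = t
    xa, ya, za, wa, ha, da = anchor
    return np.array([xa + tx * wa, ya + ty * ha, za + tz * da,
                     wa * np.exp(tw), ha * np.exp(th), da * np.exp(td)])

anchor = np.array([10., 10., 10., 4., 4., 4.])
box    = np.array([11.,  9., 10., 8., 2., 4.])
assert np.allclose(decode(encode(box, anchor), anchor), box)
```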
The RPN proposes volumes to be extracted by a Region-of-Interest pooling layer with an output of size 7×7×7 to cover a part of the feature volume similar in relative size to the 2D case.The 3D regions are then fed into a network similar to the last stage of ResNet-50 in which all convolutional and pooling layers are converted from 2D to 3D and the size of the last pooling kernel is adjusted to fit the feature volume size.In contrast to the 2D stages, these 3D convolutions are trained from scratch since ImageNet pretraining is not possible.Afterwards, the regions are classified and 3D bounding box regression parameters are determined.A schema of our MX-RCNN is depicted in Fig. 2.
K-means clustering of anchor boxes
When we expand the hand-selected aspect ratios of the Faster R-CNN anchor boxes of 1:1, 1:2, and 2:1 at 3 different scales to 3D, we arrive at a total of 21 anchor boxes. Since this number is large and limits the computational efficiency, we aim to improve over these standard anchor boxes. To that end, we assess the quality of the anchor boxes as priors for the RPN. Specifically, we use their intersection over union (IoU) [17] with the ground-truth annotations of the training set. We instantiate anchor boxes at each position of the feature map used by the RPN and for each ground-truth annotation we find its highest IoU with an anchor box. For the standard anchor boxes expanded to 3D, this yields an average IoU of 0.5.

To improve upon this while optimizing the computational efficiency, we follow the approach of the YOLO9000 object detector [22] and use k-means clustering on the bounding box dimensions (width, height, depth) in the training set to find priors with a better average overlap with the ground-truth annotations. We employ the Jaccard distance [17], d(a, b) = 1 − IoU(a, b), between boxes a and b as a distance metric. We run k-means clustering for various values of k. Fig. 3 shows the average IoU between ground-truth bounding boxes and the closest cluster (blue, circles). To convert clusters into anchor boxes, they have to be aligned to the resolution of a feature grid. To account for this, we also plot the IoU between the ground-truth bounding boxes and the closest cluster once it has been shifted to the nearest feature grid position (red, diamonds). We choose k = 10, which achieves an average IoU of 0.56 for the resulting anchor boxes distributed in a grid. This is clearly higher than using the standard 21 hand-selected anchor boxes, while maintaining the training and inference speed of a network with only 10 anchor boxes.
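A compact sketch of this clustering step is given below; following the YOLO9000 recipe, it measures distances as 1 − IoU on (w, h, d) triples. The median-based cluster update and the random data are our own assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def iou_whd(box, clusters):
    """IoU between one (w, h, d) box and k cluster boxes, all centered at the origin."""
    inter = np.prod(np.minimum(clusters, box), axis=1)
    union = np.prod(box) + np.prod(clusters, axis=1) - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """k-means on box dimensions using the Jaccard distance 1 - IoU."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        dists = np.stack([1.0 - iou_whd(b, clusters) for b in boxes])   # (n, k)
        assign = dists.argmin(axis=1)
        new = np.stack([np.median(boxes[assign == j], axis=0) if np.any(assign == j)
                        else clusters[j] for j in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    return clusters

# boxes: (n, 3) array of ground-truth (w, h, d); illustrative random data here
boxes = np.abs(np.random.default_rng(1).normal(30, 10, size=(200, 3)))
anchors = kmeans_anchors(boxes, k=10)
```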
Multi-view pooling
Our proposed multi-view pooling layer maps the related feature maps of the 2D views of an X-ray recording into a common 3D feature volume by providing it with the known geometry of the X-ray image formation process in the form of a weight matrix. To determine the weights, we connect each group of detector locations related to one pixel in the 2D feature map to their X-ray source to form beams across the 3D space. For each of the output cells of the 3D feature volume, we use the volume of their intersection with the beams normalized by the cell volume as relative weight factors. The multi-view pooling layer then computes the weighted average of the feature vectors of all beams for each output cell, normalized by the number of views in each X-ray recording; we call this variant MX-RCNN avg . Additionally, we implemented a version of the multi-view pooling layer that takes the maximum across the weighted feature vectors of all beams for each output cell; we call this variant MX-RCNN max . An example of a mapping, specific to our geometry, is shown in Fig. 4.
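The aggregation itself can be sketched as follows, assuming the geometric weights have already been precomputed. The array shapes, names, and dense layout are our own illustrative choices; a real implementation would use a sparse weight matrix and the framework's tensor operations.

```python
import numpy as np

def multiview_pool(feats, weights, mode="avg"):
    """
    feats:   (V, C, P)  2D CNN features, one beam per feature-map position and view.
    weights: (V, P, N)  relative intersection volume of beam (v, p) with 3D output cell n.
    Returns a (C, N) aggregated feature volume (N = number of cells in the 3D grid).
    """
    V = feats.shape[0]
    contrib = feats[:, :, :, None] * weights[:, None, :, :]   # (V, C, P, N) weighted beams
    if mode == "avg":   # sum of weighted beams, normalized by the number of views
        return contrib.sum(axis=(0, 2)) / V
    return contrib.max(axis=(0, 2))                            # maximum over all weighted beams

V, C, P, N = 4, 8, 100, 64
pooled = multiview_pool(np.random.rand(V, C, P), np.random.rand(V, P, N))
```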
Conversion of IoU thresholds
Since the IoU in 3D (volume) behaves differently than in 2D (area), we aim to equalize for this. Specifically, we aim to apply the same strictness for spatial shifts that are allowed per bounding box dimension such that a proposed object is still considered a valid detection. We assume that the prediction errors of the bounding box regression values are equally distributed across all dimensions. For simplicity, we further assume that errors are only made up of shifts compared to the ground-truth bounding boxes. In 2D, the allowed relative shift s per dimension of a bounding box of arbitrary dimensions for an IoU threshold of t_2 is given by

s = 1 − sqrt(2 t_2 / (1 + t_2)).

The same relative shift applied to an arbitrarily sized 3D bounding box would require the threshold

t_3 = (1 − s)^3 / (2 − (1 − s)^3).

An IoU threshold t_2 applied to the 2D case therefore becomes

t_3 = (2 t_2 / (1 + t_2))^(3/2) / (2 − (2 t_2 / (1 + t_2))^(3/2))

for use with 3D bounding boxes if the same strictness per dimension is to be maintained. The evaluation of the Pascal VOC challenge [7] for detection in 2D images uses a standard threshold of 0.5, which yields a threshold of 0.374 in the 3D detection case.
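The conversion can be checked numerically; the small sketch below reproduces the 0.374 threshold for t_2 = 0.5.

```python
def threshold_3d(t2):
    """Convert a 2D IoU threshold into the 3D threshold that tolerates the
    same relative per-dimension shift s: r = (1 - s)^2 = 2*t2/(1 + t2)."""
    r = 2.0 * t2 / (1.0 + t2)          # (1 - s)^2
    s = 1.0 - r ** 0.5                  # allowed relative shift per dimension
    c = r ** 1.5                        # (1 - s)^3
    return s, c / (2.0 - c)

s, t3 = threshold_3d(0.5)
print(round(s, 3), round(t3, 3))        # 0.184 0.374
```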
Computational cost
Our single-view Faster R-CNN implementation reaches a frame rate of 3.9 fps for training and 6.1 fps for inference on a NVIDIA GeForce GTX Titan X GPU. 4 frames or images need to be processed for one complete X-ray recording.Our MX-RCNN achieves frame rates of 4.6 fps for training and 7.6 fps for inference; note that it can share work across the 4 frames as they are processed simultaneously.Thus, our method is 18% faster in training and 25% faster in inference.We attribute this to the lower number of regions extracted per recording, because of the common classification stage in the multi-view detector, despite the higher computational costs for its 3D convolutional layers.
Multi-view X-Ray Dataset
Lacking a standardized public dataset for this task, we leverage a custom dataset1 of dual-energy X-ray recordings of hand luggage made by an X-ray scanner used for security checkpoints.The X-ray scanner uses line detectors located around the tunnel through which the baggage passes.Its pixels constitute the x-axis in the produced image data while the movement of the baggage, respectively the duration of the X-ray scan and the belt speed, define the y-axis of the image.Each recording consists of four different views, three from below and one from the side of the tunnel the baggage is moved through.The scans from each view produce two grayscale attenuation images from the dual-energy system, which are converted into a false-color RGB image that is used for the dataset.This is done to create images with 3 color channels that fit available models pretrained on ImageNet-1000 [26].An example recording is shown in Fig. 5.
The following types of recordings are available in the dataset: Glass Bottle. Recordings of baggage containing glass bottles of different shapes. The dataset is split into its subsets such that views belonging to one and the same recording are not distributed over different subsets. Pascal VOC-style annotations [7] with axis-aligned 2D bounding boxes per view are available for two different classes of hazardous objects, weapon and glassbottle.
3D bounding box annotations
To be able to train and evaluate our multi-view object detection with 3D annotations, we generate those out of more commonly available 2D bounding box annotations.Specifically, we generate axis-aligned 3D bounding boxes from several axis-aligned 2D bounding boxes.Because all our X-ray recordings have at most one annotated object,2 there is no need to match multiple annotations across the different views.In case of multiple annotated objects per image, a geometrically consistent matching could be used.
Recall the specific imaging setup from above. If we now align the 2D y-axis (belt direction) to the 3D z-axis, the problem of identifying a suitable 3D bounding box reduces to a mapping from the 2D x-axis to a xy-plane in 3D. The lines of projection between the X-ray sources and the detectors corresponding to the x-axis limits of the 2D bounding boxes define the areas where the object could be located in the xy-plane per view. We intersect those triangular areas using the Vatti polygon clipping algorithm [31] and choose the minimum axis-aligned bounding box containing the resulting polygon as an estimation of the object's position in 3D space. For the z-axis limits of the 3D bounding box, we take the mean of the y-limits of the 2D bounding boxes. An example of the generation process is shown in Fig. 6. Note that while our process of deriving 3D bounding boxes is customized to the baggage screening scenario, analogous procedures can be defined for more general imaging setups. Note that in the ideal case, all projection lines would intersect in the 4 corners of the bounding box. However, due to variances in the annotations made independently for each view, this does not hold in practice. As a result, the bounding box enclosing the intersection polygon is an upper bound of the estimated position of the object in 3D space. We thus additionally project the 3D bounding boxes back to the 2D views to yield 2D bounding boxes that include the geometry approximation made in the generation of the 3D bounding box. The difference between an original 2D annotation and a reprojected 3D bounding box can be seen in Fig. 5. We use these 2D annotations to train a single-view Faster R-CNN as a baseline that assumes the same object localization as the multi-view networks trained on the 3D annotations. If more precise 3D bounding boxes are desired, they could be obtained from joint CT and X-ray recordings, which are becoming more common in modern screening machines.
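The construction can be sketched with a general polygon-clipping library; the snippet below uses Shapely in place of the Vatti algorithm, and the triangles and y-limits are made-up placeholders rather than the scanner's actual calibration.

```python
from shapely.geometry import Polygon

def box_from_views(view_triangles, y_limits_2d):
    """
    view_triangles: per view, the triangular region in the 3D xy-plane spanned by the
                    X-ray source and the projection lines of the 2D box's x-limits.
    y_limits_2d:    per view, the (ymin, ymax) of the 2D box along the belt direction.
    Returns an axis-aligned 3D box (xmin, ymin, zmin, xmax, ymax, zmax).
    """
    region = view_triangles[0]
    for tri in view_triangles[1:]:
        region = region.intersection(tri)            # clip the feasible area
    xmin, ymin, xmax, ymax = region.bounds            # enclosing 2D box in the xy-plane
    zmin = sum(lo for lo, _ in y_limits_2d) / len(y_limits_2d)   # mean of the 2D y-limits
    zmax = sum(hi for _, hi in y_limits_2d) / len(y_limits_2d)
    return xmin, ymin, zmin, xmax, ymax, zmax

# Illustrative triangles (source point plus two detector points), not the real geometry
tris = [Polygon([(0, 50), (10, 0), (20, 0)]), Polygon([(40, 50), (5, 0), (15, 0)])]
print(box_from_views(tris, y_limits_2d=[(100, 140), (102, 138)]))
```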
Training
We scale all images used for training and evaluation by a factor of 0.5 so they have widths of 384, 384, 352, and 416 px for the views 0 through 3 of each X-ray recording.For all experiments we use a ResNet-50 [14] with parameters pretrained on ImageNet-1000 [26].The first two stages of the ResNet-50 are fixed and we fine-tune the rest of the parameters.If not mentioned otherwise, hyperparameters remain unchanged against the standard implementation of Faster R-CNN.All networks are trained by backpropagation using stochastic gradient descent (SGD).
As a baseline we train a standard implementation of single-view Faster R-CNN on the training set without data augmentation.Single-view Faster R-CNN detects objects on all views independently.We start with a learning rate of 0.001 for the first 12 epochs and continue with a rate of 0.0001 for another 3 epochs.
For training and evaluation of our MX-RCNN, we use a mini-batch size of 4 with all related views inside one mini-batch.We reduce the number of randomly sampled anchors in each 3D feature volume from 256 to 128 to better match the desired ratio of up to 1:1 between sampled positive and negative anchors.Additionally, we reduce the number of sampled RoIs to 64 to better match the desired fraction of 25% foreground regions that have an IoU of at least 0.5 with a 3D ground-truth bounding box.We accordingly reduce the learning rate by the same factor [12].In the multi-task loss of the RPN, we change the balancing factor λ for the regression loss and the normalization by the number of anchor boxes to an empirically determined factor of λ = 0.05 without additional normalization.In practice, this ensures good convergence of the regression loss.We initialize all 3D convolutional layers as in the standard Faster R-CNN implementation.We then train MX-RCNN avg on the training set for 28 epochs with a learning rate of 0.0005.A cut of the learning rate did not show any benefit when evaluating with the validation set.Additionally, we train MX-RCNN max for 17 epochs with the same learning rate.Again, a learning rate cut did not show any further improvement on the validation set.
Evaluation criteria
Since there is no established evaluation criterion for our experimental setting, we are using the average precision (AP), which is the de-facto standard for the evaluation of object detection tasks.Specifically, we compare the trained networks with the evaluation procedure for object detection of the Pascal VOC challenge that is in use since 2010 [6].To be considered a valid detection, a proposed bounding box must exceed a certain threshold with a ground-truth bounding box.If multiple proposed bounding boxes match the same groundtruth bounding box, only the one with the highest confidence is counted as a valid detection.The Pascal VOC challenge uses an IoU of 0.5 as threshold for the case of object detection in 2D images.As discussed in Section 3.3, to apply the same strictness for relative shifts that are allowed per dimension, we set this threshold to 0.374 for the evaluation of 3D bounding boxes per recording.Nevertheless, the corresponding IoU threshold we derived for 3D is an estimate.Hence, we additionally project the proposed 3D bounding boxes onto the 2D views and evaluate them per image with a threshold of 0.5 to directly compare to standard 2D detection.
Experimental results
The standard single-view Faster R-CNN [23] used as a baseline reaches 91.2% mean average precision (mAP) per image on the test set (average over classes).With 93.9% mAP per recording, our MX-RCNN max is 2.7% points better than the baseline when evaluating in 3D and 0.5% points better when projected to 2D (91.7% mAP per image).Using a weighted average in the multi-view pooling layer of MX-RCNN (MX-RCNN avg ) shows consistently better detection accuracy than MX-RCNN max except for the AP of the glassbottle class projected to 2D.With an mAP per recording for 3D evaluation of 95.6%, our MX-RCNN avg is 4.4% points better than the baseline when evaluating in 3D and 1.7% better when projected to 2D (mAP of 92.9%).Detailed numbers of the evaluation can be found in Table 2. Additionally, precision-recall curves are provided in Fig. 7 to allow studying the accuracy at different operating points.We observe that the ranking of the curves of the different networks within each object class is consistent with their AP values.Moreover, it becomes apparent that the benefit of the proposed multi-view approach is particularly pronounced in the important high-recall regime.
Ablation study
To test the importance of the different views in our multi-view setup on the final detection result, we disabled individual views while evaluating and in the mapping provided to the multi-view pooling layer, respectively.The evaluation was done for both variants of the multi-view pooling layer on the test set and we directly compared the proposed 3D bounding boxes to 3D ground-truth annotations with 0.374 as IoU threshold without reprojecting them to 2D.The detailed results can be found in Table 3.We notice that the tolerance for missing views from below the tunnel is higher than for a missing side view.Also, the impact is more pronounced on weapons whose appearance is more affected by out-of-plane rotations.In general, the use of weighted averaging when combining features in the multi-view pooling seems to be more fault tolerant than the use of a weighted maximum, with the exception of weapons in combination with a missing side view.The results show that the network indeed relies on all views to construct the feature volume and to propose and validate detections.
Conclusion
In this paper we have introduced MX-RCNN, a multi-view end-to-end trainable object detection pipeline for X-ray images. MX-RCNN is a two-stage detector similar to Faster R-CNN [23] that extracts features from all views separately using a standard CNN backbone and then fuses these features together to shape a 3D representation of the object in space. This fusion happens in a novel multi-view pooling layer, which combines all individual features leveraging the geometry of the X-ray imaging setup. An experimental analysis on a dataset of carry-on luggage containing glass bottles and hand guns showed that, when trained with the same annotations, MX-RCNN outperforms Faster R-CNN applied to each view separately and is computationally cheaper than separate processing of all views. We also showed in an ablation study that the method works far better when the view angles do not all fall in one line (degenerate 3D case), showing that the pipeline is indeed leveraging its 3D feature representation.
Fig. 2. Schema of the hybrid 2D-3D MX-RCNN architecture. The features of each view are extracted independently in 2D and combined in the multi-view pooling layer (marked in red). The resulting common 3D feature volume is passed to the RPN and to the RoI pooling layer where regions are extracted for evaluation in the 3D R-CNN layers.

Fig. 3. Average IoU between ground-truth boxes and the closest cluster from k-means clustering as a function of k. The IoU is calculated for anchor boxes centered on the ground-truth boxes (blue, circles) and for anchor boxes distributed in the grid of the feature map (red, diamonds). Even a low number of clusters already outperforms the standard hand-selected anchor boxes (IoU of 0.5).

Fig. 4. Example plot illustrating the relevant beams (color-coded per view) in the multi-view pooling of a specific output cell (marked in red). The actual geometry differs slightly.

Fig. 5. False-color images of all views of an example X-ray recording of baggage containing a handgun. The handgun is easier to spot in certain images depending on their angle of projection. Bounding boxes show the original 2D annotations (black) and the reprojected 3D annotations (red).

Fig. 6. Example of the 3D bounding box generation. The lines of projection (color-coded per view) for the 2D bounding box positions overlap in the xy-plane of the 3D volume. The resulting polygon of their intersection (pink) is enclosed by the 3D bounding box (red). The actual geometry differs slightly.

Fig. 7. Precision-recall curves of the different networks evaluated on the test set. The plots show the precision-recall curves of the single-view (orange, dashed), MX-RCNNavg (turquoise, solid) and MX-RCNNmax (pink, dotted) networks. For the multi-view networks the precision-recall curves are shown for the evaluation in 3D.
Table 1. Number of recordings (images) in the different subsets of the dataset.

Synthetic recordings of baggage where a pre-recorded scan of a handgun is randomly projected onto a baggage recording by a method called Threat Image Projection (TIP) [4]. A limited set of handguns is repeatedly used to generate all recordings. Real Weapon. Recordings that contain a handgun of various types and are obtained using a conventional scan without the use of TIP. Negative. Recordings containing neither handguns nor glass bottles. The synthetic TIP images are only used for training and validation; the complete scans with real weapons are used for testing to evaluate if the trained network generalizes. A detailed overview of the different subsets of the data is given in Table 1.
Table 2. Experimental results of the different networks evaluated on the test set. For the multi-view networks the evaluation was done with the proposed 3D bounding boxes and their projections onto the 2D views.

Table 3. Tolerance of MX-RCNN for disabling individual views when evaluating on the test set. The proposed 3D bounding boxes were directly evaluated without reprojection to 2D.
|
2018-10-04T17:48:54.000Z
|
2018-10-04T00:00:00.000
|
{
"year": 2018,
"sha1": "c3d60c8b1dff411982ccd8875496f1e74d2cefc4",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "c3d60c8b1dff411982ccd8875496f1e74d2cefc4",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
235453187
|
pes2o/s2orc
|
v3-fos-license
|
Bayesian Reference Analysis for the Generalized Normal Linear Regression Model
This article proposes the use of the Bayesian reference analysis to estimate the parameters of the generalized normal linear regression model. It is shown that the reference prior led to a proper posterior distribution, while the Jeffreys prior returned an improper one. The inferential purposes were obtained via Markov Chain Monte Carlo (MCMC). Furthermore, diagnostic techniques based on the Kullback–Leibler divergence were used. The proposed method was illustrated using artificial data and real data on the height and diameter of Eucalyptus clones from Brazil.
Introduction
Although the normal distribution is used in many fields for symmetrical data modeling, when the data come from a distribution with lighter or heavier tails, the assumption of normality becomes inappropriate. Such circumstances show the need for more flexible models such as the Generalized Normal (GN) distribution [1], which encompasses various distributions such as the Laplace, normal, and uniform distributions.
In the presence of covariates, the normal linear regression model can be used to investigate and model the relationship between variables, assuming that the observations follow a normal distribution. However, it is well known that a normal linear regression model can be influenced by the presence of outliers [2,3]. In these circumstances, as discussed above, it is necessary to use models in which the error distribution presents heavier or lighter tails than normal such as the GN distribution.
Due to its flexibility, the GN distribution is considered a tool to reduce the impact of outliers and obtain robust estimates [4-6]. This distribution has been used in different contexts, with different parameterizations, but the main difficulty in adopting GN models has been computational, since there are no explicit expressions for the estimators of the shape parameter. The estimation of the shape parameter must therefore be carried out through numerical methods; however, in the classical context, there are convergence problems, as demonstrated in [7,8].
In the Bayesian context, the methods presented in the literature for the estimation of the parameters of the GN distribution are restricted and applied to particular cases. West [9] proved that a scale mixture of the normal distribution could represent the GN distribution. Choy and Smith [10] used the prior GN distribution for the location parameter in the Gaussian model. They obtained the summaries of the posterior distribution, estimated through the Laplace method, and examined their robustness properties. Additionally, the authors used the representation of the GN distribution as a scale mixture of the normal distribution in random effect models and considered the Markov Chain Monte Carlo method (MCMC) to obtain the posterior summaries of the parameter of interest.
Bayesian procedures for regression models with GN errors have been discussed earlier. Box and Tiao [4] used the GN distribution in a Bayesian approach, proposing robust regression models as an alternative to the assumption of normality of the errors. Salazar et al. [11] considered an objective Bayesian analysis for exponential power regression models, i.e., a reparametrized version of the GN distribution. They derived the Jeffreys prior and showed that such a prior results in an improper posterior. To overcome this limitation, they considered a modified version of the Jeffreys prior under the assumption of independence of the parameters. This assumption does not hold, since the scale and the shape parameters of the proposed distribution are correlated. Additionally, the use of the Jeffreys prior is not appropriate in many cases and can cause strong inconsistencies and marginalization paradoxes (see Bernardo [12], p. 41).
Reference priors, also called objective priors, can be used to overcome this problem. This method was introduced by Bernardo [13] and enhanced by Berger and Bernardo [14]. In this approach, the prior information is dominated by the information provided by the data, so the prior distribution has only a vague influence. The reference prior is obtained by maximizing the expected Kullback-Leibler (KL) divergence between the posterior and prior distributions. The resulting reference prior yields a posterior distribution with attractive properties, such as consistent marginalization, one-to-one invariance, and consistent sampling properties [12]. Applications of reference priors to other distributions can be seen in [15-20].
In this paper, we considered the reference approach for estimating the parameters of the GN linear regression model. We showed that the reference prior led to a proper posterior distribution, whereas the Jeffreys prior brought an improper one and should not be used. The posterior summaries were obtained via Markov Chain Monte Carlo (MCMC) methods. Furthermore, diagnostic techniques based on the Kullback-Leibler divergence were used.
To exemplify the proposed model, we considered 1309 entries on the height and diameter of Eucalyptus clones (more details are given in Section 8). For these data, a linear regression model with normally distributed residuals was not adequate, and so we used the GN linear regression approach. This paper is organized as follows. Section 2 presents the GN distribution with some special cases, and Section 3 introduces the GN linear regression model. Section 4 derives the reference and Jeffreys priors for the GN linear regression model, and Section 5 describes the Metropolis-Hastings algorithm used to sample from the posterior distribution. The model selection criteria are discussed in Section 6. Section 7 presents the proposed method for detecting influential observations under the Bayesian reference prior approach through the use of the Kullback-Leibler divergence. Section 8 presents an artificial data study and a real application. Finally, we discuss the conclusions in Section 9.
Generalized Normal Distribution
The Generalized Normal distribution (GN distribution) has been referred to in the literature under different names and parametrizations such as the exponential power distribution or the generalized error distribution. The first formulation of this distribution [1] as a generalization of the normal distribution was characterized by the location, scale, and shape parameters.
Here, we considered the form presented by Nadarajah [21]. A random variable Y follows the GN distribution if its probability density function (pdf) is

f(y | µ, τ, s) = s / (2 τ Γ(1/s)) exp{ −(|y − µ|/τ)^s }, y ∈ ℝ,   (1)

where the parameter µ is the mean, τ > 0 is the scale factor, and s > 0 is the shape parameter. In particular, the mean, variance, and coefficient of kurtosis of Y are given by

E(Y) = µ, Var(Y) = τ² Γ(3/s)/Γ(1/s), and κ = Γ(5/s) Γ(1/s)/Γ(3/s)²,

respectively. The GN distribution characterizes leptokurtic distributions if s < 2 and platykurtic distributions if s > 2. In particular, the GN distribution reduces to the Laplace distribution when s = 1 and to the normal distribution when s = 2 and τ = √2 σ, where σ is the standard deviation; when s → ∞, the pdf in (1) converges to a uniform distribution on (µ − τ, µ + τ).

This distribution is flexible, symmetric about the mean, and unimodal. Moreover, it allows a more flexible fit for the kurtosis than the normal distribution. Furthermore, the ability of the GN distribution to provide a precise fit for the data depends on its shape parameter.

Zhu and Zinde-Walsh [22] proposed a reparameterization of the asymmetric exponential power distribution that allows us to observe the effect of the shape parameter on the distribution, which was adapted for the GN distribution. Using a similar reparameterization, we obtain the density denoted by (2), which is used throughout the paper. Figure 1 shows the density functions of the GN distribution in (2) for various parameter values.
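As a quick numerical check of the special cases above (assuming the Nadarajah form of the density in (1)), the Laplace and normal limits can be verified with SciPy:

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import norm, laplace

def gn_pdf(y, mu, tau, s):
    """Density (1): f(y) = s / (2*tau*Gamma(1/s)) * exp(-(|y - mu|/tau)**s)."""
    return s / (2.0 * tau * gamma(1.0 / s)) * np.exp(-(np.abs(y - mu) / tau) ** s)

y = np.linspace(-3, 3, 7)
# s = 2 with tau = sqrt(2)*sigma recovers the normal density
assert np.allclose(gn_pdf(y, 0, np.sqrt(2), 2), norm.pdf(y))
# s = 1 recovers the Laplace density with scale tau
assert np.allclose(gn_pdf(y, 0, 1.0, 1), laplace.pdf(y))
```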
Generalized Normal Linear Regression Model
The GN linear regression model is defined as

Y i = x i β + ε i , i = 1, . . . , n,   (3)

where Y i is the response for the ith case, x i = (x i1 , . . . , x ip ) contains the values of the explanatory variables, β = (β 1 , . . . , β p ) is the vector of regression coefficients, and ε i is the random error, which follows a GN distribution with mean zero, scale parameter σ, and shape parameter s.
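For illustration, data from the model in (3) can be simulated by exploiting the fact that, under the density (1), (|ε|/τ)^s follows a Gamma(1/s, 1) distribution. The design matrix, coefficients, and scale used below are arbitrary choices, not the settings of the artificial data study in Section 8, and the τ-parametrization of (1) is used rather than the reparametrized scale of (2).

```python
import numpy as np

rng = np.random.default_rng(2024)

def rgn(n, mu=0.0, tau=1.0, s=2.0, rng=rng):
    """Draw from the GN density (1): |e/tau|^s ~ Gamma(1/s, 1) with a random sign."""
    g = rng.gamma(shape=1.0 / s, scale=1.0, size=n)
    sign = rng.choice([-1.0, 1.0], size=n)
    return mu + tau * sign * g ** (1.0 / s)

# Artificial data from y_i = x_i' beta + e_i with GN errors
n, beta = 200, np.array([2.0, 1.5])
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ beta + rgn(n, tau=1.0, s=1.5)
```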
The likelihood function is given in (4), and the corresponding log-likelihood function in (5). The first-order derivatives of the log-likelihood in (5) involve the digamma function Ψ(s) = Γ′(s)/Γ(s). The score function yields the Fisher information matrix, which is needed to derive the reference priors for the model parameters. The following proposition gives the elements of the Fisher information matrix for the model in (3).
Proposition 1.
Let I(θ) be the Fisher information matrix, with θ = (β, σ, s). The elements of the Fisher information matrix, with I ij (θ) = I ji (θ) and θ j the jth element of θ, are expressed in terms of the design quantity Σ x i x i and of the digamma and trigamma functions, where s > 1 and Ψ′(s) = ∂Ψ(s)/∂s is the trigamma function. The restriction s > 1 ensures that the elements I ij (θ), calculated for i, j = 1, 2, 3, are finite and the information matrix I(θ) is positive definite.
For further details of this proof, please see Proposition 5 in Zhu and Zinde-Walsh [22]. The resulting Fisher information matrix is given in (9), and the corresponding inverse Fisher information matrix in (10). The matrix in (9) coincides with the Fisher information matrix found by Salazar et al. [11] due to the one-to-one invariance property.
Objective Bayesian Analysis
An important class of objective priors was introduced by Bernardo [13] and later developed by Berger and Bernardo [14]. This class of prior is known as reference priors. A vital feature of the method developed by Berger-Bernardo is the specific treatment given to interest and nuisance parameters. The construction of the reference prior, in the case of nuisance parameters, must be made using an ordered parameterization. The parameter of interest is selected, and the procedure is followed below (see Bernardo [12], for a detailed discussion).
Then, it holds that the conditional reference priors π R (ω i |θ, ω 1 , . . . , ω i−1 ), i = 1, . . . , m, obtained from the sequential Berger-Bernardo procedure, are all proper, where dω (j) = dω j × · · · × dω m . A compact approximation is only required for the corresponding integrals if any of these conditional reference priors is not proper.
Reference Prior
The parameter vector (β, σ, s) is ordered and divided into three distinct groups, according to their inferential importance. We consider here the case in which β is the parameter of interest and σ and s are the nuisance parameters, and the ordered parameterization (β, σ, s) is adopted. Considering the Fisher matrix in (9), the inverse Fisher matrix in (10), and Corollary 1, the joint reference prior for this ordered parameterization is given in (11), where β ∈ ℝ^p, σ ∈ ℝ^+, and s > 1.
Using the likelihood function (4) and the joint reference prior (11), we obtain the joint posterior distribution for β, σ, and s, given in (12), with full conditional densities (13)-(15). The densities in (13) and (15) do not belong to any known parametric family, while the density in (14) can easily be reduced to an inverse-gamma form by the transformation λ = σ^s. The posterior summaries of interest were obtained by Markov chain Monte Carlo (MCMC) methods; in particular, the conditional densities were sampled using the Metropolis-Hastings algorithm; see, e.g., Chen et al. [23].
A Problem with the Jeffreys Prior
The use of the Jeffreys prior in the multiparametric case is often controversial. Bernardo ([12], p. 41) argued that the use of the Jeffreys prior is not appropriate in many cases and can cause strong inconsistencies and marginalization paradoxes. This prior is obtained from the square root of the determinant of the Fisher information matrix in (9) and is given in (16). Such a prior was also presented in Salazar et al. [11]. Both priors belong to the family of prior distributions represented in (17); they are usually improper, c is a hyperparameter, and π(s) is the prior related to the shape parameter. The Jeffreys prior and the reference prior are of the form (17) with the choices of π(s) given in (18) and (19), respectively. The posterior distribution associated with the prior in (17) is proper if the condition in (20) holds, where L(s|y) is the integrated likelihood for s.

Corollary 2. The marginal prior for s given in Equations (18) and (19) is a continuous function of s.

Proof. See Appendix A.
Proposition 3. The reference prior given in (11) yields a proper posterior distribution, and the Jeffreys prior given in (16) leads to an improper posterior distribution.
Proof. See Appendix A.
Therefore, the Jeffreys prior leads to an improper posterior distribution and cannot be used in a Bayesian analysis. Another objective prior, the maximum data information prior, could be considered [16,24-26]; however, such a prior is not invariant under one-to-one transformations, which limits its use, and our main aim was to consider objective priors. We also avoided normal or gamma priors because of their lack of invariance under reparameterization; moreover, such priors depend on hyperparameters that are not easy to elicit for the GN distribution, and the posterior estimates may vary depending on the information included. Finally, Bernardo [12] pointed out that the use of simple "flat" priors to represent "non-informative" prior knowledge should be strongly discouraged, because they often conceal inappropriate and unjustified assumptions that can easily have a strong influence on the analysis, or even invalidate it.
Metropolis-Hastings Algorithm
Here, the Metropolis-Hastings algorithm is used to sample µ, σ, and s from the conditional distributions π^R(µ|σ, s, y), π^R(σ|µ, s, y), and π^R(s|µ, σ, y), respectively. Since µ ∈ ℝ, σ > 0, and s > 1, we consider the change of variables from (µ, σ, s) to ω = (ω_1, ω_2, ω_3) = (µ, log(σ), log(s − 1)). This modification maps the parameter space to ℝ³, which allows us to sample from the posterior distribution more efficiently. Accounting for the Jacobian of the transformation gives the posterior distribution of ω. The Metropolis-Hastings algorithm uses a random walk for each parameter; for instance, for ω_1 the transition q(ω_1, ω_1*) generates the proposal ω_1* = ω_1 + kz, where z ∼ N(0, η²) and k is a constant that controls the acceptance rate. Here η² is the corresponding diagonal element of the covariance matrix of the joint posterior distribution of ω, obtained at the maximum a posteriori estimate of π^R(ω|y).
The computational stability was improved by working on the logarithmic scale. The steps to sample from the posterior distribution consist of updating each component of ω in turn with the random-walk proposal described above and accepting or rejecting it according to the Metropolis-Hastings ratio.
After generating the values of ω, we computed (µ, σ, s) = (ω_1, exp(ω_2), 1 + exp(ω_3)). It was assumed that η has the same value for all steps. The value of k is chosen so that the acceptance rate lies between 20% and 80% [27]. To confirm the convergence of the chains, the Geweke diagnostic [28] was used, as well as graphical analysis.
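To make the sampling scheme concrete, the minimal sketch below implements the componentwise random-walk Metropolis-Hastings update on the transformed scale described above. It is an illustration rather than the authors' implementation: the function names (log_gn, log_post, mh_gn), the tuning constants k and eta, and the placeholder prior on s (flat on log(s − 1), standing in for the marginal reference prior of (11)) are assumptions introduced here.

```r
## Componentwise random-walk Metropolis-Hastings sketch for the GN model on the
## transformed scale omega = (mu, log(sigma), log(s - 1)).
## The prior on s is a placeholder; substitute the marginal reference prior of (11)
## for a faithful implementation. k and eta are illustrative tuning values.

log_gn <- function(y, mu, sigma, s) {
  # GN log-likelihood: f(y) = s / (2 sigma Gamma(1/s)) exp(-(|y - mu| / sigma)^s)
  sum(log(s) - log(2 * sigma) - lgamma(1 / s) - (abs(y - mu) / sigma)^s)
}

log_post <- function(omega, y) {
  mu <- omega[1]; sigma <- exp(omega[2]); s <- 1 + exp(omega[3])
  # log-likelihood + log of the 1/sigma prior term + log-Jacobian of the transform
  log_gn(y, mu, sigma, s) - log(sigma) + omega[2] + omega[3]
}

mh_gn <- function(y, n_iter = 30000, k = 1, eta = c(0.05, 0.05, 0.10)) {
  omega <- c(mean(y), log(sd(y)), 0)          # crude starting values (s = 2)
  draws <- matrix(NA_real_, n_iter, 3,
                  dimnames = list(NULL, c("mu", "sigma", "s")))
  lp <- log_post(omega, y)
  for (t in seq_len(n_iter)) {
    for (j in 1:3) {                          # update each component in turn
      prop <- omega
      prop[j] <- omega[j] + k * rnorm(1, 0, eta[j])   # eta[j] is a proposal s.d.
      lp_prop <- log_post(prop, y)
      if (log(runif(1)) < lp_prop - lp) { omega <- prop; lp <- lp_prop }
    }
    draws[t, ] <- c(omega[1], exp(omega[2]), 1 + exp(omega[3]))
  }
  draws
}
```

In practice the chain produced by a call such as `mh_gn(y)` would still be subjected to burn-in, thinning, and the Geweke check described above.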
Selection Criteria for Models
In the Bayesian context, there are a variety of criteria that can be adopted to select the best fit among a collection of models. This paper considered the following criteria: the Deviance Information Criterion (DIC) defined by Spiegelhalter et al. [29] and the Expected Bayesian Information Criterion (EBIC) proposed by Brooks [30]. These criteria are based on the posterior mean deviance E(D(ω)), estimated by D̄ = (1/Q) ∑_{q=1}^{Q} D(ω^(q)), where the index q indicates the q-th of a total of Q realizations and D(ω) = −2 ∑_{i=1}^{n} log f(y_i|ω), with f(·) the probability density of the GN distribution. Writing D(ω̄) for the deviance evaluated at the posterior mean of ω, the criteria are DIC = 2D̄ − D(ω̄) and EBIC = D̄ + p log(n), with p the number of model parameters. Another widely accepted criterion for model selection is the Conditional Predictive Ordinate (CPO). A detailed description of this criterion and the CPO statistic, as well as the applicability of the method for selecting models, can be found in Gelfand et al. [31,32]. Let D denote the complete data set and D_(−i) the data with the i-th observation excluded, and consider π(ω|D_(−i)), for i = 1, . . . , n, the posterior density of ω given D_(−i). The CPO for the i-th observation is defined as CPO_i = ∫ f(y_i|ω) π(ω|D_(−i)) dω, where f(y_i|ω) is the probability density function; high values of CPO_i indicate the better model. An estimate of CPO_i can be obtained using an MCMC sample of the posterior distribution of ω given D, π(ω|D). For this, let ω^(1), . . . , ω^(Q) be a sample of size Q from π(ω|D). A Monte Carlo approximation of CPO_i [23] is the harmonic-mean estimator [(1/Q) ∑_{q=1}^{Q} 1/f(y_i|ω^(q))]^{−1}. A summary statistic of the CPO_i's is B = ∑_{i=1}^{n} log(CPO_i); the higher the value of B, the better the fit of the model. To illustrate the proposed methodology, a comparison between the normal and GN models is presented in Section 8.
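As an illustration of how these quantities can be computed from MCMC output, the sketch below evaluates DIC, EBIC, and the CPO summary B for a GN sample with parameters (µ, σ, s). It is a simplified stand-in under the definitions stated above (the function name model_criteria and the default number of parameters are assumptions), not the code used for the results reported later.

```r
## Model-comparison quantities (DIC, EBIC, and the CPO summary B) from an MCMC
## sample `draws` with columns (mu, sigma, s) and data vector y.

log_gn <- function(y, mu, sigma, s) {       # per-observation GN log-density
  log(s) - log(2 * sigma) - lgamma(1 / s) - (abs(y - mu) / sigma)^s
}

model_criteria <- function(draws, y, n_par = ncol(draws)) {
  Q <- nrow(draws); n <- length(y)
  ll <- t(apply(draws, 1, function(w) log_gn(y, w[1], w[2], w[3])))  # Q x n
  D_q   <- -2 * rowSums(ll)                 # deviance at each posterior draw
  D_bar <- mean(D_q)                        # posterior mean deviance
  w_bar <- colMeans(draws)
  D_hat <- -2 * sum(log_gn(y, w_bar[1], w_bar[2], w_bar[3]))
  cpo <- 1 / colMeans(exp(-ll))             # harmonic-mean estimate of CPO_i
  list(DIC  = 2 * D_bar - D_hat,
       EBIC = D_bar + n_par * log(n),
       B    = sum(log(cpo)))
}
```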
Bayesian Case Influence Diagnostics
One way to assess the influence of observations on the fit of the model is via global case-deletion diagnostics; for instance, one can remove cases from the analysis and examine the effect of their removal [33]. From a Bayesian perspective, case-influence diagnostics are based on the Kullback-Leibler (K-L) divergence. Let K(π, π_(−i)) denote the K-L divergence between π and π_(−i), where π denotes the posterior distribution of ω for the full data and π_(−i) denotes the posterior distribution of ω without the i-th case. Specifically, K(π, π_(−i)) = ∫ π(ω|D) log[π(ω|D)/π(ω|D_(−i))] dω, and therefore K(π, π_(−i)) measures the effect of deleting the i-th case on the joint posterior distribution of ω. A calibration can be obtained by solving K(π, π_(−i)) = K(B(0.5), B(p_i)) for p_i, where B(p) denotes the Bernoulli distribution with success probability p [34] and p_i is the calibration measure of the K-L divergence. After some algebraic manipulation, the obtained expression is p_i = 0.5[1 + √(1 − exp{−2K(π, π_(−i))})], bounded by 0.5 ≤ p_i ≤ 1. Therefore, if p_i is significantly higher than 0.5, then the i-th case is influential. The posterior expectation in (22) can also be written in the form (23), K(π, π_(−i)) = −log(CPO_i) + E_{ω|D}[log f(y_i|ω)], where E_{ω|D}(·) denotes the expectation with respect to the posterior π(ω|D). Thus, (23) can be estimated using MCMC samples from the posterior π(ω|D): if ω^(1), . . . , ω^(Q) is a sample of size Q from π(ω|D), then K(π, π_(−i)) is approximated by replacing CPO_i with its Monte Carlo estimate and the expectation with the sample average (1/Q) ∑_{q=1}^{Q} log f(y_i|ω^(q)).
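A hedged sketch of how these diagnostics can be estimated from the posterior sample is given below; it relies on the identity and calibration formula stated above, and the function name kl_diagnostics is introduced here purely for illustration.

```r
## Case-deletion K-L divergence K(pi, pi_(-i)) and its calibration p_i, estimated
## from MCMC output via K_i = -log(CPO_i) + E[log f(y_i | omega) | D].

log_gn <- function(y, mu, sigma, s) {
  log(s) - log(2 * sigma) - lgamma(1 / s) - (abs(y - mu) / sigma)^s
}

kl_diagnostics <- function(draws, y) {
  ll  <- t(apply(draws, 1, function(w) log_gn(y, w[1], w[2], w[3])))  # Q x n
  cpo <- 1 / colMeans(exp(-ll))                      # harmonic-mean CPO_i
  K   <- -log(cpo) + colMeans(ll)                    # estimated K-L divergence
  p   <- 0.5 * (1 + sqrt(pmax(0, 1 - exp(-2 * K))))  # calibration, in [0.5, 1]
  data.frame(K = K, calibration = p)
}
```

Observations with calibrations well above 0.5 (close to 1) would be flagged as influential, matching the interpretation used in the applications below.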
Applications
In this section, the proposed method is illustrated using artificial and real data.
Artificial Data
An artificial sample of size n = 500 was generated in accordance with (3) with p = 2, x_i = (1, x_i1)⊤, x_i1 ∼ N(2.5, 1), β = (2, −1.5)⊤, σ = 1, and s = 2.5. The posterior samples were generated by the Metropolis-Hastings algorithm through the MCMC scheme implemented in the R software [35]. A single chain of length 300,000 was run for each parameter, the first 150,000 iterations were discarded as burn-in, and a thinning interval of 15 was used to reduce autocorrelation, resulting in a final sample size of 10,000. The convergence of the chain was verified by the criterion proposed by Geweke [28]. Table 1 shows the posterior summaries for the parameters of the GN linear regression model. It can be seen that the estimates were close to the true values, and the 95% HPD credibility intervals covered the true values of the parameters. We used the same simulated sample to investigate the K-L divergence measure for detecting influential observations in the GN linear regression model. We selected Cases 50 and 250 for perturbation. For each of these two cases, and also considering both cases simultaneously, the response variable was perturbed as y_i = y_i + 5S_y, where S_y is the standard deviation of the y_i. The MCMC estimates were obtained as in the previous section. Note that, due to the invariance property, µ_i can be computed for the standard GN distribution using the Bayes estimates of β_1 and β_2, that is, µ_i = x_i⊤β.
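For concreteness, the sketch below generates a data set with this configuration (not the exact seed or samples used by the authors), using the standard gamma-power representation of the GN distribution; the helper name rgn and the seed are illustrative assumptions.

```r
## Artificial-data configuration: y_i = beta_1 + beta_2 * x_i1 + e_i with
## GN(0, sigma, s) errors, generated through the stochastic representation
## |e / sigma|^s ~ Gamma(1/s, 1) combined with an independent random sign.

rgn <- function(n, mu = 0, sigma = 1, s = 2) {
  sgn <- sample(c(-1, 1), n, replace = TRUE)
  mu + sgn * sigma * rgamma(n, shape = 1 / s, rate = 1)^(1 / s)
}

set.seed(1)
n    <- 500
x1   <- rnorm(n, mean = 2.5, sd = 1)
beta <- c(2, -1.5); sigma <- 1; s <- 2.5
y    <- beta[1] + beta[2] * x1 + rgn(n, 0, sigma, s)
```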
To reveal the impact of the influential observations on the estimates of β_1, β_2, σ, and s, we calculated the Relative Variation (RV), obtained as RV = (θ̂ − θ̂_0)/θ̂ × 100%, where θ̂ and θ̂_0 are the posterior means of the model parameters for the original data and the perturbed data, respectively. Table 2 shows the posterior estimates for the artificial data and the RVs of the parameter estimates with respect to the original simulated data. The data set denoted by (a) is the original simulated data set without perturbation, and the data sets denoted by (b) to (d) result from perturbations of the original simulated data set. Larger relative variations for the parameters σ and s indicate the presence of influential points in the data set; the estimate of s, however, did not differ as much in the perturbed Cases (c) and (d). Considering the samples generated from the posterior distribution of the GN linear regression model parameters, we estimated the K-L divergence measures and their respective calibrations for each of the cases considered (a-d), as described in Section 7. The results in Table 3 show that for the data without perturbation (a), the selected cases were not influential because they had small values of K(π, π_(−i)) and calibrations close to 0.577. However, when the data were perturbed (b-d), the values of K(π, π_(−i)) were larger, and their calibrations were close or equal to one, indicating that these observations were influential.
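Under the same assumptions as in the previous sketches, the perturbation and the relative-variation summary can be coded as follows; perturb and relative_variation are hypothetical helper names used only for illustration.

```r
## Perturbation scheme and relative-variation summary for the influence study:
## selected responses are shifted by five standard deviations of y, and posterior
## means with and without the perturbation are compared.

perturb <- function(y, cases) {
  y[cases] <- y[cases] + 5 * sd(y)     # y_i <- y_i + 5 * S_y
  y
}

relative_variation <- function(theta_hat, theta_hat_0) {
  100 * (theta_hat - theta_hat_0) / theta_hat
}

## e.g., data set (b): y_b <- perturb(y, 50); refit the model on y_b and compare
## posterior means via relative_variation(theta_original, theta_perturbed).
```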
Real Data Set
In order to illustrate the proposed methodology, recall the Brazilian Eucalyptus clones data set on the height (in meters) and the diameter (in centimeters) of Eucalyptus clones. The data belong to a large pulp and paper company from Brazil. As a strategy for increasing the profitability of the forestry enterprise while keeping pulp and paper production under control, the company needs to maintain an intensive Eucalyptus clone silviculture. The height of the trees is an important measure for selecting different clone species. Moreover, it is desirable to have trees of similar heights, possibly with only slight variation, and consequently with a distribution function with lighter tails.
The objective is to relate the tree's diameter (explanatory variable) to its height (response variable). The GN and normal linear regression models were fit to the data via the Bayesian reference analysis. The posterior samples were generated by the Metropolis-Hastings algorithm, as in the simulation study: a single chain of length 300,000 was run for each parameter, with a burn-in of 150,000 iterations and a thinning interval of 15, resulting in a final sample size of 10,000. The Geweke criterion verified the convergence of the chain. Table 4 shows the posterior summaries for the parameters of both distributions and the model selection criteria. The GN linear regression model was the most suitable to represent the data, as it performed better than the normal linear regression model for all the criteria used. For the chosen regression model, note that β_2 was significant, so that for every one-unit increase in the diameter, the average height of the Eucalyptus trees increased by 0.95 meters. In particular, the analysis of the shape parameter (s > 2) provided strong evidence of a platykurtic distribution for the errors, and this favored the GN linear regression model. This was further confirmed by graphical analyses of the quantile residuals of the GN linear regression model presented in Figure 2d. Figures 2a and 3a show the scatter plot of the data and the adjusted normal and GN linear regression models. It was observed that, on average, the estimated heights were close to those observed, indicating that the models considered had a good fit. The residuals plotted against the fitted values and against the observation index were also quite similar for both models. The presence of heteroscedasticity (see Figures 2b and 3b) was noted, as well as a quadratic trend (see Figures 2c and 3c) in the height-to-diameter relationship of the Eucalyptus. The quantile-quantile plots of the GN and normal distributions for the residuals of the models are presented in Figures 2d and 3d, respectively. It can be seen that under the normal linear regression model many points fall far from the reference line in the tails, indicating an inadequate specification of the error distribution for that model. On the other hand, using the proposed GN approach, we observed a good fit, with the points following the line, indicating that the theoretical residuals were close to the observed residuals. Therefore, there was evidence that the chosen model outperformed the normal linear regression model in fitting the data. To investigate the influence of the Eucalyptus height and diameter data on the fit of the chosen generalized normal linear regression model, we calculated the K-L divergence measures and their respective calibrations. Figure 4 shows the K-L measurements for each observation. Note that Cases 335, 479, and 803 exhibited higher values of the K-L divergence when compared with the other observations. The K-L divergences and calibrations for the three observations with the highest calibration values are presented in Table 5. It can be seen that Observation 803 was possibly an influential case, which was also shown to be an outlier in the visual analysis. To assess whether this observation altered the parameter estimates of the GN linear regression model, we carried out a sensitivity analysis.

Figure 4. Index plot of K(π, π_(−i)) for the height and diameter data of Eucalyptus.
Table 6 shows the new estimates of the model parameters after excluding the case with the greatest calibration value, together with the relative variations (RV) of these estimates for the Eucalyptus data. Here, the relative variations were obtained as RV = (θ̂ − θ̂_0)/θ̂ × 100%, where θ̂ and θ̂_0 are the posterior means of the model parameters obtained from the original data and from the data without the influential observation, respectively. We noted a slight change in the RV of the s parameter when the influential observation was excluded; however, this change was insignificant, indicating that the GN linear regression model was not affected by the influential cases. Overall, regardless of the measurement scale in both cases (synthetic and real data sets), the graphical checks corroborated the adequacy of the GN model: the quantile-quantile plots, the residuals-versus-fitted-values plots, and the standardized errors showed no remaining pattern, consistent with equal error variances and the absence of outliers.
Discussion
In this paper, we presented an objective Bayesian analysis of the generalized normal linear regression model. The Jeffreys prior and the reference prior for the generalized normal model were discussed in detail. We proved that the Jeffreys prior leads to an improper posterior distribution and cannot be used in a Bayesian analysis, whereas the reference prior leads to a proper posterior distribution.
The parameter estimates were based on a Bayesian reference analysis procedure via MCMC. Diagnostic techniques based on the Kullback-Leibler divergence were developed for the generalized normal linear regression model. Studies with artificial and real data were performed to verify the adequacy of the proposed inferential method.
The application to a real data set showed that the generalized normal linear regression model outperformed the normal linear regression model under all the model selection criteria. Furthermore, in the studies with artificial and real data, the Kullback-Leibler divergence effectively detected the points that were influential on the fit of the generalized normal linear regression model. Removing such influential points from the real data set showed that the generalized normal model was not affected by influential observations, which is consistent with the view of the generalized normal distribution as a tool for accommodating outliers and achieving robust estimates. The proposed methodology showed consistent marginalization and sampling properties and thus overcomes the difficulty of estimating the parameters of this important regression model. Moreover, by adopting the reference (objective) prior, we obtained results that are consistent under one-to-one transformations within the Bayesian paradigm, making the GN distribution usable in a practical form.
Future work can explore a number of extensions of this study. For instance, the method developed in this article may be applied to other regression models, such as the Student-t regression model and the Birnbaum-Saunders regression model, among others. Additionally, other generalizations of the normal distribution could be considered [36-38].
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Proof of Proposition 3.
Considering the reference prior given in Corollary 2, the result of Corollary 3, and Condition (20), it follows that the reference posterior distribution is proper provided the integral of the reference marginal prior for s against the integrated likelihood L(s|y) is finite. This integral can be bounded and shown to be finite; therefore, the reference prior leads to a proper posterior distribution.
Considering the Jeffreys prior given in Corollary 2, the result of Corollary 3, and Condition (20), it follows that the Jeffreys posterior distribution is proper only if the corresponding integral of the Jeffreys marginal prior for s against L(s|y) is finite; this integral diverges. Therefore, the Jeffreys prior yields an improper posterior distribution, completing the proof.
Heat Shock Element Architecture Is an Important Determinant in the Temperature and Transactivation Domain Requirements for Heat Shock Transcription Factor
ABSTRACT The baker’s yeast Saccharomyces cerevisiae possesses a single gene encoding heat shock transcription factor (HSF), which is required for the activation of genes that participate in stress protection as well as normal growth and viability. Yeast HSF (yHSF) contains two distinct transcriptional activation regions located at the amino and carboxyl termini. Activation of the yeast metallothionein gene, CUP1, depends on a nonconsensus heat shock element (HSE), occurs at higher temperatures than other heat shock-responsive genes, and is highly dependent on the carboxyl-terminal transactivation domain (CTA) of yHSF. The results described here show that the noncanonical (or gapped) spacing of GAA units in the CUP1 HSE (HSE1) functions to limit the magnitude of CUP1 transcriptional activation in response to heat and oxidative stress. The spacing in HSE1 modulates the dependence for transcriptional activation by both stresses on the yHSF CTA. Furthermore, a previously uncharacterized HSE in the CUP1 promoter, HSE2, modulates the magnitude of the transcriptional activation of CUP1, via HSE1, in response to stress. In vitro DNase I footprinting experiments suggest that the occupation of HSE2 by yHSF strongly influences the manner in which yHSF occupies HSE1. Limited proteolysis assays show that HSF adopts a distinct protease-sensitive conformation when bound to the CUP1 HSE1, providing evidence that the HSE influences DNA-bound HSF conformation. Together, these results suggest that CUP1 regulation is distinct from that of other classic heat shock genes through the interaction of yHSF with two nonconsensus HSEs. Consistent with this view, we have identified other gene targets of yHSF containing HSEs with sequence and spacing features similar to those of CUP1 HSE1 and show a correlation between the spacing of the GAA units and the relative dependence on the yHSF CTA.
All organisms possess a highly conserved response to elevated temperatures and to a variety of chemical and physiological stresses commonly designated as the heat shock response (38). In eukaryotic cells this response involves the rapid activation of a transcription factor known as heat shock transcription factor (HSF) (70). Once activated, HSF induces the expression of genes whose products ensure the survival of the cell during stressful conditions by providing defense against general protein damage. These heat shock proteins (Hsps) also play essential roles in the synthesis, transport, translocation, proteolysis, and proper folding of proteins under both normal and stressful conditions (38). Although the heat shock response is conserved among eukaryotes, both the number and overall sequence of HSFs vary widely among different species. Yeasts (Saccharomyces cerevisiae, Schizosaccharomyces pombe, and Kluyveromyces lactis) and Drosophila melanogaster appear to have a single HSF gene, whereas most vertebrates and higher plants possess multiple HSF genes: at least three HSF genes have been isolated from the human, mouse, chicken, and tomato genomes (15,23,28,40,41,44,50,51,53,60,69). Despite sequence divergence, all members of the HSF family have two highly conserved features: a helix-turn-helix DNA binding domain and coiled-coil hydrophobic repeat domains which mediate the trimerization of HSF (26,45,66).
A key step in the induction of heat shock gene transcription is the interaction of HSF with a short, highly conserved cisacting DNA sequence, the heat shock element (HSE) found in the promoters of HSF-responsive genes. All HSEs contain multiple copies of the repeating 5-bp sequence 5Ј-nGAAn-3Ј (where n is any nucleotide) arranged in alternating orientation (2,71). The number of pentameric units in an HSE can vary; while a minimum of three is thought to be required for heatinducible expression, some HSEs harbor eight contiguous inverted repeats (19). Furthermore, the degree of homology of each pentameric unit to the consensus nGAAn motif can vary, as can the nature of the initial pentamer, beginning with either GAA or its complement TTC, with the latter displaying significantly higher levels of biological activity in yeast cells and the capability to bind two HSF trimers instead of one (4). A functional HSE can tolerate a 5-bp insertion between repeating units, provided that the spacing and orientation of the pentameric elements are maintained (2). The binding of HSF to DNA has been shown to be highly cooperative, and deviations from the nGAAn consensus sequence may be tolerated in vivo because multiple HSEs foster cooperative interactions between multiple HSF trimers (4,65,72). These variations in the sequence of the binding site can influence the affinity of HSF for the HSE(s) of a particular heat shock gene, thereby influencing the level of transcriptional activation, and ultimately fine-tune the nature of the heat shock gene response.
The existence of multiple HSF species in higher eukaryotes suggests that HSF isoforms may have specialized functions that can be triggered by distinct stimuli or may activate specific target genes. For example, in human K562 erythroleukemia cells, HSF2 responds to hemin treatment and is constitutively active in mouse embryonal carcinoma cells and at the blastocyst stage during embryogenesis and spermatogenesis (46,55). These observations are consistent with HSF2 functioning as a regulator of heat shock gene expression during development and differentiation, such as its potential regulation of the hsp70.2 gene during spermatogenesis (33,49). Human HSF1 responds to thermal stress and other stresses at the level of trimerization, phosphorylation and DNA binding to activate transcription of Hsp genes (16,48,74). Consistent with the possibility that distinct mammalian HSF isoforms activate different target genes, mouse HSF1 (mHSF1) utilizes a higher degree of cooperativity in DNA binding and demonstrates a preference for HSEs containing four to five pentamers, while mHSF2 has a binding preference for HSEs containing only two to three pentamers (30). This notion is further supported by a recent functional analysis of human HSF1 and HSF2 expressed in yeast, which showed that HSF1 bound with highest affinity to and activated transcription most potently from the SSA3 promoter, which has an extended array of pentameric elements in the HSE (35). On the other hand, HSF2 bound with highest affinity to and activated transcription most potently from the yeast metallothionein gene, CUP1, which has only three pentamers in HSE1 and has a gap between the last two pentamers and an A-to-G substitution (GAG) in the last pentameric unit (35).
Yeast cells utilize the single essential HSF to activate the expression of a wide variety of genes in response to heat and other stresses and to coordinate the expression of genes required for growth under normal physiological conditions. The DNA binding domain of yeast HSF (yHSF) may be more conformationally flexible than HSF1 or HSF2 from higher eukaryotes (20) and allow a wide range of distinct interactions of the DNA binding domain with HSEs. The observation that a single amino acid substitution in the DNA binding domain of yHSF alters the specificity of HSF on different promoters is consistent with this idea (54). A feature distinguishing yHSF from HSFs of higher eukaryotes is the presence of two transactivation domains which respond differentially to heat shock (42,56). Studies of a synthetic HSE-lacZ reporter gene suggested that the yHSF amino-terminal activation domain mediates a transient response to elevated temperatures, while the carboxyl-terminal activation domain (CTA) is required to regulate both a transient and a sustained response (56). Both activation domains are restrained under normal growth conditions by intramolecular interactions with the DNA binding domain, the trimerization domain, and a short conserved element, denoted CE2 (5,14,28,42,56). The presence of two activation domains in yHSF may provide additional levels of regulation or selectivity in gene activation. Previous studies have established that the CUP1 gene is transcriptionally activated by yHSF via heat and oxidative stress (36,63). Interestingly, expression of CUP1 in response to heat shock and oxidative stress exhibits a strong requirement for the CTA of HSF (36,63). In contrast, this region is largely dispensable for the heat shock activation of the SSA1 and SSA3 genes, encoding members of the Hsp70 family (63). It is interesting that in addition to the differential requirement for the CTA of yHSF, activation of CUP1 by yHSF differs from that of typical heat shock genes in that the robust activation of CUP1 requires a temperature of 39 rather than 37°C (63).
The CUP1 promoter HSE is thought to be atypical in that it contains only one HSE (HSE1) composed of three pentameric units. A compilation of HSEs from many organisms demonstrated that for promoters that contain an HSE composed of three pentameric units, additional flanking HSEs are present (4,43). Furthermore, HSE1 deviates significantly from consen-sus HSEs in that there is a gap between the second and third pentamers; however, the gap preserves both the spacing and the orientation between these two repeats. Since yHSF-dependent activation of CUP1 and the SSA genes is distinct, we have carried out a detailed analysis of CUP1 gene expression to understand how yHSF regulates the activation of genes via distinct HSEs and with distinct transactivation domain requirements. We present evidence for a second nonconsensus HSE in the CUP1 promoter, HSE2, which serves to modulate the transcriptional activation of CUP1 in response to both heat and oxidative stress. Furthermore, we demonstrate that the nature of HSE1 plays a crucial role in the dependence on the yHSF CTA for CUP1 activation by heat stress. The expression of two additional yeast genes which contain a gapped HSE is also strongly dependent on the yHSF CTA. Chymotrypsin sensitivity assays show that the arrangement of pentameric units in the CUP1 HSE1 affects the conformation of DNA-bound yHSF and suggests that at least part of the distinct features of CUP1 activation by yHSF may be due to the generation of specific yHSF structures by the HSE. Therefore, this work demonstrates that yeast cells activate and fine-tune the expression of a wide variety of target genes via a single HSF isoform, in part by virtue of the nature of the yHSF binding sites and distinct transactivation domain requirements.
MATERIALS AND METHODS
Strains and growth conditions. S. cerevisiae MCY1093, a gift from Marian Carlson, was used as the wild-type parental strain throughout this study and is designated DTY123. Strain PS145 (a gift from Hillary Nelson) contains a deletion of the endogenous yHSF gene (60). The HSF(1-583) strain, DTY179, has been previously described (63). Cell culture conditions for inducing CUP1 expression by heat shock and oxidative stress using menadione treatment were as previously described (36,63). All CUP1-lacZ fusion plasmids are URA3 based, and all strains were grown in synthetic complete medium minus uracil unless otherwise specified. Strain DTY123skn7 is isogenic to DTY123 and carries a hisG-URA3-hisG (1) disrupted SKN7 gene (10). The SKN7 gene was disrupted following previous protocols (11) by transforming DTY123 with an SKN7/hisG-URA3-hisG fragment that was released from plasmid pBS:SKN7:URA3 by XbaI-XhoI digestion. The chromosomal arrangement of this disrupted skn7 allele was confirmed by both PCR and the increased sensitivity of the skn7 disruption strain to tert-butyl hydroperoxide (37).
Plasmids. All plasmids are numbered according to the 5Ј and 3Ј termini of the CUP1 insert; numbering is relative to the start site of CUP1 transcription. Plasmids containing mutations in the CUP1 HSE that are used for RNA analyses of gene expression are denoted with "m" to distinguish them from plasmids containing mutations in the CUP1 HSE that are used for DNA binding analyses, which are denoted with "M." For analyzing regions of the CUP1 promoter important for CUP1 activation by HSF, restriction enzyme-generated fragments of the CUP1 promoter containing different 5Ј upstream termini but all extending through the 12th codon of CUP1 (BspHI site at ϩ105 from the transcription start site) were ligated into the lacZ fusion vector YEp357R (39). Plasmid pYEpCUP1-807 was generated by using a BspHI-BspHI CUP1 fragment from plasmid pGEXa (63). Plasmids pYEpCUP1-393, pYEpCUP1-241, and pYEpCUP1-167 were generated by using BamHI-, EcoRV-, and XbaI-BspHI CUP1 fragments, respectively, from plasmid pYep336 (12). Mutant CUP1 promoter plasmids pYEpCUP1HSE1P, pYEpCUP1HSE2m, and pYEpCUP1 ACEm were generated by using a Chameleon double-stranded site-directed mutagenesis kit (Stratagene, La Jolla, Calif.), plasmid pYEpCUP1-393, and the following oligonucleotides: CUP1HSE1P Ϫ131/Ϫ172 (5Ј-CGGAAAAGACGC ATCGCTCTGGAAGCTTCTAGAAGAAATGCC-3Ј), CUP1HSE2m (5Ј-GCG ATGCGTCTTTTTCGCTAAACCGTTTCAGCAAAAAAGACTACC-3Ј), and CUP1Acem (5Ј-GCGATGCGTCTTTTCCCGTGAACCGTTCCAGC-3Ј). By the same procedure, plasmid pYEpCUP1HSE1m and oligonucleotide CUP1 HSE2m were used to generate pYEpCUP1HSE1m2m. Plasmids pHSE-WT and pHSE-M were used to prepare identical-sized CUP1 electrophoretic mobility shift assay (EMSA) probes; pHSE-M contains a mutation in CUP1 HSE1, and both plasmids were described previously (63). Plasmids pHSE-2M and pHSE-1M2M contain a CUP1 fragment identical in size to that of pHSE-WT and pHSE-1M, all four plasmids contain CUP1 sequences from Ϫ183 to Ϫ80 cloned into the EcoRV site of pBluescript SK ϩ . pHSE-2M was constructed by ligating a PCR product derived from plasmid pYEpCUP1HSE2m into the EcoRV site of pBluescript SK ϩ . pHSE-1M2M was constructed by ligating a PCR product derived from plasmid pYEpCUP1HSE1m2m into the EcoRV site of pBluescript SK ϩ . The ability of HSEs from various genes to function as heat-inducible VOL. 18,1998 HSE STRUCTURE MODULATES HSF ACTIVITY upstream activation sequences (UAS) was tested using the CYC1-lacZ fusion plasmid pCM64, a gift from Charles Moehle. Plasmid pBS:SKN7:URA3 was constructed as follows. The hisG-URA3-hisG cassette (1) was removed from plasmid pNKY51 by BglII-BamHI digestion, filled in, and ligated into plasmid pBS:SKN7 (a generous gift from Richard Stewart) which had been digested with StyI-MscI. The StyI-MscI digestion removes nucleotides that code for approximately 480 of the 622 amino acids of SKN7 from plasmid pBS:SKN7. Digestion of pBS:SKN7:URA3 with XbaI-XhoI produces a fragment containing the hisG-URA3-hisG cassette with SKN7 sequence flanking each end. To facilitate the purification of full-length yHSF from Escherichia coli, plasmid pET3d-HSF-His6, which contains a six-His tag added to the carboxy terminus of the yHSF open reading frame cloned into pET3d, was constructed. The following plasmids were utilized for making antisense RNA probes by using T7 RNA polymerase and for RNase protection assays. Plasmids pKSACT1 and pKSlacZ, for determining CUP1-lacZ and ACT1 mRNA levels, were described elsewhere (32). 
Plasmid pKSSSA3 was constructed by ligating a 159-bp EcoRI-HincII fragment from the SSA3 gene into the EcoRI-SmaI sites of pBluescript KS ϩ . Plasmid pSKCUP1 was constructed by inserting a 149-bp EcoRI-BamHI fragment from the CUP1 gene into the same sites of pBluescript SK ϩ . pSKHSC82 was constructed by ligating a PCR product containing a 115-bp fragment of the HSC82 gene to which EcoRI-BamHI sites were introduced into the same sites of pBluescript SK ϩ . Plasmid pSKHSP82 was constructed by ligating a PCR product containing a 109-bp fragment of the HSP82 gene to which EcoRI-BamHI sites were introduced into the same sites of pBluescript SK ϩ . The latter two plasmids were used to generate antisense RNA probes which hybridize specifically to HSC82 or HSP82 mRNA spanning positions ϩ2161 to ϩ2275 and ϩ2196 to ϩ2305, respectively, in the 3Ј untranslated regions of both genes (18,25).
RNA isolation and RNase protection analyses. RNA from either control, heat shock-treated, or menadione-treated cells was isolated as previously described (36,63). 32 P-labeled antisense CUP1, HSC82, and HSP82 RNAs were produced from BamHI-linearized plasmids. The ACT1 mRNA level was used as a control for normalization for quantitation of RNase protection products throughout this study. RNase protection samples were separated on 6% acrylamide gels; radioactive bands on the dried gels were quantitated by using a PhosphorImager SP and IPLab Gel software (Molecular Dynamics) as described elsewhere (29).
In vitro DNA binding studies. EMSAs were carried out as described previously (57,59,63). Plasmids pHSE-WT, pHSE-M, and pHSE-1M,2M were used to prepare CUP1 HSEWT (wild-type HSE), HSE1M, and HSE1M,2M probes for EMSA by digesting with EcoRI and HindIII and filling in the 103-bp fragments with the Klenow fragment and [␣-32 P]dATP. Yeast extracts for EMSA were prepared from cells by glass bead disruption in 50 mM Tris-Cl (pH 7.5)-1 mM EDTA-protease inhibitors as previously described (63). Binding reactions were for 30 min at room temperature; the binding buffer was previously described (57,59). Protein levels in yeast extracts was measured by using the Bio-Rad protein assay (Bio-Rad, Hercules, Calif.). Radiolabeled probes were purified by nondenaturing polyacrylamide gel electrophoresis (PAGE), and probe concentrations were determined spectrophotometrically by UV light absorbance at 260 nm. Quantitative DNA binding studies were carried out, and apparent K d (K d,app ) and Hill coefficients were determined as described previously (29). Competitive DNA binding assays using CUP1 HSEWT, HSE1M, and HSE1M,2M probes were carried out as previously described (57,59). Purified yHSF and 0.1 nM probe were used for all K d,app determinations, and a 4% polyacrylamide gel system with a high cross-linking ratio, 5.6:1, of acrylamide to bisacrylamide was used (62) for all assays with purified yHSF. These gels were run at 4°C and contained 0.5ϫ Tris-borate-EDTA 10% glycerol, and 0.1% Nonidet P-40 (NP-40); the running buffer also contained 0.5ϫ Tris-borate-EDTA and 0.1% NP-40. Following electrophoresis, EMSA gels were fixed (10% acetic, 10% methanol), dried, exposed to X-ray film, and subjected to PhosphorImager analysis.
The DNase I footprinting reactions were carried out as for the EMSA DNAbinding reactions except that after the binding incubation, 1/10 volume of a buffer containing 25 mM MgCl 2 and 20 mM CaCl 2 and 1 l of a 1:2,000 dilution of DNase I (10 U/ml; Boehringer Mannheim, Indianapolis, Ind.) were added, and the mixture was incubated for 1 min. DNase I digestion was terminated by the addition of 1/10 volume of 250 mM EDTA and loaded immediately onto an EMSA gel (65). Radioactive bands were excised from the EMSA gel, DNA ethanol precipitated, and fractionated on denaturing polyacrylamide gels (47). The gels were dried and exposed to X-ray film and PhosphorImager screens.
Limited proteolysis of yHSF-HSE complexes. The proteolytic clipping band shift assay (52) was carried out as previously described (22,64). Briefly, purified yHSF-DNA complexes formed at room temperature for 30 min were subjected to limited proteolysis with chymotrypsin (amounts of chymotrypsin and lengths of incubation are indicated in figure legends). Binding reactions and the gel system used for these limited proteolysis experiments were identical to those described above used in K d,app determinations. The chymotrypsin (Worthington Biochemical Corporation) was diluted into water just before use. Chymostatin (Boehringer Mannheim) was used to terminate reactions in the limited proteolytic time course assays. Fixed-time limited proteolysis assays were terminated by direct loading to EMSA gels. Following electrophoresis, EMSA gels were fixed (10% acetic, 10% methanol), dried, exposed to X-ray film, and subjected to PhosphorImager analysis. The amount of yHSF-DNA complexes and free probe remaining after limited proteolysis was quantitated by PhosphorImager analysis of the dried gels.
Expression and purification of yHSF from E. coli. Full-length yHSF was expressed and purified by standard protocols (3), with minor modifications. Six liters of freshly transformed E. coli BL21(DE3)pLysS cells containing plasmid pET3d-HSF-His6 was grown in Superbroth (Digene Diagnostics, Beltsville, Md.) at 37°C to an A 600 of 0.8. Isopropyl--D-thiogalactopyranoside was added to a final concentration of 0.4 mM, and induction was carried out at 25°C for approximately 6 h. Cells were harvested, and cell pellets were frozen in liquid nitrogen and stored at Ϫ80°C. All subsequent steps in this purification were carried out at 4°C. Cell pellets were thawed in the presence of protease inhibitor cocktail (3); cells were resuspended in approximately 100 ml of breaking buffer (50 mM sodium phosphate [pH 8], 300 mM NaCl, 10% glycerol) and broken by one passage through a French pressure cell (SLM Aminco, Champaign, Ill.) at 16,000 lb/in 2 . The cell extract was centrifuged for 30 min at 30,000 ϫ g, the pH of the supernatant was adjusted to approximately 8, and incubation continued for 30 min with gentle mixing at 4°C in batch, using 3 ml of packed Ni-nitrilotriacetic acid (NTA) resin (Qiagen, Chatsworth, Calif.) per 50 ml of extract. The resin was washed twice by centrifugation at 500 rpm in a RT6000B swinging-bucket centrifuge (Sorvall, Wilmington, Del.) at 4°C with wash buffer 1, which was identical to breaking buffer except that it contained 500 mM NaCl and 5 mM imidazole. The resin was then washed once with wash buffer 2, which was identical to wash buffer 1 except that it contained 10 mM imidazole. The resin was pooled, and elution of yHSF was effected by two successive incubations with 6 ml of elution buffer (identical to wash buffer 1 except that it contained 200 mM imidazole) for 15 min with gentle mixing. The eluted sample was aliquoted, frozen by using liquid nitrogen, and stored at Ϫ80°C. HSF was further purified by gel filtration chromatography on a Superose 6 HR10/30 column (Pharmacia, Piscataway, N.J.). Procedures for calibration and FPLC (fast protein liquid chromatography) purification using the Superose 6 column (31, 59, 68) and the chromatography buffer (68) have been described elsewhere. Briefly, the Ni-NTA resin eluate was thawed and adjusted to 0.1 mM EDTA-0.1 mM EGTA-0.1% NP-40 immediately prior to injection of 0.2 ml onto the Superose column. The column chromatography buffer was modified by the addition of 0.1% NP-40, the column was run at 0.3 ml/min, and 0.3-ml fractions were taken. After binding of the centrifuged cell lysate to the Ni-NTA resin, the protease inhibitor mix was replaced by the single protease inhibitor Pefabloc (Boehringer Mannheim). Pefabloc was used in all buffers for all remaining steps. HSF eluted at approximately 12 ml (fractions 37 to 45), immediately before the position where the thyroglobulin standard (660 kDa) elutes. The purified yHSF was stable at 0°C for several weeks.
Protein extraction and immunoblotting. Whole-cell protein extracts for immunoblotting were prepared exactly as described previously (35) by glass bead extraction using sodium dodecyl sulfate (SDS) harvest buffer (0.5% SDS, 10 mM Tris-HCl [pH 7.4], 1 mM EDTA) containing protease inhibitors. Protein concentration was determined by the Bradford assay (Bio-Rad). Extracts were resolved by SDS-PAGE (10% gel), transferred to nitrocellulose, and immunoblotted under standard conditions. Immunoblotting was carried out with reagents and protocols from Amersham, using anti-yHSF polyclonal antiserum (a gift from P. Sorger), Hsc82/Hsp82p polyclonal antibody (a gift from S. Lindquist), Ssa3/Ssa4p polyclonal antibody (a gift from E. Craig), and monoclonal antibodies against phosphoglycerate kinase (Pgk1p; Molecular Probes, Eugene, Oreg.). Proteins of interest were detected by using the Renaissance chemiluminescence detection system (NEN Life Sciences, Boston, Mass.). Band intensity was estimated using NIH Image version v1.61.
RESULTS
The CUP1 promoter harbors two nonconsensus HSEs. We previously identified a single HSE in the CUP1 promoter required for transcriptional activation in response to heat shock, glucose starvation, and superoxide radical generation (36,63). Transcription of CUP1 via this HSE is distinct from that of HSP70 genes in its requirement for the yHSF CTA and for heat shock at 39 rather than 37°C (63). To identify other regulatory sites that might function in the transcriptional activation of CUP1 by yHSF, we fused a series of CUP1 promoter deletion mutants to the lacZ gene in YEp357R and analyzed expression from these plasmids by RNase protection experiments (Fig. 1). DNA sequences between Ϫ807 and Ϫ241 of the CUP1 promoter do not appear to contribute to the magnitude of CUP1 activation in response to either heat shock or menadione treatment (Fig. 1). Deletion of this segment of DNA increases the basal level of CUP1 expression approximately fourfold (Fig. 1). Truncation of the CUP1 promoter to Ϫ163, which destroys the first pentameric unit in HSE1, also increased basal transcription fourfold compared to longer pro-moter fragments but essentially eliminated the activation of CUP1 by HSF in response to both heat shock and oxidative stress (pYEpCUP1-163 [ Fig. 1]). Analysis of the 3Ј CUP1 promoter region showed that deletions of the CUP1 transcribed region to ϩ9 from the start site of transcription had no significant effect on the magnitude of the transcriptional activation of CUP1 in response to heat shock or oxidative stress (data not shown).
Based on the observation that DNA sequences upstream of Ϫ241 are not required for heat shock induction of CUP1, we investigated whether the CUP1 HSE1 alone was sufficient to function as a heat-inducible UAS. The HSEs from the SSA1, SSA3, and SSA4 genes were previously shown to be sufficient to function as heat-inducible UASs (6,73). DNA sequences encompassing the CUP1 HSEs from Ϫ168 to Ϫ141 or Ϫ168 to Ϫ116 from the transcription start site were unable to activate heat-induced transcription when fused to the yeast CYC1 basal promoter, while the SSA3 HSE strongly activated heat-induced transcription in this context (data not shown). Longer fragments of the wild-type CUP1 promoter (Ϫ393 to Ϫ91, Ϫ183 to Ϫ79, Ϫ393 to Ϫ1, or Ϫ168 to Ϫ1 from transcription start) were also unable to activate transcription in this context, implicating a requirement for specific CUP1 basal promoter elements for yHSF-mediated activation of CUP1. Therefore, the CUP1-lacZ fusions used throughout this study contain the CUP1 HSE and basal promoter fused to lacZ.
In contrast to the HSEs found within the SSA1 and SSA3 promoters, the CUP1 HSE1 contains only three pentameric units, with a gap between the second and third units ( Fig. 2A). However, DNase I footprinting and methylation interference analyses have shown that the HSF trimer interacts with all three pentameric sites (2,63). Since a 200-fold molar excess of an oligonucleotide containing sequences adjacent to HSE1 (Ϫ141 to Ϫ107) competes for the binding of yHSF in crude extracts with a probe containing Ϫ241 to ϩ37 of the CUP1 promoter (54), we investigated whether a second nonconsensus HSE, HSE2 ( Fig. 2A), might function in CUP1 transcriptional activation. The two HSEs are similar in that both contain only three GAA units and both start with TTC.
CUP1 HSE architecture and arrangement modulates transcriptional potency. To determine whether the gap in the CUP1 HSE1 or the putative HSE2 plays a role in the regulation of CUP1 transcription in response to heat stress, expression from CUP1 promoters with mutationally altered HSE1 and HSE2 elements (summarized in Fig. 2A) was analyzed by RNase protection. Two strategies were adopted. First, the spacing in HSE1 was altered to match the consensus HSEs such as those found in the SSA1 and SSA3 promoters. Second, the putative HSE2 was mutagenized within each pentamer at positions known to be essential for HSF binding to consensus HSEs (19). Conversion of the gapped CUP1 HSE1 to an HSE with four GAA repeats, designated HSE1 "perfect" (HSE1P/2), resulted in 3.9- and 3.6-fold increases in CUP1 activation in response to heat shock and oxidative stress (data not shown), respectively, compared to the wild type (Fig. 2B). Mutagenesis of HSE2 (HSE1/2m [Fig. 2]) resulted in a 3.5-fold increase in the transcriptional activation of CUP1 in response to heat shock and oxidative stress (data not shown) compared to the wild type (Fig. 2B). This hyperactivation depended on the functional integrity of HSE1, as demonstrated by the inactivity of the double mutant HSE1m/2m (Fig. 2B). Combination of the two mutations (HSE1P/2m) did not result in any significant difference in expression as compared to the CUP1 HSE1P/2 promoter (Fig. 2B). Mutation of HSE1 alone (HSE1m/2 [Fig. 2A]) also resulted in a CUP1-lacZ fusion that was transcriptionally inactive to both heat shock and oxidative stress (Fig. 2B and data not shown), confirming our previous results demonstrating the requirement for HSE1 in the stress induction of CUP1 (36,63). These results with either double mutation (HSE1P/2m and HSE1m/2m) demonstrate that modulation of the transcriptional activation of CUP1 through HSE2 is highly dependent on the nature of HSE1.

FIG. 1. Analysis of CUP1 promoter sequences required for heat shock and oxidative stress-inducible transcription. Total RNA was isolated from transformants of strain DTY123 harboring the indicated CUP1-lacZ promoter derivatives, and steady-state mRNA levels for lacZ and ACT1 (indicated with arrows) were analyzed by RNase protection experiments. Values for normalized expression were determined before (control [C]; 27°C) and after heat shock (HS; 39°C) or menadione treatment (MD; 500 µM). Heat shock was carried out for 20 min; cells were exposed to menadione for 1 h. Quantitation was carried out with a PhosphorImager, and in each case the CUP1-lacZ mRNA level was normalized to the respective ACT1 mRNA level. The values represent averages of three separate determinations ± standard deviations. The graph indicates the normalized expression levels of CUP1-lacZ mRNA detected in each lane. Nucleotide numbers refer to positions relative to the start site of transcription of the CUP1 gene; vector represents YEp357R.
Based on the results obtained with the HSE1P CUP1-lacZ fusion, we synthesized oligonucleotides spanning the HSE1P mutation to determine whether HSE1P could confer heat shock-inducible expression to the CYC1 basal promoter. An oligonucleotide containing the HSE1P mutation and spanning from Ϫ168 to Ϫ116 or from Ϫ168 to Ϫ141 of the CUP1 promoter potently activated the CYC1-lacZ reporter in response to heat shock (data not shown). Therefore, the requirement for the basal promoter region in the yHSF-mediated transcriptional activation of CUP1 in response to heat shock can be dispensed with by using a canonical HSE but not the CUP1 HSE1.
The activation of CUP1 by heat shock and oxidative stress has been previously shown to be independent of the Cu iondependent transcription factor, Ace1p (54,63). However, a high-affinity Ace1 binding site overlaps the TTC and partially overlaps the second GAA unit in HSE2 ( Fig. 2A), and the HSE2 mutation converts the GAA to AAA, disrupting one nucleotide in this Ace1p site. We therefore analyzed transcriptional activation from a CUP1-lacZ fusion which destroys the high-affinity Ace1p site that overlaps the HSE2 sequence but does not mutate HSE2 ( Fig. 2A, Ace1m). Activation of the Ace1m CUP1-lacZ fusion in response to heat stress was indistinguishable from the wild-type promoter (data not shown). Therefore, the increased transcriptional activation observed for the HSE1/2m CUP1-lacZ fusion gene is independent of activation by Ace1p.
Another stress-responsive transcription factor found in S. cerevisiae, Skn7p, possesses significant homology to the DNA binding domain of yHSF (9) and binds to a GAA-containing sequence of the TRX2 promoter (37). Therefore, we investigated whether the increased transcriptional response of the HSE1P/2 and HSE1/2m CUP1-lacZ promoters might be due to Skn7p-mediated activation. The heat shock responses of both wild-type and mutant CUP1-lacZ fusions in a skn7 disruption strain were indistinguishable from that of the wild-type SKN7 strain (data not shown). Taken together, these results demonstrate that the modulation of CUP1 expression in response to heat shock is mediated by HSF, HSE1, and HSE2.
The architecture of CUP1 HSEs imparts specificity to the mode of activation of CUP1. Transcriptional activation of CUP1 by yHSF differs from that of SSA3 in that CUP1 activation requires an optimal heat shock temperature of 39 rather than 37°C (63). Furthermore, CUP1 expression in response to heat shock is highly dependent on the CTA of yHSF, whereas the SSA1 and SSA3 promoters are much less dependent on this domain. To determine if HSE1 and HSE2 are determinants in these features of CUP1 transcriptional activation, we compared expression from the wild-type and mutationally altered CUP1-lacZ fusion genes at 37 and 39°C (Fig. 3A). Consistent with previous results (63), activation of CUP1 at 37°C was only 25% of that observed at 39°C (Fig. 3A). Interestingly, the generation of either HSE1P or HSE2M did not alter the temperature induction profile of CUP1; that is, expression of both derivatives was maximal at 39°C. Both mutations, however, change the efficacy of transcription at 37°C. The HSE1/2m and HSE1P/2 CUP1 derivatives give rise to a level of heat shockinducible transcription at 37°C that is comparable to that observed for the wild-type fusion at 39°C (Fig. 3A). Therefore, both HSE2 and the gap between pentamers 2 and 3 in HSE1 act to limit the expression of CUP1 at a temperature where many other HSF-responsive genes are near maximal expression.
Isogenic wild-type HSF and HSF(1-583) cells harboring wild-type and mutant CUP1-lacZ fusions were analyzed to ascertain whether HSE1 or HSE2 plays a role in the dependence of CUP1 transcriptional activation on the yHSF CTA. As shown in Fig. 3B and consistent with previous analyses (36,63), heat shock activation of the wild-type CUP1-lacZ reporter in the HSF(1-583) strain is greatly reduced (approximately 70%) compared to a strain with wild-type HSF. In contrast, heat shock activation of the HSE1P CUP1-lacZ reporter in the HSF(1-583) strain is reduced only 43% compared to a strain with wild-type HSF. These results suggest that the gapped HSE1 plays a critical role in determining the degree of dependence of CUP1 expression in response to heat shock on the HSF CTA. Transcription from the HSE1P/2 CUP1 promoter derivative was hyperactivated in the HSF(1-583) strain, exhibiting approximately threefold greater activation than the wildtype promoter in the wild-type HSF strain (Fig. 3B). In contrast to expression in a wild-type HSF strain, the HSE1/2m reporter expression is greatly reduced in the HSF(1-583) strain, with an induction approximately equal to that observed for the wild-type reporter in the HSF(1-583) strain (Fig. 3B). This finding suggests that the transcriptional activation observed for the HSE1/2m promoter under stress conditions is dependent on the interaction of yHSF with HSE1. Similar results were obtained in response to oxidative stress using the wild-type, HSE1/2m, and HSE1P/2 reporter plasmids in the HSF(1-583) strain and wild-type HSF strain (data not shown). The data for the HSF(1-583) strain suggest that the HSE1P promoter increases the ability of the amino-terminal activation domain of HSF to activate CUP1 expression. Since the HSF(1-583) protein completely lacks a CTA, the results in Fig. 3B suggest that the gapped HSE1 plays a critical role in determin-ing the contribution of the amino-terminal activation domain of HSF to the magnitude of CUP1 expression in response to both heat shock and oxidative stress.
To ascertain whether the correlation between a gapped HSE architecture and higher dependence on the yHSF CTA is a general phenomenon, other Hsp gene promoters with similarly organized HSEs were examined. Inspection of HSEs found in the promoters of HSP82, HSC82, and CUP1 suggests that nonconsensus HSEs may be commonly used for transcriptional responses to heat shock. Previous analysis of the HSE1 from the HSC82 and HSP82 promoters (8,18,24,25) showed that, like the CUP1 HSE1, they are composed of only three pentameric units containing a gap between pentamers 2 and 3, with all three sites properly oriented and spaced (Fig. 3C). Based on these observations, we measured the heat-induced levels of endogenous CUP1, HSC82, and HSP82 mRNAs to determine if these genes exhibit a strong dependence on the yHSF CTA. Heat shock-induced expression of the CUP1 and HSC82 genes in the HSF(1-583) strain are most affected, with heat shock transcription being only 23 and 24%, respectively, of that observed in an isogenic wild-type HSF strain. Expression of HSP82 in the HSF(1-583) strain is only 37% of that in the wild-type HSF strain, while heat shock expression of SSA3 in the HSF(1-583) strain was least affected by the loss of the yHSF CTA (56% of the level in wild-type strain). This result with SSA3 is identical to that observed with the HSE1P/2 reporter in Fig. 3B, where 56% of the steady-state expression level in the HSF(1-583) background was retained compared to that present in the HSF wild-type strain. To determine whether there are significant physiological consequences for the reduction in the magnitude of transcriptional activation for the HSP82, HSC82, and SSA3 genes in the HSF(1-583) strain, the levels of these proteins in control and heat-shocked cells were determined by immunoblotting. Consistent with the steadystate RNA measurements for the HSP82 and HSC82 genes demonstrating a strong requirement for the HSF CTA in heatinduced expression of these two genes, protein levels of both Hsp90 isoforms were severely diminished (80 to 90%) in HSF (1-583) cells (Fig. 3D). In contrast, the levels of Ssa3/Ssa4 proteins were only slightly reduced (10 to 20%) in HSF(1-583) cells (Fig. 3D). These results strongly suggest that heat shockinduced expression from promoters containing contiguous HSEs is less dependent on the yHSF CTA than expression from promoters with gapped HSEs. yHSF binding to HSE2 modulates interactions at HSE1. The data described here implicate the presence of a second CUP1 HSE, HSE2, in the modulation of CUP1 transcription that is dependent on both yHSF and the nonconsensus HSE1. To ascertain whether yHSF interacts directly with HSE2 and whether this interaction might modulate the occupancy of HSE1, in vitro DNA binding studies were carried out. Since yeast cells express two proteins (10, 21) bearing homology to the yHSF DNA binding domain that appear to play no role in CUP1 regulation but which may confound in vitro DNA binding studies, full-length yHSF was expressed in and purified from E. coli. To facilitate the purification of yHSF for DNA binding studies, we constructed a yHSF allele in which a His 6 tag was placed at the carboxyl terminus of the coding region. This HSF-His 6 protein fully complemented the viability defect associated with disruption of the single endogenous yHSF gene at both 30 and 37°C (data not shown). yHSF was obtained after sequential purification on Ni-NTA agarose and FPLC Superose-6 chromatography (31,59,68). 
Purified yHSF migrated on a Coomassie blue-stained SDS-polyacrylamide gel at approximately 150 kDa and comigrated with HSF present in whole-cell yeast extracts from non-heat-shocked cells, as detected by Western blot analysis (Fig. 4A and B). Furthermore, purified yHSF specifically bound to the CUP1 promoter in a manner similar to yHSF present in crude cell extracts from non-heat-shocked cells (Fig. 4C). The amount of yHSF from crude cell extracts binding to CUP1 DNA is lower than that in the recombinant yHSF samples due to the low abundance of endogenous yHSF in the cell extracts used in the binding reaction.

FIG. 3. (A) HSE1P and HSE2m CUP1-lacZ fusions lower the temperature threshold for transcriptional activation of CUP1. The steady-state levels of CUP1-lacZ mRNA from the wild-type, HSE1P, and HSE2m fusions were analyzed before (control [C]; 27°C) and after heat shock (HS) at either 37 or 39°C for 20 min. ACT1 and CUP1-lacZ mRNA levels were assayed and quantitated as described for Fig. 1. Below the data are schematic representations of the CUP1-lacZ promoter derivatives assayed in this RNase protection experiment and normalized expression levels of CUP1-lacZ mRNA; details of the mutations in the CUP1 HSEs are given in Fig. 2. (B) The HSE1P but not the HSE2m CUP1-lacZ fusion reduces the dependence of CUP1 transcriptional activation on the yHSF CTA. Experiments were carried out as described for panel A except that the CUP1-lacZ fusions were also assayed in a yeast strain containing the HSF(1-583) allele. (C) HSC82 and HSP82 possess similar GAA unit arrangements in HSE1, with a gap between units 2 and 3; SSA3 contains a contiguous array of 5-bp units. (D) Deletion of the HSF carboxyl-terminal activation domain results in severe reduction in Hsp82/Hsc82 protein levels, while Ssa3/Ssa4 protein levels are only moderately affected, as determined by Western blot analysis of Hsp82/Hsc82, Ssa3/Ssa4, and Pgk1 protein levels in yeast strains containing either wild-type HSF or HSF(1-583). Yeast cells were heat shocked (HS) at 39°C for 1 h, and extracts were prepared by glass bead disruption as described in Materials and Methods. Samples were subjected to SDS-PAGE and immunoblotted with polyclonal antisera to Hsp82/Hsc82, Ssa3/Ssa4, and Pgk1. Pgk1 levels were used for normalizing sample loads.
The dissociation constants for yHSF-CUP1 promoter complexes were determined by quantitative EMSAs. As shown in Fig. 5, yHSF interacts with the CUP1 promoter fragment encompassing HSE1 and HSE2 with a Kd,app of (3.7 ± 0.5) × 10⁻⁹ M. Since the CUP1 HSE1P/2 derivative gave rise to increased expression in response to stress and this expression was less dependent on the yHSF CTA, we determined whether these effects were due to an increase in binding affinity of yHSF for the CUP1 HSE1P promoter. yHSF bound to the CUP1 HSE1P/2 probe with a Kd,app of (3.4 ± 0.9) × 10⁻⁹ M, demonstrating that yHSF does not have a significantly higher affinity for the CUP1 HSE1P compared to the wild-type promoter. The apparent affinity of yHSF for the CUP1 HSE2M probe, (4.0 ± 0.2) × 10⁻⁹ M, was not significantly different from that for either the CUP1 HSEWT or CUP1 HSE1P. Furthermore, no difference was observed in the apparent Hill coefficients obtained for the three CUP1 promoter sequences (approximately 1.5), suggesting a lack of differences in yHSF binding cooperativity to these three CUP1 promoter derivatives. This Hill coefficient is highly reproducible, and the intermediate value for the apparent Hill coefficient of between 1 and 2 suggests that one yHSF trimer may bind stably, and a second may bind only weakly or partially, to the CUP1 HSE1. Since the SSA3 and CUP1 promoters also exhibit marked differences in heat shock-inducible gene expression as a function of their HSEs, we explored whether yHSF exhibits different binding affinities for these two promoters. In three independent experiments, differences in neither affinity nor binding cooperativity were observed (data not shown). Therefore, it does not appear that the differences in heat shock-responsive expression between the CUP1 HSE1P, CUP1 HSE2M, and SSA3 promoters and the CUP1 HSEWT promoter are due to differences in the affinity of yHSF for the HSEs or in binding cooperativity. Rather, differences may be due to binding site context-dependent alterations in bound HSF or interactions with other factors.
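As an illustration of how a Kd,app and an apparent Hill coefficient can be extracted from quantitated titrations such as those in Fig. 5, a minimal curve-fitting sketch is shown below. This is not the analysis code used in the study; the protein concentrations mirror the lane series described for Fig. 5, but the fraction-bound values are invented for illustration.

```python
# Minimal sketch (assumed data): fit fraction bound = [HSF]^n / (Kd^n + [HSF]^n)
# to a quantitated EMSA titration to obtain Kd,app and an apparent Hill coefficient.
import numpy as np
from scipy.optimize import curve_fit

def hill(p, kd, n):
    return p**n / (kd**n + p**n)

hsf_molar = np.array([0.55, 0.96, 1.6, 2.9, 5.0, 8.9, 15.0, 27.0, 48.0]) * 1e-9  # lane series, M
frac_bound = np.array([0.03, 0.07, 0.14, 0.30, 0.56, 0.76, 0.88, 0.95, 0.98])    # assumed values

(kd_app, n_app), _ = curve_fit(hill, hsf_molar, frac_bound, p0=(4e-9, 1.5))
print(f"Kd,app = {kd_app:.2e} M, apparent Hill coefficient = {n_app:.2f}")
```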
EMSAs using yHSF and CUP1 promoter mutations strongly suggest that yHSF binds to HSE2, albeit with a very low efficiency compared to HSE1 (data not shown). To precisely map the site of interaction of yHSF with the CUP1 HSE2, DNase I footprinting analysis was performed with the CUP1 HSEWT probe (Fig. 6A). At low yHSF concentrations (Fig. 6A, lane 2), strong protection over a region encompassing HSE1, from -172 to -143, was observed. Binding of yHSF to HSE1 is accompanied by DNase I hypersensitivity at several positions upstream of position -172. Additionally, at this concentration of yHSF (14 nM), there is modest protection over the region corresponding to CUP1 HSE2. At a sevenfold-higher concentration of yHSF, however, this DNase I cleavage pattern is altered in several distinct ways (Fig. 6A, lane 3). First, HSE2 is strongly protected from DNase I cleavage by yHSF from positions -134 to -120 on the bottom strand. Second, three sites of DNase I hypersensitivity within or flanking HSE2, at positions -123, -127, and -140, are observed (Fig. 6A). Third, concomitant with more complete occupation of HSE2, there is a marked increase in DNase I cleavage at several positions in the 3′ end of HSE1, including -148, -152, -154, and -155. Therefore, occupation of the lower-affinity HSE2 site by yHSF appears to alter the interaction of yHSF with HSE1.

FIG. 4 (legend, panels B and C). (B) The same samples used to load the gel shown in panel A were electrophoresed and subjected to immunoblotting with polyclonal anti-yHSF antiserum; purified yHSF, which comigrates with HSF present in non-heat-shocked yeast cell extract, is indicated by the arrowhead. (C) EMSA of purified yHSF. Lane 1 contains no yHSF protein; lanes 2 and 3 contain purified yHSF; lanes 4 and 5 contain crude yeast extract prepared from non-heat-shocked cells by glass bead disruption as described in Materials and Methods. EMSAs were carried out as described in Materials and Methods. All lanes contain 1 ng of 32P-labeled CUP1 HSEWT probe with or without the CUP1 competitor DNA, at 30-fold molar excess, as indicated above the gel.
To verify the specificity of yHSF occupation of HSE2 and the effect of yHSF binding to HSE2 on HSE1 binding, DNase I footprinting was carried out with the CUP1 HSE2M probe (Fig. 6B). The complete lack of protection observed over HSE2, even at high yHSF concentrations, demonstrates the specificity of the interaction of yHSF with HSE2 (compare -134 through -120 in Fig. 6A and B). The 5′ boundary of the protected region over HSE1 is identical to that of the HSEWT probe (-172 [Fig. 6]). However, in contrast to the wild-type CUP1 probe, there are no alterations in the protection over HSE1 as more yHSF is added (compare -148 through -155 in Fig. 6A and B). However, the extent of the protected region over HSE1 decreased, and the three hypersensitive sites observed with the wild-type CUP1 probe were abolished (compare Fig. 6A and B). Furthermore, the major cleavage site at -143 (Fig. 6B) and the residues immediately upstream of this site, TCG (bottom strand, -144 to -146), are no longer protected (compare Fig. 6A and B). Therefore, yHSF bound at HSE2 may facilitate the binding of yHSF to the third pentameric unit (GAG) of HSE1.
HSF adopts distinct conformations when bound to consensus and atypical HSEs. So far, our results show that differences in the transcriptional activity of the CUP1 promoter derivatives cannot be attributed to any changes in either the binding affinity or the cooperativity with which yHSF binds these DNA sequences. To more directly assess whether the differences in transcriptional activation from the CUP1 promoter derivatives are due to changes in the conformation of DNA-bound HSF, protease sensitivity assays were carried out with purified yHSF bound to the CUP1 HSEWT, HSE1P, and HSE2M DNA fragments. The proteolytic clipping band shift assay (52) utilizes limited proteolysis of DNA-bound protein and has been used to probe the structure of transcription factor-DNA complexes (22,52,64). HSF-CUP1 HSE complexes were subjected to limited proteolysis by incubation with increasing concentrations of chymotrypsin, and the resulting complexes were separated on EMSA gels (Fig. 7A). The differences in sensitivity to digestion of the HSF-HSEWT and HSF-HSE1P complexes were striking. The HSF-HSE1P complex was routinely more resistant to chymotrypsin treatment than either the HSF-HSEWT or HSF-HSE2M complex (compare lanes 3, 8, and 13 in Fig. 7A). There is approximately an order of magnitude difference in the amount of chymotrypsin required to obtain similar levels of proteolytic sensitivity for the HSEWT and HSE1P. The sensitivities of the HSF-HSEWT and HSF-HSE2M complexes were nearly indistinguishable, suggesting that the major determinant in the chymotrypsin sensitivity of the DNA-bound yHSF is HSE1. Addition of chymostatin to the binding reactions prior to chymotrypsin resulted in complexes which were completely resistant to chymotrypsin (Fig. 7A; compare lanes 5, 10, and 15 with lanes 1, 6, and 11). There were no obvious differences in the pattern of products generated by chymotrypsin digestion of yHSF bound to the HSEWT, HSE1P, or HSE2M (Fig. 7A and data not shown). The data in Fig. 7B show that the difference in chymotrypsin sensitivity between the HSF-HSE1P complex and the HSF-HSEWT and HSF-HSE2M complexes can also be observed in the rate of proteolysis of these complexes. After 12 min of chymotrypsin digestion, the HSF-HSEWT and HSF-HSE2M complexes are almost completely degraded whereas approximately 25% of the HSF-HSE1P complex remains (Fig. 7B). Thus, the HSF-HSE1P complex adopts a conformation distinct from the HSF-HSEWT and HSF-HSE2M complexes which can be demonstrated as differences in both the concentration and rate of limited digestion with chymotrypsin (Fig. 7).

FIG. 5 (legend). (A) EMSA protein titrations. yHSF concentrations used were 5.5 × 10⁻¹⁰, 9.6 × 10⁻¹⁰, 1.6 × 10⁻⁹, 2.9 × 10⁻⁹, 5 × 10⁻⁹, 8.9 × 10⁻⁹, 1.5 × 10⁻⁸, 2.7 × 10⁻⁸, and 4.8 × 10⁻⁸ M for lanes 1 through 9, respectively; lane 10 contains the free probe. The probe concentration in each reaction was 0.1 nM. Positions of the free probe (F) and the yHSF-DNA complex (B) are indicated. (B) Graphical representation of the quantitation of the protein titrations in panel A. The data were quantitated with a PhosphorImager and then plotted and analyzed as described previously (29) and in Materials and Methods. The Kd,app for each probe was derived from at least three independent determinations and is the average ± standard deviation: (3.7 ± 0.5) × 10⁻⁹ M for the CUP1 HSEWT probe, (3.4 ± 0.9) × 10⁻⁹ M for the CUP1 HSE1P probe, and (4.0 ± 0.2) × 10⁻⁹ M for the CUP1 HSE2M probe. Each data point for each probe is taken from the average of at least three independent determinations. The line is drawn only through the data points for the CUP1 HSEWT probe.
DISCUSSION
Higher eukaryotic cells possess multiple distinct HSF isoforms, encoded by different genes. This diversity is further increased through differential splicing, responses to distinct stresses, and preferences for binding to distinct arrangements of HSEs (70). In contrast, the S. cerevisiae HSF is encoded by a single, essential gene and binds to some HSEs constitutively, while binding to other HSEs is induced in response to an environmental or pharmacological stimulus (24,27,58). Furthermore, yHSF differentially activates gene expression through the use of separate amino-or carboxyl-terminal transactivation domains or by responding to distinct stressors (36,42,56,63). Therefore, yHSF may represent a composite of the functions carried out by individual HSF isoforms in higher eukaryotes. The observation that human HSF isoforms are differentially functional when expressed in yeast cells lacking the endogenous HSF gene further underscores this notion (35).
To explore the mechanisms underlying the differential use of yHSF transactivation domains for target gene activation, HSF-dependent activation of the CUP1 gene was investigated in detail. CUP1 represents a heretofore atypical HSF-dependent gene in that it contains a nonconsensus HSE in its promoter, requires heat shock at 39°C for robust activation, as opposed to 37°C for other HSF target genes, responds to superoxide radical generators for HSF-mediated activation, and exhibits a strong requirement for the yHSF CTA (36,63). Here, we have demonstrated that the CUP1 promoter harbors two HSEs, neither of which resembles those found in typical HSF-responsive genes, such as SSA1 or SSA3, in their fundamental architecture. Consistent with the known interaction of HSF with HSEs as homotrimeric proteins, both the CUP1 HSE1 and HSE2 harbor three repeats of the pentameric element. Furthermore, the separation of CUP1 HSE1 and HSE2 by one helical turn provides a mechanism for potential interactions between two DNA-bound HSF trimers on the same face of the DNA helix. Although the CUP1 HSE1 contains three pentamers, the distance of one helical turn between pentamers 2 and 3 allows occupancy of major grooves on the DNA with a distinct geometry compared to contiguous pentamers such as those found in SSA1 and SSA3. Indeed, the generation of a CUP1 HSE1 derivative which mimics those found in SSA1 and SSA3 (CUP1 HSE1P) results in stress-responsive transcriptional activation characteristics which more closely resemble these genes in terms of the temperature optimum, their reduced dependence on the yHSF CTA, and ability to activate a heterologous CYC1 basal promoter. One possible mechanism by which the CUP1 HSE1P might enhance CUP1 expression is by affecting the affinity of yHSF for DNA. However, our results suggest no significant difference in the apparent affinity of yHSF for the CUP1 promoter fragment containing HSE1P compared to HSE1WT.
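The architectural distinction drawn above, contiguous nGAAn/nTTCn pentamers (as in SSA1 and SSA3) versus three pentamers with a one-turn gap between units 2 and 3 (as in CUP1 HSE1), can be made concrete with a simple motif scan. The sketch below is purely illustrative and is not from the original study; the example sequences are invented stand-ins rather than the real promoter sequences.

```python
# Illustrative sketch: locate nGAAn/nTTCn pentameric units in a promoter sequence
# and report the spacing between successive units (5 bp = contiguous, 10 bp = one-unit gap).

def find_pentamer_units(seq):
    """Return (start, core) for every pentamer whose central 3 bp are GAA or TTC."""
    seq = seq.upper()
    units = []
    for i in range(len(seq) - 4):
        core = seq[i + 1:i + 4]
        if core in ("GAA", "TTC"):
            units.append((i, core))
    return units

def spacing_labels(units):
    labels = []
    for (a, _), (b, _) in zip(units, units[1:]):
        d = b - a
        labels.append({5: "contiguous", 10: "gapped"}.get(d, f"{d} bp apart"))
    return labels

contiguous_like = "cGAAgtTTCtcGAAg"       # invented SSA3-like arrangement
gapped_like = "cGAAgtTTCtcccccgGAAt"      # invented CUP1 HSE1-like arrangement
for name, s in [("contiguous", contiguous_like), ("gapped", gapped_like)]:
    u = find_pentamer_units(s)
    print(name, u, spacing_labels(u))
```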
Previous experiments have demonstrated that HSF-dependent activation of CUP1 in response to heat and oxidative stress is absolutely dependent on HSE1 (36,63). Here, we have shown that although neither HSE1 nor HSE2 functions to activate heat-inducible expression in the context of a fusion to the yeast CYC1 core promoter, HSE2 plays an important role in modulating CUP1 expression through HSE1. The inability of HSE2 alone to function as an activating HSE, even in the context of the CUP1 promoter, may in part be due to its low affinity for yHSF, a consequence of the altered spacing between each of the three pentamers (2,30,71). This architecture of HSE2 may also affect structural changes in the CUP1 promoter DNA upon binding of yHSF. DNase I treatment of yHSF-CUP1 promoter DNA complexes results in hypersensitive sites within and adjacent to HSE2, suggesting that the binding of yHSF to the CUP1 DNA induces conformational changes in the DNA. Hypersensitive sites have been observed in DNase I treatment of mHSF1- and mHSF2-HSE complexes (31). Although HSE2 is incapable of autonomously driving yHSF-dependent activation of CUP1 transcription and is bound by yHSF with low affinity, the occupation of HSE2 has dramatic effects on stress-dependent activation of CUP1 transcription and the manner in which HSE1 is bound by yHSF. The generation of a form of HSE2 incapable of binding yHSF renders CUP1 heat-inducible transcription hyperactivated at 37 and 39°C, in a manner similar to the conversion of CUP1 HSE1 to HSE1P. Furthermore, consistent with potential interactions between yHSF trimers bound both at HSE1 and HSE2, DNase I footprinting assays demonstrate that the occupation of HSE2 alters the manner in which yHSF is bound at HSE1. Studies of the Drosophila hsp70 promoter have demonstrated the presence of a high- and a low-affinity HSE, the latter of which plays a critical role in the heat-inducible transcriptional response (65). It is interesting that an HSE found in the human prointerleukin 1-β gene, which consists of only two pentameric units, fails to serve as a heat shock-inducible element but restrains expression from the promoter in response to heat shock jointly administered with the inducer, lipopolysaccharide (13). It is thought that this may provide a mechanism to temper the inflammatory response. Furthermore, Westwood et al. have demonstrated the binding of Drosophila HSF to chromosomal loci that far exceed the predicted number of heat shock-inducible genes (67). These observations, taken together with the data described here, suggest that in addition to their role as gene-specific positive transcriptional regulatory elements, HSEs might modulate both HSF activity and the activity of distinct cis-acting promoter elements. Similar context-dependent activation or repression has been observed with the retinoic acid receptor bound to its cognate DNA response element (34). The organization of the CUP1 HSE1 is very similar to that of the HSE1 in the HSP82 and HSC82 genes. We found that the heat shock induction of these three genes is highly dependent on the yHSF CTA. Like CUP1, HSP82 and HSC82 have multiple HSEs; however, others have shown that only the HSE1 of HSP82 and HSC82 is constitutively occupied in vivo (18). Giardina and Lis have shown that there is a change in the in vivo footprint of the HSP82 HSEs following heat shock (24). The changes in HSF-DNA binding upon heat shock were seen mainly on the low-affinity HSEs, HSE2 and HSE3 of HSP82.
The binding of yHSF to these weaker HSEs in the HSP82 promoter was transient, and these sites were largely unoccupied once cells progressed through a recovery stage and into the non-heat-shocked stage. It may be that a similar situation occurs on the CUP1 promoter, with HSE1 representing the constitutively occupied HSE and HSE2 occupied only upon stress induction. This is consistent with the in vitro DNA binding studies performed in this report showing that HSE2 is a low-affinity site. Our DNase I footprinting assays demonstrate that the occupation of HSE2 alters the manner in which yHSF is bound at HSE1, and perhaps in vivo occupancy of HSE2 following stress induction tempers the transcriptional response of CUP1. The Hill coefficient for the HSE2M was approximately 1.5, suggesting that a second trimer may be only weakly bound to HSE1. The occupancy of HSE2 upon stress induction may also act to stabilize yHSF bound to HSE1. Future in vivo footprinting experiments will address these possibilities.
FIG. 7. yHSF binds to the CUP1 HSEWT, HSE1P, and HSE2M with distinct conformations. (A) The partial proteolysis with chymotrypsin of the HSF-HSEWT complex is compared with that of the HSF-HSE1P and HSF-HSE2M complexes. Binding reactions containing the indicated probe were carried out exactly as described for Fig. 5 except that all lanes contained 1.5 × 10⁻⁸ M HSF. Following the standard binding assay, chymotrypsin was added to all reactions except those in lanes 1, 6, and 11. The HSF-HSE complexes in lanes 2 through 4, 7 through 9, and 12 through 14 were digested with 0.01, 0.1, and 1 ng of chymotrypsin, respectively. Binding reaction mixtures were incubated for 10 min following chymotrypsin addition before loading onto EMSA gels. The chymotrypsin inhibitor chymostatin (1 µg) was added to the samples in lanes 5, 10, and 15 immediately prior to chymotrypsin (1 ng) addition. Bound HSF-HSE complexes (B) and free probe (F) are indicated. (B) Graphical representation of the results from time course assays of partial proteolysis of HSF-HSE complexes. Experimental conditions were as for panel A except that chymotrypsin digestion (1 ng) was carried out for the indicated time followed by chymostatin addition to terminate the digestion prior to loading onto EMSA gels. The bound HSF-HSE complexes (B) and free probe (F) were quantitated with a PhosphorImager and plotted. The data represent the averages of two separate experiments.

How do the specific architecture of HSE1 and the presence of HSE2 impart unique HSF-dependent regulatory characteristics to CUP1? One mechanism may be that HSF binds to the CUP1 HSE1 with less cooperativity than for a large contiguous HSE, thereby leading to a tempering of CUP1 expression. mHSF1 and mHSF2 differ in the potential for cooperative interactions with HSEs: mHSF1 binds cooperatively to extended HSEs much like that found in the SSA3 gene, and mHSF2 has a binding preference for HSEs harboring two or three pentamers like that in the CUP1 promoter (30,31). Since the DNA binding domain of yHSF may be more conformationally flexible than that of mHSF1 or mHSF2 (20), perhaps
yHSF extracts binding site context information to influence the level of cooperativity used to bind a given promoter. However, our results suggest that yHSF binds to the CUP1 HSE1WT, HSE1P, and HSE2M with nearly identical levels of apparent cooperativity. It is also possible that when bound to HSE1, HSF adopts a conformation that alters its interactions with the basal transcription machinery in the core promoter of the CUP1 gene. Indeed, our results which demonstrate that a consensus HSE from SSA3, or the CUP1 HSE1P but not the CUP1 HSE1WT element, can confer heat-inducible expression to the CYC1 basal promoter are consistent with a requirement for the adaptation of distinct HSF conformations on the different HSEs. Consistent with this idea, substitution of the Gcn4 leucine zipper for the yHSF trimerization domain has recently demonstrated that the oligomeric state of the DNA-bound HSF-Gcn4 chimeric protein depends on the number and orientation of pentameric units within the HSE (17). Therefore, it is possible that the HSE structure can similarly influence the overall yHSF conformation. The in vitro DNA binding, in vivo gene expression, and protease sensitivity assay results described here are consistent with a change in the conformation of DNA-bound yHSF. The digestion of yHSF-HSE complexes with chymotrypsin shows that a yHSF surface is more readily accessible to the protease in the yHSF-HSEWT complex than it is in the yHSF-HSE1P complex. The ability of DNA to induce conformational changes in transcription factors has been previously proposed for the nuclear hormone receptors (61), the yeast pheromone/receptor transcription factor (64), and nuclear factor NF-κB (p50)2 (22). Therefore, it is possible that HSE structure can similarly influence the conformation of yHSF, and perhaps the extended linker region in yHSF facilitates such changes.
What might be the mechanisms by which HSEs with contiguous pentamers exhibit a reduced dependence on the yHSF carboxyl-terminal activation domain compared to CUP1? Since the CTA is known to harbor an additional coiled-coil domain (14), it is conceivable that this region (HSF584-833) is responsible for intermolecular interactions that serve to stabilize yHSF trimers on the CUP1 promoter or to augment interactions between yHSF trimers bound at HSE1 and HSE2. It is also possible that yHSF receives context information from the HSE that specifies which functional surfaces of yHSF will be presented to the transcriptional machinery or to other, non-DNA binding regulatory factors. The chymotrypsin sensitivity data suggest that a gapped HSE such as the CUP1 HSE1 might induce a conformation of HSF which more efficiently utilizes the C-terminal rather than the N-terminal transactivation domain. The more canonical HSE such as HSE1P might result in more of the HSF N-terminal than C-terminal transactivation domain being presented to the transcriptional machinery. A similar mechanism has been proposed to dictate whether the glucocorticoid receptor functions through its response element as a transcriptional activator or repressor (61). As we show here, the architecture of HSEs in the CUP1, HSC82, HSP82, and HSP70 family of genes plays an important role in the features of the heat shock transcriptional response. Perhaps the sequences of these promoter HSEs have evolved to facilitate differential use of the yHSF transactivation domains and thus impart distinct characteristics to the heat shock transcriptional response. Although HSFs are regulated at many levels in response to stress, these studies demonstrate that promoter context represents a further level for regulation of transcription during the stress response.
Prominent ethanol sensing with Cr2O3 nanoparticle-decorated ZnS nanorods sensors
ZnS nanorods and Cr2O3 nanoparticle-decorated ZnS nanorods were synthesized using facile hydrothermal techniques and their ethanol sensing properties were examined. X-ray diffraction and scanning electron microscopy revealed the good crystallinity and size uniformity of the ZnS nanorods. The Cr2O3 nanoparticle-decorated ZnS nanorod sensor showed a stronger response to ethanol than the pristine ZnS nanorod sensor. The responses of the pristine and decorated nanorod sensors to 200 ppm of ethanol at 300 °C were 2.9 and 13.8, respectively. Furthermore, under these conditions, the decorated nanorod sensor showed a longer response time (23 s) and a shorter recovery time (20 s) than the pristine one (19 and 35 s, respectively). Consequently, the total sensing time of the decorated nanorod sensor (42 s) was shorter than that of the pristine one (55 s). The decorated nanorod sensor showed excellent selectivity to ethanol over other volatile organic compound gases including acetone, methanol, benzene, and toluene, whereas the pristine one failed to show selectivity to ethanol over acetone. The improved sensing performance of the decorated nanorod sensor is attributed to the modulation of the conduction channel width and the potential barrier height at the ZnS-Cr2O3 interface accompanying the adsorption and desorption of ethanol gas, as well as to the greater surface-to-volume ratio of the decorated nanorods compared with the pristine ones due to the existence of the ZnS-Cr2O3 interface.
I. INTRODUCTION
The remarkable fundamental properties of zinc sulfide (ZnS) have enabled diverse applications such as light-emitting diodes, lasers, flat panel displays, infrared windows, sensors, and biodevices [1]. In particular, ZnS can be applied to the fabrication of ultraviolet (UV) light sensors and gas sensors. Over the past decade, the following ZnS nanostructure-based gas sensors have been reported: a single ZnS nanobelt sensor for H2 sensing [1], ZnS nanobelt sensors for H2 sensing [2], ZnS microsphere sensors for O2 sensing [3], ZnS nanowire sensors for acetone and ethanol sensing [4], and ZnS nanotube array sensors for humidity sensing [5].
Metal oxide semiconductors possess many of the properties that sensing materials require, such as high sensitivity, fast response, low detection limits, and good durability.
On the other hand, these sensing materials have several shortcomings, such as high operating temperatures and poor selectivity and reliability. A range of techniques, including doping, heterostructure formation, and light activation, has been studied to overcome these drawbacks.
Of these techniques, heterostructure formation was adopted in this study to overcome the poor performance of pristine 1D nanostructure-based sensors. Forming heterostructures by creating interfaces between two dissimilar semiconducting materials brings the Fermi levels across the interface into equilibrium, resulting in charge transfer and the formation of an interfacial depletion region. This will eventually lead to enhanced sensor performance. The enhanced sensing properties of these heterostructures might be attributed to many factors, including electronic effects such as band bending due to Fermi level equilibration, charge carrier separation, depletion layer manipulation, and increased interfacial potential barrier energy; chemical effects such as a decrease in activation energy, targeted catalytic activity, and synergistic surface reactions; and geometrical effects such as grain refinement, surface area enhancement, and increased gas accessibility [6]. Heterostructure formation is commonly achieved either by forming core-shell structures by coating nanostructures with a thin film or by decorating nanostructures with dissimilar semiconductor nanoparticles. In this paper, we report the synthesis of Cr2O3 nanoparticle-decorated ZnS nanorods via a facile hydrothermal route and their enhanced sensing properties towards ethanol (C2H5OH) gas.
Synthesis of pristine and Cr2O3 nanoparticle-decorated ZnS nanorods
The Cr2O3 nanoparticle-decorated ZnS nanorods were synthesized via a facile hydrothermal route. The ZnS nanorods were synthesized as follows. First, Au-coated sapphire was used as a substrate for the synthesis of ZnS nanorods. Au was deposited on a silicon (100) substrate by direct current (dc) magnetron sputtering. A quartz tube was mounted horizontally inside a tube furnace. An alumina boat containing 99.99% pure ZnS powders and the silicon substrates were placed separately in the two-heating-zone tube furnace, with the ZnS powders in the first heating zone and the Si substrates in the second heating zone. The substrate temperatures of the first and second heating zones were set to 850 and 650 °C, respectively, with the ambient nitrogen gas pressure and flow rate maintained at 1 Torr and 50 cm3/min, respectively, throughout the synthesis process. The thermal evaporation process was carried out for 1 h and then the furnace was cooled to room temperature at 1 mTorr, after which the products were taken out.
On the other hand, a 50-mM Cr2O3 precursor solution was prepared by dissolving chromium acetate monohydrate (Cr(CH3COO)2·H2O) in distilled water. 50 ml of the Cr2O3 precursor solution and 10 ml of 28% NH4OH solution were mixed together. The mixed solution was ultrasonicated for 30 min to form a uniform solution and then centrifuged at 5,000 rpm for 2 min to precipitate the Cr2O3 powders. The precipitated powders were collected by removing the liquid and leaving the powders behind. The collected powders were rinsed in a 1:1 solution of isopropyl alcohol (IPA) and distilled water to remove impurities. The rinsing process was repeated five times. Subsequently, the Cr2O3 precursor solution was dropped onto the ZnS nanorods on a substrate and the substrate was rotated at 1,000 rpm for 30 s for Cr2O3 decoration. After the spin-coating process, the Cr2O3-decorated ZnS nanorod sample was dried at 150 °C for 1 min and then annealed in air at 500 °C for 1 h.
Materials characterization
The phase and crystallinity of the pristine and Cr2O3 nanoparticle-decorated ZnS nanorods were analyzed by XRD (Philips X'pert MRD) using Cu Kα radiation (1.5406Å). The data was collected over the 2θ range, 20-80°, with a step size of 0.05° 2θ at a scan speed of 0.05°/s. Assignment of the XRD peaks and identification of the crystalline phases were carried out by comparing the obtained data with the reference compounds in the JCPDS database. The morphology and particle size of the synthesized powders were examined by SEM (Hitachi S-4200) at an accelerating voltage of 5 kV.
Sensor Fabrication
For the sensing measurements, a SiO2 film (~200 nm) was grown thermally on single-crystalline Si (100). In the meantime, the as-synthesized ZnS nanorods and Cr2O3 nanoparticle-decorated ZnS nanorods were prepared for sensor fabrication on these substrates.
Sensing tests
The electrical and gas sensing properties of the pristine and Cr2O3 nanoparticle-decorated ZnS nanorods were determined at different temperatures in a quartz tube inserted in an electrical furnace. During the tests, the nanorod gas sensors were placed in a sealed quartz tube with an electrical feed through. A predetermined amount of ethanol (>99.99 %) gas was injected into the testing tube through a microsyringe to obtain ethanol concentrations of 10, 20, 50, 100, and 200 ppm while the electrical resistance of the nanorods was monitored. The response was defined as Ra/Rg where Rg and Ra are the electrical resistances of sensors in ethanol gas and air, respectively. The response time was defined as the time needed for the change in electrical resistance to reach 90% of the equilibrium value after injecting ethanol, and the recovery time was defined as the time needed for the sensor to return to 90 % of the original resistance in air after removing the ethanol gas.
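A minimal sketch of how these definitions translate into an analysis of a measured resistance trace is given below. This is not the authors' code; the function and variable names are hypothetical, the 90% thresholds simply follow the definitions in this section, and n-type behaviour (resistance drops in ethanol) is assumed.

```python
# Hypothetical sketch: compute the response (Ra/Rg) and the 90% response/recovery times
# from a resistance-vs-time trace of an n-type sensor exposed to ethanol.
import numpy as np

def sensor_metrics(t, r, t_on, t_off):
    """t, r: time (s) and resistance arrays; gas injected at t_on, removed at t_off."""
    r_air = r[t < t_on].mean()                        # baseline resistance in air
    r_gas = r[(t >= t_on) & (t < t_off)].min()        # resistance in ethanol
    response = r_air / r_gas

    resp_target = r_air - 0.9 * (r_air - r_gas)       # 90% of the resistance change
    on = (t >= t_on) & (r <= resp_target)
    t_response = t[on][0] - t_on if on.any() else float("nan")

    rec_target = r_gas + 0.9 * (r_air - r_gas)        # 90% recovery toward the baseline
    off = (t >= t_off) & (r >= rec_target)
    t_recovery = t[off][0] - t_off if off.any() else float("nan")
    return response, t_response, t_recovery
```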
Crystalline structure and morphology
The structure and chemical composition of the pristine and Cr2O3 nanoparticle-decorated ZnS nanorods were examined by XRD immediately after sample preparation (Fig. 2). The average crystallite sizes were estimated from the XRD line broadening using the Scherrer equation [7], D = Kλ/(β cos θ), where D is the crystallite size in nm, K is the shape factor (0.90), λ is the wavelength of the X-rays used (1.5406 Å), β is the full width at half maximum (measured in degrees and converted to radians for the calculation), and θ is the diffraction angle. The values obtained were 60 nm and 50 nm for the pristine and Cr2O3 nanoparticle-decorated ZnS nanorods, respectively. SEM images show that the decorated ZnS nanorods carry Cr2O3 nanoparticles with long radii of 30-40 nm and small radii of 10-20 nm (Fig. 3(b)). The surface-to-volume ratio of the decorated nanorods must therefore be much higher than that of the pristine nanorods.
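A minimal sketch of the Scherrer estimate described above is shown below; the peak position and width used are assumed values for illustration, not the measured data of Fig. 2.

```python
# Illustrative Scherrer calculation (assumed peak parameters, not the Fig. 2 data).
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.90):
    beta = math.radians(fwhm_deg)              # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Prints roughly 59 nm for these assumed values.
print(scherrer_size_nm(fwhm_deg=0.14, two_theta_deg=28.6))
```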
Gas-sensing properties
Optimal working temperature

The sensitivity of gas sensors is strongly influenced by the operating temperature.
Parallel experiments were carried out over the temperature range from 200 to 400 °C to determine the optimal operating temperature of the sensors. The Occupational Safety and Health Administration (OSHA) has established the maximum recommended exposure level for ethanol as 1000 ppm [9], and the Cr2O3 nanoparticle-decorated ZnS nanorod sensor can easily detect this level of ethanol.
Sensor response with ethanol gas concentration
The relationship between the sensor response (S = Ra/Rg) and the ethanol concentration can be expressed by the empirical equation S = A[Cethanol]^b, where A, b, and [Cethanol] are a constant, an exponent, and the ethanol concentration, respectively [10]. In fact, "b" is a charge parameter with an ideal value of 1 for O− and 0.5 for O2−, which is derived from the surface interaction between the chemisorbed oxygen and the target gas [11,12]. The response of the decorated nanorod sensor tended to increase more rapidly than that of the pristine one as the ethanol concentration increased, suggesting that the response of the former to ethanol would be much stronger than that of the latter at high ethanol concentrations. For these ZnS-based sensors, 300 °C may be the optimal operating temperature because the activation energy for the adsorption of ethanol is low at that temperature, whereas those for the adsorption of other gas species are relatively high.
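The empirical power law can be fitted on a log-log scale to extract A and b; a minimal sketch follows. The 200 ppm response of 13.8 is taken from this work, while the remaining response values are assumed for illustration only.

```python
# Illustrative fit of S = A*[C]^b via log-log linear regression; most data values are assumed.
import numpy as np

conc_ppm = np.array([10, 20, 50, 100, 200])
response = np.array([2.0, 3.3, 6.1, 9.4, 13.8])   # only the 200 ppm value is from this study

b, log_a = np.polyfit(np.log(conc_ppm), np.log(response), 1)
print(f"A = {np.exp(log_a):.2f}, b = {b:.2f}")
```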
Response and recovery times
Up to now, metal oxide sensors for ethanol detection have been studied extensively.
This is because ethanol is used widely in different industries and its detection in drunk drivers is important for social safety. Table 1 compares the ethanol sensing properties of the pristine and Cr2O3 nanoparticle-decorated ZnS nanorod sensor fabricated in this study with other 1D gas sensors reported in the literature [14][15][16][17][18][19][20][21][22][23][24]. The table shows that the response and response/recovery times of the Cr2O3 nanoparticle-decorated ZnS nanorod sensor are comparable to those of metal oxide semiconductor 1D nanostructured sensors.
Gas-sensing mechanism
Based on the above results, the Cr2O3 nanoparticle-decorated ZnS nanorod sensor showed a significantly improved sensing performance compared to the pristine ZnS sensor. For the pristine ZnS sensor, the gas sensing mechanism can be explained mainly in terms of modulation of the depletion layer accompanying the adsorption and desorption of gases. When the pristine ZnS sensor is exposed to air, oxygen molecules adsorb on the nanorod surface and capture electrons from the conduction band to form chemisorbed oxygen species, creating an electron depletion layer near the surface; upon exposure to ethanol, the ethanol molecules react with the chemisorbed oxygen, the trapped electrons are released back to the conduction band, and the depletion layer narrows, lowering the resistance.

On the other hand, for the Cr2O3 nanoparticle-decorated ZnS nanorod sensor, ZnS and Cr2O3 are n-type and p-type semiconductors, respectively, with different electron affinities (3.8 eV for ZnS [26]; no data are available for Cr2O3). Little is known about the electron affinity of Cr2O3, but the Fermi energy level of ZnS might be lower than that of Cr2O3 because ZnS and Cr2O3 are n- and p-type semiconductors, respectively, and the Fermi energy level of an n-type semiconductor is commonly higher than that of a p-type semiconductor. Therefore, the transfer of electrons will occur from the conduction band of Cr2O3 to that of ZnS to make the two Fermi energy levels (EF) equal (Fig. 9(a)). This will result in the formation of an electron depletion layer and a potential barrier at the ZnS-Cr2O3 n-p junction interface, which will further enhance the response of the Cr2O3 nanoparticle-decorated ZnS nanorod sensor compared to the pristine one.
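The barrier picture sketched above can be quantified with the usual thermionic-emission approximation, in which the conductance across a depleted interface scales as exp(-qΦb/kT). The sketch below uses assumed barrier heights purely to illustrate how strongly a modest barrier change is amplified at the 300 °C operating temperature; it is not a fitted model of the measured data.

```python
# Illustrative estimate (assumed barrier heights): response expected from a change
# in the interfacial barrier height, with conductance ~ exp(-q*phi/kT).
import math

k_B_eV = 8.617e-5          # Boltzmann constant, eV/K
T = 300.0 + 273.15         # operating temperature, K
phi_air = 0.45             # assumed barrier height in air, eV
phi_ethanol = 0.32         # assumed barrier height in ethanol, eV

response = math.exp((phi_air - phi_ethanol) / (k_B_eV * T))  # R_air / R_gas in this simple model
print(f"Predicted response: {response:.1f}")
```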
The enhanced ethanol gas sensing performance of the Cr2O3 nanoparticle-decorated ZnS nanorod sensor can be explained by modulation of the conduction channel width [50] and of the potential barrier height at the ZnS-Cr2O3 interface [27,28]. The Debye length of ZnS is on the order of λD(ZnS) = 10⁻⁵-10⁻⁶ cm [29], and even though data for λD(Cr2O3) are not available at present, a large portion of the total volume of each Cr2O3 nanoparticle might be depleted of carriers in air. The larger depletion layer width in the decorated nanorods than in the pristine nanorods leads to higher resistivity and a larger change in resistivity. In addition to the increased depletion layer width, the formation of a potential barrier at the ZnS-Cr2O3 interface due to electron trapping in the interface states should be considered when explaining the enhanced response of the decorated nanorods to ethanol gas. Upon exposure to ethanol gas, the potential barrier at the ZnS-Cr2O3 interface will decrease, whereas after stopping the ethanol gas supply, the potential barrier will increase again upon exposure to air (Fig. 9). Hence, modulation of the potential barrier occurs concomitantly with the adsorption and desorption of gas molecules, which increases the change in resistance, i.e., the response of the decorated nanorod sensor to ethanol gas.
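For completeness, the Debye length that controls how much of a nanorod or nanoparticle is depleted can be estimated from textbook values. The sketch below uses an assumed permittivity and carrier concentration for ZnS and is only meant to show that the result falls in the tens-of-nanometres range consistent with the order of magnitude quoted above.

```python
# Illustrative Debye-length estimate, L_D = sqrt(eps_r*eps0*kB*T/(q^2*n)); all values assumed.
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
q = 1.602e-19      # elementary charge, C
kB = 1.381e-23     # Boltzmann constant, J/K
T = 573.0          # ~300 degC, in kelvin
eps_r = 8.3        # assumed relative permittivity of ZnS
n = 1e23           # assumed carrier concentration, m^-3 (1e17 cm^-3)

L_D = math.sqrt(eps_r * eps0 * kB * T / (q**2 * n))
print(f"Debye length: {L_D * 1e9:.0f} nm")
```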
In addition to the above two effects, the ZnS-Cr2O3 interfaces provide additional preferential adsorption sites and diffusion paths for oxygen and ethanol molecules [30], which might also contribute to the enhanced ethanol gas sensing properties of the decorated nanorod sensor. In other words, the enhanced response of the Cr2O3 nanoparticle-decorated ZnS nanorod sensor is partially attributed to the higher surface-to-volume ratio of the decorated nanorods compared to the pristine ones, because the ZnS-Cr2O3 interfaces also act as preferential adsorption sites like the outer surface of the nanorods.
Distance-dependent danger responses in bacteria
The last decade has seen a resurgence in our understanding of the diverse mechanisms that bacteria use to kill one another. We are also beginning to uncover the responses and countermeasures that bacteria use when faced with specific threats or general cues of potential danger from bacterial competitors. In this Perspective, we propose that diverse offensive and defensive responses in bacteria have evolved to offset dangers detected at different distances. Thus, while volatile organic compounds provide bacterial cells with a warning at the greatest distance, diffusible compounds such as antibiotics, or contact-mediated killing systems, indicate a more pressing danger warranting highly specific responses. In the competitive environments in which bacteria live, it is crucial that cells are able to detect real or potential dangers from other cells. By utilizing mechanisms of detection that can infer the distance from danger, bacteria can fine-tune aggressive interactions so that they can optimally respond to threats occurring with distinct levels of risk.
Introduction
New methods in imaging and genome sequencing have reaffirmed and expanded our appreciation of the diversity of bacterial communities in nature [1][2][3]. However, as powerful as these techniques are, they serve mainly to catalogue bacterial diversity while offering limited insights into the behaviors of the constituent communities. Are coexisting bacteria competing with one another or cooperating for their mutual benefit? Over the last few decades the pendulum on these questions has swung fairly broadly in both directions, and has led to productive and valuable research enterprises across both extremes [4,5]. Cooperative interactions mediated by, for example, cross-feeding or quorum sensing, are widespread, and can alter bacterial behaviors for a variety of traits linked to bacterial fitness [6][7][8][9]. At the same time, surveys from natural populations have found that while cooperative interactions between bacteria exist, they are far less common than competitive interactions [10 ]. Indeed, the last 10 years has seen a renaissance in identifying and understanding the diverse means by which bacteria compete and kill one another. Antagonism is rife and is coordinated by a growing arsenal, including antibiotics, bacteriocins, volatile organic compounds (VOCs), and different forms of contact-dependent killing. But why have bacteria evolved so many ways to damage one another? Using results mainly based on studies of bacterial co-cultures, we hypothesize that these diverse mechanisms of antagonism have evolved as non-redundant responses to threats occurring at different distances from a focal cell.
Distance-dependent danger sensing
Bacteria need to be able to detect and discriminate between different kinds of biotic threats in their immediate environment. However, because these threats occur at different spatial scales, they also call for different types of responses. Recently, Cornforth and Foster proposed the idea of Competition Sensing whereby bacterial cells respond to the direct harm caused by competing cells or to nutrient limitation [11 ]. Similarly, LeRoux et al. proposed that bacteria detect ecological competition by sensing danger cues of competition, rather than direct harm per se. Such cues can include material from lysed kin cells or diffusible signals from competitors that are detected by a dedicated danger sensing signal transduction mechanism that activates a danger response regulon [12 ]. Both ideas are important because they make clear that bacteria integrate features of the biotic environment via cues before eliciting a potentially metabolically costly response [11 ,13 ]. However, it is also important to determine if the nature of these cues directs the form of the response. Our review of the literature suggests that it does (Tables 1 and S1). We consider three broad categories of cues ( Figure 1) that are detected at decreasing distances and which indicate different levels of danger: VOCs, diffusible compounds, and those that are contact-dependent. Although these categories are admittedly arbitrary and occasionally overlap, they help to classify examples where these distinct cues induce different types of offensive or defensive responses in target organisms. We consider caveats and limitations with this classification and questions for future studies below.
Volatile organic compounds
Table 1. An overview of our literature survey. Indicated are the studies that measured the responses to the different compounds as indicated on the left.

Figure 1. Distance-dependent danger sensing in bacteria. Soil is a spatially heterogeneous environment consisting of soil particles, shown in grey, and water-filled and air-filled pockets, shown in white. Because of these physicochemical properties volatiles (shown in blue) can diffuse over long distances. Sensing volatiles provides information about the presence of a distant competitor and induces protective responses including an increase in antibiotic resistance. At a closer range diffusible molecules (shown in red), for example antibiotics, signal the presence of a competitor in the near vicinity, which requires a counterattack such as the induction of antibiotic production. Cell-contact-mediated antagonism, such as a Type VI secretion system (T6SS) attack (shown in purple), invokes an immediate T6SS counterattack. Responding cells in all panels are shown in orange.

VOCs are low molecular weight compounds (<300 Da) that can readily evaporate at ambient temperatures and air pressures [14,15]. Because of these properties volatiles can disperse through both water- and gas-filled pores in the soil, making them extremely suitable for long-distance interactions in these spatially complex environments. Volatiles are often considered to be side products of primary metabolism, but this viewpoint is challenged by findings that many volatiles demonstrate biological activity [16], such as antibacterial or antifungal activity [17,18]. Volatile blends differ among bacterial species, thereby raising the possibility that these long-distance cues can inform other species of the specific identity of the producers [19]. At the same time, because VOCs can travel far from their source of production, their detection at low concentrations implies that possible threats from these species, due potentially to the direct antimicrobial effects of the VOCs themselves [20,21], are not imminent. Accordingly, and given their diverse chemistries, we predict that detection of microbial VOCs will lead to generalized mechanisms of defence. These include different forms of escape together with the induction of more broadly effective modes of protection. Growth, motility and biofilm formation can all be modified by VOCs at low concentrations (Table 1), as can the
induction of developmental transitions in microbial colonies. For example, trimethylamine produced by Streptomyces venezuelae induces the production of a novel cell type in other streptomycetes, called explorers, that rapidly disperse away from high levels of local competition and towards higher resource concentrations [22]. In addition, bacteria consistently respond to VOCs by increasing antibiotic resistance, even if the volatiles themselves have no antimicrobial properties. For example, Escherichia coli increases its resistance to gentamicin and kanamycin after exposure to Burkholderia ambifaria volatiles [23]. Pseudomonas putida reacts to indole produced by E. coli by inducing an efflux pump that increases resistance to several antibiotics [24]. Importantly, P. putida cannot produce indole itself, providing direct evidence that bacteria can alter their intrinsic levels of antibiotic resistance in response to volatile bacterial cues. Similarly, Acinetobacter baumannii responds to the P. aeruginosa-produced small volatile 2′-aminoacetophenone (2-AA) by altering cell-wide translational capacity and thereby increasing the production of antibiotic-recalcitrant persister cells [25]. Although these results are suggestive, it is important for future studies to distinguish the direct influence of VOCs on cells from their indirect effects mediated by the changes they induce in the test environment. For example, ammonia and trimethylamine, volatiles produced by E. coli, appear to increase tetracycline resistance in both Gram-positive and Gram-negative bacteria, while these volatiles did not display any growth toxicity at the same concentration [20]. However, rather than directly inducing a response in a target cell, the result was instead explained by the effects of these VOCs on environmental pH; this change, in turn, led to reduced antibiotic transport [20,26] and therefore increased resistance. Similarly, VOC-mediated modifications to environmental pH may permit cells to grow at higher antibiotic concentrations because low pH can inactivate the antibiotic [27]. Although more work is needed to identify the mechanisms underlying many of the changes elicited by volatiles, studies thus far suggest that these compounds induce protective responses.
Diffusible molecules
Bacteria produce a vast diversity of diffusible compounds as products of primary and secondary metabolism. While some, like quorum-sensing molecules, tend to bind targets within species to induce cooperative responses (although cross-species induction has been observed) [28], many others are antagonistic, for example, antibiotics or bacteriocins. Additionally, because diffusible molecules will often mediate their effects at shorter distances from their producer than volatiles, their detection will indicate that a potential competitor may be nearby. Many recent studies ( Table 1) have shown that bacteria modify their metabolome and their antimicrobial activity when co-cultured with or in close physical proximity to competitors [13 ,29-33]. Indeed, because of this, such co-cultures offer promising avenues for drug discovery [34]. When the Gram-positive actinomycete Streptomyces coelicolor was co-cultured with other actinomycetes [30] or with fungi [35] it produced many compounds, including secondary metabolites and siderophores, that were not detected in monoculture, and which were often unique to a specific interaction. Similarly, the inhibitory range of individual streptomycete species increased by more than twofold during bacterial co-culture [13 ]; the distancedependence of these responses is consistent with the idea that induction was coordinated by diffusible molecules and not VOCs (unpublished results). Notably, antibiotic suppression is also observed during these interactions [13 ,29,36], highlighting that the cells producing diffusible molecules can also strongly influence the outcome of pairwise interactions.
While studies between co-cultured cells provide insights into the dynamics of competition mediated by diffusible molecules and show how widespread these responses are among different phyla [29], they do not always reveal the types of diffusible molecules that mediate these effects. For this reason, it has been valuable to focus on model species, and these too have shown that secreted antibiotics at inhibitory and sub-inhibitory concentrations can induce well-known secondary metabolite pathways [32,33]. For example, co-cultivation of S. venezuelae and S. coelicolor induced undecylprodigiosin production in the latter while also stimulating its morphological differentiation [37]. This response was induced by the angucycline antibiotic jadomycin B, produced by S. venezuelae, which binds the "pseudo" gamma-butyrolactone receptor ScbR2 in S. coelicolor and thereby directly regulates these two processes. The fact that angucyclines from other streptomycetes can also bind this receptor suggests that induction by this diffusible molecule is likely to be widespread [37]. A related study in these same species revealed that the gamma-butyrolactones, diffusible quorum sensing signalling molecules that activate antibiotic production, could also coordinate bacterial antagonism, because the same molecule regulates antibiotic production in both species [38]; accordingly, if this molecule is produced by one species, it will necessarily induce antibiotic production in the other. In another particularly elegant study, Vibrio cholerae was found to change its motility in response to sub-lethal concentrations of the antibiotic andrimid, produced by another Vibrio sp., by increasing its swimming speed, turning rate, and run lengths while directing its movement away from the source of the antibiotic [39]. While responding to antibiotics is predicted because these cause direct harm, bacteria can also respond to the products that result from intercellular antagonism. For example, peptidoglycan from the cell walls of Gram-positive bacteria induced the production of the antibiotic pyocyanin in Pseudomonas aeruginosa through detection of its monomer GlcNAc [31]. Similarly, cell-wall derived GlcNAc potentially derived from competing microorganisms can activate antibiotic production in streptomycetes [40]. Like antibiotics, these products of aggression are indicative of imminent danger.
Direct contact
At the shortest distance between cells, bacterial antagonism can be mediated by cell-cell contact. Bacteria possess several ways to inhibit other cells through cell contact, such as contact-dependent inhibition (CDI) [41] or the Type VI Secretion System (T6SS) [42]. CDI systems, which deliver toxins into target cells, are widespread among Gram-negative bacteria [43]. These systems are composed of a protein with a C-terminal toxic region, an outer membrane transporter for its secretion, and an immunity protein [44]. The toxin protein is predicted to extend from the cell surface and, upon recognizing a receptor on a target cell, to deliver its C-terminal domain to the target cell where it exerts toxicity [44]. These toxins kill or inhibit susceptible cells lacking immunity, but not sister cells that express cognate immunity. Although sister cells are not killed by the toxin, Burkholderia thailandensis cells still respond to attacks by down-regulating their cdi operon and, interestingly, by increasing biofilm formation and up-regulating T6SS and non-ribosomal peptide/polyketide synthase genes [45,46]; these responses can be perceived as forms of defence and offense, respectively. The molecular mechanism behind this response is as yet unknown.
Approximately one quarter of all Gram-negative bacteria possess genes encoding T6SS [47]. The T6SS is a contractile nanomachine resembling a phage tail that translocates toxic effector proteins into a target cell [42]. While some bacteria use their T6SS as an offensive weapon, others use it defensively in response to a T6SS-mediated attack [48 ]. The best-studied organism in the latter case is P. aeruginosa, which does not use its T6SS until it is attacked itself, whereupon it initiates a counterattack. Three different mechanisms through which P. aeruginosa can sense an incoming attack have been described, of which two depend on direct contact. P. aeruginosa engages in so-called "T6SS duelling" where T6SS-mediated killing activity is regulated by a signal that corresponds to detection of the point of attack by the T6SS of another cell [48 ,49,50]. In this way the P. aeruginosa counterattack is directed precisely with both spatial and temporal accuracy [48 ]. T6SS duelling was first observed among P. aeruginosa sister cells, although this does not result in killing as cells are immune to their own toxins [49]. A T6SS expressing strain of Agrobacterium tumefaciens could induce a counterattack by P. aeruginosa, but this required the injection of toxins [51]. Finally, P. aeruginosa can react to a T6SS attack without being attacked itself in a response known as "PARA" or P. aeruginosa Response to Antagonism [51]. In this case T6SS activity is stimulated by the effects of T6SS of a competitor, as these cause kin cell lysis which in turn acts as a diffusible danger signal (cue) that activates their own T6SS. Interestingly, the Type IV secretion system (T4SS), another class of secretion system used for the transport of DNA or proteins [52], can also induce a T6SS counterattack [51,53]. This has been speculated to occur through the sensing of membrane perturbations caused by the incoming nanomachine [53], or through T4SS mediated lysis of kin cells that induces the PARA response [51]. Although this research area is biased to few species (e.g., P. aeruginosa and Serratia marcescens [54,55]) responses to T6SS attack appear to be limited to T6SS-mediated counterattack and show that when threats are detected at close range, offensive counterattack is the anticipated response.
A broader perspective on distance-dependent danger responses
Ecological competition is typically partitioned into two broad types: resource competition and interference competition [11]. While studies over several decades have uncovered the exceptional sensitivity of bacteria to small changes in resource concentrations, we are only just beginning to explore the sensitivity of bacteria to threats from other microbial species. We propose that the concentration of volatile compounds, diffusible molecules, and direct and indirect effects of cell-contact provides information about the distance of cells from the producers of these molecules, and that this information directs how bacteria respond to them. This view is supported by the studies we examine as well as the vast literature on the response of bacteria to sub-MIC antibiotic concentrations (Tables 1 and S1). These studies, however, suffer from some important limitations. First, the current literature is highly biased with respect to organism and response. Pathogens are overemphasized because of our justified concerns with how these species will respond to suboptimal drug dosing, while resistance is favoured for the same reasons. Other modes of defence may be more widespread; however, these remain to be fully explored. Second, while our categories are useful, they are also both arbitrary and coarse, as "distance" and its detection are likely to be both environment and species specific. For example, in heterogeneous soil environments, the distance that diffusible or volatile compounds travel depends not only on the actual distance but also on the presence or absence of water- or air-filled pockets as well as on the temperature. Moreover, to distinguish between these threats from different distances, bacteria need to be able to differentiate between volatile and diffusible compounds across a range of concentrations. The molecular mechanisms underlying how these compounds are detected are not yet well understood. Third, our selection of examples is fragmented and potentially biased towards responses that match our expectations, however unintentionally. Finally, at present we lack a broader mechanistic or theoretical framework in which to examine these responses, both from the perspective of the cells producing danger cues as well as those responding to them. These latter issues, in particular, suggest many questions that are important to consider as we move forward. Most importantly, how can cells distinguish true threats from marginal ones, or even cues from mutualistic bacteria, so that they can avoid paying the costs of a misfired response? Indeed, what are the costs of misfiring? This is particularly important to consider if danger cues are durable and persist long after they were first produced.
In addition, although we focus on how cells respond to different cues, it is equally crucial to consider why and when these cues are produced in the first place. At least for antibiotics, evidence suggests that these secondary metabolites are used as weapons and not signals [13 ]. However, this still leaves open the question of whether these weapons, or cues representing the threat of harm, are mainly used for offense or defence. Similar questions remain for VOCs that are variously considered as weapons or signals for inter-species and intra-species communication [56]. Addressing these issues from the perspective of the producer of VOCs, diffusible compounds, and contact-dependent weapons will undoubtedly illuminate our understanding of how bacteria respond to these cues of danger in their natural environments.
|
2018-04-03T04:15:51.920Z
|
2017-04-01T00:00:00.000
|
{
"year": 2017,
"sha1": "185c12aa4a27374ad66b21332265bb611a8536f7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.mib.2017.02.002",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8efe6e1d16974c64fa37117b895d6baf9a91108c",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
234093764
|
pes2o/s2orc
|
v3-fos-license
|
Overcapacity Risk of China’s Coal Power Industry: A Comprehensive Assessment and Driving Factors
The comprehensive and accurate monitoring of coal power overcapacity is the key link and an important foundation for the prevention and control of overcapacity. Previous research fails to fully consider the impact of the industry correlation effect, making it difficult to reflect the state of overcapacity accurately. In this paper, we comprehensively consider the fundamentals, supply, demand, and economic and environmental performance of the coal power industry and its upstream, downstream, competitive, and complementary industries to construct an index system for assessing coal power overcapacity risk. In addition, a new evaluation model based on a correlation-based feature selection-association rules-data envelopment analysis (CFS-ARs-DEA) integrated algorithm is proposed within a data-driven framework. The results show that from 2008 to 2017, the risk of coal power overcapacity in China presented a cyclical feature of "decline-rise-decline", and the risk level has remained high in recent years. In addition to the impact of supply and demand, the environmental benefits and fundamentals of related industries also have a significant impact on coal power overcapacity. Therefore, it is necessary to monitor and govern coal power overcapacity from the overall perspective of the industrial network, and to coordinate the advancement of environmental protection and overcapacity control.
Introduction
Electric power is the cornerstone of modern economic development. Due to the limitation of energy resource endowment, the coal power industry has played a dominant role in China's power system. By the end of 2018, the installed capacity of coal power in China reached 1.01 billion kW, accounting for 53% of the total installed power capacity. Coal power generation in China reached 445 million kilowatt-hours, accounting for 64% of total power generation. As an important national basic energy industry, the industrial linkage of the coal power industry is very complex. It not only has a direct impact on the upstream coal industry and the downstream capital-intensive industries such as steel and building materials, but it also relates to the development of a national strategic new energy industry such as wind power and photovoltaic power generation. Therefore, it has a significant, strategic position in the national economy. However, in recent years, the coal power industry has encountered a severe overcapacity problem due to factors such as the slowdown of economic growth, environmental constraints, the transformation of the energy structure, and the untimely adjustment of coal power planning [1,2]. By the end of 2017, the utilization hours of coal-fired power units had declined by 16% compared to that of 2010, while the profits of the coal power industry were only 20.7 billion yuan, which was 83% lower than that of 2016. Coal power overcapacity has caused a series of problems such as tremendous waste of resources, serious environmental pollution, vicious market competition and may further increase the risk of economic fluctuation and affect the development of the national economy [3].
In order to resolve the risk of overcapacity in the coal power industry, the Chinese government has announced a series of policies and measures. For example, in 2017, Opinions on Preventing and Resolving the Risk of Coal Power Overcapacity was announced, which clearly put forward the goal of reducing the installed capacity of coal power to within 1.1 billion kW by 2020. In 2018, the Key Points of Coal-fired Power Overcapacity Elimination stipulated that coal-fired power units below 300,000 kW that failed to meet standards should be eliminated, and illegal construction projects should be discontinued. In 2019, the National Development and Reform Commission issued the Notice on Solving Excess Capacity in Key Areas, which pointed out that the government would be fully involved in market regulation and promote the optimization and upgrading of the coal power industry. However, in practice, China's coal power overcapacity has not been effectively curbed, and investment growth is still at a relatively high level, which has led to serious industry losses. According to a report released by the Global Energy Monitor (GEM), from January 2018 to June 2019, China's installed capacity of coal power increased by 42.9 GW while the total installed capacity of global coal power generation outside China decreased by 8.1 GW. The governance failure highlights the deep-seated contradictions in the existing mechanism, and the incomplete information available in the decision-making process is considered as the main cause. Accurately assessing the overcapacity risk will help policymakers grasp the evolution law of overcapacity and its driving mechanism. It is not only the key link of overcapacity prevention, but also the important foundation of overcapacity governance. Therefore, there is an urgent need to study the risk assessment system of coal power overcapacity.
Many scholars have researched on the coal power overcapacity, including its causes and formation mechanisms [4][5][6], measurement methods [7,8] and governance strategies [9,10]. However, there are still some gaps in the research on the risk assessment system of coal power overcapacity. Some scholars have made a preliminary exploration of the comprehensive assessment of other industries' overcapacity, but there are still many deficiencies involved. On the one hand, from the perspective of selecting evaluation indicators, the existing research relies too much on a small number of outcome indicators such as capacity utilization rate. Such an index system is easily affected by external random disturbance, thus having certain randomness and instability. In addition, due to the neglect of the influence of inter-industry correlation and information transmission effect, the logic of the multi-index system is not clear and comprehensive, and the selected indicators are insensitive to overcapacity, which seriously weakens the scientificity of subsequent modeling and the reliability of the results. On the other hand, from the perspective of the research paradigm, the existing literature is still limited to knowledge or model-driven methods [11]. Although these methods can analyze or quantitatively evaluate the risk of events, they lack the strength to dig out potentially valuable relationships from the data facts, which may increase redundant and irrelevant features in the variable space, leading to model overfitting. Besides, some important latent variables are prone to be excluded from the variable combination of traditional models. This often leads to poor adaptability between the assumptions of the traditional model and the data, diminishing the explanatory power of the model.
Based on the perspective of industry correlation, this study proposes an assessment index system and model of coal power overcapacity risk. This study contributes to the literature in three ways. First, through systematic identification of the industry correlation effect, this paper constructs an index system for assessing coal power overcapacity risk. The index system makes up for defects on the one-sidedness of the existing selected indicators. This helps to improve the reliability of evaluation result and provides an important theoretical basis for the monitoring and early warning of overcapacity. Second, this study adopted the data-driven paradigm to construct an evaluation model based on the correlation-based feature selection-association rules-data envelopment analysis (CFS-ARs-DEA) integrated algorithm. This model reduces the redundancy and computational complexity of the index system by distinguishing the characteristics of data change, and enhances the association between the evaluating indicators and target variables. Moreover, through the identification of the optimal variable combination relationship, it automatically weights each index, which reduces the uncertainty caused by prior knowledge and ensures the accuracy of the evaluation results. This model has been proven to be a practical and effective tool to assess the risk of coal power overcapacity in China. Third, on the basis of ensuring the comprehensiveness of the index system, this paper uses association rules to explore the relationship patterns of specific variables and new variable influence mechanism hidden in data facts, which not only enhances the explanatory power of the evaluation model, but also successfully identifies the crucial and ignored factors and their influence mechanism on coal power overcapacity. These factors include environmental benefits and industry fundamentals of related industries of the coal power industry.
The remainder of this study is structured as follows. Section 2 reviews the relevant literature. Section 3 introduces the comprehensive index system and the CFS-ARs-DEA model for assessing coal power overcapacity risk. Section 4 reports the empirical results and discusses them. Section 5 summarizes the key conclusions and policy implications.
Causes of Coal Power Overcapacity
Overcapacity is a phenomenon that has caught the attention of many scholars. Western scholars mostly discussed the causes of overcapacity from the perspective of market operation mechanisms. Pindyck [12] put forward that the uncertainty of demand forces enterprises to keep the flexibility of production capacity. Fusillo [13] believed that the compromise on sunk costs eventually leads to invalid expansion. Nishimori and Ogawa [14] proposed that the formation of excess capacity is the competitive strategy adopted by enterprises to maintain market share. Due to the difference in the market system, domestic scholars have mostly discussed the formation mechanism of China's industry overcapacity based on unique national conditions and established the mainstream hypotheses of market failure, institutional distortion, and weak demand. For example, Lin et al. [15] proposed that in the process of China's rapid economic growth and accelerated industrialization, due to incomplete and asymmetric market information, enterprises had a strong positive perception of coal, power, and other promising industries. The influx of a large quantity of capital forms a "wave phenomenon" of investment, which eventually leads to overcapacity. Qin et al. [16] believed that China's fiscal and tax policy system and the evaluation system of local officials strengthen the motivation of local governments to intervene in enterprise activities. Improper intervention distorts the market price of production factors in the coal power industry and induces blind investment and an increase in the number of coal-related enterprises. Once power demand declines, the serious problem of overcapacity is inevitable. Other scholars have discussed the causes of coal power overcapacity from the perspective of weak demand. They believed that the fundamental change of power demand from high-speed growth to medium high-speed growth is an important reason for coal power overcapacity [17].
There has been a systematic review of the causes of overcapacity in the coal power industry in academic circles. Existing research results provide a theoretical basis for the identification of the risk factors of overcapacity in the coal power industry. However, few scholars analyze the formation mechanism of coal power overcapacity from the perspective of industrial linkage. With the increasingly clear division of labor and close interdependence of industries, will associated industries have an impact on each other's capacity utilization? What are the strengths and directions of influence? These questions remain unanswered and will play an important role in the accurate assessment of coal power overcapacity risk. Therefore, this study examines industry features and industry correlation effects to evaluate the influencing factors and driving mechanism of coal power overcapacity risk.
The Judgment of Coal Power Overcapacity
Prior research has been conducted to judge industrial overcapacity based on different theories and perspectives. Presently, measuring capacity utilization is the most popular method used to judge whether there is the excess capacity [18,19], which is relatively simple and intuitive. However, due to the influence of external random disturbance, capacity utilization has certain randomness and instability, and thus, it is difficult to accurately reflect the industry's overcapacity state. Moreover, there is a certain time lag in the statistics related to capacity utilization. Therefore, enterprises can hardly rely on it to avoid the risk of overcapacity. Further, in a market environment characterized by increasingly perfect mechanisms and refined division of labor, where excess capacity occurs in an industry, its related industries are also affected. Currently, it is unscientific to measure the degree of industry overcapacity simply by relying on capacity utilization. Therefore, it is necessary to put forward a systematic assessment index system and model of coal power overcapacity risk. This will provide a quantitative tool for policymakers to accurately identify the risk of overcapacity, so as to carry out macro-control and policy guidance as soon as possible. In addition, it can also provide a scientific strategical basis for enterprises, preventing them from blind investment.
The existing literature scarcely covers the comprehensive assessment of overcapacity risk in the coal power industry. Han and Wang [20] selected 11 economic indicators, such as fixed-asset investments, production and demand, inventory, and industry benefits, to build an assessment system of overcapacity and quantitatively analyze the capacity utilization level of the steel industry. Shi et al. [21] selected six indicators reflecting the characteristics of wind resources, types of wind power equipment and wind power output, and used the improved analytic hierarchical process (AHP) and fuzzy evaluation method to evaluate the capacity utilization level of the wind power industry in Xinjiang, China. These studies have value, but their defects are also obvious. On the one hand, existing research on the comprehensive evaluation of industrial overcapacity seldom considers the correlation effect between industries, which leads to limitations in the establishment of the assessment index system. On the other hand, existing weighting methods for composite index are mostly subjective expert opinion-based methods, which greatly reduce the scientificity and accuracy of the assessment results.
To improve the above problems, we combined three data-driven algorithms to establish a CFS-ARs-DEA model for the assessment of coal power overcapacity risk, trying to measure the risk level of coal power overcapacity by constructing a composite assessment index. The composite index can quantitatively reflect the real problems with multi-dimensional attributes based on less information loss. This method has been widely used in evaluation and decision-making processes in many fields, such as business competitiveness [22], ecological vulnerability [23], and energy security [24]. In the comprehensive assessment model proposed in this study, correlation-based feature selection (CFS) algorithm has been proven to be able to effectively reduce the high-dimensional feature system, which helps to improve the reliability of the research results. For example, Cigdem and Demirel [25] fused CFS feature selection with multiple classifiers and successfully improved the detection accuracy of Parkinson's disease magnetic resonance imaging. Kushal and illindala [26] proposed a resilience characteristic analysis method of ship power system (SPS) based on CFS algorithm, which was used to distinguish the best predictor of performance during contingencies and optimized the evaluation results of SPS performance. At the same time, after a series of optimization, association rules can also efficiently and accurately mine the potential risk causality and hidden key information from the data facts. Czibula et al. [27] proposed a classification model based on relational association rules to assist software developers to identify defective software modules. Therefore, this paper combines the two algorithms to obtain a more objective and comprehensive understanding of the risk level of industry overcapacity and its causes.
Framework
Based on the idea of data-driven analysis and the perspective of industry correlation, this study established an index system and constructed a CFS-ARs-DEA integration model to assess coal power overcapacity risk, as shown in Figure 1. The details are as follows. First, considering the production and operation mechanism of the coal power industry and its related industries, we established a systematic initial index system for assessing coal power overcapacity risk. Second, we used the CFS algorithm to reduce the redundancy among the initial indicators. Third, to eliminate indicators with a weak association with coal power overcapacity, the association rules (ARs) algorithm was used to perform association analysis on the index system after the previous reduction. Finally, the data envelopment analysis (DEA) model was used to weight the final index system and generate the composite index of coal power overcapacity risk. Then, we tested the robustness of the weighting scheme and further analyzed the assessment result of coal power overcapacity risk.
Figure 1. The basic principle of the assessment model of coal power overcapacity risk.
Indicators Selection
The industry correlation effect is essentially the relationship between demand and supply and the resulting technological and economic ties [28]. This effect inevitably influences the industry capacity utilization, which has been ignored in previous studies on overcapacity. As shown in Figure 2, related industries include not only the upstream and downstream industries linked by intermediate product input in the vertical industrial structure, but also the complementary industries engaged in complementary production and alternative industries engaged in the production of substitutes in a horizontal industrial structure [29]. Inspired by such effects, this study considered upstream and downstream industries, complementary industries, and alternative industries of the coal power industry in the scope of evaluation while constructing the initial index system of overcapacity risk assessment.
By examining the industry linkage of the coal power industry, we can conclude that the upstream part is mainly the coal industry, and the downstream part mainly includes four industries, namely steel, non-ferrous metals, building materials, and chemical industries. The substitution part of the coal power industry mainly includes hydropower, nuclear power, and other industries. In this study, they are collectively referred to as the new energy power industry. The complementary part of the coal power industry is mainly the power equipment manufacturing industry.
Based on existing research on the formation mechanism of overcapacity, we built the initial index system for assessing coal power overcapacity risk around four dimensions, namely industry performance, supply, demand, and industry fundamentals.
• The industry fundamentals refer to the inherent heterogeneity among industries that makes each industry flexible in coping with excess capacity in different ways. Therefore, it is an important dimension to be examined. Such fundamentals mainly include industrial concentration, marketization level, capital intensity, opening degree, and employment elasticity. These indicators have significant differences in how they influence the production and operation of each industry. High industrial concentration can reduce the blind follow-up and disorderly expansion of a large number of small and medium-sized enterprises, which helps enterprises grasp a higher share of market investment. We used the concentration ratio (CR-5) index to measure industrial concentration. The improvement of the level of marketization can make the industry allocate resources more reasonably through the mechanism of survival of the fittest in competition. Non-state-owned enterprises' share of total industrial sales value is used to represent the marketization level. The higher the concentration of capital, the higher the industry exit barriers. As a result, when market demand falls, a large number of enterprises may fail to reduce their production capacity. We use the ratio of the net value of fixed assets to total industrial sales value to measure capital intensity. Higher employment elasticity indicates that enterprises can adjust their variable costs and control their capacity according to market changes. We measure employment elasticity by the elasticity value of employees to the sum of the inventory and accounts receivable. The level of opening up reflects the ability of enterprises to explore overseas markets. Enterprises can resolve excess capacity through export when domestic demand is insufficient. We use the ratio of the export value to total industrial sales value to represent the degree of openness.
• The matching of supply and demand is the basis for measuring industry overcapacity [30]. Supply refers to the input of production factors and is the main source of overcapacity in China. Among the various supply factors, we selected indicators from four sub-dimensions, namely fixed assets, labor, technology, and credit. At the demand level, this study used four indicators to examine the changing trend of market demand, namely the growth rate of the industrial sales output value, the turnover rate of inventory, the growth rate of inventory, and the ratio of production to sales.
• This study examines industry performance from the two perspectives of economic and environmental benefits. When an industry has serious overcapacity, its overall economic benefits decline significantly, for example through falling prices and rising deficits. These indicators most intuitively reflect the degree of overcapacity. However, the negative effects of overcapacity are reflected not only in economic benefits, but also in the deterioration of environmental benefits. The real problem is that many local governments in China have relaxed their environmental protection standards and pollution control of coal power enterprises in exchange for more investment. This behavior externalizes the production cost of enterprises, intensifies over-investment and repeated construction, and ultimately inhibits capacity utilization [31]. In this study, pollution emission intensity is used to represent environmental benefits.
Finally, the initial index system of coal power overcapacity risk assessment was established from four dimensions, namely performance, supply, demand, and industry fundamentals, and includes eight evaluated industries. It should be noted that, due to the common features of coal, steel, power equipment, and other industries, the same indicator system was applied to the upstream and downstream industries and the complementary industries. The power industry is relatively special; for example, there is no inventory in the power industry. Therefore, the coal power industry and the new energy power industry use a separate indicator system. In sum, after systematic investigation and preliminary screening, we obtained an initial index system with a total of 169 indicators, as shown in Table 1.
Considering the availability and integrity of data, data from 2008 to 2017 were selected to assess the risk of coal power overcapacity. Specifically, among the data of the coal, iron and steel, non-ferrous metals, building materials, chemical, and power equipment manufacturing industries, L1, L2, L3 and L5 are from the China Industrial Statistical Yearbook, L4 and L9-L24 are from the China Statistical Yearbook, L6-L8 are from the Statistical Yearbook of the Chinese Investment in Fixed Assets, and L25 is from the China Environmental Statistical Yearbook. In the coal power industry and the new energy power industry, H1, H2 and J1, J2 are from the China Industrial Statistical Yearbook, H3~H10 and J3~J8 are from the China Electric Power Yearbook and the China Hydropower Yearbook, and H11 is from the China Environmental Statistical Yearbook. In addition, the smooth index method and the Newton interpolation method were used to fill in the missing values in the data.
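For illustration, the sketch below shows how a missing yearly value could be filled with Newton divided-difference interpolation, one of the two imputation methods mentioned above. The years, indicator values, and the choice of neighboring points are made up, and the paper does not report its exact imputation settings, so this is only a plausible reading of the procedure rather than the authors' implementation.

```python
import numpy as np

def newton_interpolate(x_known, y_known, x_missing):
    """Newton divided-difference interpolation through the known points."""
    x = np.asarray(x_known, dtype=float)
    coef = np.asarray(y_known, dtype=float).copy()
    n = len(x)
    # build the divided-difference coefficients in place
    for j in range(1, n):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
    # evaluate the Newton polynomial at the missing point (Horner-like scheme)
    result = coef[-1]
    for k in range(n - 2, -1, -1):
        result = result * (x_missing - x[k]) + coef[k]
    return result

# hypothetical indicator observed in 2008-2012 except for 2010
years = [2008, 2009, 2011, 2012]
values = [10.2, 11.0, 12.5, 13.4]
print(round(newton_interpolate(years, values, 2010), 2))
```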
Indicators Reduction
Indicators reduction can not only effectively reduce the information redundancy in the initial index system, but it can also reduce the calculation complexity. The CFS algorithm is a filtering algorithm for feature reduction based on feature correlation [32]. Unlike traditional feature reduction algorithms such as the genetic algorithm and decision tree, the CFS algorithm can evaluate and rank each feature subset rather than a single feature so as to mine and capture the feature correlation through data analysis, and its computational complexity is relatively small [33]. The principle of index reduction is that the optimal feature subset should contain features that are highly correlated with their relevant classes and are not correlated with other features in the dataset. The correlation of a feature subset is calculated as

$$Merit_s = \frac{k\,\overline{r}_{cf}}{\sqrt{k + k(k-1)\,\overline{r}_{ff}}}$$

where $Merit_s$ represents the correlation value of feature subset $s$ containing $k$ features, $\overline{r}_{cf}$ is the average feature-class correlation, and $\overline{r}_{ff}$ is the average feature-feature intercorrelation. The correlation is measured using the Pearson correlation coefficient, and all variables need to be standardized before calculating the correlation coefficient. The search process for the optimal feature subset is as follows.
Step 1: Starting from the empty set, add single features successively to generate the n one-feature subsets M1; Step 2: Calculate the Merit value of each of these n subsets; Step 3: Select the features with the largest and second-largest Merit values in M1 to form a new two-feature subset M2; Step 4: If the Merit value of the new subset is less than the maximum Merit value in M1, replace the feature with the second-largest Merit value by the feature with the third-largest Merit value to form a new subset; Step 5: Repeat the iteration until the feature subset with the highest Merit value is found.
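A minimal Python sketch of this merit calculation and greedy forward search is given below. The toy data, the stopping rule, and the use of Pearson correlation on standardized series follow our reading of the description above; they are illustrative assumptions rather than the authors' actual implementation.

```python
import numpy as np

def merit(X, y, subset):
    """CFS merit of a feature subset: k*r_cf / sqrt(k + k*(k-1)*r_ff)."""
    k = len(subset)
    # average absolute feature-class correlation
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    # average absolute feature-feature intercorrelation (0 for a single feature)
    if k == 1:
        r_ff = 0.0
    else:
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                        for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward_search(X, y):
    """Greedy forward search for the subset with the highest merit."""
    remaining = list(range(X.shape[1]))
    selected, best = [], -np.inf
    while remaining:
        scores = [(merit(X, y, selected + [j]), j) for j in remaining]
        score, j = max(scores)
        if score <= best:          # stop when adding any feature no longer helps
            break
        best, selected = score, selected + [j]
        remaining.remove(j)
    return selected, best

# toy example: 10 years x 6 standardized indicators, target = capacity utilization
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 6))
y = 0.7 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=10)
print(cfs_forward_search(X, y))
```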
Indicators Correlation
Since the reduction of indicators in the previous stage may reduce the correlation between some indicators and coal power overcapacity, we used ARs to analyze the correlation between the retained indicators and coal power capacity utilization so as to remove indicators that are weakly related to coal power overcapacity. The ARs algorithm aims to mine relationships of the form X ⇒ Y, where X represents the antecedent of the rule, Y represents the consequent of the rule, and X ∩ Y = ∅. Such a rule states that when the items in the antecedent appear, the items in the consequent will also appear. There are two main criteria to judge an association rule, Support and Confidence:

$$Support(X \Rightarrow Y) = \frac{count(X \cup Y)}{N}, \qquad Confidence(X \Rightarrow Y) = \frac{count(X \cup Y)}{count(X)}$$

where N is the number of overall records in the database. When the Support and Confidence of a rule meet the criteria of minimum support (minsupp) and minimum confidence (minconf), it is the required strong association rule.
It should be noted that traditional ARs algorithms such as Apriori and Frequent Pattern (FP)-growth are aimed at mining strong associations between items in each event [34]. To explore the relationship between indicators, we look for indicators that change in the same or reverse direction as coal power capacity utilization. When the number of years in which an indicator changes in the same (or reverse) direction as capacity utilization reaches the minsupp and minconf thresholds, it has a strong correlation with coal power overcapacity; otherwise, the correlation is weak. If the number of years in the same direction reaches the threshold, the indicator is positively related to capacity utilization, that is, negatively related to overcapacity; otherwise, it is positively related to overcapacity. In this process, we removed the indicators that are weakly related to coal power capacity utilization. Next, we used the Apriori algorithm to find all indicators that have a strong correlation with capacity utilization. This algorithm can dig out strong association relations between item sets from various events. Its principle is that if the item set X is a frequent set, then its nonempty subsets are all frequent sets. The steps are as follows.
Step 1: Set the minimum support threshold and the minimum confidence threshold; Step 2: Scan database D to generate the candidate 1-item set C1 and then prune it according to the minimum support to get the frequent 1-item set L1; Step 3: Generate the candidate 2-item set C2 from L1 and then prune C2 according to the minimum support to get the frequent 2-item set; Step 4: Repeat the iterations until no higher-order frequent sets can be generated; Step 5: Mine, from all frequent sets, all strong association rules whose confidence is not less than the minimum confidence.
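The sketch below illustrates the directional variant of support and confidence described above: for a single indicator, it counts the year-to-year transitions that move in the same (or reverse) direction as capacity utilization and compares them against the thresholds. The exact way the authors compute confidence for this variant is not spelled out, so the definitions and the toy series here are assumptions for illustration; the default thresholds mirror the values reported later in the paper.

```python
import numpy as np

def directional_rule(indicator, utilization, minsupp=0.55, minconf=0.8):
    """
    Screen one indicator against capacity utilization by counting the years in
    which the two series move in the same direction (both up or both down).
    Support: fraction of year-to-year transitions that co-move.
    Confidence: fraction of the indicator's movements matched by utilization.
    """
    d_ind = np.sign(np.diff(indicator))
    d_cu = np.sign(np.diff(utilization))
    n = len(d_ind)
    moves = np.sum(d_ind != 0)
    same = np.sum(d_ind == d_cu)
    if same / n >= minsupp and same / moves >= minconf:
        return "positively related to utilization (negatively to overcapacity)"
    # check the reverse direction as well
    reverse = np.sum(d_ind == -d_cu)
    if reverse / n >= minsupp and reverse / moves >= minconf:
        return "negatively related to utilization (positively to overcapacity)"
    return "weakly related -- drop the indicator"

# toy 10-year series
utilization = np.array([0.62, 0.60, 0.57, 0.59, 0.55, 0.52, 0.50, 0.49, 0.51, 0.52])
indicator = np.array([1.10, 1.05, 1.00, 1.04, 0.96, 0.90, 0.88, 0.85, 0.90, 0.93])
print(directional_rule(indicator, utilization))
```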
Indicators Weighting and Aggregation
In recent years, the DEA weighting method has evolved from the application of traditional nonparametric efficiency evaluation to the construction of composite index. Unlike the expert opinion-based weighting method, this method does not require prior information of weights and can automatically select the most favorable set of weights for each entity to measure the relative performance between entities in the best case scenario [35]. Translating the original DEA context to determine the weights implies that we do not consider inputs and refer to each indicator as an output. Suppose that we have m entities and n indicators. I_ij shows the value of indicator j for entity i, and w_ij represents the weight of indicator j for entity i. The implementation steps of the weighting model are as follows.
Step 1: Calculate the most favorable weights for each entity to maximize its index value, using the linear programming model

$$\max_{w_{ij}} \; CI_i^{best} = \sum_{j=1}^{n} w_{ij} I_{ij} \quad \text{s.t.} \quad \sum_{j=1}^{n} w_{ij} I_{kj} \le 1 \; (k = 1, \ldots, m), \;\; w_{ij} \ge 0;$$

Step 2: Calculate the most unfavorable weights for each entity to minimize its index value, using the analogous model

$$\min_{w_{ij}} \; CI_i^{worst} = \sum_{j=1}^{n} w_{ij} I_{ij} \quad \text{s.t.} \quad \sum_{j=1}^{n} w_{ij} I_{kj} \ge 1 \; (k = 1, \ldots, m), \;\; w_{ij} \ge 0,$$

with the two results typically combined through adjustment parameters λ. It is observed that the linear programming model is of great importance in the generation of weights. However, this model has some disadvantages. First, the adjustment parameters λ are set artificially and have strong subjectivity. Second, for each entity, the model assigns different weights to each indicator, which makes the weighted values of indicators in different years not comparable. Third, this model does not limit the scope of the weights, and an index weight may be zero. To overcome the above defects, this study used the approach of Hatefi et al. [36] to make the following adjustments to the aforementioned method.
First, we introduced a variable d_i greater than zero to change the inequality into an equality. In this way, the composite index value is converted to CI_i = 1 + d_i. To achieve the best performance of the annual assessment, we look for a set of weights that makes CI_i as small as possible, which is equivalent to minimizing d_i. Second, to adjust the variable weights to common weights, we used the min-max method, setting M as the maximum of all d_i and adjusting the objective function to min M. Finally, to make each index weight non-zero, we introduced an infinitesimal positive number ε as the lower limit of the common weight. Based on the above adjustments, the optimized DEA model can endogenously obtain the composite index of coal power overcapacity risk.
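A compact sketch of the adjusted common-weight model, as we read it from the description above, is shown below using SciPy's linear programming solver: minimize the largest deviation M subject to sum_j w_j * I_ij - d_i = 1 (so that CI_i = 1 + d_i), d_i <= M, and w_j >= ε. The data matrix and the value of ε are illustrative, and the formulation is our interpretation rather than the authors' exact model.

```python
import numpy as np
from scipy.optimize import linprog

def common_weight_dea(I, eps=1e-3):
    """
    Common-weight DEA in min-max form:
        min M
        s.t.  sum_j w_j * I[i, j] - d_i = 1   (so CI_i = 1 + d_i)
              d_i <= M,  d_i >= 0,  w_j >= eps
    I is an (entities x indicators) matrix of normalized, same-direction data.
    """
    m, n = I.shape
    # decision vector: [w_1..w_n, d_1..d_m, M]
    c = np.r_[np.zeros(n), np.zeros(m), 1.0]
    A_eq = np.hstack([I, -np.eye(m), np.zeros((m, 1))])
    b_eq = np.ones(m)
    A_ub = np.hstack([np.zeros((m, n)), np.eye(m), -np.ones((m, 1))])
    b_ub = np.zeros(m)
    bounds = [(eps, None)] * n + [(0, None)] * m + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    assert res.success, res.message
    w = res.x[:n]
    ci = 1.0 + res.x[n:n + m]          # composite index for each year
    return w, ci

# toy example: 10 years x 4 indicators scaled to [0, 1]
rng = np.random.default_rng(1)
I = rng.uniform(0.2, 1.0, size=(10, 4))
weights, index = common_weight_dea(I)
print(np.round(weights, 3), np.round(index, 3))
```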
Results of the Indicators' Reduction and Correlation
Before indicators' reduction and correlation, the min-max method was used to standardize the data. Then the CFS algorithm and the ARs algorithm were used to reduce and correlate, respectively, the indicators for assessing coal power overcapacity risk using the Python 3.7 software. In previous cases of research on association rules, the minimum support and minimum confidence were often set at around 0.4 and 0.6 [37]. In this study, we set the minimum support to 0.55 and the minimum confidence to 0.8 so as to retain the indicators that are strongly related to coal power overcapacity and judge the influencing direction of each indicator. As shown in Table 2, after the indicators reduction and correlation, 45 indicators are retained. Specifically, the CFS algorithm removes about 60% of indicators for each industry, and the results are in line with the normal reduction range of the CFS algorithm.
The retained indicators shown in Table 2 basically cover four dimensions, namely industry fundamentals, supply, demand, and industry performance. It is worth noting that the indicators of industry fundamentals are removed from the index system of the coal power industry and the new energy power industry. This is because the industry fundamentals do not have a direct and rapid impact on coal power capacity utilization. Besides, due to the state's intervention and regulation in recent years, the fundamental change of the power industry is very limited. In sum, through index reduction and association, the redundancy between indicators is effectively reduced, which helps to improve the scientificity and rationality of the evaluation results.
Results of Indicators' Weighting and Aggregation
The adjusted DEA model was used for indicator weighting and aggregation. Before calculating the composite index, we first took the negatives of the indicators that change inversely with the risk of overcapacity so that all indicators point in the same direction. The calculation was carried out with the Lingo 12.0 software. Finally, the composite index of coal power overcapacity risk in 2008-2017 was obtained, as shown in Figure 3.
Robustness Test
The procedure for constructing a composite index may be critically judged because of the relatively subjective selection of methods in each step [38]. Therefore, it is necessary to test the robustness of the composite index aggregation scheme. In this study, in addition to the DEA weighting method, we also adopted the equal weighting method and the entropy weighting method to calculate the composite index of coal power overcapacity, and then we compared the different results, as shown in Figure 4.
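The alternative aggregations used in this robustness check can be reproduced along the following lines. The entropy-weighting formula below is the standard one, and the indicator matrix is synthetic, so the sketch only illustrates the comparison procedure rather than the paper's actual data.

```python
import numpy as np

def entropy_weights(I):
    """Entropy weighting: indicators with more dispersion receive larger weights."""
    P = I / I.sum(axis=0)                          # column-wise proportions
    P = np.where(P == 0, 1e-12, P)                 # avoid log(0)
    m = I.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each indicator
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()

def composite(I, w):
    """Aggregate normalized indicators into a composite index."""
    return I @ w

# toy normalized indicator matrix: 10 years x 4 indicators
rng = np.random.default_rng(2)
I = rng.uniform(0.1, 1.0, size=(10, 4))

ci_equal = composite(I, np.full(4, 0.25))
ci_entropy = composite(I, entropy_weights(I))

# Pearson correlation between the two aggregation schemes
r = np.corrcoef(ci_equal, ci_entropy)[0, 1]
print(round(r, 2))
```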
The Pearson correlation coefficient was used to analyze the correlation between the three groups of evaluation results. The correlation coefficient between the result obtained by DEA weighting and the result obtained by equal weighting was 0.81, while the correlation coefficient between the result obtained by DEA weighting and the result obtained by entropy weighting was 0.85, both showing a strong correlation. This indicates that the evaluation results using the DEA method are relatively robust. Besides, according to Figure 4, the fluctuation trends of the three groups of composite indexes of coal power overcapacity risk show no significant difference. Due to the difference between the weighting principles of the above methods, the corresponding results cannot be exactly the same in value, but the gap is not significant. The maximum difference is only 0.12, which is within a reasonable range. Therefore, it can be concluded that the composite index synthesis scheme proposed in this study has passed the robustness test.
• In 2012, the risk of coal power overcapacity increased sharply, and the contribution index curves of the complementary industry and most of the downstream industries showed similar fluctuations. In reality, these industries suffered a deterioration in operating efficiency, with supply-side factors such as credit and the number of employees declining to varying degrees. Meanwhile, the equipment utilization hours of the new energy power industry increased by 12%, which further squeezed the living space of the coal power industry. In conclusion, the change in the operating mechanism of related industries ultimately affects the coal power industry, resulting in a sharp increase in the risk of coal power overcapacity.
• In 2013, the risk of coal power overcapacity decreased significantly. Specifically, the power generation of the coal power industry resumed its growth while the equipment utilization hours of the new energy power industry showed negative growth. With the rapid growth of industrial sales value, the economic benefits of the power equipment manufacturing industry improved significantly. At the same time, the performance of downstream industries also recovered, with the number of employees increasing by nearly 40%. These market changes helped resolve excess coal power capacity.
• Since 2014, the risk level of coal power overcapacity rose until 2015, when it reached its highest level in a decade. In 2016 and 2017, it dropped slightly but was still at a relatively high level. The rise in this stage was mainly due to the deterioration of the operating conditions of the coal power industry, the power equipment manufacturing industry, and the four downstream industries. Eventually, the superposition of adverse factors derived from these industries caused the risk of overcapacity to rise again.
In general, the risk of coal power overcapacity is significantly affected by its related industries. The changes in operating conditions of related industries will act directly or indirectly on the supply and demand of coal power industry, eventually influencing its capacity utilization. Therefore, when examining the risk of coal power overcapacity, it is necessary to integrate the development characteristics of the coal power industry and the effects of related industries into the evaluation scope. It means that we should not only pay attention to the established factors that affect the coal power overcapacity risk, but also systematically consider random factors from its related industries.
The Identification of the Driving Factors of Coal Power Overcapacity
The existing literature often ignores the influence of the industry correlation effect when exploring the causes of overcapacity. In order to fill this gap, we used association analysis to track the influencing factors of coal power overcapacity. The support and confidence of each indicator calculated by the association rules were arranged from high to low, and the influencing direction of each index was judged. The symbols CP, NE, PE, CO, ST, NF, BM, and CH were used to represent the eight industries, namely coal power, new energy power, power equipment, coal, steel, non-ferrous metals, building materials, and the chemical industry, respectively. The results of the association analysis are shown in Table 3. It can be found that the upstream and downstream industries, as well as the complementary and alternative industries, all have an impact on coal power overcapacity. Due to the lack of research on the impact of the environmental benefits and the fundamentals of related industries, this paper focused on studying them through industry correlation effects.
In terms of the impact of environmental benefit, the results showed that the higher the pollution emission intensity of the upstream coal industry and the downstream chemical industry, the lower the possibility of the coal power overcapacity. It should be noted that the serious pollution problem reflects the weak environmental regulations of the government, which reduces the marginal cost of upstream and downstream enterprises. Thus, there is a high probability that the coal power industry would not only obtain cheaper means of production but also achieve stronger market demand, which is conducive to resolving the excess capacity of coal power.
As far as the industry fundamentals are concerned, association analysis shows that the industrial concentration of the upstream industry and the capital intensity of the downstream industry are positively related to coal power overcapacity, while the industrial concentration, marketization level, and employment elasticity of the downstream industry, and the industrial concentration of the complementary industry, are all negatively related to coal power overcapacity. There are three reasons for this. First, the increase of industrial concentration strengthens the bargaining power of the upstream industry, which increases the marginal cost of the coal power industry and limits the reasonable utilization of the production capacity of the coal power industry. Second, the increase of industrial concentration, the marketization level, and the employment elasticity of downstream industries guarantees the operation of the market mechanism and provides downstream enterprises with more space to adjust their operations flexibly. This helps coal power production enterprises obtain correct market information and enables them to adjust their operating strategies in a timely manner to reduce the risk of overcapacity. However, the increase of capital intensity in downstream industries raises the exit barriers and, consequently, a large number of loss-making enterprises will be unable to exit in time, which ultimately misleads production and investment decisions and may increase the potential risk of coal power overcapacity. Finally, the high industrial concentration of the complementary power equipment manufacturing industry helps to reduce disorderly competition and improve the profitability of the industry. Considering the mutually beneficial relationship between the power equipment manufacturing industry and the coal power industry, it is conducive to the healthy operation of the coal power industry.
Apart from the two above points, the new energy power industry also has a complicated connection of benefit with the coal power industry [39]. The results of association rules show that the high-speed growth of projects under construction and equipment utilization hours of the new energy power is highly related to the risk of coal power overcapacity. As an alternative industry, the new energy power industry such as wind and photovoltaic power industry inherently competes for power market with the coal power industry. With the slowdown of power demand and the increase of state's support for the new energy industry, the survival and development space of coal power industry has been further compressed, increasing the risk of overcapacity and intensifying the contradiction between the two sides [40]. The result of association rules precisely reflects such a conflicting situation. Therefore, we can conclude that the rapid development of the new energy industry is positively related to coal power overcapacity risk.
Conclusions
Considering the seriousness of overcapacity in China's coal power industry and its importance in the sustainable development of the national economy, this study establishes a comprehensive index system based on an industry correlation mechanism. It proposes a CFS-ARs-DEA model for assessing coal power overcapacity risk. Through the robustness analysis, the reliability of the model and the stability of the results are verified. The main conclusions are as follows.
1. The comprehensive index system and model for assessing coal power overcapacity risk has remarkable advantages. First, the index system fully considers the industry correlation effect, and comprehensively covers the internal and external factors influencing coal power overcapacity. Second, the CFS-ARs-DEA integrated algorithm effectively reduces information redundancy from data features and avoids the subjectivity of index weighting and aggregation, thus helping to improve the scientificity of the risk assessment results of coal power overcapacity. This model provides an effective quantitative analysis tool for accurately identifying the risk level of industrial overcapacity and monitoring the trend of industrial overcapacity.
2. The empirical evaluation result of coal power overcapacity risk reveals the fluctuation law of overcapacity risk in China's coal power industry between 2008 and 2017. From 2008 to 2017, the risk of coal power overcapacity presents a cyclical feature of "decline-rise-decline." In 2016 and 2017, although the risk level was slowly declining, it was still at a high level. Besides, the risk of coal power overcapacity is significantly affected by the operation of related industries. From the perspective of environmental benefits, the constraints of environmental regulations of upstream and downstream industries will aggravate the overcapacity of the coal power industry. From the perspective of industry fundamentals, the increase of the industrial concentration of the upstream industry and the complementary industry, and the increase of capital intensity of the downstream industries aggravates the overcapacity of the coal power industry. The increase of industrial concentration, marketization level, and the employment elasticity of downstream industries can effectively restrain the overcapacity of the coal power industry.
Policy Implications
Based on the above conclusions, several policy recommendations are proposed to the Chinese government.
1. Establish and improve the monitoring and early warning mechanism of overcapacity risk in coal power industry. Government departments should systematically collect, summarize and analyze data of the whole coal power industry network, and constantly improve the statistical index system of overcapacity monitoring. It is necessary to avoid the ex-post assessment of overcapacity solely relying on the measurement of capacity utilization. Instead, a systematic analysis of the formation of overcapacity should be performed to accurately identify the potential risks. Specifically, a special information sharing platform should be established to release timely and transparent market information, so as to guide the coal power and other related enterprises to adjust their investment and production decisions. What is more, statistical departments can also rely on big data and cloud computing to dig out the causal association of data characteristics, which will help to systematically and accurately assess the state of overcapacity and potential risks.
2. It is of great significance to build a mechanism to resolve overcapacity from the overall perspective of industrial network. Our empirical results show that the impact of vertical and horizontal related industries of coal power industry on its overcapacity is significant. Therefore, in order to fundamentally control the overcapacity, the government should firstly improve the marketization level of the coal power industry and its related industries, giving full play to the guiding role of the market in investment. Secondly, the government should strengthen the vertical and horizontal strategic cooperation of the whole industry network, encourage the organic integration of the coal industry and the power industry through asset pooling and mutual equity participation, and promote the merger and reorganization of coal power enterprises of different sizes to improve the industry concentration. Finally, while encouraging the development of the new energy power industry, the government must comprehensively update the development planning of the coal power industry, so as to orderly allocate power generation capacity and guide the smooth exit of backward coal power capacity.
3. Government departments should control environmental regulation within the acceptable range of enterprises and establish a long-term mechanism to resolve overcapacity. As the association rules show, the increase of environmental regulation in upstream and downstream industries will aggravate the overcapacity of coal power industry. Therefore, while strengthening the industry environmental supervision, the government should ensure that the environmental regulation is reasonable and appropriate and adopt appropriate supervision measures. For example, the government can use more incentive and control measures such as environmental tax, emission trading and environmental subsidies to raise the market access threshold of high pollution and high energy consumption enterprises, and encourage enterprises to strengthen the innovation of production technology and industrial transformation. These measures are of benefit to realize the coordinated governance of environmental protection and overcapacity reduction of coal power industry.
|
2021-05-10T00:04:20.664Z
|
2021-01-29T00:00:00.000
|
{
"year": 2021,
"sha1": "328759ba79ccccd5823c08dfa411c9979bb3dfd5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/3/1426/pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "228332969671c255530da3d74472f5c68a7fcb3b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Business"
]
}
|
77395263
|
pes2o/s2orc
|
v3-fos-license
|
A Quantitative Account of the Behavioral Characteristics of Habituation: The Sometimes Opponent Processes Model of Stimulus Processing
Habituation is defined as a decline in responding to a repeated stimulus. After more than 80 years of research, there is an enduring consensus among researchers on the existence of 9–10 behavioral regularities or parameters of habituation. There is no similar agreement, however, on the best approach to explain these facts. In this paper, we demonstrate that the Sometimes Opponent Processes (SOP) model of stimulus processing accurately describes all of these regularities. This model was proposed by Allan Wagner as a quantitative elaboration of priming theory, which states that the processing of a stimulus, and therefore its capacity to provoke its response, depends inversely on the degree to which the stimulus is pre-represented in short-term memory. Using computer simulations, we show that all the facts involving within-session effects or short-term habituation might be the result of priming from recent presentations of the stimulus (self-generated priming). The characteristics involving between-sessions effects or long-term habituation would result from the retrieval of the representation of the stimulus from memory by the associated context (associatively generated priming).
INTRODUCTION
The predominant consequence of stimulus repetition is a systematic decrease in the frequency or amplitude of the response to the stimulus. When it is proved that this decrement is not caused by physiological changes at the sensory or motor levels, it is inferred that a learning phenomenon, known as habituation, has occurred. Habituation has been experimentally studied since the early twentieth century (Humphrey, 1933;Prosser and Hunter, 1936;Harris, 1943) and its core behavioral regularities were soon compiled by Thompson and Spencer (1966) and Groves and Thompson (1970) into a list of nine characteristics or parameters of habituation. This list has remained relatively uncontroversial and has oriented most of the research in the field over the years. Indeed, 40 years after the publication of these characteristics, a group of recognized researchers in the area gathered in a symposium where one of the goals was to revisit the empirical status of these features. With minor amendments and the addition of one characteristic, the conclusion of the symposium was essentially confirmatory (Rankin et al., 2009;Thompson, 2009).
No similar agreement has been reached, however, concerning theories of habituation. Three approaches have dominated the field over the years: Groves and Thompson's (1970) dual process theory, Sokolov's (1960) comparator theory, and Wagner's (1981) Sometimes Opponent Processes model (SOP). Although there is not a plethora of choices, these theories have not been systematically compared. This is likely due, in part, to the fact that they differ in their level of formalization and emphasis on different subsets of empirical data. Certainly, each of these theories has its respective merits (see Mackintosh, 1987; Hall, 1991; Siddle, 1991 for critical reviews); but, in our opinion, only SOP is formulated with sufficient quantitative detail to make relatively unambiguous descriptions of a broad spectrum of phenomena and testable predictions.
In an early chapter, Whitlow and Wagner (1984) laid out in detail the potential of SOP on this topic. However, their analysis was more conceptual than quantitative. In contrast, Donegan and Wagner (1987) and Wagner and Vogel (2010) presented a quantitative analysis of SOP, but they focused primarily on the kind of response decrement that might be attributed to associative factors. In this paper, we attempt to complement these efforts by evaluating the quantitative performance of the model on a relatively larger set of phenomena. We also propose possible instantiations of some mechanisms that were left unspecified in previous formulations of SOP.
In the first part, we briefly describe the major principles of SOP emphasizing those more closely related to habituation. We show the theoretical mechanisms by which the habituation of any stimulus can be understood as the result of two types of memorial priming: a transient memorial effect due to recent exposure to the stimulus (Davis, 1970;Whitlow, 1975;Vogel and Wagner, 2005) and a more persisting memorial effect due to the context carrying a relatively stable association with the habituated stimulus (e.g., Jordan et al., 2000). Then, we proceed to demonstrate, by computer simulations, how these mechanisms account for the 10 parameters of habituation accorded by Rankin et al. (2009). In the last part, we discuss the potential of the model to embrace the related phenomenon of sensitization, and we comment on the limitations of our current analysis.
THE SOP MODEL
The SOP model is described in more detail elsewhere (e.g., Wagner, 1981;Mazur and Wagner, 1982;Vogel et al., 2018), so we present only its essentials here. As shown in Figure 1A, the model states that the representation of any stimulus (i.e., "s") comprises a large set of elements that can be in one of three states of activity: inactive (I s ), primary activity (A1 s ), and secondary activity (A2 s ). Upon presentation of the stimulus, a proportion of the inactive elements are promoted to the A1 s state according to the probability p1 s , which might be taken to be a function of the intensity of the stimulus. Once in the A1 s state, the elements decay, first to the A2 s state, with probability pd1 s , and then back to inactivity with probability pd2 s , where they remain unless a new presentation of the stimulus occurs. Thus, the momentary theoretical processing of the stimulus can be characterized by the proportion of elements in each of the three states, that is, by the vector (PI, PA1, PA2) where PI + PA1 + PA2 = 1 (Donegan and Wagner, 1987). It is assumed that the primary response to the stimulus is a function of P A1 and that P A2 might be either behaviorally silent or add to or oppose the primary response.
Let us consider the example depicted in Figure 1B, which exemplifies the momentary distribution of elements across the three states of activity over time after a single introduction of a 1-moment duration stimulus, with p1 = 0.8, pd1 = 0.1, and pd2 = 0.02. At moment t0, that is, before the presentation of the stimulus, all elements are in the I state, so the activity vector is (1, 0, 0). At moment t1, a proportion p1 of the elements move to the A1 state, leaving the activity pattern at (0.2, 0.8, 0), and at moment t2, a proportion pd1 of these elements decays to the A2 state, leaving the pattern at (0.2, 0.72, 0.08). Since the stimulus is only "on" at moment t1, no further elements are promoted to A1 at any other time and thus PA1 declines very rapidly. Since the rate of decay from A2 to I, pd2, is five times smaller than the rate of decay from A1 to A2, pd1, PA2 persists for a longer period. With these standard assumptions, the consequence of the presentation of a brief stimulus is a rapid and transient increase in the proportion of elements in the A1 state, followed by an increase in the proportion of elements in the A2 state and by a very protracted return of elements to inactivity.
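As a concrete illustration, the following minimal Python sketch (our own, not part of the original report) iterates the (PI, PA1, PA2) vector for a single 1-moment stimulus with the parameters of Figure 1B; the update order within a moment (promotion, then decay) is an assumption made here for simplicity.

def sop_step(PI, PA1, PA2, stimulus_on, p1=0.8, pd1=0.1, pd2=0.02):
    """One discrete moment of SOP state dynamics for a single stimulus node."""
    promoted = p1 * PI if stimulus_on else 0.0   # I -> A1 while the stimulus is applied
    decay_A1 = pd1 * PA1                         # A1 -> A2
    decay_A2 = pd2 * PA2                         # A2 -> I
    PA1 += promoted - decay_A1
    PA2 += decay_A1 - decay_A2
    PI = 1.0 - PA1 - PA2                         # the three proportions always sum to 1
    return PI, PA1, PA2

state = (1.0, 0.0, 0.0)                          # all elements inactive at t0
for t in range(1, 251):
    state = sop_step(*state, stimulus_on=(t == 1))
    if t <= 2:                                   # t1 ~ (0.2, 0.8, 0); t2 ~ (0.2, 0.72, 0.08)
        print(t, tuple(round(x, 3) for x in state))

Running this reproduces the activity vectors quoted above and shows PA2 persisting long after PA1 has decayed.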
Notice in Figure 1B that there is a long period after the offset of the stimulus in which a substantial proportion of elements are in the A2 state. Indeed, only at moment 250 have almost all elements decayed back to inactivity, and only then are they again eligible for reactivation in case the stimulus is presented at that time. This is the reason why the A2 state can be regarded as a refractory state of activity. This is illustrated in the left-hand plot of Figure 1C, which depicts the theoretical activity that would be generated if the same stimulus of Figure 1B were repeated once at an interval of 32 moments. There, it is apparent that in the second presentation the stimulus is less effective in provoking A1 activity, which reaches a peak of about half the size of that of the first presentation. Generally speaking, the presentation of a given stimulus may have different effects depending on the momentary distribution of elements in the three states. Since the only consequence of presenting a stimulus is through p1, the stimulus will have greater efficacy in provoking A1 activity the greater the number of elements in the inactive state and the fewer in the refractory state. This feature of SOP is a quantitative rendition of priming theory, which states that "when an event is pre-represented ('primed') in short-term memory (STM) further corresponding stimulation is rendered less effective than it otherwise would be" (Pfautz and Wagner, 1976, p. 107). In the case depicted in the figure, this priming is occasioned by previous presentations of the same stimulus, so it is referred to as "self-generated priming" (Wagner, 1976, 1978). It is clear, thus, that self-generated priming is the primary mechanism by which SOP accounts for within-session decrements or short-term habituation. Of course, this is a transient effect that disappears when sufficient time has elapsed from the last presentation of the stimulus (e.g., from one session to another). This is illustrated in the right-hand plot of Figure 1C, which reveals an almost total recovery of the PA1 s provoked by the first presentation of the same stimulus in a separate session.
Between-session effects or long-term habituation, on the other hand, require a different kind of mechanism that Wagner (1976, 1978) called "retrieval-generated priming." In this case, the supposition was that when a stimulus is repeatedly presented in a context, the context would act as a conditioned stimulus (CS) to develop an association with the habituating stimulus, which plays the role of the unconditioned stimulus (US). As the association grows, the stimulus becomes gradually more expected in the context and thus primed by the context. Figure 2A sketches how SOP conceives this by assuming that both the context and the stimulus activate a respective sequence of representational nodes, and that the context, via its association with the stimulus, acquires the capacity to promote elements directly from I s to A2 s via the variable p2 s . The assumption is that p2 s is a function of the degree of primary activity of the context (A1 Ctxt ) and the strength of the association between the context and the stimulus (i.e., p2 = A1 Ctxt × V Ctxt−s ).
According to the learning rules of SOP, changes in the net association between a CS and a US are the result of excitatory minus inhibitory associations that develop simultaneously, depending on the respective states of activity of the stimuli. The development of excitatory CS-US links, ΔV+, is assumed to be proportional to the momentary product of concurrent A1 CS and A1 US activity multiplied by an excitatory learning rate parameter, L+ (i.e., ΔV+ = L+ × PA1 CS × PA1 US ), whereas changes in the inhibitory CS-US connections, ΔV−, are assumed to be proportional to the momentary product of concurrent A1 CS and A2 US activity, multiplied by an inhibitory learning rate parameter, L− (i.e., ΔV− = L− × PA1 CS × PA2 US ). In the standard procedure to get habituation, there are several repetitions of the habituating stimulus (US) in a distinctive context (CS), which seems to comply with the conditions of SOP for strengthening the association between them.
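In code, the net associative change per moment can be written as below; the specific learning-rate values are illustrative placeholders, since the paper does not list the L+ and L− values used in its simulations.

def delta_V(PA1_cs, PA1_us, PA2_us, L_plus=0.1, L_minus=0.02):
    """Net change in the CS-US association for one moment of SOP learning."""
    dV_excitatory = L_plus * PA1_cs * PA1_us    # concurrent A1(CS) x A1(US)
    dV_inhibitory = L_minus * PA1_cs * PA2_us   # concurrent A1(CS) x A2(US)
    return dV_excitatory - dV_inhibitory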
Although this may sound straightforward, context-stimulus associations are more theoretically challenging than they appear. Vogel et al. (2018) noticed that if the context is viewed intuitively as a long duration CS with a constant value of PA1 Ctxt over the entire duration of the session, then SOP predicts no net association with the habituating stimulus. In this example, the net excitatory association that would be acquired by the context during the period in which the stimulus is in its A1 state of activity will be overcome by the inhibitory associations that would be provoked during the occasions in which the context is in its primary activity and the stimulus in its secondary activity during the inter-trial intervals. In order to solve this, Vogel et al. (2018) suggested that contexts should not be represented as a long uniform stimulus with a constant primary activity. They proposed that presentation of explicit cues, like the habituating stimulus, provokes systematic changes in the subject's receptor orientation so that the processing of the context is transiently disturbed. Given the dynamics of activity of SOP, this interruption allows the context to enjoy more overlap of its A1 processing with the A1 processing of the stimulus, rather than with the later A2 processing of the stimulus. Vogel et al. posited that this is consistent with the idea that the representation of the context is very vulnerable to disruption by explicit cues.
To implement the idea of context disruption, Vogel et al. (2018) adopted the simple strategy of setting the p1 Ctxt value to zero for some period after the presentation of explicit cues. Here, we rationalize this principle in a related but different way. First, we assume that p1 Ctxt equals zero if PA1 s is greater than some threshold. Second, we follow Wagner's (1981) distractor rules by stating that the decay rates from A1 Ctxt to A2 Ctxt and from A2 Ctxt to I Ctxt , respectively, are increased by the presentation of the habituating stimulus. The level of increase in these decay rates is assumed to be a function of the activity of the habituating stimulus; that is, pd1′ Ctxt = pd1 Ctxt + A1 s /c1 and pd2′ Ctxt = pd2 Ctxt + A2 s /c2, where pd1′ Ctxt and pd2′ Ctxt are the effective decay rates, and c1 and c2 are constant parameters of the model. Figure 2B illustrates the effects of these assumptions on the processing of the context by simulating a situation in which the context is processed alone for some time until a 1-moment stimulus is presented. The relevant pattern of activities displayed in the figure indicates that presentation of the habituating stimulus provokes the progressive diminution of PA1 Ctxt , which remains active for a few moments when PA1 s is at its maximum, but eventually gets mostly suppressed when PA2 s predominates. The net result of this is more excitatory learning (which is proportional to PA1 Ctxt × PA1 s ) than inhibitory learning (which is proportional to PA1 Ctxt × PA2 s ). Figure 2C presents the same simulations as those of Figure 1C, but this time we added our assumptions about the processing of the context. As can be appreciated, in session 1 there is a decrease in the amplitude of the PA1 s generated by the second presentation of the stimulus, which is not very different from the pattern that was described in Figure 1C. In session 2, however, the pattern is very different in that now, even in the absence of self-generated priming, the amplitude of PA1 s in the first presentation of the stimulus is considerably diminished. This diminution is caused by the anticipatory PA2 s activity provoked by the context, which has developed an association with the stimulus. This between-sessions decrement is thus explained by the retrieval-generated priming announced by Wagner (1976, 1978).
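The disruption rules can be summarized in a few lines. The threshold and the constants c1 = 2 and c2 = 10 anticipate the simulation settings given in the next section; the function itself is our own paraphrase of the rules rather than code from the original work.

def context_rates(PA1_s, PA2_s, p1_ctxt=0.05, pd1_ctxt=0.1, pd2_ctxt=0.02,
                  threshold=0.07, c1=2.0, c2=10.0):
    """Effective context parameters while the habituating stimulus is being processed."""
    eff_p1 = 0.0 if PA1_s > threshold else p1_ctxt   # context processing interrupted
    eff_pd1 = pd1_ctxt + PA1_s / c1                  # faster decay from A1_Ctxt to A2_Ctxt
    eff_pd2 = pd2_ctxt + PA2_s / c2                  # faster decay from A2_Ctxt to I_Ctxt
    return eff_p1, eff_pd1, eff_pd2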
SIMULATIONS OF THE CHARACTERISTICS OF HABITUATION
In order to show the quantitative strength of these assumptions, in this section we present a series of computer simulations illustrating how the model accounts for each characteristic of habituation. For this, we used the revised description of the parametric features of habituation proposed by Rankin et al. (2009). Due to the diversity of procedures, stimuli, responses, and species underlying the corpus of research that has given rise to these characteristics, we did not attempt to mimic any specific procedure or published data in particular. Rather, we conducted all simulations with a standard procedure with minimal parametric variation from one simulation to another. Thus, in all simulations, the habituating stimulus lasted 1 moment and its activation parameters were: p1 s = 0.8, pd1 s = 0.1, pd2 s = 0.02. In order to simulate high-, medium-, and low-intensity stimuli, we used three different values of p1 s : 0.8, 0.5, and 0.2. The parameters for activation of the context were identical to those of the habituating stimulus except for a lower p1 value. That is, p1 Ctxt = 0.05, pd1 Ctxt = 0.1, and pd2 Ctxt = 0.02. The context was turned on at the first moment of each simulation and stayed on according to its p1, pd1, and pd2 values unless the habituating stimulus was presented (which occurred at moment 60 of the simulation). Specifically, if PA1 s > 0.07 then p1 Ctxt = 0. The presentation of the habituating stimulus also increased the decay parameters of the context to pd1′ Ctxt = 0.1 + PA1 s /2 and pd2′ Ctxt = 0.02 + PA2 s /10.
To simulate the transition from one session to another, all activity was set to zero at the end of the session. Only the associative values of the context were carried on from one session to the next. For the simulations, we used the software Stella ® Architect (Isee systems; Lebanon, NH, United States).
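Putting the pieces together, the sketch below runs the standard procedure described above (context on from the first moment, stimulus onsets from moment 60, activity reset between sessions, only V carried over) and reports the peak PA1 s per trial. It is a simplified re-implementation for illustration only: the learning rates and the exact ordering of operations within a moment are our assumptions, and we use plain Python rather than the Stella Architect software used by the authors.

def run_sessions(n_sessions=2, n_trials=4, isi=32, first_onset=60,
                 L_plus=0.1, L_minus=0.02):
    """Standard habituation procedure: n_trials stimulus presentations per
    session at a fixed ISI, repeated for n_sessions in the same context."""
    p1_s, pd1_s, pd2_s = 0.8, 0.1, 0.02          # habituating stimulus parameters
    p1_c, pd1_c, pd2_c = 0.05, 0.1, 0.02         # context parameters
    V = 0.0                                       # context-stimulus association
    all_peaks = []
    for _ in range(n_sessions):
        A1_s = A2_s = A1_c = A2_c = 0.0           # all activity reset between sessions
        onsets = [first_onset + i * isi for i in range(n_trials)]
        peaks = [0.0] * n_trials
        for t in range(onsets[-1] + isi):
            stim_on = t in onsets
            # context parameters, disrupted by the habituating stimulus
            eff_p1_c = 0.0 if A1_s > 0.07 else p1_c
            eff_pd1_c = pd1_c + A1_s / 2.0
            eff_pd2_c = pd2_c + A2_s / 10.0
            I_s, I_c = 1.0 - A1_s - A2_s, 1.0 - A1_c - A2_c
            p2_s = A1_c * V                       # retrieval-generated priming: I_s -> A2_s
            # one moment of stimulus-node dynamics
            new_A1_s = A1_s + (p1_s * I_s if stim_on else 0.0) - pd1_s * A1_s
            new_A2_s = A2_s + pd1_s * A1_s - pd2_s * A2_s + p2_s * I_s
            # one moment of context-node dynamics
            new_A1_c = A1_c + eff_p1_c * I_c - eff_pd1_c * A1_c
            new_A2_c = A2_c + eff_pd1_c * A1_c - eff_pd2_c * A2_c
            # context-stimulus learning (excitatory minus inhibitory change)
            V += L_plus * A1_c * A1_s - L_minus * A1_c * A2_s
            A1_s, A2_s, A1_c, A2_c = new_A1_s, new_A2_s, new_A1_c, new_A2_c
            for i, onset in enumerate(onsets):
                if onset <= t < onset + isi:      # record the peak within each trial window
                    peaks[i] = max(peaks[i], A1_s)
        all_peaks.append(peaks)
    return all_peaks, V

if __name__ == "__main__":
    peaks, V = run_sessions()
    for i, p in enumerate(peaks, 1):
        print(f"session {i}: peak PA1_s per trial = {[round(x, 3) for x in p]}")

With these illustrative settings the peak PA1 s declines across trials within each session (self-generated priming), and the first trial of session 2 starts below the first trial of session 1 (retrieval-generated priming).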
Simple Within-Session Effects
The simulations described in Figures 1C and 2C attempted to make clear that decrements in responding that occur within a session are mainly explained by self-generated priming. In order to illustrate the generality of this effect, we conducted a series of computer simulations in which a 1-moment duration stimulus was repeated four times at inter-stimulus intervals (ISI) of 2-, 4-, 8-, 16-, or 32-moments in a single session. The results are depicted in Figure 3 in terms of the peak PA1 s activity provoked by each presentation of the stimulus. The general pattern is that in each occasion, the stimulus becomes less effective in provoking its A1 s activity than in the previous occasions. The decrease in most cases approximates an exponential function except for the shortest ISI, in which there is transient facilitation. According to SOP, this facilitation occurs only when the two presentations of the stimulus are sufficiently close in time to produce a summation of PA1 s . Beyond the cases of very short ISI (2 and 4 moments), the model predicts that the longer the interval, the less pronounced is the decrement at the end of the session.
This pattern approximates well to the first empirical feature of habituation listed by Thompson and Spencer (1966) and reviewed by Rankin et al. (2009) as follows: "Repeated application of a stimulus results in a progressive decrease in some parameter of a response to an asymptotic level. This change may include decreases in frequency and/or magnitude of the response. In many cases, the decrement is exponential, but it may also be linear; in addition, a response may show facilitation prior to decrementing because of (or presumably derived from) a simultaneous process of sensitization. " (Characteristic #1, p. 136).
Spontaneous Recovery and Long-Term Habituation
Here, we analyze SOP's account of several related facts of habituation listed by Rankin et al. (2009). One refers to the fact that "if the stimulus is withheld after response decrement, the response recovers at least partially over the observation time ('spontaneous recovery'). " (Characteristic #2, p. 136).
FIGURE 3 | Simulated peak PA1 S activity over 4 presentations of a 1-moment stimulus at inter-stimulus intervals ranging from 2 to 32 moments.
Another is the observation that "Some stimulus repetition protocols may result in properties of the response decrement (e.g., more rapid rehabituation than baseline, smaller initial responses than baseline, smaller mean responses than baseline, less frequent responses than baseline) that last hours, days or weeks. This persistence of aspects of habituation is termed long-term habituation. " (Characteristic #10, p. 137).
According to the model, the self-generated priming effects that occur within a session of habituation tend to disappear with the passage of time. This gives rise to the prediction of "spontaneous" recovery of the response from one session to another. The retrieval-generated priming caused by the context, however, does not depend on temporal factors but on the use of the same context in the two sessions. This gives rise to the prediction of a long-term decrement from one session to the next. Thus, in principle, it seems relatively straightforward to conclude that the model predicts a partial recovery of responding from session to session, which would result from the combination of the natural termination of self-generated priming and the persistence of retrieval-generated priming.
Another characteristic of spontaneous recovery that is consistent with this analysis is that "after multiple series of stimulus repetitions and spontaneous recoveries, the response decrement becomes successively more rapid and/or more pronounced (this phenomenon can be called potentiation of habituation). " (Characteristic #3, p. 136). According to SOP, every repetition of the stimulus will lead to an increase in the association between the context and the cue, so more decrement and less spontaneous recovery are expected over extensive training.
The simulations presented in Figure 4 illustrate how SOP accounts for all the characteristics described above. The simulation involved four presentations of the stimulus at intervals of 8 and 32 moments in each of three identical sessions. The results are clear for the two conditions: there is a partial recovery in the PA1 s from the last trial of one session to the first trial of the next (spontaneous recovery), and there is a diminution in the degree of spontaneous recovery in session 10 relative to session 2 (potentiation of habituation).
The data displayed in the figure also allow for the analysis of a further characteristic (#4), which states that "Other things being equal, more frequent stimulation results in more rapid and/or more pronounced response decrement, and more rapid spontaneous recovery (if the decrement has reached asymptotic levels). " (Rankin et al., 2009, p. 136). This is seen in the figure by comparing the within-and between-session decrements for the two simulated ISIs. That is, there are more within-session decrement and more spontaneous recovery for the 8-moments ISI than for the 32-moments ISI.
Finally, there is a further characteristic that can be embraced by the context-stimulus association. This property is listed as the sixth characteristic and described by Rankin et al. (2009) as: "The effects of repeated stimulation may continue to accumulate even after the response has reached an asymptotic level (which may or may not be zero, or no response). This effect of stimulation beyond asymptotic levels can alter subsequent behavior, for example, by delaying the onset of spontaneous recovery." (p. 137). SOP explains this phenomenon, also known as "below-zero habituation," by appealing to the fact that once the level of PA1 s has reached a low asymptotic value within a session, further training can still increase V Ctxt-S with no major observable effect in that session, but the increase will be apparent in a spontaneous recovery test in another session. Figure 5 illustrates this by displaying the result of a computer simulation in which the habituating stimulus was presented either 4 or 10 times at a 32-moments interval. As can be seen, the level of PA1 s in the fourth trial of the short training condition is almost identical to that in the tenth trial of the extended training condition. Nonetheless, in the tests conducted in session 2, there is more decrement in the extended condition relative to the short-training condition.
Stimulus Properties
There are two characteristics of habituation listed by Rankin et al. (2009) that can be explained by some features of the habituating stimulus. One says that "within a stimulus modality, the less intense the stimulus, the more rapid and/or more pronounced the behavioral response decrement. Very intense stimuli may yield no significant observable response decrement. " (Characteristic #5, p. 137). As mentioned before, the parameter p1 in the model can be assumed to represent the intensity of the stimulus. In terms of the model, p1 influences two relevant processes: performance and learning. That is, the higher the value of p1, the higher is the response and the faster is learning. The result of this is shown in Figure 6, which depicts the results of a simulation in which the habituating stimulus was presented four times with p1 s values of 0.2 and 0.8 and then tested in a subsequent session with a common p1 s of 0.5. As can be appreciated, the p1 s = 0.2 condition exhibited more within-session decrements, but less between-session decrements than the p1 s = 0.8 condition. Of course, the within-session effect is a mere performance effect (i.e., less responding to lower p1 s ) while the between-session effect is a reflection of differential context-stimulus learning (more learning, and therefore, less responding for higher p1 s ).
Another property is stimulus generalization, which is described by Rankin et al. (2009) as follows: "Within the same stimulus modality, the response decrement shows some stimulus specificity. To test for stimulus specificity/stimulus generalization, a second, novel stimulus is presented, and a comparison is made between the changes in the responses to the habituated stimulus and the novel stimulus. " (Characteristic #7, p. 137). This property does not pose a special theoretical difficulty for any theory of stimulus processing. To account for it, it is sufficient to assume some generalization gradients for stimulus variation. For the sake of simplicity, here we just make the simple assumption that the context-stimulus association is generalized from one stimulus to another as a function of their similarity. Figure 7 presents the results of these assumptions showing that after training a given stimulus for four trials at an interval of 32 moments, the peak PA1 s values are proportional to the assumed percent of generalization of V Ctxt-stimulus .
Dishabituation
There are two further characteristics listed by Rankin et al. (2009) that refer to the effects of the presentation of a novel stimulus or distractor in the middle of a sequence of presentations of the habituating stimulus. The first states that a "presentation of a different stimulus results in an increase of the decremented response to the original stimulus. This phenomenon is termed 'dishabituation.'" (Characteristic #8, p. 137). The second says that "upon repeated application of the dishabituating stimulus, the amount of dishabituation produced decreases (this phenomenon can be called habituation of dishabituation)." (Characteristic #9, p. 137).
As described above, SOP provides a set of "distractor rules" by which the presentation of a novel stimulus shortly before a target stimulus causes increments in the decay rates of the target stimulus (pd1 and pd2). These increments are proportional to the degree of primary and secondary activity of the distractor. To exemplify this, Figure 8 depicts the results of a simulation in which the habituating stimulus was presented four times at an interval of 32 moments in each of three identical sessions. In one condition, a distractor was presented between trials 1 and 2 of each session, while in the other condition, there was no distractor in any trial.
FIGURE 5 | Simulated peak PA1 S activity over 4 or 10 presentations of a 1-moment stimulus at a 32-moments inter-stimulus interval. The bar graph depicts the peak PA1 S activity in a single spontaneous recovery test-trial for the two training conditions.
FIGURE 6 | Simulated peak PA1 S activity over 4 presentations of a 1-moment duration stimulus at a 32-moments ISI, under two intensity conditions: high intensity (p1 s = 0.8) and low intensity (p1 s = 0.2). The bar graph depicts the peak PA1 S activity in a single spontaneous recovery test-trial for the two training conditions tested with a common p1 s = 0.5.
FIGURE 7 | Simulated peak PA1 S activity over 4 presentations of a 1-moment stimulus at a 32-moments inter-stimulus interval. The bar graph depicts the peak PA1 S activity in a single spontaneous recovery test-trial for stimuli that received a rate of 1, 0.75, 0.5, 0.25, and 0 of generalized V from the habituated stimulus.
The results indicate that the distractor provoked an increase in PA1 s in the subsequent trials relative to the non-distractor condition. Although this effect was noticeable in all three sessions, it became progressively less robust over sessions (habituation of dishabituation). The latter effect is due, in part, to the fact that as the distractor itself is repeated, it becomes associated with the context and is thus rendered less effective.
FINAL COMMENTARIES
In this paper, we showed how the major features of the historically classic phenomenon of habituation can be modeled by the quantitative instantiation of the principles embedded in an already classic theory of stimulus processing, the SOP model (Wagner, 1981). Although this may sound outdated, one reason for bringing such issues here is that theorizing in this field has been relatively neglected, especially in the domain of quantitative modeling.
In the present exercise, we preferred to keep the analysis as simple as possible for expository reasons. But, of course, we must recognize that our simulations of the 10 characteristics of habituation do not exhaust the empirical wealth of the field. Thus, and before concluding, let us make a brief reference to a couple of issues that can be taken forward in future theoretical analyses.
The first refers to the fact that every stimulus evokes several distinct types of responses. These responses may have very different topographies and be differentially susceptible to habituation. In SOP, this difference can be modeled by variations in the parameters of activation. Wagner and Brandon (1989) suggested, for instance, that emotional responses to an aversive stimulus may be represented by more delayed decay processes (i.e., smaller pd1 s and pd2 s ) than the sensory response to the same stimulus. With this parametric variation, one can expect, in principle, that emotional responses will be associated more rapidly with the context than sensory responses. This might explain, in part, the fact that different measures of habituation can show differential context specificity (e.g., Jordan et al., 2000;Pinto et al., 2014).
Furthermore, according to SOP, the repetition of an aversive stimulus, apart from leading to habituation, can also result in the conditioning of emotional responses that potentiate the response to the habituating stimulus itself. Wagner and Vogel (2010) proposed that emotive sensitization competes with sensory habituation in complex ways, such that habituation might be obscured by potentiating effects, presumably reflecting the contribution of an emotional response controlled by the same context that controls habituation. The co-existence of several types of interacting associations between the context and the stimulus is conceptually consistent with SOP. This analysis must be complemented, however, with sufficient empirical studies that succeed in dissociating the response-potentiating from the response-diminishing effects of stimulus repetition (Ponce et al., 2011;Ponce et al., 2015).
There is one further aspect of SOP that was left untreated in the present analysis of habituation. Wagner (1981) proposed that the response to the stimulus is a function of PA1 and PA2 of the stimulus; that is, R = f(w1 × PA1 s + w2 × PA2 s ), where w1 and w2 are linear weighting factors, and f is a mapping function appropriate to the response measure of interest. As Donegan and Wagner (1987) suggested, this equation provides for at least three options that have differential impact on the course of habituation. One is assuming a very low value of w2, say zero, as we did in this paper. In this case, the response would depend entirely on PA1 s , with PA2 s contributing only indirectly via its priming effect on PA1 s . In our simulations, we have adopted this tactic because it seems to represent better the predominant types of responses that were used for the definition of the 10 characteristics of habituation (e.g., limb flexion in the spinal cat and startle response in rats).
FIGURE 8 | Simulated peak PA1 S activity over 4 presentations of a 1-moment stimulus at a 32-moments inter-stimulus interval over 3 training sessions. In the distractor-training condition, a novel 1-moment stimulus (p1 = 0.8, pd1 = 0.1, and pd2 = 0.02) was presented between trials 1 and 2 of each session, while in the other condition there was no such stimulus in any trial. The arrow indicates the distractor location in each case.
Another possibility is to adopt a sizable and negative value for w2. In this case, the conditioned and unconditioned secondary activity is subtracted from the primary activity to produce the response. Here, both the negative contribution of PA2 s to the response and its priming on PA1 s would act in a synergic way to diminish the primary response to the stimulus. The use of w1 and w2 with opposite signs may be particularly advised when there are empirical reasons to believe that the response to the habituating stimulus shows a secondary response that opposes the primary response as it has been frequently reported with pharmacological stimuli (e.g., Siegel, 2005).
The third theoretical alternative is to assume that w2 is substantial and positive. Here, PA2 s would have two opposite effects on the response: an augmentative effect through summation with PA1 s and a diminutive effect through priming. In this more complex scenario, it would be expected to observe less behavioral habituation than in the former cases.
Although it may be difficult to assess this possibility with the standard habituation procedures, it is consistent with reports of enhanced performance in some perceptual tasks when the target stimulus is preceded by the same or an associated stimulus (e.g., Posner and Snyder, 1975;Kristjansson and Campana, 2010;Henson et al., 2014).
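The three weighting options can be expressed compactly as below; the mapping function f is a placeholder (here a simple rectification), since its form depends on the response measure of interest, and this snippet is our illustration rather than code from the original simulations.

def response(PA1, PA2, w1=1.0, w2=0.0, f=lambda x: max(x, 0.0)):
    """Response rule R = f(w1*PA1 + w2*PA2).
    w2 = 0 reproduces the choice used in the simulations above; a negative w2
    lets secondary activity oppose the response, and a positive w2 lets it
    summate with the primary response."""
    return f(w1 * PA1 + w2 * PA2)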
It may be seen that, despite its complexity, the SOP model is quite well articulated and, as such, it seems to be uniquely equipped to encourage further theoretical and empirical work beyond the 10 features of habituation and for a range of very distinct stimulus-response systems. It should be said also that the explanatory scope of the model is not restricted to habituation. Its usefulness has been demonstrated in a variety of phenomena, mainly in the domain of associative learning, such as occasion setting (Wagner and Brandon, 2001; Vogel et al., 2017), timing (Vogel et al., 2003), divergence of response measures (Wagner and Brandon, 1989), trial spacing (Sunsay and Bouton, 2008), cue competition (Mazur and Wagner, 1982; Vogel et al., 2015), causal learning (Dickinson and Burke, 1996; Aitken and Dickinson, 2005), mediated conditioning (Dwyer et al., 1998; Pearce and Bouton, 2001), latent inhibition (Honey and Hall, 1989), and object recognition (Honey and Good, 2000; Robinson and Bonardi, 2015).
In concluding, let us make a personal statement. This paper was prepared in response to the call for papers to be published in a special issue of Frontiers in Psychology on "Research in emotion and learning: Contributions from Latin America." The authors of this article work in Talca, Chile, and we were all tremendously influenced by Allan R. Wagner. His influence was not just intellectual but also took the form of concrete contributions to the setting up of our laboratory for the study of learning in Chile. Allan had accepted to write this paper in collaboration with us. He agreed with the general approach of the paper and with the novel instantiation for context learning, but he passed away before any of the work was completed.
AUTHOR CONTRIBUTIONS
EV and YU-B contributed to the conception of the study; FP, YU-B, and SB conducted the simulations and organized the database; and EV wrote the first draft of the manuscript. All authors contributed to manuscript revision, read and approved the submitted version.
FUNDING
This work was supported by grant from Fondecyt N° 1160601 to EV and by PIA Ciencia Cognitiva, Centro de Investigación en Ciencias Cognitivas, Facultad de Psicología, Universidad de Talca.
Model fluid in a porous medium: results for a Bethe lattice
We consider a lattice gas with quenched impurities, or 'quenched-annealed binary mixture', on the Bethe lattice. The quenched part represents a porous matrix in which the (annealed) lattice gas resides. This model features the three main factors of fluids in random porous media: wetting, randomness and confinement. The recursive character of the Bethe lattice enables an exact treatment, whose key ingredient is an integral equation yielding the one-particle effective field distribution. Our analysis shows that this distribution consists of two essentially different parts. The first one is a continuous spectrum and corresponds to the macroscopic volume accessible to the fluid; the second is discrete and comes from finite closed cavities in the porous medium. Those closed cavities are in equilibrium with the bulk fluid within the grand canonical ensemble we use, but are inaccessible in real experimental situations. Fortunately, we are able to isolate their contributions. Separation of the discrete spectrum also facilitates the numerical solution of the main equation. The numerical calculations show that the continuous spectrum becomes increasingly rough as the temperature decreases, and this limits the accuracy of the solution at low temperatures.
I. INTRODUCTION
When fluids are adsorbed in porous materials they behave very differently from what we know in the bulk. This happens in both high-porosity materials, such as silica aerogels, and low-porosity materials, such as Vycor glass. Even an aerogel that occupies a few percent of the total volume significantly deforms and reduces the gas-liquid binodal of the fluid. At least three factors affect phase equilibrium of fluids in porous media: wetting, randomness, and confinement by the matrix. A number of models exist that emphasize one or two of those factors, and they greatly facilitate understanding of possible behaviors of the fluid in the porous media (see review in Ref. [1]). But theoretical models that take into account all three factors appear to be extremely hard to deal with. All existing microscopic theories are inconclusive even concerning the qualitative behavior of the gas-liquid binodal in the porous medium. Conventional approximation schemes yield very different results depending on the model and on the level of approximation [2,3]. Monte Carlo simulations suffer from long relaxation times and are performed in a very limited simulation box [1,4,5]. This makes any exactly treatable model especially desirable. We shall consider such a model, which features all the three factors. It derives from the well known lattice gas model of a fluid. We shall consider here only the simplest representation of a random porous medium (quenched impurities of equal size). The model is exactly solvable on the Bethe lattice, in the sense that we can derive a closed integral equation for the local field distribution. Related approaches, leading to similar equations, were previously known in the context of spin glasses [6,7] and the Random Field Ising model [8,9]. The resulting integral equation has an interesting structure, which means that although numerical solution is relatively straightforward at high temperature, as temperature is lowered the field distribution (and hence also the numerics) becomes more and more involved. This problem is also worse close to the percolation threshold of the porous medium.
The Bethe lattice is defined on a large uniformly branching tree of which the Bethe lattice is the part far from any perimeter site. (This is distinct from a Cayley tree which includes all parts.) The Bethe lattice should not be confused with the Bethe approximation, which usually denotes a set of mean-field-like self-consistency equations for order parameters. The Bethe approximation is closely related to the cluster variation method or cluster approximation which derive these equations from truncated cluster expansion of either the free energy or entropy functional employing several variational parameters. For many models these approximations become exact on the Bethe lattice, but not in the case studied here (except for special choices of parameters). This is a part of the motivation for the exact analysis presented below. Another important goal is to rectify a shortcoming of the grand canonical ensemble when dealing with fluids in porous media. This ensemble allows fluid to equilibrate throughout the pore space, including closed pores that are inaccessible to fluid in reality. For the Bethe lattice we are able to isolate and remove the effects of these closed pores.
We shall formulate the model and present the basic equations in Section II. The analysis of Section III will bring forward the notion of finite clusters and their importance for a correct description of the fluid. Details of the numerical algorithm and computer-aided results will be presented in Section IV. Section V contains our conclusions.
II. THE MODEL
In lattice models of a fluid, the particles are allowed to occupy only those spatial positions which belong to sites of a chosen lattice. The configurational integral of a simple fluid is thereby replaced by the partition function Z = Tr exp(−βH), β = 1/(k B T), where n i , which equals 0 or 1, is the number of particles at site i (i = 1 · · · V, where V is the number of lattice sites [10]). Tr means a summation over all occupation patterns. The total number N of particles is allowed to fluctuate; µ is a chemical potential, which should be determined from the relation Since particles cannot approach closer than the lattice spacing allows, lattice models automatically preserve one essential feature of the molecular interaction: non-overlapping of particles. The lattice fluid with nearest-neighbor attraction is known to demonstrate the gas-liquid transition only. Nevertheless, the lattice gas with interacting further neighbors possesses a realistic (which means argon-like) phase diagram with all the transitions between the gaseous, liquid, and solid phases being present [11]. In this paper we deal only with fluid phases and restrict ourselves to the nearest-neighbor interaction.

The fluid adsorbed in the porous solid can be thought of as residing on a lattice of which a fraction of sites are excluded or pre-occupied by particles of another sort. Although in practice these blocked sites, representing the solid, must form a connected network (if the solid is to remain static), we ignore this here and allow sites to be blocked at random. Thus, we have to consider the lattice gas with quenched impurities, whose Hamiltonian is given by [2] where ij means summation over all nearest-neighbor sites, and the quenched variables x i describe the presence (x i = 1) or absence (x i = 0) of a solid particle at site i. Each site of the lattice can be either empty (x i = 0, n i = 0), occupied by a fluid particle (x i = 0, n i = 1), or filled with a particle belonging to the porous solid and therefore inaccessible to the fluid (x i = 1, n i = 0). The first term of Eq. (3) describes the nearest-neighbor attraction between the fluid particles (I > 0); the second one corresponds to the fluid-solid attraction (K > 0) or repulsion (K < 0) on the nearest-neighbor sites. The hard-core repulsion is taken into account by the mutual exclusiveness of particles:

It is quite customary to establish the connection between lattice gases and the Ising model. Indeed, a simple substitution S i = 2n i − 1 transforms (1) into the Ising Hamiltonian. In the Ising model language S i = −1 means an empty site, and S i = 1 corresponds to a site occupied by a fluid particle. The current model (3) is equivalent to the diluted Ising model with random surface field ∆: Here y i = 1 − x i describes the accessibility of site i to the fluid particles; the constant term in (4) embraces several pieces that depend only on quenched variables and therefore do not contribute to the thermodynamics; z is the coordination number of the lattice; the sum on j in (5) spans the nearest-neighbor sites of site i. When deriving (4) we used the equality n i = n i y i . ∆ i is a random field correlated with surface sites: ∆ i ≠ 0 at the sites that are accessible to the fluid (y i = 1) and are in contact with the solid (at least one of the neighbor sites has x j ≠ 0). When H̃ = 0, Eq. (4) is a Hamiltonian of the diluted Ising ferromagnet, and we shall frequently refer to this limiting case throughout the paper.
The grand thermodynamic potential of the model is where the distribution function ρ({x i }) is a quenched one that describes the distribution of solid particles in space.
We shall study this model on the Bethe lattice, which permits us to calculate an exact thermodynamic potential. It is known that many problems on tree-like structures, such as that depicted in Fig. 1, can be solved iteratively. The term 'Bethe lattice' refers to an infinitely deep part of such a tree. Each site on the nth level has k = z − 1 neighbors at the (n+1)th level and one neighbor at the (n−1)th level. The partition function of the model can be calculated recursively, from upper levels to the bottom. For example, let us consider the cluster of sites framed by the dashed box in Fig. 1, where k = 3. The following relation (which holds for all k) shows how tracing out spins at the upper level forms an 'effective field' on the lower level: We have thus obtained an effective fieldh 0 which is a sum of the original field h 0 and k contributions from the upper branches. We can proceed this way recursively downward, giving at each level. In the non-random case, when h i = h and J ij = J, the effective field deep inside the tree satisfies the stable point relation that follows from Eq. (13): For our model (4), we can expect that a properly defined effective field distribution tends to a stable limiting form at deep levels of the tree. Let us consider the probability Q i (y, h)dh that y i = y andh i = h: where δ(h) stands for the Dirac distribution, and δ ab is a Kronecker's delta (δ ab equals 1 if a = b, and equals zero otherwise). Q i (0, h) is the effective field distribution at a matrix site, and Q i (1, h) is the field distribution for a site accessible to fluid. It is easy to see from (13) and (5) that when y i = 0, J ij = 0 and h i = 0, and the effective fieldh i equals zero, thus Q i (0, h) does not depend on i and has a simple form where we define so that c 0 is a fraction of the sites occupied by the matrix. When we move recursively from the surface of the tree deeper into its interior, Q i (1, h) changes, but should tend to a fixed point at infinitely deep levels of the tree: It follows from (13) that the resulting distribution obeys In writing this equation, and throughout the following, we now specialize to the case where there is no correlation of the {y i } variables (which specify the sites occupied by the solid matrix) beyond nearest neighbor correlations, which are permitted. In this case, taking into account that k upper branches do not interact until they meet the chosen site, the fields {h j } are uncorrelated. (In more general cases, the y j at different branches are correlated; the joint distribution Q(y 1 , h 1 , · · · , y k , h k ) cannot be decoupled as a product of Q(y j , h j ); and (18) is invalid.) Using the integral representation of the delta function we can factorize the delta function in (18) and get Configurational averaging with respect to the quenched variables · · · {x} in (20) can now be performed explicitly, and it yields where we introduced the notations Q(h) = Q(1, h)/c 1 ; u(h) = U (J, h); p a = w a1 /c 1 ; and is the probability to find y i = a and y j = b at a randomly chosen pair of nearest neighbor sites i and j. For example, w 11 = y i y j {x} is a probability that both sites in the pair are accessible to fluid. For each site accessible to the fluid, p 1 specifies the probability that a given nearest neighbor site is accessible as well; p 0 = 1 − p 1 is, correspondingly, the probability that this neighbor site is occupied by the solid. Note that Q(h) specifies the distribution of the effective field created by k 'upper branches'. 
The complete one-site field deep inside the tree consists of contributions coming from all z nearest neighbors, and its distribution takes into account z equivalent branches. This distribution determines the one-site average values. For example, in the magnetic interpretation of the model (3), this total one-site field permits calculation of the average magnetization (per magnetic site). Calculation of other thermodynamic properties is less straightforward. The thermodynamic potential of the tree is a sum of 'site' and 'link' contributions [6,7]: where H i and H ij are Hamiltonians of one site and of a pair of sites, respectively, and the traces act only on the corresponding local degrees of freedom. The configurationally averaged thermodynamic potential of the Bethe lattice must take into account only deep levels of the tree. We do this by assigning the stable point field distribution to all the sites when doing the configurational averaging. The Bethe lattice thermodynamic potential F is, therefore, given by where H i and H ij depend on both fluid (S) and matrix (y or, equivalently, x) variables:
III. THE SOLUTION
The result of the previous section is that thermodynamics of the model is determined by the one-site field distribution Q(h), and this distribution satisfies integral equation (21). Equations of this type have been known previously in the context of spin glasses [6,7] and the Random Field Ising model [8,9]. We did not encounter this kind of equation for the current model in the literature, even for the limiting case of diluted ferromagnet (H = 0). In this section we investigate the ways of solving this equation.
For the diluted Ising ferromagnet in zero external field, when H = H̃ = 0, Q(h) = δ(h) is always a solution of Eq. (21), and it yields zero magnetization (⟨S i ⟩ = 0) or, in the fluid model language, an occupancy number equal to one half (⟨n i ⟩ = 1/2) at all sites. This trivial solution is the one known from all calculations leading to an analytical result, and it is the only solution at temperatures above the critical one. What are the nontrivial solutions?
When there is no solid medium (c 1 = 1) or when the solid sites are arranged in a nonporous block or slab with a smooth surface (p 1 = 1), the solution is Q(h) = δ(h − h̃), and h̃ is given by Eq. (14). This equation has a nontrivial solution (h̃ ≠ 0) at low temperatures, T < T c . In this case the results of the cluster (or Bethe) approximation become exact.
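For this uniform case the stable point can be found by direct iteration. The sketch below assumes the standard Ising Bethe-lattice branch function U(J, h) = β⁻¹ arctanh[tanh(βJ) tanh(βh)]; the explicit form of Eq. (14) is not reproduced in this excerpt, so this should be read as an illustration rather than a transcription of the authors' formula.

import math

def U(J, h, beta):
    """Single-branch contribution to the effective field (standard Ising Bethe form)."""
    return math.atanh(math.tanh(beta * J) * math.tanh(beta * h)) / beta

def stable_field(h, J, beta, k, h_start=0.5, n_iter=500):
    """Iterate h_eff = h + k * U(J, h_eff) to its stable point."""
    h_eff = h_start
    for _ in range(n_iter):
        h_eff = h + k * U(J, h_eff, beta)
    return h_eff

# Below T_c the zero-field stable point is nonzero (spontaneous order),
# e.g., for k = 2, J = 1 and beta = 1, where k * tanh(beta * J) > 1.
print(stable_field(h=0.0, J=1.0, beta=1.0, k=2))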
When quenched chaos is present (p 1 ≠ 1) and H = H̃ = 0, the trivial solution becomes unstable at temperatures below the critical temperature given by Eq. (29) (this is an accurate result), signifying the appearance of spontaneous magnetic order (⟨S i ⟩ ≠ 0). Importantly, though, the cluster approximation is not accurate when both magnetic ordering and quenched disorder are present. In the general case (H̃ ≠ 0, H ≠ 0) we do not have any simple formula for the critical temperature or the field distribution. In what follows, we study the structure of this distribution more closely.

First we shall consider the simple limiting case of a diluted magnet without surface field (H̃ = 0) [12]. In this case matrix sites break the links between spins, but do not introduce any field that breaks the spin-flip symmetry. It follows from (29) that when p 1 < 1/k, T c does not exist, and the spontaneous magnetic order does not appear. (The reason is that in this case the system is not percolated: there is no infinite cluster of linked spins, only finite clusters, and in finite systems the spin-flip symmetry cannot be spontaneously broken.) Let us realize that in this system for any matrix (at any value of p 1 , except 0 or 1) there is a finite fraction [13] of spins whose links are all broken, because all their neighboring sites are occupied by the matrix. The effective field at those sites equals just the external field H. Thus it is likely that the solution Q(h) contains a term in δ(h − H). In the case of zero external field (H = 0) one easily finds that a solution of the form Q(h) = A 0 δ(h) + q(h) does satisfy Eq. (21), and one gets where we expect q(h) to be a nonsingular function, because a delta function positioned in any other place (A x δ(h − a), a ≠ 0) or a set of such delta functions does not lead to self-consistent equations of the required form.

Let us show that A 0 has a simple physical meaning, namely, that it equals the probability that all k upper branches are finite. Consider first the probability P f that a branch is finite. It consists of two possibilities: either the first link is broken (with probability p 0 ), or it is unbroken, but all farther branches connected to it are finite (with probability p 1 (P f ) k ). This yields P f = p 0 + p 1 (P f ) k . Then, the probability that k given branches are finite equals A 0 = (P f ) k and satisfies Eq. (30). Obviously, Eq. (30) always has the solution A 0 = 1, which leads to the trivial solution for Q(h) because of the normalization condition ∫Q(h)dh = 1. A 0 = 1 is the only solution when p 1 < 1/k, as discussed above. At the percolation point (p 1 = 1/k) it is a two-fold solution, which signifies a bifurcation. When p 1 > 1/k, a solution for A 0 in the interval (0,1) always exists.
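The fixed-point calculation for P f and A 0 is straightforward to iterate; the sketch below is our own illustration of the relation P_f = p_0 + p_1 P_f^k stated above.

def finite_branch_probability(p1, k, n_iter=100000, tol=1e-12):
    """Smallest non-negative root of P_f = p0 + p1 * P_f**k, found by iterating
    upward from zero (for p1 < 1/k this root is 1, as discussed in the text)."""
    p0 = 1.0 - p1
    Pf = 0.0
    for _ in range(n_iter):
        new_Pf = p0 + p1 * Pf ** k
        if abs(new_Pf - Pf) < tol:
            return new_Pf
        Pf = new_Pf
    return Pf

def discrete_weight_A0(p1, k):
    """A0 = P_f**k: probability that all k upper branches are finite."""
    return finite_branch_probability(p1, k) ** k

print(discrete_weight_A0(p1=0.9, k=2))   # a dilute matrix, well above p1 = 1/k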
When H ≠ 0, the previous ansatz that Q(h) = Aδ(h − H) + q(h) does not work: δ(h − H) placed into the rhs of Eq. (21) generates, under iteration of the equation, a set of k different delta functions, as does any delta function positioned in some other place. This seemingly rules out the idea of finding the discrete levels in this case, but the fully solvable case z = 2 (the linear chain: see Appendix A) resurrects the hope. In that case the solution contains an infinite series of delta functions. Therefore, in the general case we shall seek the solution in the form of a sum of an infinite set of delta functions and a nonsingular function: We now show that such a solution can really be obtained. The point is that, similarly to the zero-field case (H = 0), the equation for the discrete spectrum separates from the equation for the non-singular part q(h). Placing (32) into Eq. (21), we get the equation for the discrete spectrum. The number of delta functions is infinite, but their cumulative weight remains finite, as expressed by Eq. (34). The latter relation can be obtained by integrating Eq. (33) with respect to h and noting that the total weight of the delta functions satisfies Eq. (30). Eq. (34) means that the obtained discrete spectrum is just the single h = 0 level that we had in the case H = 0, split by the external field. Each (a l , h l ) pair corresponds to a certain finite tree 'growing' up from the given site. For example, (a 1 , h 1 ) = (p 0 k , H) corresponds to the 'zero' tree, when all k links to the upper sites are broken; naturally h 1 = H, and a 1 equals the probability that all the neighboring k sites at the upper level are inaccessible.

Now let us turn back to the most general case, when both the surface and the external fields are present (H̃ ≠ 0, H ≠ 0). Inserting the ansatz (32) into (21), one can obtain explicit expressions for the discrete s(h) and continuous q(h) parts of the spectrum: Note that Eq. (38) for the discrete spectrum has a closed form and does not depend on q(h), whereas the equation for q(h) does include s(h). This is because the discrete spectrum comes from finite trees, whereas the continuous spectrum is an attribute of the infinite volume. Finite branches know nothing about the infinite volume, therefore s(h) does not depend on q(h). At the same time, at any given site finite branches may connect onto the infinite tree; therefore the equation for q(h) does depend on the discrete spectrum s(h).
Let us consider the structure of Eq. (37). It can be rewritten in the form Let us recall that by construction of equation (21) for Q(h) if we insert some field distribution as Q(h) in the rhs , we obtain in the lhs the field distribution one level deeper the tree. The same property holds for Eq. (40). Then Eq. (40) permits a clear interpretation: each term of l mixes into the next-level distribution the possibility that l branches are infinite, with the other k − l finite; l runs from 1 to k, the l = 0 term was rightly eliminated by the last term in Eq. (37), because this term would give rise to the possibility of a finite tree and would give a discrete spectrum contribution. This interpretation makes s(ξ) andq(ξ) responsible for finite and infinite branches respectively. The surface fieldH does not enter Eq. (37) explicitly; its influence on q(h) is mediated by the discrete spectrum s(h). In a way, s(h) (or, equivalently, s(ξ) [14]) encapsulates the surface influence. The only explicit reminder of a porous medium in the equation for the continuous spectrum is the p 1 multiplier in Eq. (39), indicating that links between sites are not always present.
The complete effective field acting on an accessible site Q z (1, h) (defined by (23)) likewise consists of a discrete spectrum s z (h), and a continuous one q z (h): The total weight of the discrete part equals the probability that all z branches connected to this site are finite. When this happens, the site belongs to a finite closed volume. When considering the fluid in the grand canonical ensemble (as we have done so far) all sites are in equilibrium with an implicit bulk fluid, whose temperature and chemical potential are parameters that describe this equilibrium. But in real experiments fluid particles cannot access any disconnected pore volumes (described by s z (h)), and those parts of the system do not respond to changes of the chemical potential. Were we considering a magnetic system, spins in those finite separated clusters would be able to react to changes of external field H, and thermodynamic potential (27) would be valid. In contrast, in order to describe the experimental situation for a fluid in porous medium, we have to exclude the contribution of finite volumes to the thermodynamic potential. Fortunately, we are now able to separate this contribution because finite branches always generate a discrete spectrum, in contrast to the infinite cluster, which yields the continuous distribution.
Inserting (35) and (41) into the expression for the thermodynamic potential (27), we can identify three distinct contributions of finite volumes The first term above corresponds to an accessible site whose branches are all finite, the second one describes a linked pair of accessible sites with outer branches finite, and in the third term one site of the pair is accessible to the fluid, but all its branches are finite. Note that the division into these three terms does not reflect the presence of different types of site but reflects the subdivision into site and link contributions in (26). Dropping these 'finite volume' contributions (and constant terms corresponding to the sites occupied by the matrix) we obtain the thermodynamic potential of the fluid in the accessible volume as: The procedure of 'contribution classification' is especially simple when we consider the expression for the density (per free volume) n = n i H {x} /c 1 . The grand canonical ensemble (GCE) predicts whereas when we take into account that closed finite pores are inaccessible to the fluid the microscopic fluid density at those sites has to be set to zero, and we obtain the corrected, 'infinite cluster' (IC) expression where x is a fraction of free sites belonging to the finite cavities, defined by Eq. (44), and the difference between the GCE and IC densities is simply the number of fluid particles in finite cavities per number of the free sites.
IV. NUMERICAL CALCULATIONS
In order to solve the main equation (21) we have to calculate the discrete spectrum (36) and use it when solving for the continuous part of the field distribution via (37). The procedure of the discrete spectrum calculation is iterative. We place the zeroth approximation s(h) = p 0 k δ(h − H − kH̃) into the rhs of Eq. (36) and obtain in the lhs a sum of k + 1 delta functions, one of which coincides with the initial one and has the correct weight p 0 k . Placing the new approximation into the rhs of Eq. (36) recursively, we can obtain as many terms as we need. The unpleasant feature is that the number of levels in the spectrum grows exponentially during the iterations, and many levels have negligible weight. Fortunately, one can drop the low-weight levels at each step, because they produce only lower-weight levels during the following iterations.
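The bookkeeping behind this iteration can be sketched as follows. Levels are stored as (weight, position) pairs; apply_eq36 is a hypothetical placeholder for the actual update rule of Eq. (36), which is not reproduced in this excerpt, and only the pruning of low-weight levels is spelled out.

def prune(levels, weight_cutoff=1e-8, position_tol=1e-10):
    """Merge near-degenerate delta-function levels and drop negligible weights."""
    merged = []
    for w, h in sorted(levels, key=lambda lvl: lvl[1]):
        if merged and abs(h - merged[-1][1]) < position_tol:
            merged[-1] = (merged[-1][0] + w, merged[-1][1])
        else:
            merged.append((w, h))
    return [(w, h) for w, h in merged if w > weight_cutoff]

def discrete_spectrum(apply_eq36, p0, k, H, H_tilde, n_iter=20):
    """Iterate a user-supplied update rule for Eq. (36), starting from the
    zeroth approximation s(h) = p0**k * delta(h - H - k*H_tilde)."""
    levels = [(p0 ** k, H + k * H_tilde)]
    for _ in range(n_iter):
        levels = prune(apply_eq36(levels))
    return levels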
Let us note that the weights of the delta functions depend only on the parameter p 1 , which is determined by the structure of the solid matrix. The positions of the delta functions depend also on other parameters (T, H, H̃, and J). As we discussed in the previous section, the discrete spectrum corresponds to finite trees growing from a site. Each level in the spectrum corresponds to a tree of a certain size and form, and its weight is the probability of finding such a tree in the system. Fig. 2 shows an example of the discrete spectrum for the case of a relatively sparse solid that blocks only a small fraction of sites. In that case the 20 'heaviest' levels carry more than 99.96% of the total weight. As p 1 approaches the percolation threshold, the convergence of the series becomes progressively poorer. For example, the 200 heaviest levels in the case k = 2, p 1 = 0.6, contain only 87% of the discrete spectrum's weight. This naturally agrees with the fact that a lot of large finite trees appear near the percolation point, and the number of forms of a tree quickly (combinatorially) grows with its size. One other interesting feature of the discrete spectrum is its bandlike structure, with a l densely grouped around certain values, and h l subdividing into several branches as H increases. Once s(h) is calculated to the required accuracy, one can bound the range of fields over which the continuous part has to be computed. These bounds are far from exact at high temperatures, but this does not make a significant impact on the efficiency of the calculations [15]. The numerical solution of Eq. (37) is an iterative process. We start from some initial distribution (this may be a uniform distribution) and insert it into the rhs of Eq. (37). The resulting lhs is a new approximation to the solution. Each iteration produces the field distribution q(h) one level deeper in the tree. Therefore such iterations must converge, provided a stable fixed-point distribution exists.

The numerical results we report here are limited to coordination number 3 (k = 2), and a completely random matrix, when there is no correlation between quenched variables even at nearest neighbor sites: hence w ab = c a c b and p a = c a . Since we are studying the behavior of a fluid in a porous medium, we can set aside very dense matrices with p 1 < 1/k; these have no infinite connected volume accessible to the fluid, so that q(h) = 0, and the thermodynamic potential (45) equals zero.
First we shall consider the simpler case of a diluted Ising ferromagnet (H̃ = 0), which corresponds to fluid in the porous matrix with a moderate fluid-matrix attraction (K = I/2). In this case the exact critical temperature is known from Eq. (29); when H = 0 and T > T c , Q(h) = δ(h) and m = 0 (n = 1/2), while at lower temperatures there are two symmetric non-trivial solutions. The temperature dependence of q + (h) is shown in Fig. 3. One can see that the q + (h) line repeats itself, but not in a trivial fashion. The continuous field distribution acquires even more structure for denser matrices (Fig. 4). At low temperatures q + (h) becomes a set of increasingly sharp peaks, and this greatly increases the errors introduced by the discretization in the numerical algorithm. Therefore this algorithm does not work at very low temperatures. Nevertheless, using extremely fine discretization (we used M up to 2^17 = 131072), we covered an interesting range of temperatures. With the algorithm just described we were able to reach T /T c = 0.4. In the magnetic interpretation of the model (3) the solutions Q + (h) and Q − (h) correspond to exactly opposite magnetizations (m + = −m − ), which reflects the fact that the direction of the spontaneous (zero-field) magnetization is not predetermined. In the fluid interpretation these two solutions form the two branches of the binodal (GCE: n + = (m + + 1)/2, n − = 1 − n + ). Fig. 5 shows the resulting GCE binodals at different densities of the matrix. According to Eq. (48), the 'infinite cluster' binodals for this particular case (H̃ = 0) have the same size and shape as the GCE binodals, they are only shifted to lower densities by ∆n = x/2; numerically, ∆n = 6.8 · 10 −4 , 7.8 · 10 −3 , 0.039, and 0.15 for c 0 = 0.1, 0.2, 0.3, and 0.4, respectively. In the general case (H̃ ≠ 0), the density shift between the GCE and IC binodals depends on temperature.
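The quoted shifts can be checked directly. The short sketch below assumes the expression x = (c0/c1)^3 for the fraction of free sites in finite closed cavities, which is quoted further down in the text for this coordination-3, completely random matrix.

```python
# Check of the quoted GCE -> IC binodal shifts dn = x/2 for the diluted
# ferromagnet case; x = (c0/c1)^3 is the fraction of free sites in finite
# closed cavities quoted later in the text for this coordination-3 lattice.
for c0 in (0.1, 0.2, 0.3, 0.4):
    c1 = 1.0 - c0
    x = (c0 / c1) ** 3
    print(f"c0 = {c0:.1f}:  x = {x:.2e},  dn = x / 2 = {x / 2:.2e}")
```

Running this reproduces the values listed above (6.9e-4, 7.8e-3, 0.039, and 0.15).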
The binodal density (or zero-field magnetization, in the Ising language) at zero temperature can be found from the fact that in this case all the spins in the infinite cluster are aligned, whereas finite clusters do not contribute to the spontaneous magnetization, and the magnetization (per magnetic site) equals plus or minus the fraction of magnetic sites belonging to the infinite cluster: m + (T = 0) = 1 − x. This gives the resulting half-width of the binodal, which is depicted in the inset of Fig. 5 as a function of matrix density c 0 . One can see that the cluster approximation produces binodals that are flatter at the top and have an incorrect width. For a lattice with only sparsely distributed solid sites, the cluster approximation results are quite accurate. Now, let us consider instead the situation where there is no matrix-fluid attraction (hard core matrix): K = 0, H̃ = −J. Fig. 6 shows how the field distribution at constant chemical potential (fixed H) now changes with temperature. In the case depicted there is a competition between the surface field H̃ = −J < 0 and the bulk field H = 0.21J > 0. At high temperatures the effective field prefers the sign of the bulk field and is mainly positive. At T /T c < 0.82, an alternative solution for q(h), which favors negative fields, corresponds to the global minimum of the thermodynamic potential and becomes preferable. At T /T c = 0.82 the two solutions yield the same value of the thermodynamic potential at different fluid densities, and correspond to the two branches of the binodal. It should be noted that T c is defined by Eq. (29) and is not exactly equal to (but is close to) the critical temperature for this matrix. Fig. 7 shows the temperature dependence of the fluid density at different values of H. The jumps correspond to coexisting densities at the binodal. One can see that in this case the binodal is not symmetric. The results depicted in Fig. 7 correspond to the 'grand canonical fluid' described by Eqs. (27), (46). The fraction of the free sites that belong to finite closed cavities (x = (c 0 /c 1 ) 3 ) is as small as 0.14% for this relatively sparse matrix, therefore the binodal of the 'infinite cluster fluid', which excludes their contribution, is indistinguishable on the plot from the binodal shown in Fig. 7.
Let us also note that due to a special symmetry of the model (4) [19], the binodal of the symmetrical case H̃ = J (K = 2J, attractive matrix) coincides with that in Fig. 7 flipped with respect to the n = 1/2 line. Next, we would like to consider denser matrices, and here the poor convergence of s(h) leads to a problem. We did not have this problem in the case of the dilute magnet (H̃ = 0), because its entire binodal, in view of the spin-flip symmetry of that special case, corresponded to H = 0, when the discrete spectrum consisted of only one term: s(h) = A 0 δ(h), which could be exactly calculated for a matrix of any density (Eq. (30)). For the hard core matrix (H̃ = −J) this is no longer the case. For c 0 = 0.3 the 48 leading delta functions of s(h) contain 95.5% of its weight, and 804 terms contain 98.6%. If we simply ignore the residual weight and solve Eq. (37) with an incomplete s(h), the resulting q(h) strongly depends on the number of terms taken into account in s(h) (see Fig. 8).
To overcome the problem, we use the following trick. First we calculate the leading 2l terms of s(h). They correspond to 2l relatively small finite trees, because the larger trees have smaller weight. Then we determine the l lightest terms among them and add ∆ = (A 0 − ∑_{i=1}^{2l} a i )/l to their weights. The resulting discrete distribution has the correct total weight. This way we distribute the missing weight of large trees among the largest of those taken into account. From the physical point of view this means that we replace the 'very large' branches by the 'large' ones and in this way forbid 'very wide' pores. The latter implies a certain correlation in the matrix and is always an approximation, since when deriving equation (37) we permitted only nearest neighbor matrix correlations. Anyway, the results must become accurate in the limit l → ∞, and we find that the method converges already at small l (Fig. 9). A remarkable new feature of q(h) in Fig. 9 is the presence of steps, accompanied by sharp peaks, especially visible at h − H = −1.85 or −1.75. It is possible that these peaks are not accurate (see Appendix B); they could be numerical artefacts arising in places where q(h) in reality has only a step [20]. Note anyway that these peaks are not delta functions: their heights remain almost unchanged when we increase the resolution from M = 2^14 to M = 2^15, whereas if they were delta peaks, their heights would then double [21].
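The weight-redistribution trick described at the beginning of this passage amounts to a few lines of arithmetic; the sketch below is an illustrative implementation (the data layout and names are mine, not the paper's).

```python
def redistribute_missing_weight(levels, A0):
    """Compensate for the truncation of the discrete spectrum s(h).

    `levels` is a list of (weight, position) pairs for the 2l heaviest delta
    functions of s(h); A0 is the exact total weight of the discrete part.
    The missing weight of the discarded (very large) trees is spread evenly
    over the l lightest retained levels, i.e. over the largest trees that
    are still kept, so the corrected spectrum has total weight A0.
    """
    two_l = len(levels)
    l = two_l // 2
    missing = A0 - sum(w for w, _ in levels)
    delta = missing / l
    # sort by weight so the l lightest retained levels come first
    ordered = sorted(levels, key=lambda wl: wl[0])
    return [(w + delta, h) for w, h in ordered[:l]] + ordered[l:]
```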
Having overcome this convergence problem with s(h), we can calculate phase diagrams for denser matrices (Figs. 10, 11). For these denser matrices, unlike the case c 0 = 0.1, the difference between the grand canonical ensemble binodals and the infinite cluster ones becomes clearly visible, although the fraction of finite closed cavities is still not large (x(c 0 = 0.2) = 1.6%, x(c 0 = 0.3) = 7.9%). This fraction grows to 100% at the percolation point (which happens at c 0 = 0.5 for this lattice), thus making a big difference near percolation.
One can see that the GCE and IC binodals for H̃ = −J are indiscernible, which indicates that for this case closed pores in the GCE are almost empty, and their contribution is negligible [22]. On the contrary, for H̃ = J, they are almost saturated with fluid, and ∆n is close to the fraction x of closed pores in the matrix. This situation should also hold for strongly attractive or repulsive matrices: |H̃| > 1. For dense enough matrices with smaller |H̃|, ∆n will be nonzero for both negative and positive H̃ and will depend on temperature.
In the above results for our numerical treatment of the exact integral equations, deviations from the cluster approximation are clearly visible in the phase diagrams (Figs. 5, 7), and become very large in some cases (Figs. 10, 11). Note that the cluster approximation permits no simple way round the failure of the grand canonical ensemble to prevent fluid from entering closed pores. It is interesting to note that some other related approaches [2,3] predict an unusual double-hump binodal of the fluid in moderately dense, strongly attractive or repulsive matrices (in terms of the current model, this means large |H̃|). Kierlik et al. [3] considered an off-lattice molecular model of fluid in a quenched disordered configuration of spheres on the basis of a replica-symmetric Ornstein-Zernike equation. They studied a sequence of approximations, the first of which corresponded to the mean spherical approximation (MSA), and predicted a binodal of the usual form. The higher approximation yielded a binodal with two critical points or even two disconnected binodals. Their highest order approximation again gave only one critical point, but predicted a shoulder on one of the binodal's sides. The MSA applied to the model (3) on the five-dimensional fcc or bcc lattice [2] again predicted a double-humped binodal. A problem identified for the MSA was that for three-dimensional lattices it predicted a double-humped binodal even for the bulk fluid without any porous matrix. Therefore the authors of [2,3] could not draw definite conclusions about the existence of two critical points for these models, despite the similarity of their results to the double-humped binodals seen in Monte-Carlo simulations of similar models [4,5]. Our theory does not give any double-humped binodal, at least in the range of parameters we have studied.
V. CONCLUSIONS
We considered the Bethe lattice model of fluid in a porous medium. The recursive character of the Bethe lattice permits an exact treatment, whose key ingredient is an integral equation for the effective field distribution. Solutions of this equation consist of a sum of discrete and continuous spectra, and these spectra have distinct physical interpretations. The discrete spectrum comes from disconnected finite pore spaces, whereas the continuous spectrum is the contribution of the infinite pore space, which in reality is the only one accessible to the fluid. The continuous spectrum develops more and more structure at low temperature, which means that a numerical solution for it becomes impractical below a certain temperature. However, the physical results found for temperatures above this threshold are both consistent and reasonable. Despite the use of a Bethe lattice they differ significantly from results calculated using the cluster (or Bethe) approximation, which cannot handle the complexity of the field distributions that we find.
The dichotomy between the two types of pore space mentioned above is not exclusive to the Bethe lattice, but universal. Any microscopic model of fluid in a random porous medium that uses the grand canonical ensemble will include contributions of the finite cavities, unless this is carefully subtracted off, as we managed to do for the Bethe lattice. In the grand canonical ensemble, these cavities are in equilibrium with the external bulk fluid, but in real-world experiments they are inaccessible and do not respond to changes of chemical potential. This marks an important distinction between models of fluids in porous media and the disordered magnetic models to which they are equivalent in the grand canonical ensemble; for magnetism, finite clusters do contribute to the free energy and it is right to include them.

One finds a 2 = p 1 p 0 ; in general, a l = p 1 a l−1 . Finally, we can write the solution in the corresponding form, where the h l are given by the recursion that follows from Eq. (A2).

In our numerical calculation we represent q(h) as an M-vector and use truncated Fourier series. This is guaranteed to work well only for smooth functions. For example, truncated Fourier series result in peaks near discontinuities (Fig. 12). These are called 'Gibbs ears'. Note that Gibbs ears have a height that remains fixed, while their width narrows to zero as the resolution improves. There is accordingly no area beneath a Gibbs ear, unlike a delta function. The spikes in Fig. 9 do not look exactly like Gibbs ears: Gibbs ears appear in pairs, and the positive ear has a negative counterpart, whereas the spikes in Fig. 9 are asymmetric, with negative spikes much smaller and almost absent at the fine discretization (M = 2^16) that we used to produce the plot. It is still possible that equation/recursion (37) somehow creates a type of positive-only Gibbs ear effect, but this does not seem likely.
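The fixed-height, vanishing-width behaviour of Gibbs ears is easy to reproduce. The snippet below, which is only an illustration and not the paper's code, sums the truncated Fourier series of a square wave and prints the overshoot for an increasing number of terms.

```python
import numpy as np

def truncated_fourier_step(x, n_terms):
    """Partial Fourier sum of the unit square wave sign(sin x) on [-pi, pi).

    The exact series is (4/pi) * sum over odd m of sin(m x)/m; truncating it
    produces the overshoot ('Gibbs ears') discussed above.
    """
    s = np.zeros_like(x)
    for m in range(1, 2 * n_terms, 2):
        s += np.sin(m * x) / m
    return 4.0 / np.pi * s

x = np.linspace(-np.pi, np.pi, 200001)
for n in (16, 64, 256):
    overshoot = truncated_fourier_step(x, n).max() - 1.0
    print(f"{n:4d} odd harmonics: overshoot = {overshoot:.4f}")
# The overshoot stays near 0.18 (about 9% of the full jump of 2) however many
# terms are kept, while the ear narrows - so it carries no area, unlike a
# delta function.
```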
Experimental study of enhanced oil recovery by CO2 huff-n-puff in shales and tight sandstones with fractures
The fractures and kerogen that generally exist in shale have a significant effect on CO2 huff-n-puff in shale reservoirs. It is important to study the effects of fractures and kerogen on oil recovery during CO2 huff-n-puff operations in the fracture–matrix system. In this study, a modified CO2 huff-n-puff experimental method is developed to estimate the recovery factors and the CO2 injectivity in fractured organic-rich shales and tight sandstones. The effects of rock properties, injection pressure, and injection time on the recovery factors and CO2 usage efficiency in shales and sandstones are discussed, respectively. The results show that although the CO2 injectivity in the shale is higher than that in the sandstone with the same porosity, the recovery factors of the two shale samples are much lower than those of the two sandstone samples. This demonstrates that, compared with the tight sandstone, more cycles are needed for the shale to reach a higher recovery factor. Furthermore, there are optimal injection pressures (close to the minimum miscibility pressure) and CO2 injection volumes for CO2 huff-n-puff in the shale. Since the optimal CO2 injection volume in the shale is higher than that in the sandstone, more injection time is needed to enhance the oil recovery in the shale. These findings provide a reference for CO2 huff-n-puff in fractured shale oil reservoirs for enhanced oil recovery (EOR) purposes.
Introduction
In recent years, the majority of newly discovered oil reservoirs have been unconventional reservoirs. Shale oil reservoirs have been discovered in the Bakken Formation (3.65 billion barrels), Three Forks (3.73 billion barrels), Appalachian, Gulf of Mexico, west Siberian, Songliao, and Ordos basins around the world (Gaswirth and Marra 2015). There is a strong consensus that oil can be produced from certain fractured shale oil reservoirs (Zou and Yang 2013). Similar to tight sandstone, shale has very poor storage and permeability properties, which are adverse to production (Bustin and Bustin 2012; McGlade et al. 2013). Besides, because shales are rich in kerogen (Wang et al. 2019), the oil in shale consists of free, adsorbed, and dissolved oil (Zhu et al. 2018b). Molecular simulation results show that the affinity between oil and kerogen is greater than that between gas and kerogen, so oil cannot be desorbed even at atmospheric pressure (Falk et al. 2015; Zhu et al. 2018a). Since the mobility of shale oil is very low, the primary oil recovery factor of shale oil reservoirs is less than 15% (Hoffman 2012; Iwere et al. 2012; Yu et al. 2014). Therefore, EOR is necessary to achieve maximum economic efficiency in shale oil reservoirs. However, water flooding cannot be used in shale oil reservoirs because of clay swelling problems, poor sweep efficiency, and low injectivity (Ahmad et al. 2019).
Gas injection is an important EOR method for shale reservoirs (Jia et al. 2019). Yu and Sheng (2016) found that cyclic gas injection provided a steadier and more continuous recovery performance than cyclic water injection. Sharma and Sheng (2017) compared the recovery factors of methane and ethane huff-n-puff with those of methyl alcohol and isopropanol huff-n-puff; the results showed that gas huff-n-puff gave the best production rate and ultimate recovery factor. A series of experiments and simulations have been conducted on the Wolfcamp shale to study the effect of different gases on gas huff-n-puff in the shale, and the results show that the recovery factor of CO 2 huff-n-puff is higher than that of CH 4 and nitrogen huff-n-puff (Li et al. 2017a). Field tests show that CO 2 injection causes significant reductions in the interfacial tension and oil viscosity (Gondiken 1987). A greater permeability caused by carbonic acid dissolution of calcium carbonate has been well documented (Torabi and Asghari 2010). Schenewerk et al. (1992) performed a study in South Louisiana and showed that CO 2 huff-n-puff can significantly reduce the water cut in wells and alter the saturation distribution. Besides, the diffusion coefficient of CO 2 in oil is higher than those of nitrogen and water (Zheng and Yang 2017; Zhu et al. 2018c), and the adsorptive properties of CO 2 in kerogen are greater than those of other gases (Mitchell et al. 2004; Kurniawan et al. 2006; Ottiger et al. 2008; Pollastro et al. 2008). The higher diffusion coefficient and excellent adsorptive properties of CO 2 are beneficial to replacing the adsorption-dissolution oil in the kerogen. The combined effects of fracture impaction, diffusion, and nanopores result in an additional 3.8% of oil production during CO 2 huff-n-puff. In conclusion, CO 2 injection is a good EOR method for shale oil production.
The modes of CO 2 injection in EOR can be categorized into three types: huff-n-puff, multiple-well cyclic injection, and continuous injection (Burrows et al. 2020; Iddphonce et al. 2020; Singh 2018). Multiple-well cyclic injection differs from huff-n-puff in that the injection and production wells are separate. Since it can provide beneficial interwell interference, multiple-well cyclic injection may be more effective than huff-n-puff (Kong et al. 2016). Experimental and simulation results showed that the recovery factor of continuous gas injection is larger than that of huff-n-puff in homogeneous formations (Wan et al. 2013; Sheng 2015; Yu and Sheng 2015). However, Tovar et al. (2018) demonstrated that direct gas injection through the shale matrix is not possible in a reasonable time frame. Although the diffusion coefficient of CO 2 is better than that of other fluids (Zheng and Yang 2017; Zhu et al. 2018c), the limited influence area of the diffusion mechanism is not enough to compensate for the low velocity caused by the low permeability of the matrix. Therefore, technologies such as horizontal wells and multiple hydraulic fracturing have been used to economically produce shale oil (Daneshy 2009; Gaurav et al. 2012), and huff-n-puff is the best mode of CO 2 injection in the shale reservoir. More CO 2 diffuses into the matrix because of the larger contact area, and CO 2 moves deeper because of the higher permeability of the fractures.
In recent years, the effects of mechanistic factors (e.g., soaking time, cycle number, pressure) on CO 2 huff-n-puff in shale cores have been studied by many researchers (Wang et al. 2013; Wan et al. 2013; Li et al. 2017a, b; Sheng 2017; Du et al. 2018). Besides, the effects of heterogeneity (Chen et al. 2014), water saturation (Huang et al. 2020), permeability (Su et al. 2020), and pore throat radii and total organic carbon (TOC) content (Hawthorne et al. 2019) on CO 2 huff-n-puff in the shale were also studied. The recovery factor varies widely (25%-100%), which is caused by different experimental methods and samples. Generally, the recovery factor of shale oil is closely correlated with the injection pressure and the contact surface area between CO 2 and rock during CO 2 huff-n-puff in the shale. CO 2 huff-n-puff experiments in the Eagle Ford shale, Mancos shale, and Barnett shale indicated that the oil recovery due to CO 2 injection at pressures at or above the minimum miscibility pressure (MMP) is better than that at pressures below the MMP (Gamadi et al. 2014). The same result was also obtained by Li et al. (2017b) and Hawthorne et al. (2013). Burrows and Haeri (2020) summarized the oil recovery results after 24 h of CO 2 huff-n-puff laboratory testing, and the results showed that the recovery factor decreased from 100% to 25% when the ratio of core volume to core surface area increased from 0.3 to 10. Therefore, it is important to study the effects of fractures on oil recovery during CO 2 huff-n-puff operations in the fracture-matrix system.
There are three methods to describe the matrix-fracture system in the laboratory. First, a direct visualization experiment with glass micromodels was used to quantify the recovery rates of oil from fracture networks (Nguyen et al. 2018). Second, a cubic shale sample with an artificial fracture, which is filled with sand and glass beads, was used to model the matrix-fracture system (Tan et al. 2018). Third, a physical model, which is composed of an annular fracture and a cylindrical matrix, was used to study the CO 2 huff-n-puff process in conventional fractured reservoirs (Torabi and Asghari 2010; Sang et al. 2016). In some experiments, the fracture was filled with glass beads to model a fracture with proppant (Tovar et al. 2018). However, these three types of assessment methods have their flaws if employed independently. For the first method, the matrix parts of the glass micromodel are different from the shale matrix; the second model needs a large-scale sample and it is hard to quantitatively describe the fracture; while the drawback of the third model is that the fracture is too idealized. Because the samples come from the formation and the size of the core is small, the third model is chosen in this study.
This paper presents an experimental study of the CO 2 huff-n-puff process in shale and sandstone matrix-fracture systems at high temperatures and pressures. In Sect. 2, a modified experimental method is developed to test the recovery factors of the matrix and fracture, respectively, in a special apparatus designed to mimic the CO 2 huff-n-puff process in fractured unconventional formations. Meanwhile, the injectivity of CO 2 in shale and sandstone is calculated accurately from the pressure decline curves. The oil recovery factors of shales with different porosities are compared with those of sandstones to study the effects of kerogen on CO 2 huff-n-puff. In addition, the effects of injection pressure and injection time on the recovery factors and CO 2 usage efficiency in shale and sandstone are studied.
Materials
Four samples were used as the experimental cores, including two shale cores and two sandstone cores. All of the samples were drilled in the Mesozoic Yanchang Formation, Ordos Basin, where a large number of organic-rich shales have been found (Wang 2015). Because the clay content of the shale is high, liquid nitrogen was used as a coolant to cool the shale samples during coring in the laboratory. The hydrocarbon properties of the samples were detected by Rock-Eval pyrolysis (Okiongbo et al. 2005; Tong et al. 2011). The porosities of the samples were tested by saturation with dodecane (C 12 ), which is the simulated oil in this study. The permeability was tested with helium (Cui et al. 2009). During the test, the pressures at the two ends of the core samples were 16 and 15.5 MPa, respectively. Liquid nitrogen adsorption measurements were used to test the pore volume per mass, specific surface area, and average pore diameter (Sing et al. 1985). According to BET adsorption theory (Radlinski and Mastalerz 2004), the liquid nitrogen adsorption method covers the pore range of 1.7-200 nm, so these results indicate only the development of pores in this range in the samples. Table 1 shows the physical characteristics of the samples and Table 2 shows the physical properties of C 12 and CO 2 . The CO 2 -C 12 phase equilibrium data at 40-70 °C have been studied by Nieuwoudt and Du Rand (2002), and the CO 2 -C 12 system shows true first-contact miscibility. The MMP in Table 2 is obtained at 60 °C. Figure 1 shows photos of the experimental cores. The cores are approximately 25 mm in diameter and 35-50 mm in length. It should be noted that Sandstone 1 contained many microfractures. As the number of microfractures in oil and gas reservoirs increases, the permeability improves significantly.
In this study, X-ray diffraction analysis (XRD) was used to test the mineral compositions of the samples (Xie et al. 2016). The mineral compositions are listed in Table 3. The mineral composition of the shale is different from that of the sandstone. For the sandstones, the dominant mineral is quartz. For the shales, besides quartz, the contents of analcite, ankerite, and clay are also high.
Experimental setup and procedure
An experimental set-up was proposed to test the CO 2 huff-n-puff recovery from oil reservoirs with fractures at high pressures and temperatures. A schematic is shown in Fig. 2; the set-up comprises a hydrostatic high-pressure core holder, a confining pressure pump, a CO 2 cylinder, a separator, and a vacuum pump.
The diameter of the core holder was 26.00 mm, and the confining pressure pump provided a constant axial pressure on the end face of the core. Therefore, fluid could only flow in the radial direction. The annular space between the sample and the wall of the core holder was used to simulate the fractures. Because the diameter of the core was 25.00 mm, the width of the "fracture" was 0.50 mm. The high-pressure CO 2 cylinder, which is directly connected to the core holder by a line, was used to store CO 2 , and the pressure of the CO 2 cylinder was recorded by a pressure transducer. The CO 2 injection amount in the core can be obtained from the pressure decay of the CO 2 cylinder during the test. Because the volume of the CO 2 cylinder has an important effect on the pressure change, 20 cm 3 was chosen as the volume of the CO 2 cylinder. If the volume of the CO 2 cylinder is larger than 20 cm 3 , the pressure change will be too small to measure. If the volume of the CO 2 cylinder is smaller than 20 cm 3 , a large change of pressure will affect the supercritical state of CO 2 . The experimental set-up was placed in an oven.
The experimental procedures are as follows.
(1) Matrix is saturated with fluid: the fractures and matrix were saturated with C 12 separately to obtain the recoveries from the fracture and matrix. First, the core sample, which had been flushed for 15 days, was vacuumed for 40 h. Then, the core sample was saturated with C 12 for 10 days at 20 MPa until its mass did not change. The mass change of the core sample gave the saturated mass of C 12 in the matrix.
(2) Fracture is saturated with fluid: the saturated core was set into the hydrostatic high-pressure core holder and held at the confining pressure, which was 2 MPa larger than the CO 2 injection pressure (the pressure of the CO 2 cylinder). The fracture and pressure lines were then vacuumed for 5 min to drain the gas from the fractures and pressure lines. Then, C 12 was injected into the annular space. The mass of C 12 injected into the fractures was obtained with a balance.
(3) CO 2 huff: CO 2 in the cylinder was injected into the core holder. Simultaneously, the pressure transducer was used to record the pressure of the gas cylinder.
(4) CO 2 puff: after 40 h (except in Sect. 3.3, where the injection times of CO 2 are 2, 10, 20, 40, and 60 h), the CO 2 and C 12 mixtures in the set-up were released to the separator. The final pressure of the reservoir decreased to zero during this step. The flashed fluid was collected as the CO 2 flowed through the separator. The mass of C 12 produced during the CO 2 puff was obtained from the mass change of the separator.
(5) Steps (3) and (4) were repeated at the same initial pressure to test the recovery factor for the multi-cycle CO 2 huff-n-puff.
(6) Finally, the core was taken out of the core holder after the test and the mass change of the sample was measured.
All of the tests were performed at a constant temperature of 60 °C. All masses were measured with a high-precision electronic balance with a precision of 0.001 g. The pressure of the CO 2 cylinder was measured with a high-precision pressure transducer with a precision of 0.02 MPa. The injection amount of CO 2 into the matrix-fracture system was calculated from the pressure change of the high-pressure CO 2 cylinder.
Calculation of the recovery factor and injection amount of CO 2
Fractures in reservoirs are complex, irregular, and variable; therefore, it is difficult to determine the oil recovery from the matrix and fractures using experimental methods. In this study, a simplified physical model was used to describe the CO 2 huff-n-puff process, as shown in Fig. 3. The model is composed of an annular fracture and a cylindrical matrix. As shown in Fig. 3, CO 2 first diffuses into the fracture and then into the matrix during the CO 2 huff process. During the CO 2 puff process, the CO 2 and C 12 mixtures are released first from the fracture and then from the matrix. There is little oil in the CO 2 cylinder after the experiment. Therefore, it is reasonable to assume that the oil components that diffuse from the core into the CO 2 cylinder are negligible during CO 2 huff-n-puff. The oil production (mass) after the nth cycle of CO 2 huff-n-puff is obtained from the mass of C 12 collected by the separator in the nth cycle. The oil production (mass) of the matrix is obtained from the mass change of the core after the experiment. The oil production (mass) of the fracture is obtained from the difference between the total oil production and the oil production of the matrix after the experiment. The incremental recovery factor of original oil-in-place (OOIP) after the nth cycle, the matrix recovery factor of OOIP, and the fracture recovery factor of OOIP can be calculated from Eq. (1), where η n is the incremental recovery factor of OOIP after n cycles; η m is the matrix recovery factor of OOIP at the end of the experiment; η f is the fracture recovery factor of OOIP at the end of the experiment; m m0 is the total mass of C 12 in the matrix, g; m f is the total mass of C 12 in the fracture, g; m n s is the mass of C 12 collected by the separator in the nth cycle, g; m s is the total mass of C 12 collected by the separator at the end of the experiment, g; and m m1 is the mass of C 12 in the matrix at the end of the experiment, g.
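Eq. (1) itself is not reproduced in this extraction. The sketch below is one consistent reading of the definitions above, assuming that the matrix and fracture recovery factors are normalised by the corresponding initial oil masses and the incremental recovery factor by the total OOIP; the function and variable names are illustrative, not the paper's.

```python
def recovery_factors(m_m0, m_f, m_m1, m_s_cycles):
    """Recovery factors from the measured C12 masses (all in grams).

    m_m0        initial mass of C12 in the matrix
    m_f         initial mass of C12 in the fracture
    m_m1        mass of C12 left in the matrix at the end of the experiment
    m_s_cycles  C12 masses collected by the separator, one entry per cycle

    The normalisations are an assumption consistent with the variable
    definitions in the text: incremental recovery is referred to the OOIP
    (matrix + fracture), matrix recovery to the initial matrix oil, and
    fracture recovery to the initial fracture oil.
    """
    ooip = m_m0 + m_f
    eta_n = [m_s_n / ooip for m_s_n in m_s_cycles]   # per-cycle, of OOIP
    m_s = sum(m_s_cycles)                            # total produced C12
    eta_m = (m_m0 - m_m1) / m_m0                     # matrix recovery factor
    eta_f = (m_s - (m_m0 - m_m1)) / m_f              # fracture recovery factor
    return eta_n, eta_m, eta_f
```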
The CO 2 injectivity in the samples is positively correlated with the injection amount of CO 2 into the matrix-fracture system ( Δn CO 2 ), which is a key factor in the recovery factor during CO 2 huff-n-puff and can be calculated by Eq. (2):

Δn CO 2 = (n 0 − n t )/V m = (v/(R T V m ))(p 0 /z 0 − p t /z t ),   (2)

where Δn CO 2 is the injection amount of CO 2 during the test, which is the amount of CO 2 injected per volume of the matrix-fracture system, mol/cm 3 ; n 0 is the initial amount of CO 2 in the CO 2 cylinder, mol; n t is the remaining CO 2 in the CO 2 cylinder, mol; V m is the volume of the matrix-fracture system, cm 3 ; p 0 is the initial pressure of the CO 2 cylinder, MPa; z 0 is the compressibility factor at p 0 ; p t is the pressure of the CO 2 , MPa; z t is the compressibility factor of CO 2 at p t ; v is the CO 2 cylinder volume, cm 3 ; R is the universal gas constant, R = 8.314 J/(mol K); and T is the experimental temperature, K.
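A direct implementation of this relation is straightforward; the helper below is a sketch (the names are mine, and the compressibility factors must be supplied from an equation of state or tables). Note that MPa times cm3 equals J, so no extra unit conversion is needed.

```python
R = 8.314  # universal gas constant, J/(mol K)

def co2_injection_amount(p0, z0, pt, zt, v_cyl, T, V_m):
    """Injected CO2 per unit volume of the matrix-fracture system, Eq. (2).

    p0, pt  initial and current cylinder pressures, MPa
    z0, zt  CO2 compressibility factors at p0 and pt (from an EOS or tables)
    v_cyl   CO2 cylinder volume, cm3
    T       temperature, K
    V_m     volume of the matrix-fracture system, cm3

    Returns Delta n_CO2 in mol/cm3.
    """
    n0 = p0 * v_cyl / (z0 * R * T)   # initial moles of CO2 in the cylinder
    nt = pt * v_cyl / (zt * R * T)   # remaining moles of CO2 in the cylinder
    return (n0 - nt) / V_m

# Example call (20 cm3 cylinder and 60 degC are from the text; the pressures
# and z factors here are purely illustrative):
# co2_injection_amount(p0=20.0, z0=0.44, pt=18.0, zt=0.43,
#                      v_cyl=20.0, T=333.15, V_m=19.8)
```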
Effect of rock properties
The incremental recovery factors in Shale 1, Shale 2, Sandstone 1, and Sandstone 2 after three cycles of CO 2 injection were tested at 20 MPa and 60 °C. As shown in Table 1, the porosity of Shale 1 (9.6%) was close to that of Sandstone 1 (8.9%), and the porosity of Shale 2 (3.8%) was close to that of Sandstone 2 (4.3%). Figure 4 shows the pore size distributions of the four samples, which were tested by liquid nitrogen adsorption measurements. The pore volume of Sandstone 1 was larger than that of the other samples. According to Table 1, the porosity of Shale 1 was similar to that of Sandstone 1. However, the incremental pore volume of Shale 1 was far less than that of Sandstone 1. The reason was that the porosity in Table 1 was tested by saturation with C 12 , whereas the incremental pore volume in Fig. 4 was tested with nitrogen. Because C 12 can dissolve in the kerogen, which is inaccessible to nitrogen, the incremental pore volume of Shale 1 in Fig. 4 was less than that of Sandstone 1. Besides, the results in Fig. 4 indicated that Shale 1 had a greater degree of micropores (with widths smaller than 2 nm) than the other samples. The incremental recovery factor, matrix recovery factor, and fracture recovery factor were obtained by Eq. (1) for the four samples after three cycles of CO 2 huff-n-puff. Figure 5 shows the incremental recovery factors of the different samples after three cyclic injections of CO 2 . The results show that η n increased with porosity after the same cycle for both the shale and the sandstone. Moreover, the incremental recovery factors of Sandstone 1 and 2 were greater than those of Shale 1 and 2, respectively. The reason was that there was a larger amount of adsorption-dissolution C 12 in the kerogen, which was hard to mobilize during CO 2 injection. Moreover, the increasing rate of the incremental recovery factor of the sandstone decreased more rapidly with the number of cycles than that of the shale, especially for the samples with low porosity. It has been reported that the adsorption-dissolution C 12 in the kerogen is only replaced by CO 2 when the concentration of CO 2 is large enough (Zhu et al. 2018b). Because the amount of C 12 in the shale decreased with the number of cycles, the concentration of CO 2 in the matrix also increased with the number of cycles. Therefore, the production decline in the shale was smaller than that in the sandstone, and more cycles are needed during the CO 2 huff-n-puff process for the shale to reach a larger recovery factor. Figure 6 shows the relationships between Δn CO 2 and time in the matrix-fracture systems of the four samples, which were calculated from the CO 2 pressure decline curves in Fig. 15 in the "Appendix," and Table 4 shows the values of Δn CO 2 , Δn 1 CO 2 , and Δn 2 CO 2 during the three cycles for the different samples. In the first cycle, because the samples were completely saturated with C 12 , CO 2 diffused into the oil and adsorbed in the kerogen over time. However, the Δn CO 2 curves can be divided into two stages in the second and third cycles. In the first stage, a mass of CO 2 ( Δn 1 CO 2 ) enters the matrix-fracture system quickly at the beginning. The reason was that there is empty matrix-fracture space after oil is produced from the system during the first and second cycle puffs. The empty matrix-fracture space should be filled by CO 2 first in the second and third cycles. In the second stage, CO 2 was injected slowly into the matrix-fracture system ( Δn 2 CO 2 ), diffusing into the oil and adsorbing in the kerogen.
Because the empty matrix-fracture space after the CO 2 puff increased with the number of cycles, Δn 1 CO 2 also increased with the number of cycles, whereas the growth of Δn CO 2 with time in the first cycle was greater than in the other cycles. The reason was that the amount of C 12 in the shale after the CO 2 puff decreased as the number of cycles increased.
As shown in Fig. 6 and Table 4, Δn 1 CO 2 of Shale 1 in the different cycles was larger than that of Shale 2, and Δn 1 CO 2 of Sandstone 1 was larger than that of Sandstone 2. Because the porosities of Shale 1 (9.6%) and Sandstone 1 (8.9%) were higher than those of Shale 2 (3.8%) and Sandstone 2 (4.3%), more CO 2 was needed to fill the empty matrix-fracture space in Shale 1 and Sandstone 1 during the second and third cycles. Therefore, Δn 1 CO 2 increased with increasing porosity. Δn 2 CO 2 of Shale 1 during the three cycles of CO 2 injection was 1.22 × 10 -3 , 0.86 × 10 -3 and 0.56 × 10 -3 mol/cm 3 , respectively, which was larger than that of Sandstone 1 (0.82 × 10 -3 , 0.42 × 10 -3 and 0.40 × 10 -3 mol/cm 3 ). Because of the adsorption and dissolution of CO 2 in the kerogen, Δn 2 CO 2 of the shale was larger than that of the sandstone when the porosity was similar. The relationships between Δn 2 CO 2 and the number of cycles for Shale 2 and Sandstone 2 were similar to those of Shale 1 and Sandstone 1.
As shown in Table 5, the incremental recovery factor (η n ), matrix recovery factor (η m ), and fracture recovery factor (η f ) for the different samples after three cycles of CO 2 injection were obtained by Eq. (1). η m and η f both increased with porosity for the same rock properties. Although the porosities of Shale 1 and Sandstone 1 were approximately the same, η m of Shale 1 (50.5%) was less than that of Sandstone 1 (74.3%) after three cycles of CO 2 injection. Besides, η m of Shale 2 (18.5%) was smaller than that of Sandstone 2 (31.2%). Part of the oil in the shale was adsorbed and dissolved in the kerogen, which was hard to replace by CO 2 . Therefore, η m of the shale was always smaller than that of the sandstone when the porosities were approximately the same. η f of Shale 1 (82.2%) was slightly greater than that of Sandstone 1 (80.5%), and η f of Shale 2 (72.3%) was slightly greater than that of Sandstone 2 (70.6%). Because the size of the fracture was only 1 mm, the diffusion length of CO 2 in the fracture was far less than that in the matrix. According to Fick's law, the equilibrium time of CO 2 in the fracture was short (0.5 h), which was far less than 40 h (the test time). CO 2 was saturated in the fracture under the different injection pressures during the test. Therefore, the concentration of CO 2 in the fracture was similar in the shale and the sandstone. However, Δn 2 CO 2 of the shales was slightly larger than that of the sandstones because of the adsorption and dissolution of CO 2 in the kerogen (Table 4). Therefore, more CO 2 flowed through the fracture during the CO 2 puff, and the displacement efficiencies of the fractures in the shales were slightly better than those in the sandstones when their porosities were similar.
Besides, η f increased with increasing porosity when the rock properties were the same. The reason for this result was similar to that described above. Because more CO 2 was injected into the higher-porosity matrix during the CO 2 huff, more CO 2 flowed through the fracture during the CO 2 puff. Therefore, the displacement efficiency of the fracture with a higher-porosity matrix was better. The contributions of Δn 2 CO 2 to the recovery factors were higher than those of Δn 1 CO 2 . However, because the empty matrix-fracture space had to be filled with CO 2 in the second and third cycles, Δn 1 CO 2 was necessary during CO 2 huff-n-puff.
Effect of injection pressure
The recovery factor after three cyclic injections of CO 2 in Shale 1 was tested at 60 °C and different operating pressures (6, 9, 12, 15, and 20 MPa). According to the test, m m0 and m f were 1.371 and 1.220 g, respectively. Moreover, the volumes of the fracture and matrix were 1.691 and 18.064 cm 3 , respectively. Figure 7 shows the relationship between the recovery factors in Shale 1 and the operating pressure during the CO 2 huff-n-puff process.
According to Eq. (1), the incremental recovery factor (η n ), the matrix recovery factor (η m ), and the fracture recovery factor (η f ) were obtained at different pressures after three cycles of CO 2 injection. As shown in Fig. 7, η m increased from 28.3% to 50.5%, and η f increased from 68.5% to 82.3%, when the injection pressure of CO 2 increased from 6 to 20 MPa. Moreover, the increasing rates of η m and η f with pressure were different. η m increased dramatically from approximately 31.8% to 49.8% when the pressure moved from 9 to 15 MPa, whereas η m only increased by 0.7% when the pressure moved from 15 to 20 MPa. The MMP of the C 12 and CO 2 mixture was 13.8 MPa at 60 °C (Zhu et al. 2018b, c). However, η f did not increase significantly when the pressure neared the MMP (9-15 MPa). The diffusion coefficient and boundary concentration of CO 2 both increased with pressure; thus, the flux of CO 2 that diffused into the matrix increased sharply when the pressure neared the MMP. Therefore, the matrix recovery factor increased drastically. However, for the CO 2 flux in the fracture, the influence of the diffusion coefficient could be neglected because the diffusion length of CO 2 in the fracture was far less than that in the matrix. On the contrary, the amount of CO 2 flowing from the matrix to the fracture and the concentration of CO 2 in the fracture increased with pressure. Therefore, although η f increased with pressure, the increasing rate of η f did not increase significantly when the pressure neared the MMP. Figure 8 shows the incremental recovery factors in Shale 1 at different operating pressures; the incremental recovery factors (η n ) all increased with the injection pressure for the same cycle. Regardless of whether the injection pressure was at immiscible or miscible conditions, the recovery factor of the first cycle was more than double that of the second cycle, and the greatest contribution to the incremental recovery factor (more than 50%) came from the first cycle. Figure 9 shows the relationships between Δn CO 2 and time at different injection pressures, which were calculated from the CO 2 pressure decline curves in Fig. 17 in the "Appendix". Therefore, Δn 1 CO 2 and Δn 2 CO 2 at different pressures could be obtained. Figure 10 shows the relationships between the injection amount of CO 2 and pressure in Shale 1 during three cycles of CO 2 injection. As shown in Fig. 10a, Δn 1 CO 2 increased with the injection pressure for the same cycle. There are two reasons for this phenomenon: (1) Because the incremental recovery factor increased with pressure (Fig. 8), the empty matrix-fracture space after the same cycle increased with pressure. The empty matrix-fracture space should be filled with CO 2 first. Therefore, Δn 1 CO 2 increased with the injection pressure for the same cycle. (2) The density of CO 2 also increased with pressure. As shown in Fig. 10b, Δn 2 CO 2 initially increased with pressure, and the increasing rate of Δn 2 CO 2 reduced rapidly when the pressure was higher than 12 MPa. The adsorption-dissolution amount of CO 2 in the kerogen and the diffusion coefficient of CO 2 in C 12 both increased with the injection pressure, which had an important effect on Δn 2 CO 2 (Zhu et al. 2018b). Therefore, Δn 2 CO 2 increased with the injection pressure.
However, the diffusion coefficient of CO 2 in C 12 remained nearly constant when the pressure was higher than the MMP (13.8 MPa). Therefore, the increasing rate of Δn 2 CO 2 reduced rapidly when the pressure was higher than 12 MPa.
Next, the relationships between recovery factor and Δn CO 2 in Shale 1 during three cycles of CO 2 injection were studied. However, because the residual oil-in-place of the second and third cycles is different from the first cycle, the incremental recovery factor of OOIP in the second and third cycles was less than that in the first cycle. Therefore, the recovery factor of residual oil-in-place (ROIP) at the nth cycle was calculated by Eq. (3).
where η re is the recovery factor of ROIP at the nth cycle. Figure 11 shows the relationships between η re and Δn CO 2 in Shale 1. The results show that η re in the different cycles increased linearly with the injection amount of CO 2 . Although Δn CO 2 in the first cycle was smaller than that in the second and third cycles, η re in the first cycle was larger than that in the second and third cycles. The reason was that part of the CO 2 was injected into the empty matrix-fracture space during the second and third cycles, which made no contribution to the recovery factor. The results indicate that η re in the first cycle is the largest contributor to the recovery factor. Moreover, when the pressure was higher than the MMP, the increases of η re and Δn CO 2 in the first cycle were both small. The reason is that the changes of the diffusion coefficient and boundary concentration of CO 2 in the shale were small when the pressure was higher than the MMP. Because the injection amount and the pressure of CO 2 both changed during CO 2 huff-n-puff and had a combined effect on the results in Fig. 11, the effect of injection time on the recovery factor (same pressure, different injection amounts of CO 2 ) was studied in the next section.
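Eq. (3) is not reproduced in this extraction. The sketch below is one consistent reading of the definition above, assuming that the C12 produced in cycle n is normalised by the oil remaining in the matrix-fracture system just before that cycle; names are illustrative.

```python
def roip_recovery_factors(m_m0, m_f, m_s_cycles):
    """Per-cycle recovery factor of residual oil-in-place (eta_re).

    This is an assumed reading of Eq. (3): the C12 mass produced in cycle n
    (m_s_cycles[n-1], grams) is divided by the oil remaining in the system
    just before that cycle.
    """
    remaining = m_m0 + m_f           # oil in place before the first cycle
    eta_re = []
    for m_s_n in m_s_cycles:
        eta_re.append(m_s_n / remaining)
        remaining -= m_s_n           # oil left for the next cycle
    return eta_re
```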
Effect of injection time
The recovery factors in Shale 1 and Sandstone 1 after one cycle of CO 2 injection were determined at different injection times (2, 10, 20, 40, and 60 h) at 12 MPa and 60 °C. The incremental recovery factor, matrix recovery factor, and fracture recovery factor were calculated using Eq. (1). Figure 12 shows the recovery factors in Shale 1 and Sandstone 1 at different injection times. The results show that an increase in injection time from 2 to 60 h increased η n of Shale 1 by 13.2% (from 25.7% to 38.9%). As shown in Fig. 12b, η n of Sandstone 1 increased from 26.6% to 46.4%, which was a slightly greater increase than that in Shale 1. The recovery factor consists of two components, the fracture recovery factor and the matrix recovery factor, which are examined below.
First, when the injection time increased from 2 to 60 h, η f of Shale 1 increased from 39.2% to 46.2% and η f of Sandstone 1 increased from 39.9% to 49.4%. As the injection time increased, more CO 2 was injected into the matrix during the CO 2 huff; therefore, a larger amount of CO 2 flowed from the matrix through the fracture during the CO 2 puff. The displacement efficiency of the fracture was better for a long injection time during the CO 2 puff. Moreover, η f of Shale 1 was slightly larger than that of Sandstone 1 for the same injection time. The reason was that the injection amount of CO 2 in the matrix of Shale 1 was slightly larger than that in Sandstone 1 because of the adsorption-dissolution of CO 2 in the kerogen. According to our previous research, the adsorption amount of CO 2 in the shale can reach 0.92-2.58 mmol/g, depending on the TOC (Zhu et al. 2019). The displacement efficiency of the fracture in Shale 1 was better than that in Sandstone 1 during the CO 2 puff. Therefore, η f of Shale 1 was slightly greater than that of Sandstone 1 for the same injection time.
Second, when the injection time increased from 2 to 60 h, η m of Shale 1 increased from 9.0% to 28.7% and η m of Sandstone 1 increased from 14.0% to 43.2%. Because the injection pressure and temperature were the same in this section, the diffusion coefficient and boundary concentration of CO 2 in C 12 were the same. The flux of CO 2 that diffused into the matrix increased with injection time. Therefore, the matrix recovery factor increased with the injection time. It should be noted that the injection time had a greater effect on the matrix recovery factor than on the fracture recovery factor. This result occurred because the diffusive equilibrium time of CO 2 in the matrix was much longer than that in the fracture (0.5 h). The effect of the injection time on the fracture was small during the CO 2 cycle injection. Moreover, η m of the sandstone was greater than that of the shale for the different injection times. The reason for this was that part of the oil in the shale was adsorbed and dissolved in the kerogen, which was difficult to produce. Furthermore, η m of Sandstone 1 did not increase with the injection time when the injection time was longer than 20 h. However, η m of Shale 1 kept increasing with increasing injection time during the test. Therefore, more injection time is required for the shale to reach the maximum recovery factor.
The injection amount of CO 2 ( Δn CO 2 ) in the matrix-fracture systems for the different injection times was calculated by Eq. (2) from the CO 2 pressure decline curves in Fig. 17 in the "Appendix" and is shown in Fig. 13. The Δn CO 2 curves for different injection times were the same for the same samples. The growth rate of Δn CO 2 in Shale 1 was larger than that in Sandstone 1, especially at the beginning. There were two reasons: (1) There were layers and microfractures in the shale, which were beneficial to CO 2 diffusion at the beginning. (2) A larger amount of CO 2 was adsorbed and dissolved into the kerogen to replace the adsorption-dissolution oil during CO 2 injection in the shale. Figure 14 shows the relationships between the recovery factor and Δn CO 2 for Shale 1 and Sandstone 1 at 12 MPa and 60 °C; the recovery factor increased with the injection amount of CO 2 . However, when Δn CO 2 was greater than 1.07 × 10 -3 mol/cm 3 (40 h), the recovery factor of Shale 1 did not increase with Δn CO 2 . Similarly, the recovery factor of Sandstone 1 did not increase with Δn CO 2 when it was greater than 3.88 × 10 -4 mol/cm 3 (20 h). The cause of this phenomenon appeared to be the collapse of foamy flow due to a higher volume of released CO 2 . Because of the higher Δn CO 2 , more gas is released from the solution than can be dispersed during the CO 2 puff; therefore, when Δn CO 2 is large, CO 2 flows as a continuous phase (Busahmin and Maini 2010). When Δn CO 2 was greater than the optimal value, the recovery factor did not increase with Δn CO 2 . Besides, the optimal value of Δn CO 2 in the shale (1.07 × 10 -3 mol/cm 3 , 40 h) was larger than that in the sandstone (3.88 × 10 -4 mol/cm 3 , 20 h). Because CO 2 can be adsorbed and dissolved in the kerogen, the critical gas saturation of the shale was larger than that of the sandstone. Therefore, a higher Δn CO 2 was needed for the shale to form a continuous phase. These results indicate that there were optimal values of Δn CO 2 for CO 2 huff-n-puff, and more injection time was needed for the shale to reach the optimal Δn CO 2 .
Conclusions
A modified experimental method is developed to investigate the recovery factors and injectivity of CO 2 in the fracture-matrix system. The oil recovery factors of shales with different porosities are compared with those of sandstones to study the effects of kerogen on CO 2 huff-n-puff. The effects of rock properties, injection pressure, and injection time on the recovery factors are discussed. The major conclusions are as follows:
(1) The matrix recovery factors in Shale 1 (50.5%) and Shale 2 (18.5%) are much lower than those in Sandstone 1 (74.3%) and Sandstone 2 (31.2%) during CO 2 huff-n-puff, while the injectivity of CO 2 in the shale is larger than that in the sandstone. However, the production decline of the shale is smaller than that of the sandstone. Therefore, more cycles are needed for the shale than for the tight sandstone to reach a large recovery factor.
(2) Injection pressure has an important impact on the recovery factors of the shale. When the injection pressure increases from an immiscible pressure to a miscible pressure, the recovery factors of the shale and the injectivity of CO 2 increase dramatically during CO 2 huff-n-puff. The optimal injection pressure of CO 2 huff-n-puff in the shale is close to the MMP (13.8 MPa).
(3) The injectivity of CO 2 in the shale is better than that in the sandstone, especially at the beginning, because there are layers and microfractures in the shale and part of the CO 2 is adsorbed and dissolved in the kerogen. Meanwhile, there is an optimal CO 2 injection amount during the huff-n-puff process, and the optimal CO 2 injection amount in the shale (1.07 × 10 -3 mol/cm 3 ) is larger than that in the sandstone (3.88 × 10 -4 mol/cm 3 ) at 12 MPa. Besides, more injection time is needed for the shale (40 h) than for the tight sandstone (20 h) to reach the optimal CO 2 injection amount.
(4) The effects of sample properties, injection pressure, and time on the fracture recovery factor are smaller than those on the matrix recovery factor because the diffusion distance and equilibrium time of CO 2 in the fracture are short. Therefore, the application of multiple refracturing with CO 2 to generate microfractures is beneficial to enhancing shale oil recovery.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/.
The Association of Nephroblastoma Overexpressed (NOV) and Endothelial Progenitor Cells with Oxidative Stress in Obstructive Sleep Apnea
Objective Obstructive sleep apnea (OSA) is a sleep disorder characterized by intermittent hypoxia, chronic inflammation, and oxidative stress and is associated with cardiometabolic disease. Several biological substrates have been associated with OSA such as nephroblastoma overexpressed (NOV), endothelial progenitor cells (EPC), and circulating endothelial cells (CEC). Few studies have looked at the association of NOV with OSA while the EPC/CEC relationships with OSA are unclear. In this study, we hypothesize that (1) NOV is associated with the severity of OSA independent of BMI, identifying a protein that may play a role in the biogenesis of OSA complications, and (2) EPCs and CECs are also associated with the severity of OSA and are biomarkers of endothelial dysfunction in OSA. Methods 61 subjects underwent overnight polysomnography (PSG), clinical evaluation, and blood analysis for NOV, EPC, CEC, interleukin 6 (IL-6), and other potential biomarkers. Results NOV and EPCs were independently associated with the oxygen desaturation index (ODI) after adjusting for potential confounders including body mass index (BMI), age, and sex (NOV p = 0.032; EPC p = 0.001). EPC was also independently associated with AHI after adjusting for BMI, age, and sex (p = 0.017). IL-6 was independently associated with AHI, but not with ODI. Conclusion NOV and EPC levels correlate with the degree of OSA independent of BMI, indicating that these biomarkers could potentially further elucidate the relationship between OSA patients and their risk of the subsequent development of cardiovascular disease.
Introduction
Obstructive sleep apnea (OSA) is a highly prevalent disorder, ranging from 3% to 17% in the general population depending on age and gender [1]. OSA is characterized by repetitive episodes of upper airway closure resulting in a reduction or complete cessation of airflow and intermittent hypoxia; obstructive respiratory events are terminated with an arousal state accompanied by sympathetic surges [2]. The result of poor alveolar ventilation associated with apnea/hypopnea events reduces arterial oxygen saturation and increases arterial pressure of carbon dioxide causing intermittent hypoxia. This leads to oxidative imbalance and increased inflammatory cytokines, lipid peroxidation, and cell-free DNA [3]. The severity of OSA is quantified by overnight sleep studies which measure the apnea-hypopnea index (AHI) and oxygen desaturation index (ODI). Risk factors for OSA include high body mass index, male gender, and age, resulting in a patient population already at risk for cardiometabolic disease. Indeed, OSA has been associated with prevalent and incident hypertension [4,5], coronary artery disease, and cerebrovascular events [6], likely via inflammatory processes from oxidative stress with increased reactive oxygen species (ROS) formation and proinflammatory cytokines [7].
However, studies associating nephroblastoma overexpressed (NOV), endothelial progenitor cells (EPC), and circulating endothelial cells (CEC) with OSA, and using them as potential measures of vascular inflammation to determine the risk for cardiovascular disease (CVD) in OSA patients, are minimal (NOV) or discrepant (EPC/CEC). A recent meta-analysis showed a linear correlation between AHI severity and olfactory dysfunction, but statistical differences between mild, moderate, and severe OSA were not seen [8]. Other recent findings include the apelin ligand of the G protein-coupled receptor APJ; the apelin/APJ system appears to be closely related to the development of respiratory diseases, including OSA, and may well be an attractive target for therapeutic intervention [9].
NOV is a multifunctional protein that plays a role in inflammation, cancer, and fibrosis through its involvement in adhesion and mitosis pathways [10] and has been associated with multiple disorders either directly or indirectly linked to cardiovascular disease. Previously, we demonstrated a novel association between OSA and NOV in a clinical sample of obese and nonobese subjects [11].
CECs and EPCs are involved in vascular injury and repair. CECs are essentially "sloughed endothelial cells" resulting from systemic inflammation, which are replaced by EPCs that are expected to increase as inflammation-induced CEC sloughing increases [12]. When EPCs can no longer sustain the replacement of sloughed CECs, the denuded area becomes ripe for plaque formation [13]. In a prior study, we found that CECs in morbidly obese women at increased risk of cardiovascular disease were elevated and EPCs were altered in obesity, suggestive of early inflammation [12]. Another study has also shown CEC elevation in a population of type 2 diabetics, showing that the inflammation induced by diabetes was independent of HgbA1C levels [14]. Although these studies do not involve OSA patients, due to the strong association of OSA with both morbid obesity and diabetes, similar associations with OSA are likely present.
In the current study, we sought to demonstrate that in a sample of well-characterized OSA subjects at increased risk for cardiovascular disease, baseline inflammation in OSA, as demonstrated by changes in known and novel inflammatory biomarkers, may help to risk stratify this population and further elucidate a pathogenic link between OSA and cardiovascular disease. This manuscript is a follow-up study to our previous work [11] with a larger sample of subjects and further blood analysis including EPC, CEC, and cytokines. Specifically, we hypothesized that NOV and other inflammatory adipokines, CECs, and EPCs would independently correlate with increasing OSA severity and provide further evidence to support novel pathways leading to endothelial damage and cardiovascular disease.
Methods
2.1. Study Design and Sample. Study subjects and controls were enrolled at New York-Presbyterian Brooklyn Methodist Hospital (NYPBMH), and laboratory analysis of blood samples was performed at New York Medical College (NYMC). Subjects were drawn from individuals presenting to the Center for Sleep Disorders at NYPBMH for evaluation of possible OSA and from faculty and staff of NYPBMH who were not at risk for OSA, who served as controls. Subjects were considered for enrollment only in the absence of a known history (chart review) of coronary artery disease, atherosclerosis, or congestive heart failure. Only adults (age > 18 years old) were recruited. All subjects provided informed consent. A total of 61 subjects were enrolled. IRB approval at the clinical site (NYPBMH) was obtained prior to enrollment. All data were collected prospectively.
Clinical Parameters.
All recruited patients had a complete history and physical examination. Patient demographics were collected including age, gender, and race. Patients underwent measurement of systolic and diastolic blood pressure, height and weight for BMI determination, neck circumference, and waist and hip circumference for waist-hip ratio determination by standard methods. Patients were also queried, and medication lists were evaluated to determine the presence of comorbid medical conditions (hypertension, diabetes, hyperlipidemia, chronic obstructive pulmonary disease, and asthma).
Polysomnography.
All patients underwent nocturnal polysomnography (PSG) either by (1) conventional full-montage in-laboratory PSG or (2) home sleep testing; studies were performed in accordance with American Academy of Sleep Medicine (AASM) guidelines. Conventional full-montage in-laboratory PSG was performed using Compumedics (Victoria, Australia) software: standard 10-20 electroencephalography (EEG), electrocardiography (ECG), electromyography (EMG) of the chin and anterior tibialis muscle, electrooculography (EOG), snore, and pulse oximetry monitoring were utilized. Oral and nasal airflow were measured by pressure transducer and thermocouple. Respiratory effort was measured with respiratory impedance plethysmography bands at the chest and the abdomen, including a summation channel. Home sleep testing was performed using the ResMed ApneaLink Air (San Diego, California). An apnea was defined as a reduction in peak thermal sensor (or nasal pressure signal in the case of home sleep testing) excursion by ≥90% of baseline, in which there is continued or increased inspiratory effort throughout the entire period of absent airflow lasting at least 10 seconds. Desaturation and/or arousal were not required. A hypopnea was defined as an abnormal respiratory event lasting at least 10 seconds with at least a ≥30% reduction in the nasal pressure signal excursion (or alternate sensor) accompanied by a ≥4% oxyhemoglobin desaturation. The AHI is a measure of OSA severity and is derived from the number of apneas and hypopneas per hour of sleep (in-lab determination) or per hour of recording time (home sleep testing). OSA was defined as AHI ≥ 5/hr for analysis purposes. Further analysis with AHI ≥ 15/hr was also explored.
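Since both severity indices reduce to simple event counts per hour, a minimal illustrative sketch of their computation is given below; it is not taken from the original study, and the event counts used are invented placeholders.

```python
def apnea_hypopnea_index(n_apneas: int, n_hypopneas: int, sleep_hours: float) -> float:
    """AHI = (apneas + hypopneas) per hour of sleep (or recording time for home tests)."""
    return (n_apneas + n_hypopneas) / sleep_hours

def oxygen_desaturation_index(n_desaturations: int, sleep_hours: float) -> float:
    """ODI = number of >=4% oxyhemoglobin desaturation events per hour."""
    return n_desaturations / sleep_hours

# Illustrative numbers only (not study data):
ahi = apnea_hypopnea_index(n_apneas=42, n_hypopneas=78, sleep_hours=6.5)
odi = oxygen_desaturation_index(n_desaturations=95, sleep_hours=6.5)
severity = ("no OSA" if ahi < 5 else "mild" if ahi < 15 else "moderate" if ahi < 30 else "severe")
print(f"AHI = {ahi:.1f}/hr ({severity}), ODI = {odi:.1f}/hr")
```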
2.4. Laboratory Measurement. Venous blood was drawn from the antecubital vein into a serum separator tube (SST) and a tube containing EDTA. Each SST sample was centrifuged at a force of 1600 g for 10 minutes after blood draw. The tubes were placed in an insulated container with dry ice until analysis.

2.5. Plasma NOV Protein Levels. Subjects' frozen plasma was suspended in buffer (mmol/l: 10 phosphate buffer, 250 sucrose, 1.0 EDTA, 0.1 PMSF, and 0.1% v/v tergitol, pH 7.5). Immunoblotting for NOV was performed as previously described [15]. NOV levels were based on the densitometry fold-increase from a single control sample.
2.6. Blood Samples and Cytokine Measurements. After overnight fasting, venous blood was drawn from an antecubital vein to measure serum levels of inflammatory cytokines, Leptin, and EPC testing (blood was drawn in heparinized tubes). Serum samples were frozen at -80°C before analysis. IL-6 was determined using ELISA.
Isolation of Circulating Endothelial Cells.
A 10 ml sample of peripheral blood was obtained and used for CEC and EPC experiments. One ml of blood was incubated with 100 μl of anti-CD146-coated 45 μm Dynabeads (1:4 × 10⁸ beads/ml) overnight at 4°C in a Dynal mixer (Dynal, Lake Success, New York) at 50 rpm. Cells bound to anti-CD146-coupled beads were separated from blood in a Dynal magnet, washed (3 washings using phosphate-buffered saline and 0.1% bovine serum albumin and repetitive mixing for 5 minutes in the Dynal mixer at 4°C), and dissolved in 100 μl buffer. Side-by-side assays were performed with Dynabeads coated with human antibodies against mouse IgG but without an antiendothelial antibody to check for nonspecific binding to the Dynabeads. The cells were mixed with acridine and visualized by light and fluorescence microscopy.
Isolation of Mononuclear Cells and EPC Colony Formation.
Peripheral mononuclear cells (PMNCs) were fractionated using Ficoll density-gradient centrifugation. Isolated PMNCs were resuspended in CFU-Hill Medium (Stem Cell Technologies, Vancouver, Canada) and plated on 6-well plates coated with human fibronectin, at a concentration of 5 × 10⁶ cells per well. After 48 hours, the nonadherent cells were collected and replated onto fibronectin-coated 24-well plates. EPC colonies were counted using an inverted microscope 7 days after plating. An EPC colony was defined as a cluster of at least 100 flat cells surrounding a cluster of rounded cells, as previously described [12]. Results are expressed as the mean number of colony-forming units (CFUs) per well.
2.9. Statistical Analysis. We summarized continuous variables using means and standard deviations and summarized categorical variables as frequencies and percentages. Continuous variables were compared with Student's t-test or Mann-Whitney/Wilcoxon paired test as appropriate. Categorical variables were compared with the chi-square test for independence. The relationship between NOV and OSA was compared in several different categories including ODI quartiles, AHI quartiles, and categories of OSA (no OSA, mild OSA, moderate OSA, and severe OSA) as defined by the AASM; these relationships were determined both between groups and overall trends. Due to nonnormal distribution of dependent variables, the cube-root transformation of NOV (and ODI/AHI when dependent variables), which approximated normality, was used in both multivariable models and between group comparisons (Student's t-test) to ensure validity of the model. All analyses were performed in Stata 15.1.
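As a rough illustration of the modelling approach described above (cube-root transformation of skewed variables, then adjustment for age, gender, and BMI), a sketch in Python/statsmodels follows. The original analyses were performed in Stata 15.1, so this is only an assumed equivalent; the data frame and variable names are hypothetical, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame; the real study data are not publicly available.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "NOV":  rng.lognormal(0.5, 0.6, 61),
    "ODI":  rng.lognormal(2.0, 0.9, 61),
    "age":  rng.normal(50, 12, 61),
    "male": rng.integers(0, 2, 61),
    "BMI":  rng.normal(32, 6, 61),
})

# Cube-root transform the skewed variables to approximate normality.
df["NOV_cr"] = np.cbrt(df["NOV"])
df["ODI_cr"] = np.cbrt(df["ODI"])

# Fully adjusted model analogue: biomarker vs ODI adjusted for age, gender, and BMI.
model = smf.ols("ODI_cr ~ NOV_cr + age + male + BMI", data=df).fit()
print(model.summary())
```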
In multivariable analysis (Table 2), after adjusting for age, gender, and BMI, NOV was independently associated with ODI (p = 0.032). NOV was not independently associated with AHI. EPCs were independently associated with both ODI (p = 0.001) and AHI (p = 0.017). Leptin was independently associated with both ODI and AHI in model 2; however, when BMI was added, leptin was no longer associated with either parameter, indicating that BMI is a strong confounding variable. IL-6 was independently associated with ODI (p < 0.0001) and AHI (p < 0.0001) in model 2; however, when BMI was added, it was no longer associated with ODI but maintained significance with AHI (p = 0.049), which supports that BMI confounded IL-6 as well.
Discussion
We have shown for the first time that measures of obstructive sleep apnea (OSA) are independently associated with the novel adipokine matricellular protein nephroblastoma overexpressed (NOV) and endothelial progenitor cells (EPC) after adjusting for baseline demographics and BMI. Specifically, NOV levels were higher in those with OSA compared to those without OSA in univariate analysis. Further, multivariable methods showed that NOV is associated with the oxygen desaturation index (ODI) after adjusting for age, gender, and BMI; this finding was not seen in association with the apnea hypopnea index, suggesting that intermittent hypoxia, as specifically measured by the ODI, is central to this relationship, and that the more general measure of sleep-disordered breathing (AHI), which may also include nonhypoxic arousal events, is not. EPCs, in contrast, are independently associated with both the ODI and AHI, suggesting that overall mechanisms of OSA including intermittent hypoxia and other pathophysiologic variables are important in this relationship. Leptin and IL-6 levels related to OSA measures appear to be modified by BMI.
Inflammation in OSA is driven by intermittent hypoxia and fragmented sleep leading to oxidative stress, manifested by increased formation of ROS and elevated adipocytokines [16,17]. Oxidative imbalance is the result of this intermittent hypoxia with increased inflammatory cytokine production of IL-6, TNF, and lipid peroxidation; literature review has not identified which biomarkers better correlate with the severity of disease [3]. CPAP has been shown to decrease this oxidative stress [3] and normalize ROS, nitric oxide, and 8-isoprostane levels [18]. In contrast, obesity's baseline chronic inflammatory state is due to insulin and leptin resistance resulting in increased inflammatory cytokines released from white adipose tissue (WAT), which has diminished thermogenic capability from mitochondrial dysfunction compared to brown or beige adipose tissue (BBAT) [19-21]. Due to OSA's strong correlation with obesity, the inflammation and associated comorbid conditions of obesity can be seen in OSA patients. Our findings suggest that obesity plays a role in chronic inflammation (as seen in leptin and IL-6 levels modified by BMI) while the additional inflammatory pressure resulting in elevated NOV and EPC levels is likely driven by intermittent hypoxia and inflammation specific to OSA pathophysiology. The development of atherosclerosis is thought to be highly influenced by this intermittent hypoxia; this involves a complex interaction of multiple factors that include oxidative stress and inflammation, autonomic nervous system dysfunction, and platelet activation [22].
NOV, a member of the CCN family of multifunctional proteins, plays a role in inflammation, cancer, and fibrosis through its involvement in adhesion and mitosis pathways [10]. It has been associated with multiple disorders either directly or indirectly linked to cardiovascular disease. NOV is an established regulator of, and is regulated by, various cyto/chemokines, including the anti-inflammatory enzyme heme oxygenase-1 (HO-1) [10, 23]. A recent study in humans showed that NOV is strongly correlated with BMI and fat mass, decreases with weight loss, and is associated with elevated hemoglobin A1c levels [24]. Disease states associated with increased inflammation, including endothelial cell dysfunction, obesity, insulin resistance, metabolic syndrome, and interstitial renal fibrosis, have all been associated with increased NOV levels [23-26]. Furthermore, epoxyeicosatrienoic acid (EET), a molecule that inhibits the inflammatory process, improves insulin sensitivity, and decreases NOV, was shown to attenuate obesity-induced cardiomyopathy by downregulating NOV and increasing heme oxygenase-1 (HO-1) and Wnt signaling in both cardiac and pericardial fat [15,27]. This resulted in decreased inflammatory cytokines IL-6 and TNF, as well as an increase in anti-inflammatory molecules and mitochondrial integrity [15,27]. HO-1 upregulation and EET upregulation both result in marked reductions of IL-6, TNF, and NOV [28]. The knockout mouse model of peroxisome proliferator-activated receptor gamma coactivator-1α (PGC-1α) reversed these findings, and blocking the nuclear coactivator of HO-1 suggested that HO-1 upregulation was the mechanism involved. These findings propose that NOV may play a role in the OSA-related risk of developing cardiac disease and may be a target for potential therapy. This was shown again by administration of an EET agonist that increased PGC-1α, which induced the HO-1 increase, improved mitochondrial function, and induced a change in the pericardial and epicardial adipocyte phenotype from white to beige; the improved insulin receptor phosphorylation improved insulin sensitivity and resulted in reversal of heart failure [27]. Thymoquinone (TQ) is another molecule, derived from the Nigella sativa plant, with major anti-inflammatory properties; in combination with omega fish oils, it improved insulin sensitivity in obesity and promoted the browning of white fat, with upregulation of mitochondrial enzymes and HO-1 levels and reduction of the inflammatory adipokine NOV, twist-related protein (TWIST2), and the adipocyte hypoxia-inducible factor HIF-1α [29]. NOV induces cytokine formation by increasing adipogenesis, with decreased numbers of mitochondria and diminished mitochondrial function, all of which is reversed by EET upregulation [27]. White adipose tissue has higher NOV levels than beige and brown adipose tissue [15]. In addition, the inflammatory state is destructive to mitochondria and contributes to multiorgan dysfunction [30-32]. The ability to generate thermogenesis via mitochondrial electron transport chain (ETC) uncoupling is highest in brown fat, followed by beige, and is lowest in white fat [19,20]. Therefore, lean individuals with a higher brown/beige adipose tissue (BBAT) to WAT ratio have better anti-inflammatory mechanisms to fight severe inflammation, owing to stable thermogenesis secondary to a higher mitochondrial concentration, which utilizes HO-1 to promote uncoupling [29,33].

Figure 3: (a) Difference in EPC levels in subjects with OSA vs. no OSA; OSA defined as AHI ≥ 5/hr or AHI ≥ 15/hr. Those with OSA had higher EPC levels when the more stringent criterion for OSA diagnosis (i.e., AHI ≥ 15) was used, supporting an association between more severe disease and higher EPC levels (n = 53, *p = 0.021). (b) Difference in CEC levels in subjects with OSA vs. no OSA; OSA defined as AHI ≥ 5/hr or AHI ≥ 15/hr. Those with OSA displayed higher levels of CEC, but the association did not reach statistical significance (n = 44).

Figure 4: (a) Difference in leptin levels by OSA vs. no OSA. Those with OSA had higher levels of leptin (n = 59, *p ≤ 0.05) prior to adjusting for age, gender, and BMI. This association was lost when BMI was adjusted for, indicating that BMI is a strong confounding variable. Results are mean ± SE. (b) Difference in IL-6 levels by OSA vs. no OSA. Those with OSA had higher IL-6 levels (n = 59, *p ≤ 0.05). This association was seen with both ODI and AHI as parameters for diagnosing OSA prior to adjusting for BMI. When BMI was added, IL-6 maintained an association with AHI only (p = 0.049), supporting BMI as a confounding variable. Results are mean ± SE.
Other potential predictive markers such as CECs and EPCs are of interest in proinflammatory disease states as well. CECs are accepted and reliable indicators of vascular damage [34]. CECs are a measure of "sloughed cells" from the vascular endothelium that are routinely replaced by bone marrow-derived EPCs as part of the injury-repair mechanism, but EPCs are rarely identified in the peripheral blood of healthy individuals, and when they are, they are few in number [35]. When the production of EPCs is no longer able to replace the increasing amounts of sloughed CECs, the area of denuded endothelium is primed for the development of vascular plaques [36]. EPCs have been implicated in plaque "vulnerability to rupture" [12,37]. As a result, their presence in peripheral blood is a measure of endothelial injury/repair [34]. Endothelial dysfunction is the initial stage of atherosclerotic disease, and more needs to be done to identify patients at increased risk and intervene [38].
Quantification of CEC and EPC has correlated with cardiovascular disease. Studies show increased levels of CECs in the acute coronary syndrome spectrum, stable angina, ischemic cerebrovascular accident (CVA), and critical limb ischemia [39]. CEC quantification 48 hours after acute coronary syndrome was shown to accurately risk stratify patients for major adverse coronary events and death at both one month and one year [39]. Furthermore, the pathogenesis of atherosclerosis and plaque rupture leading to ischemic events is a proinflammatory process resulting in higher oxidative stress in the endothelium [40]. On the other hand, decreased EPCs have been associated with increased levels of cardiovascular disease [36]. One theory posits that there is a finite supply of EPCs, and once they have been exhausted due to repetitive vascular injury, the evolution of cardiovascular disease is established [41].
Individuals with higher levels of oxidative stress, such as those with OSA and obesity, have a higher risk of unstable plaque formation over the damaged endothelium, consisting of cholesterol deposits and foam cells [40]. Presumably, those with established cardiovascular disease would demonstrate a reduction in EPCs, and this should occur in a dose-response manner with the severity of OSA. However, this was not what we found in our study, and the literature on OSA and EPCs is quite variable. Multiple investigators have found that EPCs are reduced in those with OSA [42-44]. Alternatively, other investigators found that EPCs were increased [45] or unchanged [46,47] in OSA vs. control. Potential reasons for these discrepant results are numerous, including differences in EPC measurement, differences in patient population (i.e., race and age) [48], and differences in the presence and evolution of cardiovascular disease in each individual patient. Those with changes in EPC levels are at risk for rupture of a vulnerable thin fibrous cap leading to acute thrombosis, placing individuals with OSA at higher risk for cardiovascular disease [12,37,40]. The intermittent hypoxia seen in OSA causes increased ROS formation as a result of decreased oxygenation and may subsequently result in higher EPC levels as the endothelium undergoes repair [7,40]. In our study, EPC levels were found to be independently associated with both ODI (p = 0.001) and AHI (p = 0.017) (Table 2). There was no statistically significant correlation between OSA and CEC in our data; of note, only 44 subjects had CECs measured in this study, which may have resulted in a type 2 error.
The proinflammatory state of OSA was confirmed in our subjects by increased levels of IL-6. There was an overall positive correlation between AHI and IL-6 (Table 2, model 1, p = 0.003), but the relationship was weakened when adjusted for BMI (Table 2, model 3, p = 0.049), suggesting that the inflammatory state of obesity acts as an effect modifier. Similar findings are well established in OSA patients, in whom several inflammatory markers are known to be elevated, including leptin, CRP, TNF-α, and IL-6 [49].
There are several limitations in our study. Although we feel that the findings are generalizable, the fact that we had a large percentage of black subjects makes this less so. Known racial differences in cardiovascular disease [50] may indicate that the pathogenic mechanisms underlying the vascular inflammatory cascade may also be different. Because our study subjects in general did not have established coronary or other cardiovascular diseases, we were unable to differentiate those in whom EPCs may have been decreased due to exhaustion of the bone marrow response from those with an exuberant EPC response. Because we did not collect actigraphy data, we were unable to assess the role that shortened sleep duration and therefore sleep deprivation may have played in our results. Similarly, the lack of arterial blood gas analysis to determine the presence of hypoventilation prevented us from identifying subjects with obesity-hypoventilation syndrome, which may affect the degree of deoxygenation and thus oxidative stress. Finally, the cross-sectional nature of our study prevents any temporality and therefore any causal inference.
Conclusion
Our study demonstrated that OSA is independently correlated with NOV after adjusting for BMI, age, and sex when compared to controls. NOV appeared to be driven by intermittent hypoxia rather than by general obstructive episodes, as the correlation was with ODI and not AHI. EPCs were independently associated with both ODI and AHI, while CECs did not demonstrate an association with OSA. IL-6 was elevated in OSA subjects based on AHI, but BMI appeared to be a strong modifier of this relationship. Leptin was not associated with OSA after fully adjusting for BMI. In summary, the inflammatory adipokine NOV and EPC represent potential biomarkers that may help identify OSA patients at a current or future increased risk of cardiovascular disease from oxidative stress and may be a potential target to prevent the vascular downstream consequences of this systemic inflammatory cascade.
Data Availability
Access to data is restricted due to legal and ethical concerns.
Additional Points
Study Importance Questions. What is already known about this subject? (i) OSA is associated with oxidative stress of chronic inflammation, elevation of adipocytokines, and endothelial dysfunction. (ii) OSA is associated with cardiometabolic disease. What are the new findings in your manuscript? (i) NOV levels correlate with the degree of OSA independent of BMI. (ii) EPC levels correlate with the degree of OSA independent of BMI. How might your results change the direction of research or the focus of clinical practice? NOV and EPC represent potential biomarkers which may help to identify OSA patients at a current or future increased risk of cardiovascular disease caused by oxidative stress and chronic inflammation and may be a potential target to prevent the vascular downstream consequences of this systemic inflammatory cascade.
European recommendations and quality assurance for cytogenomic analysis of haematological neoplasms: response to the comments from the Francophone Group of Hematological Cytogenetics (GFCH)
The authors would like to thank the GFCH for their commentary on the recommendations for cytogenomic testing [1]. We are pleased to note that they are in agreement for the majority of aspects of our recommendations and appreciate where the GFCH have expanded or provided clarification to our statements.
In particular we were pleased that the GFCH concur with the authors of the importance and value of chromosomebanding analysis in the diagnostic pathway of haematological neoplasms as well as the complementary nature of different testing strategies. We note that the GFCH has a few concerns and these are addressed individually below.
In replying to their comments, it is important to restate that the recommendation document is a consensus document of working practices for cytogenomic testing in a number of different European countries and describes the minimum testing required, and that national policies should be taken into consideration [2]. We acknowledge that more extensive testing is undertaken in some countries particularly relating to the number of metaphases analysed and the pathological entities tested by chromosome-banding analysis. The aim of the recommendations was to provide the practical advice to help laboratories prioritise and rationalise cytogenomic testing where health care resources are restricted and where the extent of testing is limited by testing reimbursement.
Choice of sample: We agree with this recommendation and thank them for the extra clarification provided to our statement regarding the use of peripheral blood samples for chromosome-banding analysis in this section.
For cell culture of AML the authors agree that these rearrangements can be detected in 24 h cultures. The authors state that a 48 h culture should be considered but this was not intended to be a requirement. This observation referred mainly to historic data of t(15;17) cases when FISH or RT-PCR were not widely available.
Chromosome banding analysis: We agree that the full ISCN 2016 definition of a clone includes the example cited, but this is also covered by the statement provided in our paper.
We agree that ideally 20 metaphases would be analysed regardless of the result, and indeed are aware that many laboratories do systematically analyse this number. However, a minimum of ten metaphases are considered acceptable when an abnormal clone is detected, unless there is suspicion of clonal evolution (e.g., one metaphase with additional abnormality) where a more extensive analysis should be performed.
Recommended testing: We state that chromosome-banding analysis is mandatory for ALL in Table 2 [2]. However, for information, we refer to the publication of the International Berlin-Frankfurt-Münster study group on recommendations for the detection of prognostically relevant genetic abnormalities in childhood B-cell precursor acute lymphoblastic leukaemia [3]. The vast majority of laboratories undertake chromosome analysis of childhood B-cell ALL, but our experience through EQA has also shown that some laboratories only follow this strategy when cases require risk stratification. Of note, for the new childhood ALLTogether trial, karyotyping is not mandatory in all cases provided sufficiently extensive first-line testing using FISH, PCR, and SNP-array is performed.
FISH in AML: Regarding NUP98 in AML, the authors are in agreement with this approach. We recognise that the text should have stated 17 months-18 years not 5 years.
Waldenström macroglobulinemia (WM): The authors consider that cytogenomic testing, including chromosome analysis, for recurrent abnormalities should be undertaken, as the detection of disease-specific recurrent abnormalities can be beneficial in a differential diagnosis case. Many laboratories would not undertake routine chromosome analysis for all WM cases, although we recognise that country-specific guidelines may recommend this.
High grade B-cell lymphomas: We are in concordance with the overall choice of probes to be used for this disease entity and agree that as MYC breakpoints are located across a large genomic region the choice of adequate FISH probe(s) is paramount. It is therefore important that laboratories understand the limitations of FISH probes when they select which probes to use.
Concerning chromosome analysis, many laboratories do not undertake routine chromosome analysis of high grade B-cell lymphoma where FISH is the priority testing for these cases. We agree that karyotyping is useful, however, this is not always possible. In addition, when karyotyping is performed, this is also often supplemented by FISH analysis. Many laboratories no longer undertake chromosome analysis but instead perform FISH on FFPE sections to detect the most significant abnormalities.
Reporting time: We are aware that a large number of examinations are requested, and that laboratories can find it difficult to adhere to recommended turnaround times but it is important that laboratories organise the work flow so that reporting times can be met. Concerning prioritisation of testing, we recommend that this is assigned according to clinical need as reporting results in follow-up samples can also be urgent since targeted therapy is frequently used in a relapse or emerging relapse setting.
In summary, the authors are grateful for the response by the GFCH to our recommendations as this has enabled us to clarify certain points and address a few omissions. In addition, both the recommendations and the GFCH underline the importance of cytogenetics and a combined approach to testing. Finally, the authors recognise that different countries will have different approaches to ensure that full testing is addressed and adoption of these recommendations, together with this clarification, will assist in further harmonisation of genetic testing.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Research on Financial Early Warning of Private Listed Companies Based on Z-score Model
The Chinese private economy is an important force in promoting economic and social development. President Xi Jinping pointed out in his speech at the private enterprise symposium that it is necessary to help the private economy develop and grow better. However, Chinese private enterprises are still affected by problems such as financing difficulties, and they still face rapid deterioration of their financial situation and the risk of financial crisis. Therefore, this paper studies the financial indicators of some private listed companies, so as to play an early warning role before private enterprises enter financial distress. Keywords—private enterprise; financial early warning; Z-score model
INTRODUCTION
A financial crisis can put an entire enterprise in a difficult position and even cause it to go bankrupt before it can turn to self-help. The private economy is an indispensable force in China's economic development, and the state has introduced various measures to support private enterprises in reform and development. However, the financial status of private enterprises is generally not as good as that of state-owned enterprises, especially because of financing difficulties and inadequate financial structures, which make them more likely to face a financial crisis. A financial crisis does not form suddenly; it is a process of gradual change. This paper establishes a financial early warning model for private listed companies by studying the changes of selected financial indicators. For a long time, scholars have carried out little research on the financial early warning of private enterprises. This paper constructs an effective financial early warning model to quantitatively predict and analyze private listed companies, and provides theoretical and technical support for their sustainable and healthy development. Fitzpatrick (1932) [1] conducted the first such study: he compared the financial data of 19 listed companies and found that the financial ratios of companies in financial distress are significantly different from those of normal companies, so that a company's financial ratios can reflect its financial status, and he pointed out that financial ratios have a predictive effect on the future of the company. Beaver (1966) [2], building on this, used mathematical statistics to establish a univariate financial early warning model, sampling 79 failed enterprises and 79 non-failed enterprises between 1956 and 1964 and analysing them over the five years before failure. The debt cash ratio, return on assets, asset-liability ratio, and asset security ratio were the four most effective financial indicators for predicting corporate crises, with the debt cash ratio having the best forecasting power, followed by the return on assets and the asset-liability ratio.
The representative multivariate analysis model is the Z model, a bankruptcy prediction model designed by the famous financial expert Edward Altman (1968) [3]. The method constructs, through a linear discriminant technique, a multivariate linear equation that classifies the sample companies with the smallest classification error rate. When discriminating, a company's variable values are substituted into the discriminant equation, and the resulting score is used as the standard for judging whether the company will fail. Ohlson (1980) [4] first used the Logit model for financial early warning. He selected 105 bankrupt companies and 2058 non-bankrupt companies from 1970 to 1976 to form an unpaired sample, analysing the distribution of sample companies across bankruptcy probability intervals and the relationship between the two types of discrimination errors and the cut-off points. Odom and Sharda (1990) [5] were the first to apply the artificial neural network (ANN) model to financial early warning, using a three-layer feedforward ANN with the same five financial ratios as in the Z model, and using the MDA model as the benchmark.
Jie Sun (2013) [6] selected 30 financial indicators from 135 pairs of listed companies in the Shanghai and Shenzhen stock markets and studied financial early warning methods based on single classifiers, multi-classifier combinations, and group decision-making, proposing a multi-level financial early warning system that introduces combination thinking into the field of financial crisis prediction.
II. INTRODUCTION TO THE Z-SCORE MODEL
In 1968, Edward Altman established the famous five-variable Z-score model. Its expression is Z = 1.2X1 + 1.4X2 + 3.3X3 + 0.6X4 + 0.999X5, where X1 = (current assets - current liabilities) / total assets, X2 = retained earnings / total assets, X3 = (total profit + financial expenses) / total assets, X4 = (stock market price × total number of shares) / total liabilities, and X5 = sales / total assets. Professor Altman determined the critical value Z = 2.675 through statistical analysis: if Z < 2.675, the borrower is predicted to default; if Z > 2.675, the borrower is classified as non-defaulting. At the same time, when Z < 1.81, the company is facing a financial crisis and is classified as likely to go bankrupt; when 1.81 ≤ Z ≤ 2.99, the company falls into the grey area, where bankruptcy cannot be judged with certainty (the model's reported predictive accuracy is about 95% one year before bankruptcy and about 70% two years before); when Z > 2.99, enterprises generally do not go bankrupt and are classified into the safe area.
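As a worked illustration of the model above, the sketch below computes the five ratios and the 1968 Z-score from basic balance-sheet items and applies the cut-off values quoted in the text; the sample figures are invented for demonstration only and are not from the paper.

```python
def altman_z_1968(current_assets, current_liabilities, retained_earnings,
                  ebit, market_value_equity, sales, total_assets, total_liabilities):
    """Classic five-variable Altman Z-score (1968)."""
    x1 = (current_assets - current_liabilities) / total_assets  # working capital / total assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets                                    # (total profit + financial expenses) / total assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 0.999 * x5

def classify(z):
    if z < 1.81:
        return "distress zone"
    if z <= 2.99:
        return "grey zone"
    return "safe zone"

# Invented illustrative figures (millions of CNY):
z = altman_z_1968(current_assets=820, current_liabilities=610, retained_earnings=240,
                  ebit=95, market_value_equity=1500, sales=1900,
                  total_assets=2100, total_liabilities=1100)
print(f"Z = {z:.2f} -> {classify(z)}")
```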
In 2000, Professor Altman revised the Z model, eliminating the total asset turnover rate, an indicator that is very sensitive to industry, and obtained the Z3 model, thus minimizing the potential industry effect on the Z model. Its expression is Z3 = 6.56X1 + 3.26X2 + 6.72X3 + 1.05X4, where X1 = working capital / total assets, X2 = retained earnings / total assets, X3 = earnings before interest and taxes / total assets, and X4 = shareholders' equity / total liabilities. When Z3 < 1.23, the enterprise may go bankrupt at any time and faces a serious financial crisis; when Z3 > 2.9, the possibility of bankruptcy is small and the enterprise is in the safe area; when 1.23 ≤ Z3 ≤ 2.9, the enterprise is in the undetermined grey area, bankruptcy cannot be judged, and further observation and research are needed.
The Z-score model is simple to operate, is not affected by collinearity among the independent variables, and has high accuracy. Therefore, multiple variables are used to construct the Z-score model to help enterprises analyse their financial situation quickly and easily.
It can be seen from the above literature review that, although there are many studies on financial early warning models, such research has rarely been made concrete, and studies that combine financial early warning models with a specific industry are rare. The existing research results at home and abroad are therefore not sufficient to be applied directly to private enterprises. This paper conducts an empirical analysis of the financial early warning of private listed companies, so that private enterprises can have a reference when establishing a financial early warning system.
A. Sample Selection
The article is a financial early warning study of private listed companies. From 2015 to 2017, a total of 36 private companies listed on the Shanghai Stock Exchange received special treatment (ST) for the first time; after eliminating 2 companies with B shares or incomplete financial data, the remaining 34 ST companies were studied as samples. In order to carry out a better comparative analysis, 34 non-ST private listed companies with similar asset sizes were selected as a non-ST group, and the 2015-2017 annual report data of the two groups were used for the research. The data referenced in the article came from the CSMAR database.
B. Modeling
According to the frequency with which indicators were selected in 43 previous studies, the chosen financial early warning indicators are shown in Table I. Since the samples are divided into an ST group and a non-ST group, it is necessary to perform non-parametric tests on two independent samples to determine whether there is a significant difference between them; the Mann-Whitney U method was therefore selected. The Mann-Whitney U test screened out 14 indicators with significant differences between the groups: cash ratio, interest coverage multiple, accounts receivable turnover rate, capital accumulation rate, growth rate of net cash flow per share from operating activities, total asset growth rate, owner's equity growth rate, return on assets, net profit margin of total assets, return on net assets, return on invested capital, cost and expense profit margin, total cash recovery rate, and cash reinvestment ratio.
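This screening step can be reproduced in outline with SciPy's Mann-Whitney U test, as sketched below; the indicator names shown are a small placeholder subset and the random samples are stand-ins, not the CSMAR data used in the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
indicators = ["cash_ratio", "return_on_assets", "total_cash_recovery_rate"]  # placeholder subset

# Placeholder samples standing in for the 34 ST and 34 non-ST companies.
st_group     = {name: rng.normal(loc=-0.05, scale=0.10, size=34) for name in indicators}
non_st_group = {name: rng.normal(loc=0.05,  scale=0.10, size=34) for name in indicators}

significant = []
for name in indicators:
    stat, p = mannwhitneyu(st_group[name], non_st_group[name], alternative="two-sided")
    if p < 0.05:  # keep indicators that differ significantly between the two groups
        significant.append(name)
    print(f"{name}: U = {stat:.1f}, p = {p:.4f}")

print("Indicators retained for discriminant analysis:", significant)
```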
The 14 variables were then subjected to stepwise discriminant analysis using Wilks' Lambda. The results are shown in Table II. The value of Wilks' Lambda is 0.125, close to 0, indicating that the difference between the groups is large and the discriminant analysis is meaningful; the significance is 0.001, far below 0.05, so the data are suitable for discriminant analysis.
Through further discriminant analysis, only the total cash recovery rate, return on assets, and net profit margin of total assets have significance levels below the established level of 0.05. It can be concluded from Table III that the expression of the Z-score model for the financial early warning of private listed companies is Z = 0.262*X22 + 3.455*X17 - 2.674*X16, where X22 is the total cash recovery rate, X17 is the net profit margin of total assets, and X16 is the return on assets.
Substituting the model into ST and non-ST companies, the Z values of the two groups can be obtained. The average value of the ST group is -0.080, the mean square error is 0.077, the average value of the non-ST group is 0.012, and the mean square error is 0.078. Using the hypothesis test method in mathematical statistics, assuming that the Z value obeys the t-distribution of the significance level α=0.05, then the lower limit of Z value for the ST group company is α=0.
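A minimal sketch of applying the fitted discriminant function is shown below. The cut-off derivation in the paper is only partially legible, so the threshold here is simply illustrated as the midpoint between the reported group means, which is an assumption rather than the paper's exact t-distribution rule; the input ratios are invented.

```python
def early_warning_z(total_cash_recovery_rate, net_profit_margin_total_assets, return_on_assets):
    """Fitted discriminant score: Z = 0.262*X22 + 3.455*X17 - 2.674*X16."""
    return (0.262 * total_cash_recovery_rate
            + 3.455 * net_profit_margin_total_assets
            - 2.674 * return_on_assets)

# Group statistics reported in the paper.
st_mean, st_sd = -0.080, 0.077
non_st_mean, non_st_sd = 0.012, 0.078

# Illustrative cut-off halfway between the two group means (an assumption, not the paper's rule).
cutoff = (st_mean + non_st_mean) / 2

z = early_warning_z(total_cash_recovery_rate=0.15,
                    net_profit_margin_total_assets=0.02,
                    return_on_assets=0.03)  # invented ratios
print(f"Z = {z:.3f} -> {'financial distress warning' if z < cutoff else 'healthy'}")
```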
C. Logistic Regression Test
According to the above division of financial risk status, the sample data are brought into the Z-score model for a self-test; the test results are shown in Table V.
IV. CONCLUSION AND SUGGESTION
Through non-parametric tests and discriminant analysis, this paper establishes a financial early warning model suitable for private listed companies. Based on the analysis of private enterprises, more targeted measures can be proposed to improve their financial status.
It can be seen from the Z-score model that private enterprises can improve their Z value; the measures that can be taken are as follows:
A. Increasing Net Cash Flow from Operating Activities
Private enterprises should improve their operational efficiency, strengthen the management of accounts receivable, reduce the total amount of accounts receivable, shorten the collection period of accounts receivable, and recover accounts receivable in a timely manner. This also reduces unnecessary expenses incurred in the collection of receivables, thereby reducing financial costs.
B. Increasing Corporate Net Profit
Private enterprises should improve product sales and reduce product costs, but they cannot blindly expand, since over-expansion can easily lead to a financial crisis when the funding chain breaks. Enterprises should also strengthen tax planning and reduce their tax burden in a reasonable way, which is conducive to the sustainable development of private enterprises.
C. Reducing Corporate Finance Costs
The financial expenses of private enterprises mainly arise from financing, and the problem of financing difficulties has long been a major obstacle hindering the development of private enterprises. Private enterprises can expand financing channels through commercial paper financing and borrowing from non-bank financial institutions. They can also make full use of national preferential policies to reduce financing costs.
Compressed air energy storage facility with water tank for thermal recovery
The paper presents the prototype of the first Romanian Compressed Air Energy Storage (CAES) installation. The relatively small-scale facility consists of a twin-screw compressor, driven by a 110 kW three-phase asynchronous motor, which supplies pressurized air into a 50 m³ reservoir with a maximum pressure of 20 bar. The air from the vessel is released into a twin-screw expander, whose shaft spins a 132 kW electric generator. The demonstrative model makes use of a 5 m³ water tank acting as a heat transfer unit, for minimising losses and increasing the efficiency and the electric power generated. Air compression and decompression induce energy losses, resulting in a low efficiency, mainly caused by air heating during compression, the waste heat being released into the atmosphere. A similar problem is air cooling during decompression, which lowers the electric power generated. Thus, using a thermal storage unit plays an essential role in the proper functioning of the facility and in generating maximum electric power. Supervisory control and data acquisition is performed from the automation cabinets. During commissioning tests, a constant stable power of around 50 kW with an 80 kW peak was recorded.
Introduction
Electrical energy storage [1,2] is a process in which electrical energy from a power network is converted into a form that can be stored, and then converted back to electrical energy when needed. Such a process stores electricity produced at times of low demand, when the cost is low, or serves as backup for intermittent energy sources such as wind or solar. The energy is stored until a period of high demand arises or no other energy source is available.
Compressed air energy storage (CAES) is a technique for supplying electric power to the grid to meet peak load requirements, conventional compressed air energy storage being a practicable technology for electric load levelling. A CAES power generation facility uses electric motor-driven compressors to inject air into a reservoir, later releasing the compressed air to turn turbines and generate electricity for the grid. Even though the CAES concept dates back more than four decades [3], environmental concerns and technological advances are encouraging a worldwide increase in research and development interest [4-7].
Compressing and decompressing air introduce energy losses, resulting in a reduced electric-to-electric efficiency. The low efficiency is mainly caused by air heating during compression. This waste heat, which holds a large share of the energy input, is released into the atmosphere. A similar problem is air cooling during decompression, lowering the electricity generated with the risk of water vapours freezing in the air.
The present paper proposes the relatively small-scale ROCAES system, with compressed air storage in high pressure vessels and with thermal energy recovery by means of a water tank. In the compression and expansion processes occurring in a CAES installation, the heat loss in the compression process and the useful energy loss in the expansion process are high and depend directly on the compressor and expander systems used. The screw compressor and screw expander used are machines that work in an isothermal cycle, with oil injection for inner rotor lubrication and air temperature control during operation. Depending on the process heat recovery solution chosen, the efficiency of an energy storage station can vary between 41% and 75%, and a share of more than 65% of the heat resulting from compression can be recovered in the form of transferred energy [8].
2 Power plant design with thermal storage

The CAES system presented consists of a 100 kW twin-screw compressor that supplies compressed air into a reservoir, and a 132 kW twin-screw expander. The compressor is driven by a 110 kW asynchronous three-phase motor. The expander is driven by the compressed air released from the 50 m³ reservoir, spinning a 132 kW asynchronous three-phase generator. The system's operation has been explained in detail in our previous paper [9]. The block diagram of the ROCAES station is presented in Figure 1. Two air storage tanks have been envisaged, the installation currently being commissioned with only one pressurized air tank.
At hourly intervals, when the demand for electricity increases and hence the energy price is higher, the screw expander driving the electric generator starts its operation. The air compression is accompanied by an important energy loss due to the heating of the compressed air during the compression process. The heat of the cooling oil is partially recovered by the 5 m³ water tank, which stores the thermal energy of the oil tray and transfers it to the oil heating the expander, since air expansion is accompanied by a severe temperature drop. The compressed air is stored in a thermally insulated environment, in order to recover a share of the process heat. In the current configuration, with a 50 m³ air reservoir, it is estimated that the compressor will be able to operate around 4 hours per day and the expander about 30 minutes, in case of peak electricity demand. During operation, heated oil must be provided at the expander's air inlet, for better expansion work. The water tank is insulated as well, for reducing heat losses. At a temperature loss gradient of ~1 °C/h, it is estimated that the temperature in the tank after 20 hours will be 20 °C lower, considering that the thermal insulation consists of a 50 mm thick layer of basaltic mineral wool. Figure 2 presents the ROCAES pieces of equipment installed at the commissioning location: the screw compressor, screw expander and the control room are placed inside the production hall, while the compressed air vessel is situated outside. The overall assembly of the ROCAES facility installed in the industrial hall of SC Popeci Heavy Equipment SA (Craiova, Romania) is presented in Figure 2.

3 Thermal storage and recovery unit

3.1 Water tank design

Stored thermal energy [11-13] is typically divided into three categories: sensible heat, latent heat, and reaction heat. Sensible heat storage can be recognized in district heating systems or in water heater tanks. Every unit of heat added makes a sensible heat storage medium (water in our case) increase in temperature by a certain amount [11]. The installation makes use of a 5 m³ (5000 litres) water tank (Figure 3, left) for oil heat recovery during the cooling and expansion process. Figure 3, right, presents the 50 m³ compressed air reservoir. The cold and hot water in the thermal storage tank are considered to be fully mixed because its volume is relatively low, hence the temperature in the tank is considered to be uniform. This assumption simplifies the analysis because the storage and the load temperatures are assumed to be equal [14]. The purpose of the heat recovery module is to store, in a volume of water (the storage medium), the heat resulting from the oil heated during compression and, at the required time, namely during the functioning of the expander, to transfer this heat to the oil injected at its inlet. The three-way motor-operated valve opens the circuits comprising the heat exchangers. The heat exchange takes place between the hot oil from the compressor and the storage medium represented by the water in the heat recovery module. The water pump mounted on the circuit forces water to flow through the heat exchangers. When the expander is started up, the three-way motor-operated valve opens the water circuit that passes through the heat exchanger on the expander circuit. A heat exchange occurs between the storage medium and the oil injected into the expander, a significant part of the process heat being recovered.
When the temperature in the separator vessel rises above 60 °C, the PLC (Programmable Logic Controller), through its implemented and programmed software, automatically gives the command to open the valve and recirculate the oil on the bypass circuit (see the block diagram in Figure 1).
Mathematical model
The computations were carried out using the following input data: initial water temperature in the vessel 12 °C (285.15 K), maximum heated water temperature 60 °C (333.15 K), hence a maximum temperature difference of Δtw,max = 48 °C = 48 K.
The theoretical evolution of the pressure in the air tank, during compressor operating time, tR , was estimated to increase from 6 bar to 17.6 bar in 3837 s. The water quantity rate circulated in tR=1.07 hours is mvr = 1.303 kg/s. The amount of heat rate introduced into the water vessel by the hot oil of the compressor is calculated as: where: mvr [kg/s] -water quantity rate circulated in the tank volume (mvr = Vw,tank/tR where tR is the compressor operation time, and Vw,tank = 5 m 3 is the volume of the water tank); cw = 4184.4 J/(kgK) -specific water heat.
The functioning time for the pressurized air reservoir is important for the heat stored in the water heat recovery tank. Relying on the pressure in the air tank, the functioning of the expander will be analysed over five time intervals extracted from the data acquired during the commissioning tests. The energy of the heated water, for the recirculation time tR = 1.07 hours, is thus calculated as:

Ew = Q̇w · tR    (2)

The expander air flow is calculated with the formula below, as the flow was measured with an orifice plate, according to the ISO 5167-2 standard.
Ṁa = (C / √(1 − β^4)) · ε · (π/4) · d^2 · √(2 · ρa · ΔP)    (3)

where: Ṁa [kg/s] - air mass flow; C - discharge coefficient of the orifice plate; β - ratio between the orifice and pipe diameters; ε - expansibility factor; d - orifice diameter; ρa - air density upstream of the diaphragm; ΔP - differential pressure on the diaphragm. The resulting measurement accuracy is about ±1%. From the thermal equation of state of air, we find the air density:

ρa = P · M / (R · T)    (4)

where: R - the universal gas constant = 8314.32 J/(kmol·K); T - gas temperature in Kelvin (T = 273 + t (°C)); M - molar mass (for air ~29 kg/kmol); P - absolute air pressure upstream of the diaphragm. We made the assumption that air behaves as an ideal gas. The computations were performed for five functioning periods, relying on the experimental data extracted in Table 1. The heat rate introduced into the expander oil system is:

Q̇oil = Ṁoil · coil · Δtoil    (5)

where: Ṁoil [kg/s] - oil mass flow circulated in the expander; coil = 2054 J/(kgK) - oil specific heat; Δtoil [K] - temperature range for oil cooling in the expander. The oil mass flow was calculated using the heat extracted by the air during expander operation, relying on the temperature difference between the theoretical cycle without heating through oil injection and the actual temperature reached during expander operation with oil injection. The theoretical expansion temperature without air heating is given by the equation:

Tev,teor = Tiv / εt^((k−1)/k)    (6)

where: εt - expansion ratio; k - isentropic coefficient; Tiv - air temperature at the expander inlet. The heat rate difference is given by the subsequent equation:

ΔQ̇ = Ṁa · (k / (k − 1)) · (R / M) · (Tev,exp − Tev,teor)    (7)

where: the Tev,exp value is acquired by the temperature transducer; Tev,teor is given by equation (6); Ṁa is given by equation (3); R - the universal gas constant; M - molar mass of air. The oil mass flow is given by the equation:

Ṁoil = ΔQ̇ / (coil · Δtoil)    (8)
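A small sketch of equations (4) and (6)-(8) is given below: it computes the upstream air density from the ideal-gas relation and back-calculates the oil mass flow from the heat balance between the measured and theoretical expansion temperatures. The numerical inputs are placeholders chosen only to show the calculation path, not values from Table 1.

```python
# Sketch of equations (4) and (6)-(8); input values are illustrative placeholders.
R_UNIV = 8314.32    # J/(kmol*K), universal gas constant
M_AIR = 29.0        # kg/kmol, molar mass of air
K_AIR = 1.4         # isentropic coefficient
C_OIL = 2054.0      # J/(kg*K), oil specific heat

def air_density(p_abs_pa, t_celsius):
    """Equation (4): ideal-gas density upstream of the orifice plate."""
    return p_abs_pa * M_AIR / (R_UNIV * (273.0 + t_celsius))

def oil_mass_flow(m_air, t_inlet_c, expansion_ratio, t_exp_measured_c, dt_oil):
    """Equations (6)-(8): oil flow needed to explain the measured exhaust temperature."""
    t_iv = 273.0 + t_inlet_c
    t_teor = t_iv / expansion_ratio ** ((K_AIR - 1.0) / K_AIR)       # eq. (6)
    cp_air = K_AIR / (K_AIR - 1.0) * R_UNIV / M_AIR                  # J/(kg*K)
    dq = m_air * cp_air * ((273.0 + t_exp_measured_c) - t_teor)      # eq. (7)
    return dq / (C_OIL * dt_oil)                                     # eq. (8)

rho = air_density(6e5, 20.0)   # 6 bar absolute, 20 degC (placeholder conditions)
m_oil = oil_mass_flow(m_air=0.5, t_inlet_c=20.0, expansion_ratio=4.0,
                      t_exp_measured_c=-5.0, dt_oil=30.0)
print(f"rho_a = {rho:.2f} kg/m^3, M_oil = {m_oil:.3f} kg/s")
```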
Automation system and commissioning tests
The purpose of the automated SCADA (Supervisory Control and Data Acquisition) system, represented by the independent sets of drives and automation cabinets for compressor and expander automation respectively, is to control and monitor the operating parameters of the thermal, mechanical and electrical processes, and to regulate and keep them within certain operating limits, in order to ensure maximum safety conditions. Transducers are mounted on the compressor and expander skids for monitoring the important parameters of the installation. The information from the transducers is transmitted to the two separate PLCs located in the automation cabinets, for the independent operation of the expander and compressor units.
After filling the compressed air reservoir, with the compressor operating for about 4 hours and reaching a pressure of about 16 bar, the compressor was shut down. Then the expander air inlet valve was opened and the asynchronous three-phase machine was started in motor regime, with a soft-starter, until reaching no-load speed (~1470 rpm). Compressed air from the storage tank was released into the expander by opening the reservoir valve towards the expander. The pressurized air turns the expander, whose shaft spins the shaft of the electric machine above the synchronous speed of ~1470 rpm; the machine then automatically enters generator regime and injects electric power into the grid.
The minimum pressure condition set at the expander inlet is 2 bar. If this condition is not met, the expander goes through an automatic shutdown sequence. The condition was set considering that the minimum pressure in the air reservoir drops down to 4-5 bar, and accounting for the pressure losses until the air reaches the expander inlet. Figure 4 presents the control panels interfacing the two PLCs during the compressor and expander operation tests.
Fig. 4. Compressor (left) and expander (right) control panels showing real-time parameters
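The shutdown and bypass conditions described above (oil recirculation above 60 °C in the separator vessel, automatic expander shutdown below 2 bar at the inlet) can be summarised as a simple supervisory rule, sketched below in Python. This is only an illustration of the logic; the actual behaviour is implemented in the two PLCs and their SCADA software.

```python
# Illustrative supervisory logic only; the real control runs on the PLCs.
OIL_TEMP_BYPASS_C = 60.0      # degC, separator vessel temperature for oil bypass
MIN_INLET_PRESSURE_BAR = 2.0  # bar, minimum pressure at the expander inlet

def control_step(oil_temp_c, expander_inlet_bar, expander_running):
    """Return the commands a supervisory loop would issue for one scan cycle."""
    return {
        "open_bypass_valve": oil_temp_c > OIL_TEMP_BYPASS_C,
        "expander_shutdown": expander_running and expander_inlet_bar < MIN_INLET_PRESSURE_BAR,
    }

print(control_step(oil_temp_c=63.0, expander_inlet_bar=1.8, expander_running=True))
# -> {'open_bypass_valve': True, 'expander_shutdown': True}
```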
In the subsequent figure, the power generated at the first experimental start-up is presented, for commissioning period T1. The positive power shows that the asynchronous electric machine is launched as a motor, absorbing power from the grid. Then, as the expander inlet valve gradually opens to admit air from the reservoir, the electric machine is spun above its synchronous speed and acts as a generator. A normal instantaneous spike of almost 40 kW of generated power occurs when reaching synchronous speed, after which the output stabilizes in generator regime, injecting a maximum of almost 80 kW of electric power. The two power curves in Figure 5 are very close, but the power injected into the grid is slightly lower because the electric consumers in the electric cabinets also need to be supplied from the power produced. The other periods T2…T5 have similar power graphs.
Experimental and theoretical results
Both the maximum power and the stabilization power (Figure 5 above) were significantly higher than those obtained during bench tests without the thermal recovery tank, even with the oil preheated by a heating resistance [9, 10]. The tables below give the parameter ranges (minimum and maximum values) measured during the commissioning tests. In Table 2 we selected the minimum and maximum values from a total of 24 values used for iterations in the PTC Mathcad software, in which the computations were performed. The power of the expander is given by the following relation:

Pteor = Ṁa · (k / (k − 1)) · (R / M) · Tiv · (1 − πt^((1−k)/k))

where πt is the expansion coefficient, i.e. the ratio between the air pressure at the expander inlet and the exhaust pressure:

πt = piv / pev

We introduced a correction factor, ηcorrection, for the oil injection flow. A small deviation in the air flow rate Ṁa induces a significant error in the heat difference ΔQ̇ exchanged with the oil, calculated in equation (7). The air mass flow Ṁa depends on the Reynolds number of the air flow measured with the orifice plate. An adjustment of the Reynolds number gives the ηcorrection factor used to calculate the overall efficiency more accurately. Table 3 gives the maximum values of the oil flow rate Ṁoil (l/min), at a density ρoil of 905 kg/m3 for the ISO VG 68 oil type used, the injected oil heat rate Qoil_injection and the theoretically calculated power of the expander Pteor, as well as the expansion coefficient πt. It is observed from Table 3 that at the maximum πt and the maximum injected oil heat rate Qoil_injection we obtain the maximum theoretical power. The medium heat rate necessary for expander heating during the T1…T5 periods is estimated as Qoil_injection,medium = 27.84 kW. The efficiency was determined iteratively, for a positive and linear increase of ΔQ̇ with the oil mass flow Ṁoil, ηmedium being the mean ratio between the theoretical power Pteor and the experimental power Pexp over the 24 iterations. Table 4 gives the efficiencies calculated for all experimental operation ranges.

The expander oil was heated from −10 °C to 56 °C with the water stored in the storage water tank. The oil used for recirculation was drawn from the separator vessel of the expander, where the temperature was ~12 °C. If we consider that the energy of the oil from −10 °C to 12 °C was lost in the process, we obtain the values in Table 5. The medium heat rate lost in the process was Qloss ≅ 8.6 kW. The water pump circulates the water through the expander heat exchanger with the same flow as through the compressor heat exchanger, that is mvr = 1.303 kg/s. The heat rate that can be delivered for a 1 °C decrease of the water temperature, at the rate mvr, is:

Q1°C = mvr · cw · 1 K = 1.303 · 4184.4 · 1 ≅ 5.45 kW

In order to achieve the medium heat rate Qoil_injection,medium of 27.84 kW needed for the expander, the water circulation rate should be:

ṁw = Qoil_injection,medium / (cw · 1 K) = 27 840 / 4184.4 ≅ 6.65 kg/s

Extracting this amount of energy from the 5000 litres of water, with a water mass flow of 6.65 kg/s (399.2 l/min), requires a total functioning time tR,total of ~12.5 minutes for each 1 °C of water temperature decrease. This time should be sufficient for the expander operating at the recorded parameters, over the total functioning time in generator regime recorded during the T1…T5 tests.
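The closing water-side balance can be checked with a few lines of Python: from the medium oil-heating demand of 27.84 kW and the tank inventory, one recovers the 6.65 kg/s circulation rate and the ~12.5 minutes available per 1 °C of tank cooling quoted above. Only values stated in the text are used.

```python
# Cross-check of the water-side figures quoted in the text.
C_W = 4184.4            # J/(kg*K), specific heat of water
M_TANK = 5000.0         # kg of water in the 5 m^3 tank
M_VR = 1.303            # kg/s, pump circulation rate
Q_OIL_MEDIUM = 27.84e3  # W, medium heat rate needed by the expander oil

q_per_degC = M_VR * C_W * 1.0              # W available per 1 K drop at the pump rate
m_w_required = Q_OIL_MEDIUM / (C_W * 1.0)  # kg/s needed to deliver 27.84 kW per 1 K
t_total = M_TANK / m_w_required            # s of operation per 1 K of tank cooling

print(f"{q_per_degC/1000:.2f} kW per 1 degC at the pump rate")   # ~5.45 kW
print(f"required water rate: {m_w_required:.2f} kg/s")           # ~6.65 kg/s
print(f"time per 1 degC drop: {t_total/60:.1f} min")             # ~12.5 min
```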
Conclusions
The ROCAES power facility was installed and commissioned in the industrial hall of the Popeci Heavy Equipment plant in Craiova, Romania. It comprises a screw electro-compressor, driven by a 110 kW motor, and a screw expander driving a 132 kW electric generator. The compressed air is stored in a 50 m3 reservoir with a maximum pressure of 20 bar. The demonstrative model makes use of a 5 m3 water tank for minimising thermal energy losses, increasing the efficiency and the generated power. Air compression and decompression are accompanied by energy losses: the air heats up during compression, releasing waste heat into the atmosphere, and cools down during decompression, lowering the electric power generated. This results in a low efficiency, hence the thermal storage tank plays an important part in the operation of the installation and in generating higher electric power.
A stabilized power of around 50 kW, with a maximum of 80 kW, was obtained. Both the peak power and the stabilization power recorded were significantly higher than those obtained during in-house bench tests without the water tank, even when oil preheating was ensured.
For a reliable CAES energetic process, the target performance is a minimum of 30 minutes of continuous operation at around 80 kW, considering an 80% generator efficiency. A water temperature of 60 °C requires higher oil and water mass flows through the heat exchangers, thus increasing the electrical energy consumed. For more power, a higher-capacity air storage reservoir (or several reservoirs), able to reach a higher pressure, is required. Introducing an additional, higher-pressure compressor stage, as in [7], would also increase the overall performance.
Further research will consider increasing the stored capacity of compressed air by using two or more reservoirs. An efficient solution for raising the temperature of the stored water up to a maximum of 90 °C will also be sought, and in case there is not enough stored energy for the expansion process, a parallel oil heating device will be provided.
|
2020-07-30T02:05:12.958Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "f2c0d094344f2a80c0408f8df537218c8a0b8638",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/40/e3sconf_te-re-rd2020_02002.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ad296ca316937a464c6f72440fc67d1757835298",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
103253447
|
pes2o/s2orc
|
v3-fos-license
|
Blocked isocyanate silane modified Al2O3/polyamide 6 thermally conductive and electrically insulating composites with outstanding mechanical properties
The surface of alumina (Al2O3) particles was modified by a blocked isocyanate silane coupling agent, which was synthesized by using methylethyl ketoxime (MEKO) to block isocyanate-propyltriethoxy silane via nucleophilic reactions. For comparison, the surface of Al2O3 particles was also modified by propyltrimethoxy silane (KH-331). Thermally conductive and electrically insulating Al2O3/polyamide 6 composites were prepared by dispersing the modified Al2O3 in polyamide 6 (PA6) using a twin-screw extruder. The X-ray photoelectron spectroscopy (XPS) results revealed that blocked isocyanate-propyltriethoxy silane and KH-331 were successfully attached to the surface of the Al2O3 particles by chemical bonds. The blocked isocyanate silane modified Al2O3 was well dispersed in PA6, as revealed by scanning electron microscopy (SEM). When the Al2O3 content was 70 wt%, the tensile strength (85.2 MPa) and flexural strength (136.4 MPa) of the blocked isocyanate silane modified Al2O3/PA6 composites were higher than those of the KH-331 modified Al2O3/PA6 composites (52.5 MPa, 77.8 MPa), respectively. The thermal conductivity and impact strength of the blocked isocyanate silane modified Al2O3/PA6 composites were also higher than those of the KH-331 modified Al2O3/PA6 composites at the same Al2O3 content. The volume electrical resistivity of the modified Al2O3/PA6 composites decreased with increasing Al2O3 content, but they still remained good electrically insulating materials.
Introduction
The demand for materials that are thermally conductive but electrically insulating is increasing rapidly in electronic and electric products. Conventional polymers have the advantages of electrical insulation and easy processing, but have the drawback of being thermally insulating.1 Various highly thermally conductive and electrically insulating fillers, such as Al2O3,2 BN,3 AlN,4 Si3N4 (ref. 5) and SiC,6 have been widely used to improve the thermal conductivity of polymer matrices.
To obtain polymer/filler composites with high thermal conductivity and electrical resistivity, especially with a thermal conductivity higher than 1.0 W (m−1 K−1), a large amount of fillers must be added to the polymer matrix.7 However, composites with such a high loading of fillers usually possess poor mechanical properties, because the surface of most thermally conductive fillers carries many polar hydroxyl groups and has a high surface energy, so the fillers tend to aggregate and to bond weakly with the polymer matrix. Surface modification of the fillers can improve their dispersion in polymer matrices and enhance the interfacial interactions between the fillers and the polymer. Konnola et al.8 grafted carboxyl-terminated poly(acrylonitrile-co-butadiene) onto graphite oxide, and the modified graphite oxide dispersed excellently in an epoxy matrix, resulting in effective sheet/matrix interfacial bonding. Sandomierski et al.9 modified silica-based fillers with in situ generated 4-hydroxymethylbenzenediazonium salt; the surface-functionalized fillers could react with phenolic resins, and the flexural strength of the diazonium-modified silica/phenolic resin composites was found to be up to 35% higher than that of composites prepared without any diazonium salts. Gardebjer et al.10 chemically modified the surface of cellulose nanocrystals (CNC) with polylactic acid, and found that the grafted PLA chains on average comprised two lactic acid units attached to 48% of all available hydroxyl groups on the surface of the CNC. Compared to unmodified CNC, the modified CNC showed less aggregation in organic solvents and hydrophobic polymer materials. Increased interaction was observed between the polymer and the fillers after surface modification.
PA6, which has good mechanical properties, heat resistance, electrical insulation and easy processability, is suitable for use as a matrix for thermally conductive composites. Jia et al.11 found that graphite and Sn have a significant synergistic effect on the thermal conductivity improvement of PA6. Li et al.12 prepared PA6 composites by in situ thermal polycondensation of ε-caprolactam in the presence of a 3D graphene foam; the thermal conductivity was improved by 300%, to 0.847 W (m−1 K−1), at 2.0 wt% graphene loading, compared with 0.210 W (m−1 K−1) for the pure PA6 matrix. Loading graphite or graphene can improve the thermal conductivity but drastically decreases the electrical insulation of PA6 composites. Sato et al.13 prepared 3-aminopropyltriethoxysilane-treated alumina and water-dispersible polycarbodiimide-grafted alumina, and found that an organic surface layer with an ordered structure decreased phonon scattering between the alumina fillers and the PA6 matrix. The mechanical properties of the composites were not mentioned. Li et al.14 used carbon and glass fibers to reinforce the mechanical properties of Mg(OH)2, Al2O3 and flake graphite-filled PA6 composites; the mechanical properties and thermal conductivity of the carbon-fiber-reinforced composites were better than those of the glass-fiber-reinforced composites, while the carbon fibers decreased the electrical insulation of the composites. Hu et al.15 melt-blended 3-aminopropyltriethoxysilane-modified tetrapod-shaped zinc oxide whisker (T-ZnOw) with nylon 11, and found that the tensile strength increased with the initial loading of T-ZnOw but decreased when the loading was higher than 15 phr.
The purpose of this work is to prepare PA6-based composites that have not only good heat dissipation and electrical insulation but also excellent mechanical properties. First, the thermally conductive but electrically insulating filler, Al2O3 particles, was surface modified by a blocked isocyanate silane. Then, PA6-based composites were obtained by melt-blending the surface-modified Al2O3 with PA6. During the melt-blending process, the blocked isocyanate groups on the Al2O3 surface deblocked to regenerate isocyanate groups, which could react in situ with the terminal amino and/or carboxyl groups of the PA6 chains. The molecular weight of PA6 was increased and the interfacial compatibility of the Al2O3 particles and the PA6 matrix was enhanced. As a result, the mechanical properties and thermal conductivity of the composites were improved simultaneously.
Materials
Commercially available PA6 (YH800) was supplied by Hunan YueHua Chemical Co., Ltd. (China). Al2O3 powders with average particle sizes of 4-6 μm were obtained from Fsvictor Chemical Material Co., Ltd. (China). Isocyanate-propyltriethoxy silane, propyltrimethoxy silane (KH-331), methylethyl ketoxime (MEKO), toluene and ethanol were purchased from Aladdin Industrial Corporation (China). All reagents were laboratory grade chemicals and were used as received without any further purification, except that toluene had to be dried before use. In a 250 mL round-bottom flask, 160 mL toluene, 1 g sodium and 0.1 g benzophenone were introduced. The mixture was refluxed at 120 °C until the solution color changed to dark blue. Then the solution was heated to 130 °C and the toluene in the flask was distilled, sealed and stored for use.
Synthesis of MEKO blocked isocyanate-propyltriethoxy silane
Isocyanate-propyltriethoxy silane was dissolved in anhydrous toluene in a 4-neck glass reactor equipped with a condenser, a constant-pressure dropping funnel, a N2 inlet and a stirrer. The calculated amount of MEKO diluted with anhydrous toluene was added into the reactor dropwise under stirring, with an OH/NCO molar ratio of 1.2 : 1, over 0.5 h at room temperature. The reaction mixture was kept for an additional 0.5 h at ambient temperature, subsequently heated to 80 °C and stirred continuously for 4 h in an oil bath under a nitrogen atmosphere. After completion of the reaction, the mixture was allowed to cool down to room temperature. The final liquid product was obtained after toluene and excess MEKO were removed by rotary evaporation at 70 °C under vacuum.
Surface modification of Al2O3 particles
The blocked isocyanate silane (0.5 wt% based on the weight of the Al2O3 particles), water (water/blocked isocyanate silane molar ratio of 3 : 1) and ethanol (the same weight as the blocked isocyanate silane) were added to a glass vessel; acetic acid (hydrolysis catalyst) was added to adjust the pH to 3.5-5.5, and the mixture was maintained at 30 °C for 0.5 h in order to hydrolyze the blocked isocyanate silane. Meanwhile, the Al2O3 powders were added to an SHR-10C high-speed mixer (Jiangsu Zhangjiagang Beier Machinery Co., Ltd., China). When the temperature of the Al2O3 powders reached 100 °C, the hydrolyzed blocked isocyanate silane solution was injected homogeneously into the high-speed mixer and stirred for 15 min. The modified Al2O3 powders were dried in a vacuum oven at 80 °C for 8 h to eliminate the solvent. A similar process was employed to prepare the KH-331 modified Al2O3 powders.
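For readers who want to scale the recipe, the sketch below computes the reagent quantities per kilogram of Al2O3 from the ratios given above (0.5 wt% silane, ethanol equal in weight to the silane, water at a 3 : 1 molar ratio to the silane). The molar mass used for the MEKO-blocked silane (≈334 g/mol, the adduct of 3-isocyanatopropyltriethoxysilane and MEKO) is our estimate, not a value stated in the paper.

```python
# Reagent quantities per batch; the blocked-silane molar mass is an estimated value.
MW_BLOCKED_SILANE = 334.5   # g/mol, estimated (isocyanate silane ~247.4 + MEKO ~87.1)
MW_WATER = 18.02            # g/mol

def modification_recipe(al2o3_mass_g):
    silane_g = 0.005 * al2o3_mass_g                             # 0.5 wt% of the Al2O3
    ethanol_g = silane_g                                        # same weight as the silane
    water_g = 3.0 * (silane_g / MW_BLOCKED_SILANE) * MW_WATER   # 3:1 molar ratio to silane
    return {"silane_g": silane_g, "ethanol_g": ethanol_g, "water_g": water_g}

print(modification_recipe(1000.0))
# e.g. for 1 kg Al2O3: ~5 g silane, ~5 g ethanol, ~0.8 g water
```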
Preparation of modified Al2O3/PA6 composites
Before blending, PA6 was dried at 80 °C in a vacuum oven for 5 h to remove moisture. PA6 and the modified Al2O3 powders were mixed in the above high-speed mixer at 1550 rpm for 10 min. After mixing, the modified Al2O3/PA6 composites were manufactured with a SJSH-30 twin-screw extruder (Nanjing Rubber & Plastic Machinery Plant Co., Ltd., China) at a screw speed of 79 rpm, a feeding speed of 17.9 rpm and a melt temperature of 235-250 °C. The granulated modified Al2O3/PA6 composites were dried at 80 °C for 5 h and injection molded to obtain specimens with standard shapes using a HTL90-F5B injection molding machine (Ningbo Haitai Machinery Manufacture Co., Ltd., China).
Characterization
Fourier transform infrared spectroscopy (FTIR). To characterize the completion of the blocking reaction, isocyanate-propyltriethoxy silane and the blocked isocyanate silane were coated on KBr chips and analyzed by FTIR (Nicolet 67, Thermo Nicolet, USA). The scans were performed in the frequency range from 400 to 4000 cm−1.
Differential scanning calorimetry (DSC). The deblocking temperature of the blocked isocyanate silane was tested by DSC (Q2000, TA, USA). The sample was heated from room temperature to 250 °C at a heating rate of 5 °C min−1 under a nitrogen atmosphere. To determine the crystallization and melting temperatures of PA6 and the composites, a sample (typically 8 mg) was heated at a rate of 50 °C min−1 to 270 °C and held under isothermal conditions for 3 min to eliminate any residual crystals; the resulting melt was then cooled at 10 °C min−1 to room temperature and reheated at 10 °C min−1 to 270 °C under a nitrogen atmosphere.
X-ray photoelectron spectroscopy (XPS). XPS (ESCALAB250, Thermo, USA) was employed to analyze the interaction between the coupling agents and the Al2O3 particles. Prior to the test, the modified Al2O3 powder samples were wrapped in filter paper and Soxhlet extracted for 48 h with acetone as the extraction solvent.
Scanning electron microscopy (SEM). SEM (JSM-6490LV, JEOL Ltd., Japan) was used to observe the dispersion of the modified Al2O3 particles in the PA6 matrix. The cross-sections of the impact samples were coated with gold prior to fractographic examination.
Mechanical properties testing. An electronic universal testing machine (CMT4304, Shenzhen Sans, China) and a pendulum impact testing machine (ZBC1400-1, Shenzhen Sans, China) were used to investigate the mechanical properties of the modified Al2O3/PA6 composites. The tensile strength was measured according to ISO 527-2:1993 at a cross-head rate of 50 mm min−1. Bending tests were performed in accordance with ISO 178:2001 at a bending rate of 2 mm min−1 and a deflection of 7 mm. The impact strength was measured according to ISO 179-1:2000; the notch depth of the samples was 2 mm and the pendulum energy used was 2.75 J. The values reported are the averages of five specimens each.
Thermal conductivity testing. The thermal conductivity (in units of W m−1 K−1) of the composites at room temperature was measured according to ISO 22007-2.2 using a thermal analyzer (TPS 2500S, Hot Disk, Sweden), which is based on the transient plane source method. The double-helix probe of 4 mm diameter was placed between two lamellar samples of 2 mm thickness, flat on both sides.
Electrical resistivity measurements. The measurements were conducted on a digital high resistance test fixture PC68 (Shanghai Precision Instrument Manufacture, China) using an injection-molded plate with a diameter of 60.0 mm and a thickness of 1.0 mm.
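Volume resistivity follows from the measured resistance and the specimen geometry, ρv = Rv·A/t; a minimal sketch is given below. The guarded-electrode diameter of 50 mm is an assumed value used only for illustration, since the electrode dimensions of the fixture are not given in the text.

```python
import math

# Minimal conversion from measured resistance to volume resistivity (assumed electrode size).
ELECTRODE_DIAMETER_M = 0.050   # m, assumed guarded-electrode diameter
THICKNESS_M = 0.001            # m, plate thickness stated in the text

def volume_resistivity(resistance_ohm):
    area = math.pi * (ELECTRODE_DIAMETER_M / 2.0) ** 2   # m^2, electrode area
    return resistance_ohm * area / THICKNESS_M           # ohm*m

rho = volume_resistivity(2.0e14)       # example measured resistance in ohm
print(f"{rho * 100:.2e} ohm*cm")       # convert ohm*m to ohm*cm
```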
Blocking and deblocking of isocyanate-propyltriethoxy silane
The FT-IR spectra of isocyanate-propyltriethoxy silane and MEKO blocked isocyanate-propyltriethoxy silane are shown in Fig. 1. The absorption at around 2270 cm−1, attributed to the antisymmetric stretching vibration of the -N=C=O groups, was found in the isocyanate-propyltriethoxy silane sample (Fig. 1(a)) but did not appear in the MEKO blocked isocyanate-propyltriethoxy silane sample (Fig. 1(b)). This result indicated that the isocyanate groups had been completely blocked by MEKO. The blocking and deblocking reactions of MEKO and isocyanate-propyltriethoxy silane can be described as in Scheme 1.
The deblocking temperature of the MEKO blocked isocyanate-propyltriethoxy silane was determined by DSC, and the result is presented in Fig. 2. There was a sharp endothermic peak in the temperature range from 177 to 200 °C, which corresponds to the deblocking temperature of the blocked isocyanate silane.
The interaction of modified Al2O3 particles and PA6
The MEKO blocked isocyanate-propyltriethoxy silane and KH-331 (for comparison) modified Al2O3 particles were Soxhlet extracted for 48 h to remove free coupling agents and any physically bonded coupling agent. The modified Al2O3 particles were characterized by XPS analysis, and the results are shown in Fig. 3. Two strong peaks emerged at about 531.25 eV and 74.31 eV, attributed to the binding energies of O 1s and Al 2p, respectively. Two additional peaks appeared at 285 eV and 101.68 eV, corresponding to the binding energies of C 1s and Si 2p, respectively. In Fig. 3(a), the binding energy of N 1s appeared at 399 eV. All these results confirmed that MEKO blocked isocyanate-propyltriethoxy silane and KH-331 were successfully attached to the surface of the Al2O3 particles by chemical bonds.
The SEM images of the MEKO blocked isocyanate-propyltriethoxy silane and KH-331 modified Al2O3/PA6 composites filled with 40 wt% Al2O3 are shown in Fig. 4(a) and (b), respectively. It can be seen that the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3 particles dispersed better than the KH-331 modified ones. During melt blending, the blocked isocyanate groups attached to the surface of the Al2O3 particles deblocked to regenerate isocyanate groups, which could react in situ with the terminal amino and/or carboxyl groups of the PA6 chains. As a result, the interfacial adhesion between the Al2O3 particles and the PA6 matrix was improved and the PA6 chains were extended by the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3 particles.
DSC analysis
The crystallization and melting behavior of PA6 and the composites were evaluated by DSC, and the results are shown in Fig. 5. The effect of pristine and modified Al2O3 on the crystallization and melting temperatures of the PA6 matrix was almost the same (Fig. 5(b)-(d)). The crystallization temperature of the Al2O3/PA6 composites (191-192 °C) was considerably higher than that of pure PA6 (173 °C) (Fig. 5, left), indicating that the Al2O3 particles had a nucleation effect on the PA6 matrix. The melting behavior of the Al2O3/PA6 composites was very different from that of pure PA6 (Fig. 5, right). The lower-temperature endotherm (around 213 °C) of the Al2O3/PA6 composites is attributed to the γ-phase crystallites present in the samples,17 indicating that the Al2O3 particles promoted the formation of γ-phase crystallites in part of the PA6.
Scheme 1. Blocking and deblocking reactions of MEKO and isocyanate-propyltriethoxy silane.
Mechanical properties
The tensile strength of the modified Al2O3/PA6 composites with different Al2O3 contents is shown in Fig. 6. The tensile strength of the KH-331 modified Al2O3/PA6 composites (Fig. 6(a)) first increased and then decreased with increasing Al2O3 content, while the tensile strength of the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3/PA6 composites (Fig. 6(b)) increased steadily with increasing Al2O3 content. When the Al2O3 content was higher than 50 wt%, the tensile strength of the KH-331 modified composites decreased sharply, but that of the MEKO blocked isocyanate silane modified composites increased rapidly. When the Al2O3 content increased to 70 wt%, the tensile strength of the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3/PA6 composites reached 85.2 MPa, considerably higher than that of neat PA6. A reasonable explanation might be that the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3 played the role of a chain extender, so that with increasing Al2O3 content the number of active sites participating in the chain-extension reaction increased, which contributed to increasing the molecular weight of PA6 and enhancing the interfacial compatibility of the Al2O3 particles and the PA6 matrix. Meanwhile, the dispersion of the Al2O3 particles in the PA6 matrix was also improved. Fig. 7 shows the flexural strength of the modified Al2O3/PA6 composites filled with different Al2O3 contents. The flexural strength of the KH-331 modified Al2O3/PA6 composites (Fig. 7(a)) first increased and then decreased with increasing Al2O3 content. The flexural strength of the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3/PA6 composites (Fig. 7(b)) followed the same trend as the tensile strength, increasing significantly with increasing Al2O3 content. When the Al2O3 content was 70 wt%, the flexural strength of the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3/PA6 composites reached 136.4 MPa.
The impact strength of the modified Al2O3/PA6 composites is shown in Fig. 8. Sato et al.13 reported similar conclusions. With increasing Al2O3 content, the thermally conductive particles could touch each other more closely and the PA6 layer between the Al2O3 particles became thinner, which decreased the thermal resistance of the Al2O3/PA6 composites and resulted in an increase in their thermal conductivity. Moreover, the thermal conductivity of the modified Al2O3/PA6 composites increased rapidly when the Al2O3 content was higher than 60 wt%: once the heat-conduction network is formed by the fillers, the composite exhibits percolation behavior and the thermal conductivity rises steeply.18 In addition, the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3/PA6 composites exhibited higher thermal conductivity than the KH-331 modified Al2O3/PA6 composites at the same weight fraction of Al2O3 loading. The thermal conductivity of filled composites is determined not only by the filler content, but also by the synergetic effect of matrix and filler. In this work, the in situ reaction of the deblocked isocyanate groups with the terminal amino and/or carboxyl groups of the PA6 chains strengthened the interfacial interactions between PA6 and the blocked isocyanate silane modified Al2O3 particles. Fig. 4 also showed that the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3 particles were more uniformly dispersed in PA6, so the thermally conductive particles could touch each other more closely, forming net-like or chain-like thermally conductive channels through which heat could more easily be transferred. Many researchers19,20 have also concluded that composites with better interfacial adhesion exhibit higher thermal conductivity. When the Al2O3 content was 70 wt%, the thermal conductivity of the MEKO blocked isocyanate-propyltriethoxy silane modified Al2O3/PA6 composites reached 1.18 W (m−1 K−1), almost 6 times higher than that of neat PA6.
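To put the 1.18 W (m−1 K−1) at 70 wt% in context, the sketch below converts the weight fraction to a volume fraction and evaluates the Maxwell-Eucken two-phase model. The densities and filler conductivity are typical literature values rather than data from this work, and such dilute-limit models are expected to underpredict composites in which a percolating filler network has formed, which is consistent with the measured value lying well above the model estimate.

```python
# Illustrative Maxwell-Eucken estimate; densities and k_filler are assumed literature values.
RHO_AL2O3 = 3950.0   # kg/m^3 (assumed)
RHO_PA6 = 1140.0     # kg/m^3 (assumed)
K_FILLER = 30.0      # W/(m*K), typical alumina (assumed)
K_MATRIX = 0.21      # W/(m*K), neat PA6, order of the value implied in the text

def weight_to_volume_fraction(w_filler):
    v_f = w_filler / RHO_AL2O3
    v_m = (1.0 - w_filler) / RHO_PA6
    return v_f / (v_f + v_m)

def maxwell_eucken(phi):
    kf, km = K_FILLER, K_MATRIX
    return km * (kf + 2 * km + 2 * phi * (kf - km)) / (kf + 2 * km - phi * (kf - km))

phi = weight_to_volume_fraction(0.70)
print(f"volume fraction ~ {phi:.2f}")                            # ~0.40
print(f"Maxwell-Eucken k ~ {maxwell_eucken(phi):.2f} W/(m*K)")   # ~0.6, below the measured 1.18
```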
Electrical insulation
The volume electrical resistivity of the modified Al2O3/PA6 composites with different Al2O3 contents was shown in Fig. 10.
Conclusions
Thermally conductive and electrically insulating PA6 composites containing blocked isocyanate silane modified Al2O3 particles as fillers were successfully fabricated by twin-screw extrusion. The blocked isocyanate-propyltriethoxy silane attached to the surface of the Al2O3 particles by chemical bonds. The blocked isocyanate groups on the surface of the Al2O3 deblocked to regenerate isocyanate groups, which could react in situ with the terminal amino and/or carboxyl groups of the PA6 chains. As a result, the blocked isocyanate silane modified Al2O3 particles were more homogeneously dispersed and had a stronger interaction with the PA6 matrix.
The tensile and flexural strengths of the blocked isocyanate silane modified Al2O3/PA6 composites increased steadily with increasing Al2O3 content, while the tensile and flexural strengths of the KH-331 modified Al2O3/PA6 composites decreased when the Al2O3 content was higher than 40 wt%. When the Al2O3 content increased to 70 wt%, the tensile strength and flexural strength of the blocked isocyanate silane modified Al2O3/PA6 composites reached 85.2 MPa and 136.4 MPa, respectively, considerably higher than those of neat PA6. The thermal conductivity and impact strength of the blocked isocyanate silane modified Al2O3/PA6 composites were also improved, owing to the enhanced interaction between the Al2O3 particles and the PA6 matrix. When the Al2O3 content was 70 wt%, the thermal conductivity of the blocked isocyanate silane modified Al2O3/PA6 composites reached 1.18 W (m−1 K−1). Although the volume electrical resistivity of the modified Al2O3/PA6 composites decreased with increasing Al2O3 content, it remained higher than 3.0 × 10^14 Ω cm.
|
2019-04-09T13:01:40.676Z
|
2017-06-05T00:00:00.000
|
{
"year": 2017,
"sha1": "62bed58f304592794a0f9510744b7f0562f393d5",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra04454b",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cf2a67c281cc30808c4ca1dd93e1f063da6da7de",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
36993979
|
pes2o/s2orc
|
v3-fos-license
|
Morphological differences among egg nests and adult individuals of Cicadatra persica (Hemiptera, Cicadidae), distributed in Erneh, Syria
Abstract The aim of this study is to determine the different patterns of egg nests and the morphological differences among specimens of Cicadatra persica Kirkaldy, 1909 (Hemiptera: Cicadidae) distributed in fruit orchards in Erneh, located on Al-Sheikh mountain in south-west Syria. The appearance of 80 egg nests was studied, and the results showed that there were two basic patterns of egg nests laid by Cicadatra persica: 90% of the egg nests were of the first pattern (consisting of several adjacent slits), while 10% were of the second pattern (consisting of several divergent slits). A random sample of 300 specimens (150 males and 150 females) was also studied, focusing on differences in the color of the supra-antennal plate and in the number of spurs on the tibiae of the hind legs. The results showed that there were two basic patterns of individuals based on the color of the supra-antennal plate. The first pattern (individuals with yellow supra-antennal plates) constituted more than 90%, and the second pattern (individuals with black supra-antennal plates) constituted less than 10%. The results also showed 27 different patterns based on the number of spurs on the tibiae of the hind legs. One of them was a common pattern, (2, 3), whose individuals have 2 spurs on the upper side and 3 spurs on the lateral side of the tibia of the hind legs; this common pattern accounted for 76% of the individuals. The other 26 patterns differed from each other and together accounted for 24%.
Introduction
Cicadas are large insects, conspicuous in their environment because of their mating calls. However, they receive relatively little attention because they are often difficult to catch and there are few specialists who can identify insects of this group (Sanborn 2008). Morphological studies on cicadas have mostly been restricted to identifying particular species, and few studies have been conducted to distinguish between closely related species. For example, morphological and occurrence studies of species of the genus Fidicinoides have been carried out by Boulard and Martinelli (1996), Sanborn (2008), Sanborn et al. (2008), Santos and Martinelli (2007, 2009a, 2009b) and Santos et al. (2010). Some species of Brazilian Fidicinoides were also characterized morphologically, with illustrations of the head, thorax, abdomen, right forewing and male sternite VIII (Santos and Martinelli 2011).
In practice it is not always possible to have live specimens, and thus difficulties may arise in the identification of cicadas. In many instances, as in the genus Cicada Linnaeus, it is difficult to separate species only on the basis of their morphology. Five species of the genus Cicada were analyzed using a set of measurements of the external morphology and male genitalia to identify and quantify subtle differences among them (Simões and Quartau 2009). Another study was conducted to test the discrimination capabilities of numerical techniques commonly used for classificatory purposes, as well as to discover the most effective characters to distinguish between Cicada orni and C. barbara, which are very similar and sometimes difficult to separate using external keys (Quartau 1988).
For the species Cicadatra persica, morphological studies have been restricted to describing the morphological characters of the species, as in the work of Mozaffarian and Sanborn (2009). In another morphological study on Cicadatra persica, the morphology of the genital organs and the maximum oviposition capacity of females were determined in Turkey (Kartal and Zeybekoğlu 1999). Cicadatra persica was recorded for the first time in Syria in summer 2011 (Dardar et al. 2012). Little is known about the morphological patterns of C. persica. This study was undertaken with two main objectives in mind. The first was to distinguish between two basic patterns of egg nests laid by C. persica during summer 2011. The second was to distinguish among patterns of C. persica based on the color of the supra-antennal plate and the number of spurs located on the tibiae of the hind legs, as well as the patterns of egg nests.
Egg nests
80 egg nests were collected from three different apple orchards in the village of Erneh. The samples were collected on the 9th, 11th and 17th of July. From each orchard, 50 twigs each bearing one egg nest were cut using pruning scissors. The collected twigs (150) were mixed well together, then 80 twigs were chosen randomly from them, one after another, and left at room temperature to dry and to prevent decomposition caused by humidity. The external structure of the chosen egg nests was studied in the laboratory.
Adult individuals
300 adults (150 males + 150 females) were collected from several fruit orchards in the village of Erneh on the 27th of June 2011. They were then placed in a plastic container and kept in a refrigerator at 4-6 °C. The color of the supra-antennal plate and the number of spurs on the tibiae of the hind legs of the collected adults were studied in the laboratory using a binocular microscope.
Egg nests
It was observed that females of Cicadatra persica lay two basic patterns of egg nests. The first pattern consists of several adjacent slits (Fig. 1), while the second consists of several divergent slits (Fig. 2). 72 egg nests were of the first pattern, constituting 90% (Fig. 1), and 8 egg nests were of the second pattern, constituting 10% (Fig. 2).
Adult individuals
The results showed that there were two basic patterns of specimens according to the color of the supra-antennal plate (Table 1). The first pattern comprised individuals with yellow supra-antennal plates (Fig. 3), and the second pattern comprised individuals with black supra-antennal plates (Fig. 4).
The results also showed that there were several patterns of individuals according to the number of spurs on the lateral and upper sides of the tibiae of the hind legs. The total number of patterns was 27, of which 26 were different from each other. The percentage of those different patterns was 22% among individuals with yellow supra-antennal plates and 2% among individuals with black supra-antennal plates, for a total of 24%. The most common pattern was (2, 3), whose individuals have 2 spurs on the upper side of the tibia of the hind legs and 3 spurs on the lateral side (Fig. 5). The percentage of this common pattern was 72.33% among individuals with yellow supra-antennal plates and 3.67% among individuals with black supra-antennal plates, for a total of 76% (Tables 2, 3). The hind leg of Cicadatra persica showed 14 different patterns (Figs 5-18) based on the number of spurs on its tibia: (2, 3), (2, 4), (2, 5), (3, 5), (3, 3), (3, 4), (1, 3), (2, 2), (1, 1), (1, 2), (0, 1), (2, 6), (1, 6), (4, 4). * The first number refers to the number of spurs on the upper side of the tibia of the hind leg, and the second number refers to the number of spurs on the lateral side of the tibia of the hind leg.
Egg nests and adults
The results suggest that there could be a relation between the two basic patterns of egg nests made by females of C. persica and the two basic patterns of individuals based on the color of the supra-antennal plate. The first pattern of egg nests, which formed 90%, could be laid by the first pattern of females, with yellow supra-antennal plates, which formed more than 90% of the total individuals. The second pattern of egg nests, which formed 10%, could be laid by the second pattern of females, with black supra-antennal plates, which formed less than 10% of the total individuals. This supposition, however, needs to be confirmed by separating the individuals of each pattern and monitoring the egg nests laid by each of them. The results also suggest that there could be two basic strains of C. persica, the first with a yellow supra-antennal plate and the second with a black supra-antennal plate; this supposition also needs to be confirmed by molecular studies of the DNA of this species.
Adults and host plants
The results showed that there was a common pattern, (2, 3), of individuals based on the number of spurs on the tibia of the hind legs, whose individuals have 2 spurs on the upper side and 3 spurs on the lateral side of the tibia. The total percentage of this pattern was 76%, which corresponds to the share of apple orchards in Erneh, about 75%. The total percentage of the other patterns was 24%, corresponding to the share of other fruit orchards in Erneh, about 25%. The morphological differences among individuals of C. persica in the number of spurs on the tibia of the hind legs may therefore be related to the host plant whose sap the individual fed on during the juvenile stage underground.
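The correspondence claimed above (76% of individuals with the common spur pattern versus roughly 75% apple orchards) could be examined with a simple binomial test, sketched below in Python. This analysis was not performed in the study and is shown only to indicate how the association might be checked statistically.

```python
# Illustrative check only; not an analysis performed in the study.
from scipy.stats import binomtest

n_total = 300            # adults examined
n_common_pattern = 228   # 76% with the (2, 3) spur pattern
expected_share = 0.75    # approximate share of apple orchards in Erneh

result = binomtest(n_common_pattern, n_total, expected_share)
print(f"p-value = {result.pvalue:.3f}")
# A large p-value means the observed 76% is statistically indistinguishable from 75%,
# which by itself does not demonstrate a causal link to the host plant.
```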
Conclusion
This research showed that there are different patterns of egg nests and morphological differences in Cicadatra persica distributed in fruit orchards in Erneh. The results call for further investigation of these morphological differences, for the study of other morphological characters of this species, and for analysis of the DNA of the different patterns of C. persica, in order to determine whether the morphological differences are related to genetic differences or to other ecological factors.
|
2016-05-04T20:20:58.661Z
|
2013-07-30T00:00:00.000
|
{
"year": 2013,
"sha1": "73d4f99d5c1b2390dc6dfc66aa36750de6dc93c3",
"oa_license": "CCBY",
"oa_url": "https://zookeys.pensoft.net/article/3467/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "73d4f99d5c1b2390dc6dfc66aa36750de6dc93c3",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|