Record id: 257922715 | source: pes2o/s2orc | version: v3-fos-license
Interaction and association analysis of malting related traits in barley
Barley is a foundation of the brewing and malting industry. Varieties with superior malt quality traits are required for efficient brewing and distillation processes. Key malting quality traits, including Diastatic Power (DP), wort viscosity (VIS), β-glucan content (BG), Malt Extract (ME) and Alpha-Amylase (AA), are controlled by several genes linked to numerous quantitative trait loci (QTL) identified for barley malting quality. One well-known QTL, QTL2 on chromosome 4H, harbours a key gene, HvTLP8, that influences barley malting quality through its interaction with β-glucan in a redox-dependent manner. In this study, we set out to develop a functional molecular marker for HvTLP8 for the selection of superior malting cultivars. We first examined the expression of HvTLP8 and HvTLP17, which contain carbohydrate-binding domains, in barley malt and feed varieties. The higher expression of HvTLP8 prompted us to further investigate its potential as a marker for malting quality. By exploring the 1000 bp downstream of the HvTLP8 3' UTR, we found single nucleotide polymorphisms (SNPs) between Steptoe (a feed variety) and Morex (a malt variety), which were further validated by a Cleaved Amplified Polymorphic Sequence (CAPS) marker assay. Analysis of 91 individuals from the Steptoe x Morex doubled haploid (DH) mapping population revealed CAPS polymorphism in HvTLP8. Highly significant (p<0.001) correlations among the ME, AA and DP malting traits were observed, with correlation coefficients (r) ranging from 0.53 to 0.65. However, the polymorphism in HvTLP8 did not correlate effectively with ME, AA, or DP. Altogether, these findings will help in designing further experiments on HvTLP8 variation and its association with other desirable traits.
Introduction
Barley is one of the most important cereal crops used globally as food, as feed for livestock, and in the brewing industry. Barley grains are the main raw material for producing malt, which in turn is processed to produce beer. Malting quality traits are therefore highly desirable in barley varieties used to produce premium quality beer. Diastatic Power (DP), Viscosity (VIS), β-glucan content (BG), Malt Extract (ME) and Alpha-Amylase (AA) are some of the key parameters for determining the malting quality of barley [1,2]. ME comprises all the soluble components of malt, such as carbohydrates, proteins, and their hydrolyzed products. Moreover, it is a key source of fermentable sugars and essential enzymes for the hydrolysis of starch [3]. Mixed-linkage (1-3, 1-4) β-glucans (hereafter "β-glucan") constitute the major non-starch polysaccharide component of the endosperm and aleurone cell walls of barley seed [4]. Across cultivars, barley grain β-glucan content ranges between 3.4% and 5.7% [5]. β-glucan concentration in barley grains (4-10% w/w) is substantially higher than in wheat (1% w/w) [6], which also has some utility in the brewing industry.
Barley varieties with high β-glucan content are desirable to the food industry because β-glucan serves as a dietary fiber with health benefits such as lowering blood cholesterol [7,8]. However, for the production of high-quality products through efficient malting, the brewing industry requires cultivars with low β-glucan concentrations [9]. High levels of β-glucan contribute to problems such as haze formation, viscous wort and reduced wort filtration during brewing [10]. ME is a complex quantitative trait controlled by multiple genes, and its level varies among cultivars. It is also considered a mega-trait, the product of interactions between many sub-traits [11,12].
Genetic manipulation and selection of malting quality traits such as ME and β-glucan content are challenging for breeders because of the complex inheritance of these traits.
Quantitative trait loci (QTL) analysis has been used as a molecular tool to detect and estimate genomic regions associated with traits of interest, such as malting quality traits. To date, several studies have identified more than 250 malting quality-related QTLs. Specifically, QTLs explaining a large share of the variance for ME and β-glucan content have been identified on all barley chromosomes [11]. Among these, QTL2 on chromosome 4H is reported as a major barley malting quality QTL, contributing 29% and 38% of the variation for key malting quality parameters such as BG and ME, respectively [13]. QTL2 also has a notable influence on other malting quality traits, including AA and DP [1,14]. In another study, the telomeric region of chromosome 4H containing the malting quality-associated complex was fine mapped, revealing a total of 15 putative QTLs for BG [2], ME [3], AA [6] and DP [1,4]. Another major QTL, QMe.NaTx-2H, located on chromosome 2H, was identified using a doubled haploid (DH) population and accounts for 48.4% of the total phenotypic variation (R²) for ME [15]. A further QTL, Qme1.1 on chromosome 1H, contributed an R² of 21.1% to the ME phenotype [16]. In addition, two closely positioned QTLs identified on chromosome 4H account for R² of about 8-13% and 4-10%, respectively [17]. β-glucan content is also a very important malting quality trait that is influenced by both environmental and genotypic factors, with the latter making the larger contribution [18]. Members of the cellulose synthase-like (CslF) gene family have been reported to play a role in β-glucan synthesis [19,20]. Several efforts have been made to develop molecular markers for selecting barley varieties with better malting traits such as ME and β-glucan content. For instance, around 1,524 SNPs were genotyped to detect genes associated with six malting traits, including β-glucan content and ME [21]. Further, Randomly Amplified Polymorphic DNA (RAPD), Diversity Arrays Technology (DArT) and QTL-related PCR-based markers have been used to screen different barley populations for improved malting quality traits [2,22-25].
Recently, QTL2, a locus accounting for a large proportion of the variation in ME and β-glucan content, was dissected and shown to harbour a key gene, HvTLP8, apparently involved in interacting with β-glucan in a redox-dependent manner [26]. In the present study, we sought to characterize HvTLP8 at the molecular level to develop functional markers that can help barley breeders identify varieties with superior malting traits. These markers include variation at the DNA and protein levels. Moreover, we examined the linear association between the marker and the barley malting traits (ME, AA, and DP), as well as among these traits.
Plant material and growth conditions
The seeds of barley malt and feed varieties (Table 1) were grown in the greenhouse under a photoperiod regime of 16 hrs day and 8 hrs night with an average temperature of 20°C. Fresh leaf samples were collected for DNA extraction and stored at -80°C for future use. Mature seeds of two malting (AC Metcalfe and Morex) and two feed (Steptoe and CDC Cowboy) varieties were surface sterilized with a 20% working concentration of bleach followed by three rinses with distilled water. Seeds were germinated at room temperature in the dark on wet Whatman filter paper in sterile petri plates. Samples were collected and flash-frozen in liquid nitrogen at 16 hrs of grain germination. The harvested samples were stored at -80°C for future experiments.
Total RNA isolation and DNase I treatment
Total RNA was isolated from the 16 hrs germinated grains using a Spectrum Plant Total RNA kit, following the manufacturer's protocol (Sigma-Aldrich, St. Louis, MO, USA). Before cDNA synthesis, extracted RNA samples were subjected to DNase I treatment to avoid DNA contamination by using the RQ1 RNase-Free DNase kit (Promega, USA). Reaction mixtures were incubated at 37˚C for 30 minutes (mins) followed by the addition of 1μl of RQ1 DNase stop solution for reaction termination. Reactions were further incubated at 65˚C for 10 mins to inactivate DNase I.
cDNA synthesis and quantitative real-time PCR (qRT-PCR) analysis
First-strand cDNA was synthesized from 1 μg of DNase I-treated total RNA following the recommended protocol of the AffinityScript QPCR cDNA Synthesis Kit (Agilent Technologies, USA). Relative transcript levels were measured by qRT-PCR using Wisent advanced qPCR master mix (Wisent Bioproducts, Canada) on an Mx3005P qPCR system (Stratagene, USA). The qPCR cycling conditions were 95°C for 2 mins, followed by 40 cycles of 95°C for 5 seconds (sec) and 60°C for 30 sec. For each sample, two independent biological and three technical replicates were used. Relative transcript levels were analyzed following the 2^(-ΔΔCq) method [27] using HvActin as an internal reference control [28]. The significance of differences in gene expression between malt and feed varieties was assessed by an all-pairs Tukey test (P ≤ 0.05). The Integrated DNA Technologies (IDT) PrimerQuest tool (https://www.idtdna.com/PrimerQuest/Home/Index) was used to design the primers for HvTLPs (HvTLP8 and HvTLP17). The primer sequences are listed in Table 2.
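For readers unfamiliar with the 2^(-ΔΔCq) calculation, the following is a minimal sketch in Python of how relative transcript levels can be derived from raw Cq values. The sample names and Cq values are hypothetical placeholders and are not taken from this study.

```python
# Minimal sketch of the 2^(-ddCq) relative-expression calculation.
# All Cq values below are hypothetical placeholders, not data from this study.

def relative_expression(cq_target, cq_reference, cq_target_calibrator, cq_reference_calibrator):
    """Return fold change of a target gene relative to a calibrator sample,
    normalized to an internal reference gene (e.g., HvActin)."""
    dcq_sample = cq_target - cq_reference                               # normalize sample to reference gene
    dcq_calibrator = cq_target_calibrator - cq_reference_calibrator    # normalize calibrator sample
    ddcq = dcq_sample - dcq_calibrator                                  # difference between sample and calibrator
    return 2 ** (-ddcq)                                                 # assumes ~100% amplification efficiency

# Hypothetical example: HvTLP8 in a malt variety vs. a feed variety used as calibrator
fold_change = relative_expression(cq_target=22.1, cq_reference=18.0,
                                  cq_target_calibrator=24.6, cq_reference_calibrator=18.2)
print(f"Relative HvTLP8 expression: {fold_change:.2f}-fold")
```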
DNA extraction, PCR amplification and gel electrophoresis
Leaf samples (50 mg) were frozen in liquid nitrogen and ground using a tissue lyser (Qiagen, USA). DNA extraction was performed following the modified phenol/chloroform method described by Singh et al. [29]. Primers (HvTLP8_F: ATGCCATTCTTCCTCACCACAG and HvTLP8_R: TCATGGGCAGAAGATGAC) were used to amplify the coding sequence of HvTLP8. Primers (HvTLP8_3'UTR_F: CGAGCACACGGACAAGAATA and HvTLP8_3'UTR_R: GCAACGACTCCAGTGAACTTA) targeting the 1000 bp downstream of the HvTLP8 3' UTR region were used. PCR amplification was performed in a 20 μl reaction containing 1 μl of gDNA for each sample, using GoTaq G2 Green Master Mix (Promega, USA). The PCR conditions were 95°C for 2 min, followed by 36 cycles at 95°C for 30 seconds with an annealing temperature of 56°C. A 20 μl aliquot of the amplified product was analyzed on a 1.2% agarose gel.
Cloning and CAPS assay
Amplified TLP8 fragments were extracted from agarose gel and purified following the recommended protocol of the NucleoSpin Gel and PCR Clean-up kit (Takara, USA). Purified TLP8 fragments were ligated into the pGEM-T Easy vector according to the manufacturer's protocol. The ligation mixture was transformed into DH5α competent cells. Several colonies were inoculated for plasmid isolation. Plasmid extraction was performed using the modified TIANs method. The quality of the plasmid was determined using a NanoDrop ND-1000 (NanoDrop Technologies, Wilmington, DE, USA). Next, the clones were confirmed by PCR and restriction digestion. The positive clones were sent to Genome Quebec for Sanger sequencing.
Malting traits data and HvTLP8 variation
A doubled haploid (DH) mapping population (Table 3) derived from a cross between Steptoe (S) x Morex (M) [30] was used to investigate the polymorphism in HvTLP8. The two parents (Steptoe and Morex) were grown in a controlled environment as described above, and young leaves were collected for DNA extraction. DNA extraction, PCR amplification and the CAPS assay were performed as described in the previous sections. Morex (a malting variety) and Steptoe (a feed variety) are two contrasting parents for the traits under investigation; Morex has higher ME, AA and DP than Steptoe. DH data based on means for the three malting traits (ME, AA, and DP) over nine environments were retrieved from the GrainGenes database (https://wheat.pw.usda.gov/ggpages/SxM/phenotypes.html). Correlation analysis between the TLP8 marker and the malting quality traits, as well as among these traits, was performed as described in [31].
Differential expression of HvTLPs in malt and feed varieties
Our previous genome-wide analysis of the barley genome identified 19 TLPs, some of which possess a carbohydrate-binding domain (CBD) [32]. The role of the carbohydrate-binding domain of TLP8 in its interaction with β-glucan has been documented [26]. We therefore selected the CBD-containing TLPs, HvTLP17 and HvTLP8, to examine their expression during germination. The expression of HvTLP8 was higher in the malting varieties (AC Metcalfe and Morex) than in the feed varieties. Among the malting varieties, the highest HvTLP8 expression was observed in Morex, whereas both feed varieties, Steptoe and CDC Cowboy, showed lower and similar expression. In the case of HvTLP17, higher expression was observed in only one malt variety (AC Metcalfe) and lower expression in the feed varieties at the 16 hrs stage of grain germination (Fig 1).
Variation in the coding sequence and in 3' UTR region of HvTLP8
Based on the HvTLP8/17 expression data, we decided to explore HvTLP8 as a marker for malting quality. First, we PCR amplified the coding region of HvTLP8 from malt and feed varieties and sequenced the amplicons. Alignment of the sequenced fragments from the different malt and feed varieties indicated no polymorphism in the HvTLP8 coding region (data not shown). Next, we sequenced the untranslated regions (UTRs) of HvTLP8. We searched for polymorphism in the 1000 bp downstream of the 3' UTR region but were unable to identify any conserved variation specific to malting or feed varieties (Fig 2). Strikingly, however, the sequencing results revealed differences in the HvTLP8 3' UTR region between Steptoe (six-row feed) and Morex (six-row malt). The polymorphism included eight single-bp SNPs, a two bp deletion in Morex and a six bp difference in Steptoe, the latter creating an additional MwoI restriction site (Fig 3). When amplified fragments of the HvTLP8 3' UTR region were digested with MwoI, the electrophoresed samples showed a clearly discriminative banding pattern between Steptoe and Morex (Fig 4). Digestion of the Steptoe HvTLP8 PCR fragment generated four bands of 81 bp, 125 bp, 232 bp and 314 bp, whereas the Morex HvTLP8 fragment produced three bands of 125 bp, 229 bp and 390 bp (Fig 3).
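To illustrate how a CAPS marker discriminates two alleles, the sketch below performs a simple in-silico MwoI digest of a PCR amplicon and reports the predicted fragment sizes. It assumes the canonical MwoI recognition site GCNNNNNNNGC (cut after the fifth N on the top strand) and uses a made-up placeholder sequence, not the actual HvTLP8 amplicon.

```python
import re

# In-silico CAPS digest sketch. The amplicon below is a made-up placeholder,
# not the real HvTLP8 3' UTR fragment. MwoI is assumed to recognize
# GCNNNNNNNGC, cutting after the fifth N (start + 7 on the top strand).
MWOI_SITE = re.compile(r"GC[ACGT]{7}GC")

def predicted_fragments(amplicon: str) -> list:
    """Return the list of fragment sizes expected on a gel after MwoI digestion."""
    cut_positions = [m.start() + 7 for m in MWOI_SITE.finditer(amplicon)]
    bounds = [0] + cut_positions + [len(amplicon)]
    return [end - start for start, end in zip(bounds, bounds[1:])]

# An allele carrying an extra MwoI site yields more, smaller fragments;
# an allele lacking that site yields fewer, larger fragments. This size
# difference is the banding pattern scored in the CAPS assay.
example_amplicon = "ATGC" * 30 + "GCAAAAAAAGC" + "TTGA" * 40
print(predicted_fragments(example_amplicon))
```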
Association of SNP variation present in 3' UTR of HvTLP8 with malting quality traits
The discovery of SNP variation between the 3' UTRs of HvTLP8 in Steptoe and Morex (Fig 3A) prompted us to examine the segregation of these SNPs in a mapping population. We used a DH mapping population generated from the S x M cross to analyze the SNP variation in HvTLP8. By performing the CAPS assay on 91 DHs and the parents, we found polymorphism for HvTLP8 among the lines (Fig 5A and 5B; Table 3). The S x M DH mapping population is well studied for malting traits, especially ME, AA and DP [12]. Because HvTLP8 was polymorphic in the S x M mapping population, we examined its relationship with ME, AA, and DP. Regression analysis indicated that HvTLP8 did not explain the variation in these traits; the observed R² values were 1.32, 0.94 and 0.29 for AA, ME, and DP, respectively (Table 4). Similarly, correlation analysis revealed that HvTLP8 was not significantly associated with these traits; however, the correlations among AA, ME and DP were highly significant (p<0.001; Fig 6). The highest correlation was found between AA and ME (0.65), and the correlation coefficients (r) between traits ranged from 0.53 to 0.65 (Fig 6).
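The sketch below shows the kind of marker–trait association test described here: a trait is regressed on a binary CAPS genotype, and R² and Pearson correlations are reported. The genotype codes and trait values are hypothetical placeholders and are not the actual S x M data.

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder data: 0 = Steptoe-type allele, 1 = Morex-type allele;
# trait values stand in for malt extract (ME) of the same DH lines.
genotype = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
trait = np.array([74.2, 76.1, 75.0, 73.8, 75.9, 74.5, 73.9, 75.2, 76.4, 74.0, 75.5, 74.3])

# Simple linear regression of trait on marker genotype;
# r**2 is the proportion of trait variance explained by the marker.
slope, intercept, r, p_value, stderr = stats.linregress(genotype, trait)
print(f"Marker-trait R^2 = {r**2:.3f}, p = {p_value:.3f}")

# Trait-trait correlations (e.g., AA vs. ME) are computed the same way with Pearson's r.
second_trait = trait + np.random.normal(0, 0.3, trait.size)   # placeholder second trait
r_traits, p_traits = stats.pearsonr(trait, second_trait)
print(f"Trait-trait r = {r_traits:.2f}, p = {p_traits:.3f}")
```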
Discussion
The molecular basis of barley malting quality trait plasticity is poorly understood. Here we examined the potential of TLP8 as a marker for barley malting quality. In our earlier report, HvTLP8 was identified as a key gene that influences malting quality via interaction with β-glucan in a redox-dependent manner [32]. We observed that TLP8 contains a carbohydrate-binding motif and that its expression differs between barley malting and feed varieties at both the mRNA and protein levels [26]. Our other recent work reported 19 HvTLPs in the barley genome, of which only two germination-specific TLPs, HvTLP8 and HvTLP17, contain the carbohydrate-binding motif (CQTGDCQG) and could therefore interact with β-glucan [33]. In addition, HvTLP14 possesses a partial carbohydrate-binding motif [32] in which glycine (G) is substituted by glutamine (Q); it was therefore excluded from further investigation. Thus, we examined the mRNA expression of HvTLP8 and HvTLP17 (Fig 1). The expression data indicated that HvTLP17 is expressed at a higher level in the malting variety AC Metcalfe than in Morex, whereas HvTLP8 expression was higher in malt than in feed varieties. The expression pattern of HvTLP17 differed from that of HvTLP8 both in this study and in the earlier report [26], even though HvTLP17 carries a carbohydrate-binding motif similar to that of HvTLP8 [33]. It is possible that HvTLP17 binds other carbohydrates but not β-glucan. These results indicated that HvTLP8 might be a good candidate gene for developing a molecular marker that can differentiate malt from feed barley varieties. To investigate this, we analysed the full-length coding sequence (CDS) of HvTLP8 for potential SNPs associated with malting, but observed no SNPs in the HvTLP8 CDS among the malt and feed varieties. Nevertheless, we previously found differences in HvTLP8 expression between malt and feed varieties when regions from the CDS and 3' UTR were targeted [26]. We therefore expanded our SNP exploration to the 1000 bp downstream of the HvTLP8 3' UTR in different malt and feed varieties. Interestingly, we observed SNP polymorphism between the two six-row varieties Steptoe and Morex (Figs 2, 3A and 3B). Several polymorphic SNPs were found in HvTLP8 between Steptoe and Morex; a six bp deletion in the 3' UTR region of HvTLP8 in Morex was one of the major polymorphic sites identified. A previous investigation of SNP discovery in the barley reference genome using 16,127 assemblies reported a higher SNP density in the 5' UTR than in the 3' UTR region [34]. In another study, SNP analysis of the HvP5CS1 gene revealed 16 SNPs, of which 7 were found in the 3' downstream non-coding sequence [35]. Likewise, the SNPs we discovered in HvTLP8 lie in the 3' UTR downstream region, as in these reports. Varieties with higher ME, DP and AA and lower BG are considered malting varieties, which are an absolute requirement for the malting industry. The malting quality of barley depends on the interaction of various malting traits and the genetic architecture of the genotypes. To identify the genes underlying malting quality, numerous malting quality-associated QTLs have been identified in barley. Among these, QTL2 accounts for 37.6% of the variation in malt extract [13].
HvTLP8 resides within QTL2 on chromosome 4H of the barley genome. Steptoe and Morex are two six-row barley varieties that contrast for malting traits such as ME, DP, AA, and BG. The polymorphism observed in the 3' UTR of HvTLP8 was further tested on the S x M DH mapping population, which is well studied for malting traits [1,12,30]. We performed the CAPS marker assay on 91 DHs and found polymorphism for HvTLP8 in the 1000 bp downstream of the 3' UTR region (Figs 4, 5A and 5B). A total of 52.75% of the DHs grouped with Steptoe, while 47.25% grouped with Morex (Table 3). Furthermore, regression analysis on the malt quality parameters showed that HvTLP8 explained little of the variation in AA, DP, and ME (Table 4). Similarly, the correlation data indicated only a weak association of HvTLP8 with these malting traits, whereas the correlations among AA, DP, and ME were significant (Fig 6). QTL2 is a complex genetic region harbouring genes for different malting traits that are known to interact with each other [30], and HvTLP8 may only be associated with soluble β-glucan content, which was not considered in this study. The polymorphism identified in HvTLP8 could be correlated with other traits mapped in the QTL2 region, such as β-glucan content, to develop potential genetic markers. Our data showed very low deviation for the AA and DP values, as also reported by Ulrich et al. [12]. However, the correlations between traits found here (Fig 6) were higher than those reported by Ulrich et al. [12] for AA versus ME and DP versus ME (r = 0.56** and 0.39**, respectively). This could be due to analysis tools with better algorithms and to the significance threshold used here (p<0.001) compared with that reported by Ulrich et al. (p<0.01) [12].
Our current data and previous studies [32,33] suggest that the coding sequence of HvTLP8 is identical across the malt and feed varieties examined, whereas its expression varies greatly between malt and feed varieties. This difference in gene expression needs further investigation to identify regulatory elements in the promoter region, which may provide information about its transcriptional regulation. We also observed that purified HvTLP8 interacts with a carbohydrate moiety at the protein level [26]; therefore, any marker technology that can capture differences in gene or protein expression could be used to select barley genotypes with better malting quality. For example, markers that deploy proteins and enzymes (e.g., ELISA-based markers) could provide better options. ELISA-based markers have been used for screening traits such as lysine content in wheat [36] and levels of deoxynivalenol (DON) [37] and hordein (gluten) [38] in beer samples. In the future, the development of HvTLP8-specific antibodies for an ELISA-based biochemical marker would help the barley breeding community select superior malting varieties.
Conclusion
In this study, we explored the CDS and the 3' UTR (1000 bp downstream region) of HvTLP8 for SNP variation in different malting and feed varieties that could be linked to malting. We found SNP variation in the 3' UTR downstream region of HvTLP8 between Steptoe and Morex; importantly, a six bp deletion in Morex was a key variation. We likewise found polymorphism in the 3' UTR of HvTLP8 (1000 bp downstream) in the DH population. Our correlation analysis indicated that HvTLP8 was not significantly correlated with the malting traits (ME, AA, and DP); however, the correlations among these traits were highly significant (p<0.001). The SNPs identified in HvTLP8 can be further characterized to reveal their possible association with malting traits. In the future, the 5' UTR upstream region could be explored for HvTLP8 variation that may be associated with key malting traits.
added: 2023-04-05T06:17:24.928Z | created: 2023-04-04T00:00:00.000
metadata: { "year": 2023, "sha1": "b5d76d826e421da37d150dd639da41d4f8d20fa6", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ab03fcb47646d1e0c137164519be40b0fa836548", "s2fieldsofstudy": ["Agricultural And Food Sciences"], "extfieldsofstudy": ["Medicine"] }
Record id: 213849791 | source: pes2o/s2orc | version: v3-fos-license
Emigration of the Intelligentsia and Its Effects on the Socio-Economic Development of Zimbabwe
Abstract: This paper discusses the emigration of the highly skilled intelligentsia and its effects on the socio-economic development of Zimbabwe. Skills flight has been a huge challenge for developing countries and has contributed much to their underdevelopment. Since the turn of the millennium, Zimbabwe has lost key skilled and talented people who left the country due to economic hardships. Several studies have shown that there are very high levels of emigration among the young and the very highly skilled people of Zimbabwe. Skills flight, or brain drain, is a very old phenomenon in human history and usually takes place for various reasons, some social, some economic and some political in nature. These reasons either push people to leave or pull them towards countries where conditions are seen to be better. Globalization has also increased the movement of people from one country to another: the world has become a global village with very little to separate countries. Skills flight has merits and demerits, and this paper tries to unpack these issues to provide some clarity. While some schools of thought argue that the sending countries lose because of skills flight, this is not always the case, because emigrants help in the development of their native countries by sending money back home (diaspora remittances) that is used for developmental projects and improves foreign currency earnings. At the same time, emigration denies the remaining population the opportunity to benefit from their highly skilled compatriots (skills and knowledge transfer), who in most cases would have benefitted from state funding. The host countries, for their part, benefit by gaining the best brains, who are usually very intelligent and have good leadership skills; these emigrants help to develop the economies of host countries. In some cases, tensions and conflict arise in the host countries as a result of skilled people migrating there; for example, there have been cases of xenophobic attacks on foreigners in South Africa and of racism in Europe, among other problems attributed to immigrant labour. It is argued in this paper that skills emigration has affected the performance of the Zimbabwean economy and its development agenda.
Theoretical Framework
The literature of the early 1960s covers a wide range of topics relating to the skills flight of the working class from developing countries to developed countries. These studies focused in particular on the welfare impact for economies experiencing a loss of skilled people.
According to Johnson (1965), "in the absence of any very persuasive evidence to the contrary […] there is no significant probability of world loss from the international migration of educated people". His argument is that the world as a whole is not necessarily made worse off by skills flight; a world loss would arise only if the social loss to developing countries from skills flight were greater than the gains to the migrant. Grubel and Scott (1966a, 1966b) were in agreement, arguing that there is no loss to less developed countries attributable to brain drain; instead, the skills flight of highly skilled people raises a nation's capital–labour ratio and hence its average income in the long run. Potentially, the parent countries benefit substantially from the research done by engineers and scientists abroad, since this becomes publicly accessible once published. Grubel and Scott, however, overlooked the redistributive effects on welfare (Weisbrod, 1966). Godfrey (1970) shared the same sentiments as Grubel and Scott, saying that attention should be given to average incomes instead of the theoretical distribution of the total between individuals. Consequently, he introduced the idea of compensation schemes aimed at offsetting the effects of skills flight. He proposed some solutions to tame brain drain, such as reducing the number of students studying abroad or making education so country-specific that it would be of little use to foreign employers. This argument resonated with Myint (1968), who argued that if a country's qualifications are less acceptable internationally, brain drain will be reduced because there will be no demand for such skills.
Analysing brain drain and its effects, Watanabe (1969) argues that skills flight affects the welfare of a country by slowing its development. He also acknowledges that brain drain is a major cause of a slow rate of development, implying that the best way to reduce brain drain is to ensure that developing countries themselves develop. In their detailed review of the literature on the effects of brain drain, Bhagwati and Rodriguez (1975) classified the contributions into those with comparative-static or dynamic formulations, including those that assumed a perfectly competitive model. Leland (1982, 1984) and Katz and Stark (1984) emphasised the asymmetric distribution of information about the skills of migrants. They argue that receiving countries have worse information about the abilities of migrants and therefore set wages based on their perceived knowledge of migrants; as a result, only people with below-average skills have an incentive to migrate. Webb (1985) addresses the effects of brain drain and how it influences the distribution of educational opportunities in developing countries. Government objectives play a crucial role in his model: governments are concerned either with the efficiency of their education systems or with their endowment of educated labour. For families with "abundant cash", migration plays a positive role, while for those that are "cash constrained", the role is negative. However, if migration is carefully aligned with good public policies, there are usually no negative effects.
Gallup (1997) surveyed theoretical models of migration decision-making, including the gravity model, the two-sector model, family decision-making models, and models of information and networks. The literature on the brain drain phenomenon, however, will remain influenced by specific case studies in Sub-Saharan Africa because of the subject's political inclination and the inaccurate and sometimes poor data available.
Causes and Effects of Skills Emigration
Brain drain is the movement of skilled, trained or qualified people from their country of origin to another country in search of employment, where they then take up residence. Under these circumstances, professionals in whom a country has invested heavily from its scarce resources emigrate to seek better job opportunities elsewhere. Brain drain can also be defined as the international migration of "human capital": these migrants move to other countries for employment but usually do not acquire citizenship of the host countries.
Additionally, skills flight is the departure of skilled, qualified and especially talented people of working age from their own country to other geographies. This may be because of economic decline, conflict, lack of opportunities in native countries, hazardous conditions, or other reasons. Skills flight may also be understood as the loss of investment in skills that were acquired in, and with the resources of, one's own country but are not used there. Usually, when people acquire skills, or are assisted in gaining them, and do not return to their native countries to use those skills, this is referred to as brain drain. Countries lose their investment in people when skilled and qualified individuals leave and do not come back to use the acquired skills and talents, and in most cases the human capital stock of the country is reduced by their departure. These skilled and trained people will have been trained using the very scarce resources of their native countries, and at a social cost. This brings a sense of despair to developing countries, which bear the brunt of skills flight and are at the mercy of the effects of brain drain, while the countries to which these people emigrate benefit. This scenario arises when those who leave to study abroad do not return after their studies, or when those educated at home leave for greener pastures.
Brain drain is especially damaging because it drains native countries of very limited resources while enriching the host nations of the developed world. This state of affairs is a major setback for native countries already grappling with underdevelopment, poverty and financial difficulties. Some developing nations have sound educational policies and standards comparable to those of developed countries; as a result, education is used as a route to escape troubled economies or unsustainable livelihoods. Historically, brain drain occurred on a massive scale after the First World War, when there was extensive reconstruction in the developed world, for example in the United States, Britain and other Western countries. The reconstruction phase throughout the developed world increased the demand for highly skilled people in those countries. During this period, both developing and developed countries made relentless efforts to industrialise, a 'race' driven by every nation's desire to catch up with the technical and scientific standards of the United States of America. This is believed to have had a huge effect on the ability of developing countries to develop. Brain drain results from circumstances and situations that discourage people from staying in their countries, and from conditions in other countries that attract skilled people: push factors make people leave their native countries, either for neighbouring countries or for countries such as the USA, UK, Canada and Australia, while pull factors attract people to destinations where they are lured by prospects of good opportunities and better lives. This phenomenon usually occurs in countries where those who are willing and able to work cannot be absorbed into the job market, where skills and talents are unrecognised or poorly rewarded, where people's jobs have been terminated or jobs are non-existent, and where favouritism, nepotism or social standing, rather than capability and ability, are the basis for employment.
Skills flight or brain drain may also result from discrimination based on gender, social class, ethnicity, skin colour, race or skewed policies. Poor skills-training systems may produce graduates who do not match the available opportunities and are hence not well suited to the needs of the economy. Zimbabwe inherited an educational system from the former colonial master (Britain), but that system is no longer relevant to the current needs of the country. In some cases, those who trained abroad cannot fit back into the system of their native countries in order to use their learned skills; The Gambia in the 1990s is an example, when its people went to study abroad because there was no university in the country and a huge amount of the country's resources was used to fund studies and training abroad. Many of these people did not return because they had nowhere to use the acquired skills, especially those who had become professors.
Migrants usually compare the conditions obtaining in their native countries with those of foreign countries. Large gaps in standards of living, tastes and circumstances, and in the cost of doing business, shape migrants' decision-making. Macro-economic factors such as inflation rates, income, wages and buying power also affect migrants' decisions. For example, wages in Zimbabwe are very low and inflation is in double digits, so people are bound to emigrate, and the greater part of the working class are civil servants.
Political instability in countries of birth sometimes forces people to emigrate. Persecution of a group of people because of political affiliation or activism, or politically motivated violence, may drive people out of their native countries to seek a better life and opportunities elsewhere.
Some have been victims of ethnic, religious, cultural and sectarian violence and clashes, and emigration may result from civil unrest, tribal purging, or social unrest in native countries. While it is generally expected that, after completing their studies, students from less developed countries will go back to their native lands, they rarely do so, as they are lured by better prospects and sometimes better opportunities in the host countries. This is mostly the case in specialist areas such as medicine, education, engineering and Information and Communication Technology (ICT). Most of these professionals decide not to go back home but remain and seek employment in the host countries. Other reasons for not returning may include pride and the prestige of having obtained qualifications abroad.
Many Zimbabweans go to study abroad because of the limited number of training institutions in the country, which cannot absorb everyone. Because of this limited capacity, the selection criteria for college entrance become tough, and some are left to look for alternative opportunities elsewhere. Those who go abroad to study acquire internationally accepted qualifications, and many then shun the country and opt to work outside Zimbabwe.
Workers are generally not well paid in developing countries; hence this becomes a push factor for those who have acquired qualifications either inside or outside their native countries. Most developing countries have far fewer opportunities than people who are qualified, willing and able to provide their labour, and faced with this scenario many decide to emigrate. For example, Zimbabwe's unemployment rate is said to be around ninety-five percent, while the colleges and universities produce graduates every year who expect to be employed. This mismatch results in skilled people emigrating to other countries in search of better prospects. In Venezuela, political instability and poor macroeconomic management have led to hyperinflation and a mass exodus of people to neighbouring countries and to the United States of America; the impasse between the sitting President Maduro and the opposition leader has created a huge man-made disaster. A similar situation is unfolding in Zimbabwe, where there have been issues of rigged elections, rising inflation and rising unemployment. The socio-economic environment is very fluid and difficult for citizens. Deep-rooted political and economic development failures, poverty, poor governance and corruption have resulted in a plethora of problems for developing countries such as Zimbabwe.
Studies point out that the largest number of migrants are from Africa, and Africa bears the brunt of skills flight as it loses its most promising talent and trained personnel. In the late 1990s many Ghanaian physicians were said to be practising in the United States; estimates indicate that more than twenty thousand Nigerian academics work in the USA, while many Ethiopian physicians work in Chicago. It is argued that, every year, more than twenty thousand skilled people leave Africa. According to a recent report by the British Broadcasting Corporation (BBC), one third of professionals from Africa emigrated in the past decade, which translates into a replacement cost to the continent of $4 billion yearly if these people are to be replaced, while rich countries such as the USA saved up to $26 billion that they would otherwise have spent training 130,000 physicians. This situation exacerbates the already dire position of poor countries in the developing world. Most affected are countries such as Ethiopia, with its history of internal conflict, strife, hunger, famine and other natural disasters. Many medical doctors were trained using scarce state resources, at considerable social cost and with debt provided by rich nations, yet most of those who went to train abroad did not return to Ethiopia: between 1980 and 1990, out of about 23,000 students who went to study abroad, only about 6,000 returned. They were lured by generous working conditions, higher wages and better prospects in the host countries and other developed countries. In most cases, generous working conditions and prospects of better work opportunities are what drive most Africans to emigrate. It is argued that although Nigeria and South Africa are Africa's biggest economies, they are at the top in terms of brain drain, especially to countries like the UK and the USA. Consequently, this has contributed much to the underdevelopment of the African continent.
Pull factors are those attractions that lure people to leave their native countries and opt to reside in a foreign land. Rich and developed countries that have not yet filled their establishments of workers, or are yet to train enough workers with the requisite skill sets, obtain scarce skills from other countries by enticing people with better wages, good working conditions and promises of better opportunities. Many Western European professionals moved to "countries of opportunity", notably the USA, Canada and Australia.
Many European professionals migrated to North America because of fewer opportunities in their native countries; they were attracted by higher wages and more flexible opportunities than at home. In some cases, former colonial masters provide easier migration opportunities for nationals of their former colonies, with relaxed policies for entry and for integration into their societies and communities. For example, nationals of former French colonies easily settle in France, while those from the British Commonwealth settle in Britain.
Many people have been attracted by the ever-expanding economy of the USA and the opportunities it provides; this led large numbers of Asian nationals to migrate to the United States of America rather than to Europe. In the mid-1960s the USA relaxed its immigration laws, which contributed to the number of people who flocked to settle there, and the flexible working environment also attracted people to work in the USA. There has been unprecedented demand for skilled people with certain special talents; for example, Turkey lost engineers with more than six years of experience, a figure that increased by around 800% from 1977 to 1980. A similar scenario has played out in Zimbabwe from 1990 to date, where skilled and qualified people such as medical doctors, engineers, tradesmen and nurses have emigrated to countries such as the United Kingdom, USA, Australia, Canada and New Zealand. Many more have emigrated to Germany after deciding to remain there on completion of their studies. Most of these professionals have emigrated because of the lure of good remuneration packages and working conditions commensurate with their skills. As a result, many skilled people have chosen to go to the United States of America, where certain skills are better paid than in native countries; a doctor there may be paid twenty times what the same doctor earns in Zimbabwe.
The world has become a global village, and advances in technology have reduced the knowledge gap between the developing and developed world, making it easy for those from developing countries to settle in developed countries. Technology and the transfer of resources have also eased the movement of people from developing countries to the developed world. Studies have shown that, by the early 1990s, 50% of the world's migrant workers were in developed countries.
Brain drain has a dual effect, on both the sending and the receiving country. The migration of professionals is a big concern for developing countries, because those emigrating contribute immensely to the developmental needs of their countries, and without them national development suffers. With more people coming in, receiving countries gain an increased tax base and increased productivity, and the additional income is in turn used for further developmental programmes. The opposite is true for the countries losing their skilled manpower. Since it is the best brains that emigrate, the receiving countries end up with the best human capital and the most talented and energetic young people, the future leaders, while the native countries are left with people who may be below average in skills and intellectual capacity. The emigration results in a significant loss of much-needed skills which may not be easy to recover.
The provision of routine tasks and medical procedures in developing countries suffers, or they are not performed at all, because the skilled people to carry them out are not available, while those who emigrated contribute immensely to the needs of developed nations. For example, the current 'great trek' of health personnel from Zimbabwe to the United Kingdom, Canada, Australia, New Zealand and elsewhere has created many challenges for the country's health delivery. It is estimated that, since the early 1990s, more than twenty thousand people have left the continent every year (UN Economic Commission for Africa and the International Organization for Migration). Since most governments advance educational loans to students in colleges and universities that must be repaid, emigrants owe the government but do not repay these loans, increasing the burden on governments; these funds are meant to revolve and be passed on to the next generation, and if not repaid they continue to drain the fiscus. Emigration also has dire consequences, especially economic ones, for those who do return to their native countries after their sojourns abroad: it may be difficult to adapt to the conditions of their own countries, which are usually very different from those of the developed countries, and some qualifications obtained abroad may not be compatible with those of native countries or may not be useful where there are vast differences in technological advancement.
One wonders why any person in their right frame of mind would decide to stay in Zimbabwe, given the comatose state of the economy. Zimbabwe has failed to regain its once-earned status as the breadbasket of Africa and has become a basket case (Hammer et al., 2003). Skills flight has resulted in crippling skills shortages in every sector of the Zimbabwean economy over the last two decades, while immigration to Zimbabwe came to a virtual halt in the early 1990s. The skills deficit has facilitated the economic and social collapse of Zimbabwe. Emigration somewhat slows the pace of a country's decline by providing much-needed foreign currency through remittances sent back to family members left behind. Kapur and McHale (2005) assert that OECD countries have become home to about 90% of skilled and talented people from developing countries, a movement that has gained much traction (Lowell, 2004). By rough estimates, 75% of emigrants with university or college qualifications are from Africa, 48% from Latin America, and about 20 percent from the Asia-Pacific (Goldin, 2006). It is argued that, by 2001, 10% of the educated population of the developing world was resident in the USA, Australia or Western Europe, a figure said to rise to between 30 and 50 percent for those with skills in science and technology (Docquir, 2005). It is argued that the "brain drain" retards the development of a country: "While high skilled migration in sectors such as IT seems to have played an integral role in helping spur economic development in a few source countries, high-skilled migration in other sectors -health and medicine, in particular - [has] done considerable damage to source countries." (ibid). Despite all the negative effects of brain drain, there are some positives: a reduction in redundancy where there is excess labour for certain scopes of work; those emigrating may create space and provide opportunities for those who remain; and monies remitted back home help the country in terms of foreign exchange gains and development.
When professionals return, they usually bring rare technical skills and knowledge that are good for development. However, returnees can sometimes cause friction and tension within countries as they 'take over' jobs from those who remained, who may be less skilled and in some cases not well qualified for the jobs they are doing. Some returnees may also become sources of conflict with their native governments by exercising rights they enjoyed abroad that are not necessarily provided at home; some may decide to become human rights activists, which some governments strongly oppose, and hence conflict begins.
For the host countries, migrants contribute to development; for example, after World War One, Australia is said to have benefitted by as much as 58% because of foreigners who flocked there, and Arab countries such as Kuwait and the United Arab Emirates benefitted by 69% and 85% respectively because of the foreign nationals employed there. Countries with ageing populations may also benefit greatly from immigrants, who swell the younger generation. Emigrants may, however, be a source of conflict between nationals of the receiving country and immigrants. For example, there have been cases of xenophobic attacks on foreigners in South Africa, where locals argue that foreigners are taking away their jobs and business opportunities; even where the immigrants are better qualified than the locals, this becomes a source of xenophobia and ethnic and racial tension.
Implications for Socio-Economic Development
"Refugee" any person who " … owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence […], is unable or, owing to such fear, is unwilling to return to it" (1951 Convention on Refugees). It is believed that between 33% and 50% of the developing world's educated and skilled population are now resident in the developed world (Lowell, Findlay & Stewart, 2004). Sriskandarajah (2005) further argue that, the African Capacity Building Foundation reported that African countries lose about 20,000 skilled personnel to developed countries every year. This has a negative impact to developing countries that desperately need these sills to be able to implement their programmes.
Recommendations
Developing countries must put in place sound policies that dissuade people from wanting to leave. These policies may be both monetary and non-monetary, for example, providing incentives for specialised and skilled people to work in remote areas of the country, and providing a conducive working environment with flexible arrangements such as working from home, teleworking and job sharing, among others. The Zimbabwean Ministries of Education and Health, for example, have frozen all posts even though the population has grown, the disease burden has increased and the educational needs of the population have risen. This means that staff in these ministries are overworked but underpaid, and consequently skilled people decide to leave for greener pastures. Instead, the government should carry out a needs assessment to establish the gaps, for example through a workload indicator of staffing needs (WISN) exercise; when staffing needs are identified, more people should be hired so that those already working are not overloaded. Career advancement opportunities should be offered to those who want to acquire better skills, for example opportunities for general practitioners to train as physicians at a cost borne by the government. Governments may also offer performance-based contracts and remuneration tied to the achievement of certain goals. Agreements may be entered into by governments so that skilled and talented nationals in a given category are not simply absorbed by other governments; Zimbabwe and South Africa have such an agreement in respect of doctors, but its enforcement is not very good. Under such arrangements, people who decide to work abroad must obtain consent from the native country, and some of the money they earn may be repatriated back home (labour exporting). Developing nations must also grow their economies by undertaking beneficiation of resources instead of being net exporters of raw materials: if developing countries process their raw materials, this becomes a source of innovation, the requirement for skilled and talented people grows, and these people earn decent wages. Zimbabwe has a lot of raw materials that are not processed but are exported in raw form, throwing away opportunities for job creation for Zimbabweans. The leaders of developing nations must be transparent in their dealings and eradicate corruption so that multilateral institutions have the confidence to extend lines of credit that are critical for development. Intelligent and skilled people know their rights and do not want to stay in countries where their rights are trampled upon, so human rights must be observed so that people are not pushed away from their own countries; developing countries' leaders must uphold human rights so that people's freedom is guaranteed and respected, otherwise no one will want to stay, no matter how patriotic they may be. Governments of developing countries must also provide an enabling environment with the necessary tools and equipment that make it easy for people to work; Zimbabwean doctors, for example, have been complaining that although they are willing and able to work, they cannot deliver because the hospitals do not have what they require to do their work. The government of Zimbabwe has perennially underfunded health.
In the last budget, only 7% was allocated to health, which is well below the SADC average and less than half of the Abuja Declaration target of 15%. This will push skilled personnel away.
Conclusion
Brain drain may never be eradicated, but it can be reduced, and there are many things that developing countries must do to achieve this. Be that as it may, brain drain will always exist, because as people develop and progress socially and professionally they want to explore and embark on more challenging ventures consistent with their earned status. Brain drain is not all doom and gloom, nor is it hunky-dory; there must be a balance so that its benefits outweigh its demerits. In any case, for developing countries it is impossible to absorb all the skilled people in the country, so emigration comes in handy as it provides opportunities for those who cannot be absorbed by small and fragile economies such as Zimbabwe's.
Effect of Implementing an Educational Module About Inhaler Use on Severity of Dyspnea and Adherence to Inhalation Therapy Among Patients with Chronic Obstructive Pulmonary Disease
One of the most important routes of medication administration for treating chronic obstructive pulmonary disease patients is the inhaled route. If this method is not used properly, medications will not be effective. The aim of this study was to determine the effect of implementing an educational module about inhaler use on severity of dyspnea and adherence to inhalation therapy among patients with chronic obstructive pulmonary disease. A purposive sample of 140 patients with COPD was selected and divided randomly and alternatively into two equal groups, 70 in each. The study was conducted at the Chest department and Medical outpatient clinics of Menoufia University and Shebin El-Kom Teaching Hospitals. Five tools were utilized for data collection: a structured interview questionnaire, the Bristol COPD knowledge questionnaire, a pressurized metered dose inhaler performance observational checklist, the Shortness of Breath questionnaire and the Morisky Medication Adherence Scale. Results: 62.9% of the study group and 54.3% of the control group complained of severe dyspnea pre education, while 44.3% of the study group and 54.3% of the control group still complained of severe dyspnea one month post education. The improvement in dyspnea among the study group compared with the control group was not significant. However, medication adherence was significantly improved among the study group compared to the control group post education. Conclusion: the educational module about inhaler use led to significantly improved medication adherence and decreased dyspnea severity; however, the difference in dyspnea severity between the two groups was not significant. Recommendations: Patient education about correct inhaler use should be an ongoing process for all COPD patients, and correct use of the inhaler should be observed throughout the patient's life.
Introduction
Chronic obstructive pulmonary disease (COPD) is a major health condition characterized by irreversible, progressive airflow limitation and associated with increasing worldwide prevalence, morbidity and mortality [1]. In this condition, the airways become damaged, making it increasingly difficult for air to pass in and out [2]. Moreover, it affects the quality of life and economic status of those patients [3].
Most of the available information about COPD comes from high-income countries; accurate epidemiological data are difficult and expensive to collect. In 2005, the WHO estimated that 210 million people were diagnosed with COPD and that three million patients died of COPD. These deaths represent 5% of all deaths globally, but approximately 90% of them occur in low-income countries. In 2002, COPD was the fifth leading cause of death, and it is expected to become the third leading cause worldwide by 2030 [4,5]. According to statistics by country for COPD (2013), the extrapolated undiagnosed prevalence of COPD in Egypt is 4,197,651 and the diagnosed prevalence is 3,777,886 [6]. COPD is highly associated with over-exposure to environmental factors, especially tobacco smoke, which is responsible for about 80 to 90% of cases, as well as occupational dust and air pollution [7]. Cough, sputum production and dyspnea on exertion, as well as fatigue and sleep disturbances, are the primary symptoms of COPD [1,8]. It also affects patients' physical function, work and recreational activities, and emotional status, as well as sexual relations [9].
Dyspnea is the subjective experience of unpleasant discomfort with breathing. It is a cardinal symptom of chronic obstructive pulmonary disease (COPD). As the disease progresses, its severity and magnitude increase, with a negative impact on patients' abilities and quality of life, to the extent that patients become isolated, often describing themselves as existing rather than living. Refractory dyspnea is a common and difficult symptom to treat in patients with advanced COPD. The most important question about ideal management is whether the various therapies are effective or not [10,11].
The most important management strategies for COPD are risk reduction, such as smoking cessation and proper vaccination against influenza and pneumococcus, symptom relief and prevention of exacerbations [12]. These can be achieved by pharmacological management, especially bronchodilators to relieve bronchospasm and reduce airway obstruction, corticosteroids to improve symptoms and oxygen therapy to prevent acute dyspnea [1]. Medications can be given by inhalation or orally; however, the inhaled route is the preferred one because, by this route, patients may need a lower dose with faster action and fewer adverse effects. Moreover, medications go directly to the target region of the respiratory tract [13,14]. For this reason, the inhaled route is the route of choice for the treatment of most bronchial diseases, especially COPD. The inhalation technique influences drug deposition in the lung and enhances the effectiveness of treatment for patients with COPD, although the inhalation technique is often incorrect in many COPD patients [15].
There are many factors that affect the effectiveness of inhaled drugs, such as the patient's age, sex and education, duration of the disease, type of inhaler and correct inhalation technique [16]. However, the majority of COPD patients perform the inhalation technique incorrectly; many clinical studies have reported that up to 90% of these patients show incorrect technique. This may lead to decreased disease control, increased absenteeism from work or school, unnecessary increases in medication dosage, increased risk of side effects, non-adherence to inhaled medication and exacerbation manifestations, especially deterioration of dyspnea that requires oral corticosteroid treatment [17]. Many COPD patients are non-adherent to inhaled drugs, so the mortality rate is liable to increase more than twofold among those patients compared with patients who adhere to the inhaled medications [18,19].
Although COPD cannot be cured, optimal management provides symptom control, slows progression of the disease and may improve quality of life. Management of COPD becomes suboptimal due to poor adherence to evidence-based guidelines and under-diagnosis, or when patients fail to adhere to prescribed treatment regimens [20,21]. Adhering to inhaled medications is one of the most important factors for managing COPD in both clinical and ambulatory settings. In a recent study, it was shown that there was a significant decrease in the frequency of dyspnea, cough, sputum purulence or wheeze [18].
Moreover, in a previous study, instructions given repeatedly to COPD patients about inhalation technique contributed to adherence to the therapeutic regimen, which had a significant effect on health status [18]. Regular evaluation of inhalation technique is important to optimize treatment effectiveness, and the inhalation technique can be significantly improved by brief instruction on correct inhaler technique given by any trained health care personnel [17]. Therefore, this study aimed to determine the effect of implementing an educational module about inhaler use on severity of dyspnea and adherence to inhalation therapy among patients with chronic obstructive pulmonary disease.
Aim of the Study
The aim of the study was to determine the effect of implementing an educational module about inhaler use on severity of dyspnea and adherence to inhalation therapy among patients with chronic obstructive pulmonary disease.
Research Hypotheses
The following research hypotheses were formulated to achieve the aim of the study: a. Patients who follow educational module instructions about inhalation technique show a significant reduction in severity of dyspnea compared with patients who do not. b. Patients who follow educational module instructions about inhalation technique exhibit greater medication adherence than patients who do not.
Design
A quasi experimental research design was utilized to achieve the aim of this study.
Setting
The current study was conducted at Chest department and Medical outpatient clinics of Menoufia University and Shebin El-Kom Teaching Hospitals.
Subjects
A purposive sample of 140 patients with chronic obstructive pulmonary disease was selected using the following power analysis equation.
Power Analysis
The number of patients to be selected was determined using the following equation: n = (z² × p × q)/D², where the actual prevalence of COPD (p) was taken as 10% and a value of 0.025 was chosen as the acceptable limit of precision (D). Based on these assumptions, the sample size was estimated to be 140 patients. They were divided alternatively into two equal groups of 70 patients each.
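As a quick illustration of the sample-size formula above, the short Python sketch below computes n = (z² × p × q)/D². The inputs shown (z = 1.96 for a 95% confidence level, an assumed prevalence p = 0.10 and precision D = 0.05) are illustrative assumptions only and are not claimed to be the exact parameters used by the authors.

import math

def single_proportion_sample_size(z, p, d):
    # n = (z^2 * p * q) / d^2, rounded up to the next whole patient
    q = 1.0 - p
    return math.ceil((z ** 2) * p * q / (d ** 2))

# Illustrative (assumed) inputs; adjust z, p and d to the study's actual parameters.
print(single_proportion_sample_size(1.96, 0.10, 0.05))  # about 139 with these inputs

With these illustrative inputs the estimate lands in the region of the 138-140 patients reported in the text; changing z or D changes the result accordingly.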
Study group (1) received a detailed education and training about correct inhalation technique along with routine medical care such as administration of oxygen therapy and nebulizer.
Control group (2) was exposed to routine medical care only.
Inclusion Criteria
Subjects were considered eligible for the study if they met the following criteria: adult, conscious patients; under inhalation therapy; complained of dyspnea; made regular visits to the medical outpatient clinics every one to three months; and were free from any associated disorders.
Sampling Technique
The sample size was calculated using the EPI Info program and estimated to be 138 patients at a 99% confidence interval. The researchers increased the sample size to 140 patients.
Tools
To achieve the aim of the study and collect the necessary data, five tools were utilized by the researchers. These tools were as follows:
Tool I: Structured Interviewing Questionnaire
It was constructed by the researchers to collect biosociodemographic data. It covered the following two parts: Part one: Sociodemographic data, comprising seven items related to patients' age, sex, marital status, educational level, occupation, working hours and home status. Part two: Medical data, concerned with information such as smoking status, patients' present complaints, use of oxygen, history of inhaler use, previous hospitalization, family history of lung diseases and environmental factors that may increase the intensity of symptoms.
Tool II: Bristol COPD Knowledge Questionnaire (BCKQ)
It was developed by White et al., (2006) [22]. The questionnaire was translated into Arabic, back translated and linguistically validated, then utilized by the researchers to assess the patient's COPD-related knowledge. It consisted of sixty-five statements about COPD symptoms (such as ankle edema, fatigue, chest pain, dyspnea, sputum and rapid weight loss), manifestations of chest infection, benefits of exercise for COPD, the importance of smoking cessation, vaccination and medications, especially inhaled bronchodilators, antibiotics and corticosteroids.
Scoring system: Each item was given a score of one if the answer was correct and zero if the answer was wrong or 'don't know', then all scores were summed. The possible score ranged from zero to sixty-five, and the patients were categorized into two groups based on their scores: a BCKQ score of 28 or more was considered a high knowledge score, while a score of less than 28 was considered a low knowledge score.
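The scoring rule just described can be expressed as a small function. The sketch below is only an illustration of the logic stated in the text (one point per correct answer out of sixty-five items, with 28 as the high/low cutoff); the function name and the example answers are hypothetical.

def score_bckq(answers, cutoff=28):
    # answers: list of booleans, True for a correct answer, False for wrong or "don't know"
    total = sum(1 for a in answers if a)
    category = "high knowledge" if total >= cutoff else "low knowledge"
    return total, category

# Hypothetical patient answering 30 of the 65 items correctly
print(score_bckq([True] * 30 + [False] * 35))  # (30, 'high knowledge')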
Reliability: A previous study [23] tested the reliability of the English version of the questionnaire; it was demonstrated to be 0.82 with strong test-retest agreement. The researchers used a test-retest method to test the reliability of the questionnaire after translation, and it was 0.84.
Tool III: Pressurized Metered Dose Inhaler Performance Observational Checklist
It was developed by the researchers according to the guidelines of the inhaler manufacturer to assess the patient's performance of the pressurized metered dose inhaler, covering steps such as removing the cap, holding the inhaler upright, breathing out gently, putting the mouthpiece between the teeth and breathing in slowly through the mouth, etc. The checklist consisted of eleven steps. The patient was given one point for each step performed accurately and zero for an inaccurate or skipped step, then all points were summed. Subjects were considered to have good performance if they scored seven points or more; the higher the score, the better the performance.
Tool IV: Shortness of Breath Questionnaire
It was developed by Eakin et al., (1998) [24] and used by the researchers to rate the patient's breathlessness and assess which physical activities may precipitate breathlessness. It consisted of 24 listed physical activities to be assessed as to whether they precipitate dyspnea or not, such as shortness of breath at rest, while walking on the level at one's own pace, walking upstairs, eating, dressing, doing dishes, and shopping.
Scoring system: the questionnaire is a six-point Likert scale rated from zero to five, in which zero is no breathlessness at all, 1 is very mild breathlessness, 2 is mild breathlessness, 3 is average breathlessness, 4 is severe breathlessness and 5 is maximum breathlessness or inability to do the activity because of breathlessness. The total scores were summed and ranged from zero to one hundred and twenty, with a higher score indicating inability to do any activity because of breathlessness. Reliability: Tabberer et al., (2015) [25] tested the reliability of the questionnaire. They found that this questionnaire had high internal consistency (Cronbach's alpha = 0.936) with high test-retest reliability (Pearson's correlation coefficient = 0.86).
Tool V: Morisky Medication Adherence Scale (MMAS)
It was developed by Morisky et al., (1986) [26] and used by the researchers to assess the patient's adherence to the prescribed inhaled medications. It consisted of eight questions about adherence to the prescribed medications, such as whether the patient sometimes missed the medication, stopped taking the medication for a reason other than forgetting, or stopped taking the medication without telling the doctor.
Scoring system: each item was given a score of zero if the patient adhered to the prescribed medications and one for non-adherence, then all scores were summed, giving a score of: zero, meaning high adherence; 1-2, meaning medium adherence; and 3-8, meaning low adherence. Reliability: Moharamazad et al., (2015) [27] reported that a test-retest reliability analysis showed good reproducibility (r = 0.94).
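The adherence bands above translate directly into a small classification rule. The following sketch simply mirrors the scoring described in the text (eight items scored 0 for adherence and 1 for non-adherence, with totals of 0, 1-2 and 3-8 mapped to high, medium and low adherence); the function name and example responses are hypothetical.

def classify_mmas(item_scores):
    # item_scores: eight values, 0 = adherent on that item, 1 = non-adherent
    total = sum(item_scores)
    if total == 0:
        return "high adherence"
    if total <= 2:
        return "medium adherence"
    return "low adherence"  # totals of 3-8

# Hypothetical patient reporting non-adherence on two of the eight items
print(classify_mmas([0, 1, 0, 0, 1, 0, 0, 0]))  # medium adherence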
Methods
(1) Data was collected from the beginning of March 2014 to the end of August 2015.
(2) An official approval was obtained from hospitals' director and the head nurses of the Chest department and Medical outpatient clinics after an explanation of the aim of the study.
Tools Development
The first and third tools were developed by the researchers after an extensive review of the relevant literature, while the second tool was developed by White et al., (2006) [22] and translated into Arabic by the researchers, the fourth by Eakin et al., (1998) [24] and the fifth by Morisky et al., (1986) [26].
The first and third tools were tested by a panel of five experts in the fields of Nursing and Medicine to determine their content validity, relevance and completeness, while the second tool was translated into Arabic, back translated and linguistically validated by the same panel of experts.
The reliability of the first and third tools was tested using a test-retest method and the Pearson correlation coefficient; it was 0.91 for tool I and 0.94 for the third tool. The second tool was also tested for reliability by the researchers after translation into Arabic using a test-retest method; it was 0.84. (3) A formal consent to participate in this study was obtained from all participants after explaining the aim of the study, and they were assured that all collected data would be kept absolutely confidential and used only for the study's aim. The researchers emphasized that participation in the study was entirely voluntary, and the anonymity of the patients was assured through coding of the data. Subjects were also informed that refusal to participate in the study would not affect their care.
Pilot Study
A pilot study was conducted prior to data collection on 14 patients (10%) to test all tools for clarity, objectivity, relevance, feasibility and applicability. It was also conducted to identify any problems associated with administering the tools and to measure the time needed for data collection; the necessary modifications were then carried out accordingly. Data from the pilot study were excluded from the current study.
Data Collection
The patients who fulfilled the inclusion criteria were selected and divided randomly and alternatively into two equal groups: the study group (1) received detailed education and training about correct inhalation technique along with routine medical care such as administration of oxygen therapy and nebulizer, while the control group (2) was exposed to routine medical care only.
The study was conducted in four consecutive phases. These phases were:
Assessment Phase
The first interview was carried out by the researchers with each participant of both groups to collect baseline data on sociodemographic and medical characteristics, COPD-related knowledge, severity of dyspnea, and medication adherence level. The interview was usually carried out in the patient's room for hospitalized patients or in the waiting area of the outpatient clinics if the patient was not hospitalized. It took about 25 to 30 minutes using tool I, tool II, tool IV and tool V. The researchers then used the third tool (observational checklist) to assess each participant of both groups for their performance of inhaler use.
Planning Phase
Based on assessment phase and extensive literature review [2,6]
Evaluation Phase
Evaluation of all subjects of both groups was done twice during the study period: the first time two weeks after the last session and the second after one month, using all tools. A comparison between both groups was carried out to determine the effect of education about inhaler use on severity of dyspnea and adherence to inhalation therapy among patients with chronic obstructive pulmonary disease.
Statistical Analysis
The collected data were organized, tabulated and statistically analyzed using SPSS software (Statistical Package for the Social Sciences, version 16, SPSS Inc., Chicago, IL, USA). For quantitative data, the range, mean and standard deviation were calculated. For qualitative data, comparisons between two or more groups were made using the Chi-square test (χ2). For comparison between the means of two groups of parametric data from independent samples, the Student t-test was used. For comparison between more than two means of parametric data, the F value of the ANOVA test was calculated. Correlation between variables was evaluated using Pearson's correlation coefficient (r). Significance was adopted at p<0.05 for interpretation of the results of tests of significance [28].
Results
Table 1 illustrated that the mean age for the study group was 57.77±12.43 years and for the control group was 56.77±11.37 years. More than three fourths of both study and control groups were married (78.6% and 87.1% respectively). About one third of them had secondary education (37.1% for the study group and 30.0% for the control group). Regarding working hours, about half of both study and control groups (59.3% and 48.4% respectively) worked eight hours per day. In relation to home condition, it was shown that the majority of both groups had sun in their houses (85.7% and 88.6% respectively) and proper room ventilation (84.3% and 88.6% respectively). Also, about two thirds of both groups had sewage (68.6% and 62.9% respectively). Lastly, about one third of the study group (31.4%) and more than half of the control group (51.4%) had given up smoking.
Table 2 revealed that about two thirds of both study and control groups (62.9% and 71.4% respectively) visited the outpatient clinic once per week. More than one third of both groups (42.9% and 37.1%) complained of productive cough, wheezing, and dyspnea. All subjects of both groups (100%) used oxygen during attacks. Regarding duration of inhaler use, the majority of both groups (80.0% and 82.9% respectively) had used the inhaler for two or more years. All of them (100%) had received training about inhaler use, but none of them (0.0%) followed up this training. As regards family history of lung diseases, the majority of both groups (81.4% and 87.1% respectively) did not have a positive family history. Smoke was the most common aggravating factor for the disease manifestations in about one third of both groups (32.9% and 41.4% respectively).
Table 3 showed that about one quarter of both study and control groups (24.3% and 25.7% respectively) had a low knowledge score pre intervention. These scores improved two weeks and one month post intervention: 100% of the study group and 74.3% of the control group had a high knowledge score two weeks and one month post intervention. Statistically significant differences existed between both groups two weeks and one month post intervention.
Figure 1 revealed that the mean total performance score of inhaler use for the study group was significantly improved throughout the study period, from 6.06 ± 0.63 pre intervention to 9.71 ± 1.10 two weeks and one month post intervention, while the mean total performance score for the control group remained stable all over the study period at 6.13 ± 0.74. Statistically significant differences existed between both groups two weeks and one month post intervention (p = 0.0001).
Table 4 demonstrated that about two thirds of study group I (62.9%) and more than half of control group II (54.3%) complained of severe dyspnea, while one month after education only 44.3% of the study group still complained of severe dyspnea compared to 54.3% of the control group, with insignificant differences between both groups. Table 5 showed that only 4.3% of the study group adhered moderately to medication pre intervention, which significantly improved to 47.1% two weeks and one month post intervention, while the control group remained stable all over the study period, with 10% of them having moderate adherence to medication. There were statistically significant differences between both groups two weeks and one month post intervention. Table 6 revealed that there was a statistically significant difference between dyspnea severity scores in relation to disease duration among the study group. Table 7 showed that no statistically significant differences existed between medication adherence levels in relation to selected sociodemographic characteristics among the study and control groups. Table 8 shows that a negative correlation was found between dyspnea severity score and knowledge score for the study group post intervention.
Table 5. Levels of medication adherence among both studied subjects pre, two weeks and one month post-education (n=140).
Table 7. Mean scores of medication adherence pre education among studied subjects in relation to sociodemographic data and duration of disease (n=140).
Discussion
Inhaled medications are the cornerstone of therapy for patients with obstructive lung diseases, and the inhaler is the only device for administering the medication effectively; it allows the drugs to be highly deposited in the lungs and also minimizes systemic adverse drug reactions [29].
Education of patients and their families has been a major concern in recent years. This has been motivated by patients' need to know more about their diseases, so all health care personnel should consider self-management an important aspect of the patient's recovery [22]. New evidence reported that control of COPD symptoms can be improved by brief verbal instructions and demonstration of correct inhaler technique [17].
Biosociodemographic characteristics of the studied subjects: The results of the present study found that the mean age of the study group was 57.77±12.43 years and that of the control group was 56.77±11.37 years. This finding is in line with Salah et al., (2013), who stated that the mean age of their sample was 55±5.7 years [30], and Tel et al., (2012), who found that the mean age of their patients was 66.03±11.33 years [31]. Also, Wisniewski et al., (2014) revealed that the mean age of their studied patients was 63.7±7.1 years [32]. This may be related to the growing incidence of COPD with older age.
Concerning sex, the current study reported that more than half of the studied subjects in both groups were male. These percentages are lower than those of Salah et al., (2013) and Akinci and Yildrin (2013), who stated that most of their samples were male [30,33]. This may be due to the smaller sample size of the current study in relation to theirs. The majority of both groups in the current study were married, which is consistent with their mean age.
Regarding home condition, the present study revealed that the homes of the majority of both groups were in good condition (sun entered the home, the houses were not crowded, they were well ventilated, and they had sewage), so home condition was not a contributing factor to the disease's symptoms.
The National Heart, Lung and Blood Institute (2013) stated that patients with a history of smoking are more vulnerable to COPD [34]. This is in line with our study, which showed that more than two thirds of the studied subjects had a history of smoking; they were either current or former smokers, with current smokers representing about one third of both the study and control groups in the present study. This is in line with Wisniewski et al., (2014) and Fernandz et al. (2014) [32,35].
The mean disease duration for subjects of the current study was 11.11±3.52 years for the study group and 11.23±3.78 years for the control group. This result is close to that of Akinci and Yildrin (2013) [33], who found that the mean disease duration for their sample was 9.1±8.5 years.
The results of the present study showed that all subjects of both groups had received previous training about inhaler use but none of them had followed up this training. These findings are in accordance with those of Fadaei et al., (2016), who found that approximately all of their subjects had been trained in this regard but did not follow up this training [36]. These findings were explained by Restrepo et al., (2008), who reported that retention of instructions about appropriate inhaler use is lost over time [37].
It has been stated that patient education is one of the most important roles of a nurse in any health care setting, and the nurse provides patients with the information needed for self-care to ensure continuity of care [38]. In the current study, it was noticed that the total knowledge score of the study group improved more than that of the control group two weeks and one month post education. This result is in line with Khdour et al., (2009) and Jarab et al., (2012), who reported that the study group showed improvement in COPD knowledge when compared with the control group [39,40]. Also, El-Sayed et al., (2012) found a statistically significant improvement in total knowledge score among the study group compared with the control group post education [41].
Performance of inhaler use
Correct inhaler technique is very important; however, previous studies reported that incorrect inhaler technique is common among patients with asthma or COPD [16,42]. These studies coincide with the results of our study, which showed that pre education, the mean total performance of inhaler use was low for both the study and control groups, while two weeks and one month after education there was a significant improvement in the mean total performance score of inhaler use among the study group compared with the control group. These results are supported by the National Asthma Council (2008), which stated that a large body of evidence from randomized trials has shown that patients' inhaler technique can be improved by education [17]. Also, Fadaei et al., (2014) revealed that using an inhaler correctly requires the patients to be informed about proper use instructions [36].
Severity of dyspnea
One of the most severe symptoms of chronic obstructive pulmonary disease is dyspnea, and its severity increases as the disease progresses [43]. Inhaled bronchodilators are considered a cornerstone for treating dyspnea, but incorrect inhaler use can reduce medication effectiveness, which can lead to symptom exacerbation, especially dyspnea [44,45]. This is in agreement with the results of the current study, which showed that more than half of both groups had severe dyspnea before the education on correct inhaler use. Improvement in dyspnea severity was observed among the study group after one month compared with the control group; however, this improvement was not statistically significant. This result did not support hypothesis number one. This may be due to the short follow-up period, as improvement of dyspnea may require a longer follow-up period; however, the better result observed among the study group suggests that education about correct inhaler technique is beneficial. This is supported by a previous report which mentioned that patient education about correct inhaler technique increased the effectiveness of the inhaled medication and relieved symptoms, especially dyspnea [46].
The results of the current study showed that there was a statistically significant difference between mean dyspnea severity scores in relation to disease duration among the study group. This may be because the severity of the disease increases with increasing disease duration, so manifestations such as dyspnea increase in severity.
Medication adherence
Although medical treatment of chronic obstructive pulmonary disease has advanced, non-adherence to the medication regimen poses a significant barrier to optimal management [37]. Non-adherence to medication regimens is common among COPD patients because of the chronic nature of the disease and the use of multiple medications, especially as they are often prescribed aerosolized medications to use from two to six times daily [47,48]. Haupt et al., (2008) stated that chronic obstructive pulmonary disease patients display significantly low adherence to treatment [49]. Agh et al., (2011) showed that patients adhered to treatment poorly when they had several drugs prescribed [50]. This is in line with the result of the present study, which illustrated that, pre education, the majority of both groups had low adherence to the medication regimen.
It has been reported that patients' knowledge of the disease process as well as of the recommended treatment is critical for optimal medication adherence in patients with COPD [51]. Also, several observational studies have shown that training patients on inhaler use improves not only inhalation technique but also adherence to treatment and thus disease control [52,53]. This explains the finding of the current study, which showed a significant improvement in medication adherence among the intervention group compared with the control group after education about correct use of the inhaler. This result supported hypothesis number two. This result is also supported by a previous cross-sectional study, which revealed that instruction about inhalation technique repeatedly provided to chronic obstructive pulmonary disease patients contributes to therapeutic adherence [53].
The results of the current study revealed that mean medication adherence among both the study and control groups did not differ according to age, educational level of the participants or their disease duration. This result is in line with other studies carried out by Wisniewski et al., (2014) and Agh et al., (2011), who reported that patient age was not a factor that increased adherence to treatment [32,50]. Moreover, Wisniewski et al., (2014) did not find any relation between medication adherence and education [32].
Conclusion
Based on the results of the current study, it was concluded that education about chronic obstructive pulmonary disease and correct inhaler use was effective in significantly improving the total knowledge score among the study group compared with the control group. Poor inhalation technique was common among subjects of both groups, and education on correct inhaler technique appears vital, as it significantly improved inhaler performance among the study group compared with the control group.
Severity of dyspnea was reduced among the study group compared with the control group after correct inhaler education; however, the reduction in dyspnea severity was not significant. Moreover, education about proper inhalation technique resulted in significant improvement of adherence to inhalation therapy among the study group compared with the control group.
Recommendations
Based on the findings of the current study, the following recommendations can be suggested:
1. Patients with chronic obstructive pulmonary disease must be regularly evaluated for their inhaler technique and reinforced to maintain correct technique, because it can deteriorate again after education.
2. A colored booklet about proper inhalation technique, supported by pictures for almost each step, should be distributed to all COPD patients.
3. Replication of the study using a large probability sample from a broad geographical area to allow greater generalization of the results.
4. Replication of the study with a long period of follow-up to allow for greater understanding of the effect of inhaler technique on dyspnea severity and to allow greater generalization of the results.
Postgraduate Course ERS Munich 2006 - Avian influenza and SARS: updates on the big stories
Key points: In many patients, the disease caused by H5N1 may follow an unusually aggressive clinical course, with rapid deterioration and high fatality. The incubation period is around 2-8 days and possibly as long as 17 days. Diarrhoea, vomiting and abdominal pain have been reported as early symptoms. Almost all reported patients develop pneumonia.
Educational aims: To update clinicians on the epidemiology and clinical manifestations, including complications, of avian influenza. To update on the options for treatment.
Summary: To date, all outbreaks of the highly pathogenic form of avian influenza have been caused by viruses of the H5 and H7 subtypes. The virus can improve its transmissibility among humans via reassortment, in which genetic material is exchanged between human and avian viruses during co-infection of a human or a pig. It can also go through a more gradual process of adaptive mutation during subsequent infections of humans. All evidence to date indicates that close contact with dead birds is the principal source of human infection with the H5N1 virus. Limited evidence suggests that some antiviral drugs, notably oseltamivir, can reduce the duration of viral replication and improve prospects of survival, provided that they are administered within 48 hours, ideally 12 hours, following symptom onset. This review aims to give an overview of this topical and interesting issue.
The disease in birds
Avian influenza is an infectious disease of birds caused by type A strains of the influenza virus. The disease occurs worldwide. While all birds are thought to be susceptible to infection with avian influenza viruses, many wild bird species carry these viruses with no apparent signs of harm.
Other bird species, including domestic poultry, develop disease when infected with avian influenza viruses. In poultry, the viruses cause two distinctly different forms of disease: one common and mild; the other rare and highly lethal.
In the mild form, signs of illness may be expressed only as ruffled feathers, reduced egg production or mild effects on the respiratory system. Outbreaks can be so mild they escape detection unless regular testing for viruses is in place.
In contrast, the second and far less common highly pathogenic form is difficult to miss. First identified in Italy in 1878, highly pathogenic avian influenza is characterised by sudden onset of severe disease, rapid contagion and a mortality rate that can approach 100% within 48 hours. In this form of the disease, the virus not only affects the respiratory tract, as in the mild form, but also invades multiple organs and tissues. The resulting massive internal haemorrhaging has earned it the lay name of "chicken Ebola".
All 16 haemagglutinin (HA) and nine neuraminidase (NA) subtypes of influenza viruses are known to infect wild waterfowl, thus providing an extensive reservoir of influenza viruses perpetually circulating in bird populations. In wild birds, routine testing will nearly always identify some influenza viruses. The vast majority of these viruses cause no harm.
To date, all outbreaks of the highly pathogenic form of avian influenza have been caused by viruses of the H5 and H7 subtypes. Highly pathogenic viruses possess a tell-tale genetic "trade mark" or signature (a distinctive set of basic amino acids in the cleavage site of the HA) that distinguishes them from all other avian influenza viruses and is associated with their exceptional virulence.
Not all virus strains of the H5 and H7 subtypes are highly pathogenic, but most are thought to have the potential to become so. Recent research has shown that H5 and H7 viruses of low pathogenicity can, after circulation for sometimes short periods in a poultry population, mutate into highly pathogenic viruses. Considerable circumstantial evidence has long suggested that wild waterfowl introduce avian influenza viruses, in their low pathogenic form, to poultry flocks, but do not carry or directly spread highly pathogenic viruses. This role may, however, have changed very recently: at least some species of migratory waterfowl are now thought to be carrying the H5N1 virus in its highly pathogenic form and introducing it to new geographical areas located along their flight routes.
Apart from being highly contagious among poultry, avian influenza viruses are readily transmitted from farm to farm by the movement of live birds, people (especially when shoes and other clothing are contaminated), and contaminated vehicles, equipment, feed and cages. Highly pathogenic viruses can survive for long periods in the environment, especially when temperatures are low. For example, the highly pathogenic H5N1 virus can survive in bird faeces for at least 35 days at low temperature (4°C). At a much higher temperature (37°C), H5N1 viruses have been shown to survive in faecal samples for 6 days.
For highly pathogenic disease, the most important control measures are rapid culling of all infected or exposed birds, proper disposal of carcasses, the quarantining and rigorous disinfection of farms, and the implementation of strict sanitary or "biosecurity" measures. Restrictions on the movement of live poultry, both within and between countries, are another important control measure. The logistics of recommended control measures are most straightforward when applied to large commercial farms, where birds are housed indoors, usually under strictly controlled sanitary conditions, in large numbers. Control is far more difficult under poultry production systems, in which most birds are raised in small backyard flocks scattered throughout rural or peri-urban areas.
When culling (the first line of defence for containing outbreaks) fails or proves impracticable, vaccination of poultry in a high-risk area can be used as a supplementary emergency measure, provided quality-assured vaccines are used and recommendations from the World Organisation for Animal Health (OIE) are strictly followed. The use of poor-quality vaccines or vaccines that poorly match the circulating virus strain may accelerate mutation of the virus. Poor-quality animal vaccines may also pose a risk for human health, as they may allow infected birds to shed virus while still appearing to be disease free.
Apart from being difficult to control, outbreaks in backyard flocks are associated with a heightened risk of human exposure and infection. These birds usually roam freely as they scavenge for food and often mingle with wild birds or share water sources with them. Such situations create abundant opportunities for human exposure to the virus, especially when birds enter households or are brought into households during adverse weather, or when they share areas where children play or sleep. Poverty exacerbates the problem: in situations where a prime source of food and income cannot be wasted, households frequently consume poultry when deaths or signs of illness appear in flocks. This practice carries a high risk of exposure to the virus during slaughtering, defeathering, butchering and preparation of poultry meat for cooking, but has proved difficult to change. Moreover, as deaths of birds in backyard flocks are common, especially under adverse weather conditions, owners may not interpret deaths or signs of illness in a flock as a signal of avian influenza and a reason to alert the authorities. This tendency may help explain why outbreaks in some rural areas have smouldered undetected for months. The frequent absence of compensation to farmers for destroyed birds further works against the spontaneous reporting of outbreaks and may encourage owners to hide their birds during culling operations.
The role of migratory birds
During 2005, an additional and significant source of international spread of the virus in birds became apparent for the first time. Scientists are increasingly convinced that at least some migratory waterfowl are now carrying the H5N1 virus in its highly pathogenic form, sometimes over long distances, and introducing the virus to poultry flocks in areas that lie along their migratory routes. Should this new role of migratory birds be scientifically confirmed, it will mark a change in a long-standing stable relationship between the H5N1 virus and its natural wild-bird reservoir.
Evidence supporting this altered role began to emerge in mid-2005 and has since been strengthened. The dying-off of more than 6,000 migratory birds, infected with the highly pathogenic H5N1 virus, that began at the Qinghai Lake nature reserve in central China in late April 2005, was highly unusual and probably unprecedented. Prior to that event, wild bird deaths from highly pathogenic avian influenza viruses were rare, usually occurring as isolated cases found within the flight distance of a poultry outbreak. Scientific studies comparing viruses from different outbreaks in birds have found that viruses from the most recently affected countries, all of which lie along migratory routes, are almost identical to viruses recovered from dead migratory birds at Qinghai Lake. Viruses from Turkey's first two human cases, which were fatal, were also virtually identical to viruses from Qinghai Lake.
Countries affected by outbreaks in birds
The outbreaks of highly pathogenic H5N1 avian influenza began in South-East Asia in mid-2003, affecting the Republic of Korea, Vietnam, Japan, Thailand, Cambodia, the Lao People's Democratic Republic, Indonesia, China and Malaysia.
In late July 2005, the virus spread geographically beyond its original focus in Asia to affect poultry and wild birds in the Russian Federation and adjacent parts of Kazakhstan. Almost simultaneously, Mongolia reported detection of the highly pathogenic virus in wild birds. In October 2005, the virus was reported in Turkey, Romania and Croatia. In early December 2005, Ukraine reported its first outbreak in domestic birds. Figure 1 is a map showing the different areas throughout the world which have reported a confirmed occurrence of H5N1 in poultry and wild birds since 2003.
Further spread of the virus along the migratory routes of wild waterfowl is anticipated. Moreover, bird migration is a recurring event. Countries that lie along the flight pathways of birds migrating from central Asia may face a persistent risk of introduction or re-introduction of the virus to domestic poultry flocks.
Prior to the present situation, outbreaks of highly pathogenic avian influenza in poultry were considered rare. Excluding the current outbreaks caused by the H5N1 virus, only 24 outbreaks of highly pathogenic avian influenza have been recorded worldwide since 1959. Of these, 14 occurred in the past decade. The majority have shown limited geographical spread, a few remained confined to a single farm or flock, and only one spread internationally. All of the larger outbreaks were costly for the agricultural sector and difficult to control.
The disease in humans
History and epidemiology
Influenza viruses are normally highly species specific, meaning that viruses that infect an individual species (humans, certain species of birds, pigs, horses and seals) stay "true" to that species and only rarely spill over to cause infection in other species. Since 1959, instances of human infection with an avian influenza virus have been documented on only 10 occasions. Of the hundreds of strains of avian influenza A viruses, only four are known to have caused human infections: H5N1, H7N3, H7N7 and H9N2. In general, human infection with these viruses has resulted in mild symptoms and very little severe illness, with one notable exception: the highly pathogenic H5N1 virus.
Of all the influenza viruses that circulate in birds, the H5N1 virus is of greatest present concern for human health for two main reasons. First, the H5N1 virus has caused by far the greatest number of human cases of very severe disease and the greatest number of deaths by crossing the species barrier to infect humans. Figure 2 shows a map of all occurrences of H5N1 avian influenza since 2003.
A second implication for human health, of far greater concern, is the risk that the H5N1 virus (if given enough opportunities) will develop the characteristics it needs to start another influenza pandemic. The virus has met all prerequisites for the start of a pandemic save one: an ability to spread efficiently and sustainably among humans. While H5N1 is presently the virus of greatest concern, the possibility that other avian influenza viruses, known to infect humans, might cause a pandemic cannot be ruled out.
The virus can improve its transmissibility among humans via two principal mechanisms. The first is a "reassortment" event, in which genetic material is exchanged between human and avian viruses during co-infection of a human or pig. Reassortment could result in a fully transmissible pandemic virus, marked by a sudden surge of cases with explosive spread.
The second mechanism is a more gradual process of adaptive mutation, whereby the capability of the virus to bind to human cells increases during subsequent infections of humans. Adaptive mutation, expressed initially as small clusters of human cases with some evidence of human-to-human transmission, would probably give the world some time to take defensive action, if detected sufficiently early.
During the first documented outbreak of human infections with H5N1, which occurred in Hong Kong in 1997, the 18 human cases coincided with an outbreak of highly pathogenic avian influenza, caused by a virtually identical virus, in poultry farms and live markets. Extensive studies of the human cases determined that direct contact with diseased poultry was the source of infection. Studies carried out in family members and social contacts of patients, health workers engaged in their care and poultry cullers found very limited, if any, evidence of spread of the virus from one person to another. Human infections ceased following the rapid destruction (within 3 days) of Hong Kong's entire poultry population, estimated at ~1.5 million birds. Some experts believe that this drastic action may have averted an influenza pandemic.
All evidence to date indicates that close contact with dead or sick birds is the principal source of human infection with the H5N1 virus. Especially risky behaviours identified include the slaughtering, defeathering, butchering and preparation for consumption of infected birds. In a few cases, exposure to chicken faeces when children played in an area frequented by freeranging poultry is thought to have been the source of infection. Swimming in water where the carcasses of dead infected birds have been discarded or which may have been contaminated by faeces from infected ducks or other birds might be another source of exposure. In some cases, investigations have been unable to identify a plausible exposure source, suggesting that some as yet unknown environmental factor, involving contamination with the virus, may be implicated in a small number of cases. Some explanations that have been put forward include a possible role of peri-domestic birds, such as pigeons, or the use of untreated bird faeces as fertiliser.
At present, H5N1 avian influenza largely remains a disease of birds. The species barrier is significant: the virus does not easily cross from birds to infect humans. For unknown reasons, most cases have occurred in rural and peri-urban households where small flocks of poultry are kept. Again for unknown reasons, very few cases have been detected in presumed high-risk groups, such as commercial poultry workers, workers at live poultry markets, cullers, veterinarians and health staff caring for patients without adequate protective equipment. Also lacking is an explanation for the puzzling concentration of cases in previously healthy children and young adults.
Research is urgently needed to better define the exposure circumstances, behaviours and possible genetic or immunological factors that might enhance the likelihood of human infection.
Assessment of possible cases
H5N1 infection should be considered for persons showing influenza-like illness, especially with fever and symptoms in the lower respiratory tract, who have a history of close contact with birds in an area where confirmed outbreaks of highly pathogenic H5N1 avian influenza are occurring. Exposure to an environment that may have been contaminated by faeces from infected birds is a second, though less common, source of human infection.
Not all human cases have arisen from exposure to dead or visibly ill domestic birds. Research published in 2005 has shown that domestic ducks can excrete large quantities of highly pathogenic virus without showing signs of illness. A history of poultry consumption in an affected country is not a risk factor, provided the food was thoroughly cooked and the person was not involved in food preparation. As no efficient human-to-human transmission of the virus is known to be occurring anywhere, simply travelling to a country with ongoing outbreaks in poultry or sporadic human cases does not place a traveller at enhanced risk of infection, provided the person did not visit live or "wet" poultry markets, farms or other environments where exposure to diseased birds may have occurred.
Clinical features
In many patients, the disease caused by the H5N1 virus follows an unusually aggressive clinical course, with rapid deterioration and high fatality. Like most emerging diseases, H5N1 influenza in humans is poorly understood. Clinical data from cases in 1997 and the current outbreak are beginning to provide a picture of the clinical features of the disease, but much remains to be learned. Moreover, the current picture could change, given the propensity of this virus to mutate rapidly and unpredictably.
The incubation period for H5N1 avian influenza may be longer than that for normal seasonal influenza, which is ~2-3 days. Current data for H5N1 infection indicate an incubation period ranging from 2 to 8 days and possibly as long as 17 days. However, the possibility of multiple exposures to the virus makes it difficult to define the incubation period precisely.
Initial symptoms include a high fever, usually with a temperature higher than 38°C, and influenza-like symptoms. Diarrhoea, vomiting, abdominal pain, chest pain, and bleeding from the nose and gums have also been reported as early symptoms in some patients. Watery diarrhoea without blood appears to be more common in H5N1 avian influenza than in normal seasonal influenza. The spectrum of clinical symptoms may, however, be broader, and not all confirmed patients have presented with respiratory symptoms. In two patients from southern Vietnam, the clinical diagnosis was acute encephalitis; neither patient had respiratory symptoms at presentation. In another case, from Thailand, the patient presented with fever and diarrhoea, but no respiratory symptoms. All three patients had a recent history of direct exposure to infected poultry.
One feature seen in many patients is the development of manifestations in the lower respiratory tract early in the illness. Many patients have symptoms in the lower respiratory tract when they first seek treatment. On present evidence, difficulty in breathing develops ~5 days following the first symptoms. Respiratory distress, a hoarse voice and a crackling sound when inhaling are commonly noted. Sputum production is variable and sometimes bloody. Almost all patients develop pneumonia. During the Hong Kong outbreak, all severely ill patients had primary viral pneumonia, which did not respond to antibiotics. Limited data on patients indicate the presence of a primary viral pneumonia in H5N1, usually without microbiological evidence of bacterial supra-infection at presentation. Turkish clinicians have also reported pneumonia as a consistent feature in severe cases; as elsewhere, these patients did not respond to treatment with antibiotics.
In patients infected with the H5N1 virus, clinical deterioration is rapid. In Thailand, the time between onset of illness and the development of acute respiratory distress was ~6 days, with a range of 4-13 days. In severe cases in Turkey, clinicians have observed respiratory failure 3-5 days after symptom onset. Another common feature is multi-organ dysfunction. Common laboratory abnormalities include leukopenia (mainly lymphopenia), mild-to-moderate thrombocytopenia, elevated aminotransferases and, in some instances, disseminated intravascular coagulation.
Treatment
Limited evidence suggests that some antiviral drugs, notably oseltamivir (commercially known as Tamiflu®), can reduce the duration of viral replication and improve prospects of survival, provided they are administered within 48 hours following symptom onset. However, prior to the outbreak in Turkey, most patients had been detected and treated late in the course of illness. For this reason, clinical data on the effectiveness of oseltamivir are limited. Moreover, oseltamivir and other antiviral drugs were developed for the treatment and prophylaxis of seasonal influenza, which is a less severe disease associated with less prolonged viral replication. Recommendations on the optimum dose and duration of treatment for H5N1 avian influenza, including in children, need to undergo urgent review, and this is being undertaken by the WHO.
In suspected cases, oseltamivir should be prescribed as soon as possible (within 48 hours, ideally 12 hours, following symptom onset) to maximise its therapeutic benefits. However, given the significant mortality currently associated with H5N1 infection and evidence of prolonged viral replication in this disease, administration of the drug should also be considered in patients presenting later in the course of illness.
Currently recommended doses of oseltamivir for the treatment of influenza are contained in the product information on the manufacturer's website.
As the duration of viral replication may be prolonged in cases of H5N1 infection, clinicians should consider increasing the duration of treatment to 7-10 days in patients who are not showing a clinical response. In cases of severe infection with the H5N1 virus, clinicians may need to consider increasing the recommended daily dose or the duration of treatment, keeping in mind that doses above 300 mg per day are associated with increased side-effects. For all treated patients, consideration should be given to taking serial clinical samples for later assay to monitor changes in viral load, to assess drug susceptibility and to assess drug levels. These samples should be taken only in the presence of appropriate measures for infection control.
In severely ill H5N1 patients or in H5N1 patients with severe gastrointestinal symptoms, drug absorption may be impaired. This possibility should be considered when managing these patients.
Countries with human cases in the current outbreak
To date there have been just over 250 human cases. Most human cases have been reported in Asia. The first patients were from Vietnam and developed symptoms in December, 2003, but they were not confirmed as H5N1 infection until January 11, 2004. Thailand reported its first cases on January 23, 2004. The first case in Cambodia was reported on February 2, 2005. The next country to report cases was Indonesia, which confirmed its first infection on July 21, 2005. China's first two cases were reported on November 16, 2005. Confirmation of the first cases in Turkey came on January 5, 2006, followed by the first reported case in Iraq on January 30, 2006. All human cases have coincided with outbreaks of highly pathogenic H5N1 avian influenza in poultry. To date, Vietnam has been the most severely affected country, with more than 90 cases.
Altogether, more than half of the laboratory-confirmed cases have been fatal. H5N1 avian influenza in humans is still a rare disease, but a severe one that must be closely watched and studied, particularly because of the potential of this virus to evolve in ways that could start a pandemic.
SARS
Severe acute respiratory syndrome (SARS) is a newly identified acute viral respiratory syndrome caused by a novel coronavirus, the SARS coronavirus (SARS-CoV), which is believed to have crossed the species barrier recently from animals to humans. It remains difficult to predict when or whether SARS will re-emerge in epidemic form, but clinicians should familiarise themselves with the varying clinical manifestations of this disease as highlighted in this update.
SARS was first recognised as a global threat in mid-March 2003. The first known cases of SARS occurred in the Guangdong province, China, in November 2002 [1,2], and the World Health Organization (WHO) reported the last human chain of transmission of SARS in that epidemic on July 5, 2003. The aetiological agent, SARS-CoV [3-5], is believed to be an animal virus that crossed the species barrier to humans. This could have been caused by ecological changes, or by changes in human behaviour providing increased opportunities for human exposure to the virus and for virus adaptation, enabling human-to-human transmission [6]. By July 2003, the international spread of SARS-CoV had resulted in 8,098 SARS cases in 26 countries, with 774 deaths [7]. The epidemic caused significant social and economic disruption in areas with sustained local transmission of SARS and to the international travel industry, in addition to its direct impact on health services. While much has been learnt about this syndrome since March 2003, our knowledge of the epidemiology and ecology of SARS-CoV infection and of this disease remains incomplete.
The natural reservoir of SARS-CoV has not yet been identified, but a number of wildlife species consumed as delicacies in southern China (the Himalayan masked palm civet (Paguma larvata), the Chinese ferret badger (Melogale moschata) and the raccoon dog (Nyctereutes procyonoides)) have shown laboratory evidence of infection with a related coronavirus [2,8]. Domestic cats living in the gardens of an apartment block in Hong Kong were also found to be infected with SARS-CoV [9]. More recently, ferrets (Mustela furo) and domestic cats (Felis domesticus) were experimentally infected with SARS-CoV and found to efficiently transmit the virus to previously uninfected animals housed with them [10]. These findings indicate that the reservoir for this pathogen may involve a range of animal species. The masked palm civet is the wildlife species most often associated with animal-to-human transmission; however, whether the civet is the natural reservoir of SARS-like coronaviruses remains unproven.
Since July 2003, there have been four occasions when SARS has reappeared. Three of these incidents were attributed to breaches in laboratory biosafety, and resulted in one or more cases of SARS (Singapore [11-13], Taipei [14] and Beijing [15,16]). Fortunately, only one of these incidents resulted in secondary transmission outside of the laboratory. The WHO recommends that each country ensures that the correct biosafety procedures are followed by all laboratories working with SARS-CoV and other pathogens [17], and that appropriate monitoring and investigation of illness in laboratory workers is undertaken.
The fourth incident (Guangzhou, Guangdong province, China [18-20]) resulted in four sporadic, community-acquired cases arising over a 6-week period. Three of the cases were attributed to exposure to animal or environmental sources, whereas the source of exposure is unknown in the remaining case. There was no further community transmission.
Clinical description
Aetiology
SARS is a disease caused by SARS-CoV.
Epidemiology
Nosocomial transmission of SARS-CoV has been a striking feature of the SARS outbreak. The majority of the cases have been in adults. Children are less commonly affected than adults and usually have a milder illness [21]. The mean incubation period is 5 days, with a range of 2-10 days, although there are isolated reports of longer incubation periods. Cases outside the 2-10-day incubation period have not necessarily been subjected to rigorous and standardised investigation including serological confirmation. There have been no reports of transmission occurring before the onset of symptoms.
Natural history
Week 1 of illness
Patients initially develop influenza-like prodromal symptoms. Presenting symptoms include fever, malaise, myalgia, headache and rigors. No individual symptom or cluster of symptoms has proven specific. Although history of fever is the most frequently reported symptom, it may be absent on initial measurement.
Week 2 of illness
Cough (initially dry), dyspnoea and diarrhoea may be present in the 1st week, but are more commonly reported in the 2nd week of illness. Severe cases develop rapidly, progressing to respiratory distress and oxygen desaturation, with ~20% requiring intensive care. Up to 70% of patients develop diarrhoea, which has been described as large volume and watery, without blood or mucus. Transmission occurs mainly during the 2nd week of illness.
Clinical outcomes
Based on the analysis of data from Canada, China, Hong Kong SAR, Singapore, Vietnam and the USA during the 2003 epidemic, the case fatality ratio (CFR) of SARS is estimated to range from 0% to >50%, depending on the age group affected and reporting centre, with a crude global CFR of ~9.6%. Higher mortality has also been associated with male sex and presence of comorbidity in various studies.
Radiological findings
Early chest radiography or computed tomography changes are observed in most patients as early as days 3-4 after the onset of illness, even in the absence of respiratory signs. These typically show patchy consolidation, starting with a unilateral peripheral lesion that progresses to multiple lesions or a ground-glass appearance. Some lesions follow a shifting pattern. Features during the later stages have sometimes included spontaneous pneumothorax, pneumomediastinum, sub-pleural fibrosis and/or cystic changes.
Haematological and biochemical findings
There are no haematological or biochemical parameters specific for SARS; however, studies have consistently highlighted the following.
Haematological findings
Lymphopenia is common on presentation and progresses during the course of the illness. Sometimes thrombocytopenia and prolonged activated partial thromboplastin time are observed.
Biochemical findings
Lactate dehydrogenase is frequently high and some reports have suggested an association with poor prognosis. Alanine aminotransferase, aspartate aminotransferase and creatine phosphokinase elevation are less frequently reported. Abnormal serum electrolytes have also been reported on presentation or during hospitalisation, including hyponatraemia, hypokalaemia, hypomagnesaemia and hypocalcaemia.
Beach Pollution Effects on Health and Productivity in California
The United States (U.S.) Clean Water Act triggered over $1 trillion in investments in water pollution abatement. However, treated sewage discharge and untreated runoff water that are contaminated by fecal matter are discharged into California beach waters daily. Warnings are posted to deter the public from contact with polluted coastal water, in accordance with the California Code of Regulations (CCR). This paper evaluates the current policy by empirically examining the productivity loss, in the form of sick leave, caused by fecal-contaminated water along the California coast under the CCR. The findings of this study show that Californians suffer productivity losses of 3.56 million sick leave days per year due to recreational beach water pollution. This paper also empirically examines the pollution-to-sickness curve that Cabelli's classic study theoretically proposed. The results confirm that the existing water quality thresholds are still reasonably safe and appropriate, despite being based on studies from the 1950s. The weakness of the CCR lies in its poor enforcement and compliance. Better compliance, in terms of posting pollution advisories and increasing public awareness of beach pollution effects on health, would lead to a significant decrease in sick leave and a corresponding increase in productivity. Therefore, this study advocates stronger enforcement through the display of pollution advisories and better public awareness of beach pollution effects on health.
Introduction
California beaches attract 23 million residents and 150 million tourists each year [1,2]. However, large volumes of treated sewage discharge and polluted runoff water flow into the California coastline through storm drains, which are adjacent to many frequently visited California beaches. As the runoff water travels across the land surface and flows through watersheds down to the coast, it accumulates fecal contamination along the way, which originates from various sources, such as animal waste and leaky sewage pipes [3]. Treated sewage discharge also carries a significant amount of pathogens, despite its treatment [4]. As a result, substantial amounts of human and animal fecal matter are frequently released into the marine coastal waters. Such urban runoff pollution has a strong negative impact on the water quality of California's coastal water, estuaries, and bays [5,6].
Epidemiological studies have suggested that exposure to recreational waters polluted by animal and human waste may result in illnesses, including respiratory diseases, gastrointestinal illnesses (GI), and skin, eye, and ear infections [7-13]. For instance, Fleisher et al. [7] surveyed 1216 UK participants, consisting of swimmers and non-swimmers, during the summers of 1989-1992. Their study found an association between swimming in contaminated marine waters and gastroenteritis.
CEDEN Water Pollution Data
The ENT data that were used in this study were obtained from the California Environmental Data Exchange Network (CEDEN). The State Water Resources Control Board manages the CEDEN to record surface water quality test results [26,27]. This data portal contains the same water testing results that are used to determine the posting of health advisories by the county health care agencies. The CEDEN documents the time-stamped surface water testing results at chosen sampling locations near the beaches. These sampling locations were strategically chosen to represent the water quality of the coastline. Such sampling locations must be within a Core-Based Statistical Area (CBSA) and have valid ENT observations during the study period from 2004 to 2013. These points were collected from the CEDEN data portal. Figure 1 shows the spatial distribution of the sampling locations along the coast, where the ENT data were collected for this study. Some of the coastal areas do not have sampling points, as seen in Figure 1. This could be because (1) ENT observations were not sampled throughout the study time, (2) ENT observations were missing or not valid during this period, or (3) ENT observations were not within the studied CBSAs. Table 1 summarizes the data that were used for analysis.
The fraction of polluted beachline that lies within a CBSA represents the probability of exposure. The fraction of polluted beachline is defined as the fraction of sampling locations in the CBSA that exceeds the indicated pollution threshold; for example, if three out of 10 sampling locations in a CBSA are deemed polluted, then the fraction of polluted beachline is 0.3 or 30%. Under the GM (STV) threshold, the probability of someone being exposed to polluted coastal water is 12% (7%). This is consistent with the estimates in the literature [17]. Figure 2 shows the probability distribution of the fraction of polluted local beachlines using both the STV and GM measures.
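As a minimal sketch of how this per-CBSA fraction could be computed from per-location pollution flags (the DataFrame and column names below are illustrative stand-ins, not the paper's actual data):

```python
# Sketch: fraction of polluted beachline per CBSA-month from per-location
# pollution flags; cbsa/month/polluted are hypothetical column names.
import pandas as pd

samples = pd.DataFrame({
    "cbsa":     ["LA", "LA", "LA", "SD", "SD"],
    "month":    ["2010-07"] * 5,
    "polluted": [1, 1, 0, 0, 1],   # 1 if the location exceeded the threshold
})

# Share of sampling locations exceeding the threshold in each CBSA-month,
# e.g. 2 of 3 LA locations polluted -> fraction 0.67.
fraction = samples.groupby(["cbsa", "month"])["polluted"].mean()
print(fraction)
```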
Current Population Survey (CPS)
The 2004-2013 waves of the CPS were used to measure the amount of sick leave. This study observed individuals between 18 and 65 years of age who have a valid report of their race, ethnicity, gender, employment status, and sick leave. Sick leave is a YES or NO answer to the question "Have you ever taken a sick day from a labor market activity during the week before the CPS interview?", where a labor market activity is working full-time, working part-time, or searching for a job.
This study linked the pollution indicators that were derived from the CEDEN to individuals in the CPS using the month, the year, and the longitude and latitude of the sampling locations. The data were kept for people who live in the California CBSAs that have a beach within their borders with at least one sampling location that consistently gathered ENT density data. These individuals live in the following cities: Los Angeles, Long Beach, Santa Ana, Salinas, San Diego, Carlsbad, San Marcos, San Francisco, Oakland, Fremont, San Luis Obispo, Paso Robles, Santa Barbara, Santa Maria, Goleta, Santa Cruz, Watsonville, Santa Rosa, Petaluma, Oxnard, Thousand Oaks, and Ventura. This process left us with 501,110 individuals.
Methods
This section introduces the empirical strategy, where the study takes advantage of the arguably exogenous variation in the degree of pollution of coastal residents' local beach waters. The empirical model in this study is more sophisticated than the models adopted in related studies. As stated in the introduction, several papers examined the health effects of visiting polluted California beaches [17-19]. However, these studies did not estimate the pollution-to-sickness statistics; instead, they adopted the numbers generated in [20] or [21]. The method used in this study is similar to those of Cabelli [20] and Kay et al. [21] in many respects. First, following their methods, this model includes demographic and socioeconomic controls that take the background rate of disease into account; second, this model reduces the bias introduced by day-to-day variations in pollutant density by using a monthly measure. However, this study takes one step further and looks at a crucial outcome of illness, namely sick leave from work. In addition, this study looks at the intent-to-treat (ITT) effect rather than the treatment-on-the-treated (TOT) effect. The ITT effect includes all individuals of interest, regardless of the treatment they actually received, whereas the TOT effect is estimated only among those who were actually treated. Applied to this case, the ITT effect is the effect of the water being polluted on everyone who might go to the beach; these people could have seen the warnings and left, or could have stayed because they were unaware of the pollution or simply did not care. The TOT effect would be the effect of exposure to polluted water versus no exposure. For evaluating the effect of a policy, the ITT estimator is typically more useful because it better reflects the real-life effect of the policy.
Concerns about endogeneity after controlling for the independent variables in this model are low for the following reasons. First, the identification strategy used in this model relies not only on the fecal matter density, but also on the pollution criteria set by the government. Individual behavior must somehow be guided by the degree of pollution on the local beachline for an omitted variable bias to occur. This is very unlikely, as individuals cannot predict the water quality with sufficient accuracy for a number of reasons. Fecal pollution in coastal waters is barely noticeable or observable with the naked eye. This type of pollution varies drastically from day to day and responds primarily to upstream pollution conditions tens of miles away from the coast. Fecal pollution is also largely influenced by events that are unrelated to health, such as an upstream city's garden irrigation schedule or a sewage treatment plant's release schedule. Official reports of water quality are usually days, if not weeks, behind the calendar day. Hence, individuals cannot predict the water quality with a sufficient degree of accuracy.
Equation (1) shows the reduced-form relationship between pollution and forgone work or job-search days. The empirical model that was adopted is

Sick Leave_igt = θ·Polluted_gt + X_i γ + δ_t + δ_g + δ_g*t + ε_igt, (1)

which was estimated using the STATA® software.
Productivity loss is measured with Sick Leave, which is a dummy indicator of taking sick leave in the previous week for individual (i) in CBSA (g) and survey month (t). Polluted is the fraction of beachline that violated the CCR policy in the resident's CBSA in the month of the survey. The primary definition of Polluted is determined by the monthly GM threshold of 35 cfu/100 mL that the California regulations recommend. First, the geometric mean of each month's ENT levels was calculated from the daily ENT, according to the policy recommendations. Subsequently, this monthly GM was evaluated against the California regulations' monthly GM criterion of 35 cfu/100 mL to determine whether the sampling location was contaminated for that month. If multiple sampling locations were in the same CBSA, the average probability of contamination across sampling locations was calculated to represent the degree of pollution in the local coastal water. The monthly average probability of violating the STV of 104 cfu/100 mL is treated as a secondary measure and used in the robustness tests.
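A small sketch of this monthly GM flag, assuming a hypothetical daily ENT series for a single sampling location:

```python
# Flag a sampling location as polluted for a month when the geometric mean
# of its daily ENT densities exceeds the 35 cfu/100 mL GM criterion.
import numpy as np

def monthly_gm(ent_daily):
    """Geometric mean of daily ENT densities (cfu/100 mL) for one month."""
    x = np.asarray(ent_daily, dtype=float)
    return float(np.exp(np.log(x).mean()))

GM_THRESHOLD = 35.0                     # monthly GM criterion, cfu/100 mL

ent = [12.0, 80.0, 41.0, 22.0]          # hypothetical daily readings
gm = monthly_gm(ent)
print(f"GM = {gm:.1f} cfu/100 mL, polluted = {gm > GM_THRESHOLD}")
```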
The covariates in X include the demographic variables for individual i: race (black, white, and other), ethnicity (Hispanic and non-Hispanic), and gender fixed effects. X also includes employment status, which is a determinant of health [28]. γ is the vector of coefficients corresponding to X. δ_t, δ_g, and δ_g*t are the year fixed effect, the CBSA fixed effect, and the CBSA-specific linear time trend, respectively. The CBSA fixed effects capture the geographic differences in the economic environment, population density, the quality of beach water, and the average beach-going behavior among local residents. They also capture the differences in operating efficiency across the local health care agencies that are responsible for educating the local population in beach water safety and posting warning signs on the beach when the ENT level exceeds the threshold. The year fixed effects account for the sporadic events each year that cause changes in disease occurrences, as well as the yearly variation in beach water quality, such as that caused by population growth. The CBSA-specific time trends capture changes over time, such as the gradual decay of each CBSA's amenities that regulate fecal matter exposure, or the increasing effort to educate residents about water quality in a CBSA. After controlling for these fixed effects, the parameter θ is arguably unbiased. ε is the random error. The 2004-2011 monthly rainfall (measured in mm), retrieved from CDC WONDER, was used in the robustness checks.
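The paper estimated Equation (1) in Stata; the snippet below is a rough Python/statsmodels sketch of the same linear probability specification on a synthetic DataFrame with made-up column names, shown only to make the fixed-effects structure concrete:

```python
# Hypothetical sketch of Equation (1): a linear probability model with year
# and CBSA fixed effects and CBSA-specific linear time trends.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "cbsa":     rng.choice(["LA", "SD", "SF"], n),
    "year":     rng.choice(np.arange(2004, 2014), n),
    "t":        rng.integers(0, 120, n),               # monthly time index
    "race":     rng.choice(["white", "black", "other"], n),
    "hispanic": rng.integers(0, 2, n),
    "female":   rng.integers(0, 2, n),
    "employed": rng.integers(0, 2, n),
    "polluted": rng.uniform(0, 1, n),                  # fraction of polluted beachline
})
# Synthetic outcome with a small built-in pollution effect.
df["sick_leave"] = (rng.uniform(0, 1, n) < 0.05 + 0.009 * df["polluted"]).astype(int)

model = smf.ols(
    "sick_leave ~ polluted + C(race) + C(hispanic) + C(female) + C(employed)"
    " + C(year) + C(cbsa) + C(cbsa):t",                # fixed effects and trends
    data=df,
)
result = model.fit(cov_type="HC1")                     # robust standard errors
print(result.params["polluted"])                       # the estimate of θ
```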
This study considers θ as the lower-bound estimate of productivity losses due to fecal matter, because it is possible that some individuals would attend work even when they do not feel well, while others may have jobs with flexible hours that do not require taking sick leave. θ also does not distinguish among the channels through which fecal matter affects health. According to the F-diagram of [29], fecal pathogens are transmitted to a new host, either directly or indirectly, through digestion, fluids, physical contact, flies, or field crops. In California, besides the direct contact that was evaluated, the only other likely channel is through flies and/or food. Contact with contaminated runoff water upstream was difficult, since most watershed drains were not accessible during the study period. Therefore, the predominant route of exposure to coastal fecal contaminants would be direct physical contact with the coastal water.
Results
Table 2 panel A reports the results using the monthly GM criterion of >35 cfu/100 mL as the measure for pollution. The first column presents the simple bivariate relationship between beach pollution and sick leave. This regression only uses the observations from the months during which there was local beach pollution as determined by the monthly GM threshold. Columns (ii) to (iv) of Table 2 panel A gradually add the control variables and fixed effects until the model reaches the preferred specification stated in Equation (1). The coefficient reported in Column (iv) indicates that, if 100% of the local beachline has a pollution level above the monthly GM threshold of 35 cfu/100 mL, sick leave rises by 0.9 percentage points. To put this number into perspective, if the CPS sample is representative of the entire Californian population, the effect translates into 3.56 million sick leave days per year due to polluted recreational water exposure among all Californians.
Robustness Tests
The results in panel A were replicated using an alternative STV measure in panel B of Table 2. Exceeding the STV threshold signals a very high ENT level of at least 104 cfu/100 mL. The STV measure is less reliable than the GM measure, because the STV measure is subject to a larger measurement error due to the sampling locations and/or the time of the day. Nevertheless, there is a similar but more severe health effect (1.8 percentage points) from exceeding the STV pollution threshold. However, the standard errors are also much larger due to the variation in the daily ENT measurements.
Columns (v) and (vi) of Table 2 panel A perform a robustness test that considers the additional effect of rain on both measures of beach water quality. Rainfall increases the chance of sickness, and it also increases the chance that previously accumulated pollutants in the watershed are washed to the shore [30]. Therefore, to prevent this omitted variable bias, the study controls for rainfall in this robustness test. These columns are based on individuals surveyed from 2004 to 2011 due to the limited years of rainfall data. Consequently, column (v) replicates the same exercise as column (iv) using the 2004-2011 sample and serves as a point of comparison for adding rainfall as a control variable in the analysis. The size of the effect in column (v) remained identical to column (iv), even with fewer observation years. Column (vi) adds the rainfall control variable. Controlling for rainfall reduced the coefficient from 0.9 to 0.6 percentage points, although the change is statistically insignificant. This is consistent with our hypothesis that controlling for rainfall reduces upward omitted variable bias and helps to more accurately identify the effect of water quality. The 0.6 percentage point estimate was adopted as the primary and conservative estimate of the productivity loss due to runoff water pollution. A similar test was performed for the STV regressions in Panel B, columns (v) and (vi).
Effectiveness of the Water Quality Criteria
This section explores the effectiveness of the criteria from two aspects: whether people comply with warning signs and whether the monthly/daily thresholds are sufficiently low to be at safe levels. Figure 3 shows the effect of changing the GM criteria on productivity loss. Figure 3a graphically illustrates the size of the effect according to the pollution level. Gradually increasing the monthly GM in Equation (1) from 10 to 110 cfu/100 mL in 10 cfu/100 mL intervals produces these estimates. This figure aims to inform the choice of a proper GM criterion for beach pollution. Each point is a separate regression that evaluates the increase in sick leave if the local beachline shifted from no pollution to polluted with a fecal matter density near the said GM level (from criterion − 10 to the criterion, to be precise). For example, the value plotted for a GM of 20 cfu/100 mL is the estimated effect on sick leave had the beach pollution level been at 10-20 cfu/100 mL. Equation (1) is used to estimate these effects. Figure 3b shows the equivalent theoretical deduction of the water pollution-sickness relationship and the ideal theoretical threshold proposed in the benchmark-setting work of Cabelli [20]. This paper is the first to show the empirical curve of Cabelli's theoretical deduction.
Surprisingly, comparing the two figures shows that the pollution thresholds were set at a close-to-ideal level, and these thresholds should be reasonably safe. The theoretical prediction of the pollution-illness curve matches well with the shape of the empirical curve shown in Figure 3a. The ideal policy threshold should be strategically placed before the exponential growth segment of the curve for mild illnesses, which is almost precisely where 35 cfu/100 mL lies on Figure 3a.
The lack of compliance with the policy is also surprising. Perfect compliance with the policy would imply that, whenever pollution exceeds the monthly GM threshold, a warning sign is posted and everyone follows the sign. The coefficient to the right of the threshold should then be zero, because no one would have been exposed to polluted water. Even with imperfect compliance, one would expect Figure 3a to display a sharp drop in the size of the coefficient immediately to the left of the threshold. However, there is no observable discontinuity at the threshold.
To further test whether compliance improves with the higher STV threshold, a similar exercise was performed using various levels of the STV in Figure 4. This figure aims to inform the choice of a proper STV criterion for beach pollution. Each point is a separate regression that evaluates the increase in sick leave if the local beachline went from no pollution to polluted with a fecal matter density near the said STV level (from criterion − 5 to the criterion, to be precise). For example, the value plotted for an STV of 50 cfu/100 mL is the estimated effect on sick leave had the beach pollution level been at 45-50 cfu/100 mL. Equation (1) is used to estimate these effects. No obvious trend break is noticeable near the 104 cfu/100 mL threshold either.
Discussion
This study estimated 3.56 million sick leave days per year due to polluted recreational water exposure among all Californians. This number is generally consistent with the GI infection predictions for Orange County's Newport and Huntington Beaches based on historical pollution-to-sickness estimates [12]. The two beaches were predicted to generate an average of 36,778 GI episodes per year and approximately 38,000 additional illness episodes per year, including respiratory, eye, and ear infections [19]. It is important to note that beach-pollution-induced illness is not the most prominent reason for sick leave: this study explains approximately 0.6 percent of all sick leave per year, according to the r-squared statistic of the preferred specification. Albeit small, this fraction of sick leave is the result of beach water pollution, and it equates to a large number of sick leave days in total, which should not be taken lightly by policymakers.
Based on this study, the importance of AB 411 to public health should not be underestimated; however, for the purpose of fully understanding this policy, it is important to note that its GM and STV criteria came into being rather haphazardly. These criteria were converted from the coliform thresholds released by the National Technical Advisory Committee in 1968 [24]. Those coliform thresholds were obtained from a study by the U.S. Public Health Service, which found the threshold of sickness-inducing coliform density to be 400 fecal coliforms per 100 mL [31]. These results are referred to by the USEPA as "far from definitive", and the agency arbitrarily cut this density in half to 200 fecal coliforms per 100 mL in the 1960s [25]. Cabelli [20] and Dufour [32] later found that ENT density is a better predictor of GI illnesses than coliform. In 1986, an unproven formula was implemented by the USEPA [25] to convert the coliform standards into ENT standards. It is easy to see that various steps in the generation of the ENT criteria are debatable and that the thresholds are not based on empirical research regarding ENT. Therefore, the policy calls for further tests of its effectiveness.
The effectiveness of the water quality criteria was assessed with two tests in this study. The first test determines the safety of the monthly/daily water quality thresholds. These tests are important for a number of reasons. First, the only enforcement method is posting warning signs or closure signs, so it would be beneficial for policymakers to test whether this is sufficient for preventing exposure to contaminated coastal water. Second, the California regulations have set the monthly GM pollution threshold for warning the public at 35 cfu/100 mL of ENT. This ENT standard is an arbitrary conversion of a debatable threshold measured using fecal coliform density. Therefore, it is useful to see whether an empirical test based on ENT renders the ideal disease-prevention effect, as theoretically deduced. Surprisingly, the results indicate that the somewhat outdated pollution thresholds are reasonably safe and appropriate. The empirical pollution-to-sickness curve matches well with the theory proposed by Cabelli [20]. The theoretically optimal threshold should be at the start of the segment of accelerated growth in the pollution-to-sickness curve. Based on the shape of the empirical curve estimated in this study, both the GM and STV thresholds precede the accelerated growth segments of their corresponding curves.
The second test focused on whether recreators comply with the warning signs. As the previous discussion suggested, one should expect a significant decrease in the size of the effect beyond the thresholds if compliance with the policy is high. Figures 3 and 4 showed no noticeable trend breaks near the GM or STV thresholds. The lack of such trend breaks indicates inadequate compliance with the existing policy under both the monthly and daily water quality thresholds. If the current policy were strictly enforced, there would be a significant reduction in sick leave and a corresponding gain in productivity. This finding implies that the warning signs were quite ineffective in preventing exposure to contaminated coastal water, and more effective measures are necessary.
The lack of enforcement of the policy is an alternative explanation for the lack of compliance. Brinks et al. [17] mentioned that the number of times a warning or a closure sign is posted on the beaches is much lower than the 12% estimated with the data. Meanwhile, it is worth noting that the enforcement of policies is often a clouded area, with many stakeholders weighing in to achieve the balance that benefits them in a fair manner. Preventing visitors from going to the beach certainly faces backlash from beach-loving communities and beach cities, whose economies heavily depend on beach visitors. In fact, some beach lovers are willing to take the risk of going to a "polluted" beach despite the high-bacteria-level warning. The reality, therefore, is that a beach is often closed when there is a sewage overflow or spill upstream from the beach, which is a more serious risk, rather than on the basis of the ENT level of its water.
Conclusions
This study assesses the existing California Code of Regulations that governs beach water quality monitoring for fecal contamination and determines the productivity losses due to exposure to beach fecal contamination. The hypothesis is that successful regulation would lead to no productivity losses. The study found a 0.6 percentage point increase in sick leave when the GM criterion is exceeded, which translates into 3.56 million sick leave days per year in coastal California. Furthermore, the regression model used is robust to alternative specifications. The results reveal that the beach water quality criteria are set at reasonably safe levels. However, the enforcement of the regulations needs to be strengthened to prevent visitors from going to polluted beaches. Public awareness of beach pollution effects on health should also be increased to prevent illness.
Hole-driven MIT theory, Mott transition in VO_2, MoBRiK
For inhomogeneous high-T_c superconductors, hole-driven metal-insulator transition (MIT) theory explains that the gradual increase of conductivity with increasing hole doping is due to inhomogeneity with the local Mott system undergoing the first-order MIT and the local non-Mott system. For VO_2, a monoclinic and correlated metal (MCM) phase showing the linear characteristic as evidence of the Mott MIT is newly observed by applying electric field and temperature. The structural phase transition occurs between MCM and Rutile metal phases. Devices using the MIT are named MoBRiK.
It has been generally accepted that the conductivity, σ, and T_c of inhomogeneous high-T_c superconductors [1], as strongly correlated systems, gradually increase with the doped hole density in a Mott insulator from under-doping to critical doping [2], although a first-order transition near the Mott insulator had been theoretically suggested [3]. These phenomena seemed to be explained by the Mott-Hubbard (MH) continuous metal-insulator transition (MIT) theory. However, since the MH theory was established for a homogeneous system, it does not explain the phenomena in an inhomogeneous system. The first-order Mott MIT without a structural phase transition (SPT) has not clearly been proved.
This paper briefly describes the important ideas and physical meanings of the hole-driven MIT theory (the extended Brinkman-Rice (BR) picture [4]), so named by a reviewer in ref. [7], and explains the above phenomena. An experimental observation is also presented to clarify the Mott MIT.
Metal has the electronic structure of one electron per atom in real space, i.e., half filling, which indicates that δq = (q_i − q_j) = 0, where q_i and q_j are the charge quantities at nearest-neighbor sites i and j, respectively [4]; there is no charge density wave. The BR picture explains the physical properties of a strongly correlated metal and was developed for n = l, a homogeneous system with one electron per atom, where n is the number of electrons in the Mott system with the electronic structure of metal (Fig. 1(a), left) and l is the number of lattice sites in the measurement region [3]. The Mott insulator becomes metal via the MIT.
When n < l (inhomogeneous) (Fig. 1(a), right), the Fourier transform from k-space to real space is not satisfied. The local Mott insulator (system) becomes metal after the MIT. When the inhomogeneous system is measured, the local metal (Mott) regions are averaged over all lattice sites (the measurement region); the measured physical quantity is an averaged one. The effective fractional charge is then given by e′ = ρe, where 0 < ρ = n/l ≤ 1 is the band filling. The fractional Coulomb energy between quasiparticles, U, is defined in terms of U_c, the critical on-site Coulomb interaction in the BR picture [3]. The averaged system is satisfied with the Fourier transform. The physical meaning of the fractional e′ and U is an effect of the measurement (averaging), not a true effect.
The averaged system has an electronic structure of one effective charge per atom; ρ′ = n′/l = 1, where n′ is the number of effective charges. This is the same electronic structure as that of the BR picture with κ = 1 [3,4]. The effective mass of the quasiparticle is derived from Gutzwiller's variational method and has the same form as that in the BR picture, Eq. (1). This is because the averaged system with ρ′ = 1 and the true system satisfying the BR picture are mathematically self-consistent. In the experimental analysis for an inhomogeneous system, κ was estimated as one [4]. Eq. (1) is defined at ρ = 1 (Fig. 1(b)) and gives a first-order discontinuous MIT between a Mott insulator with U_c at ρ = 1 and a metal at ρ_max < 1. The MIT is caused by the breakdown of U_c upon hole doping of a very low density, n_c = 1 − ρ_max, into the Mott insulator (Fig. 1(c)); this is a hole-driven MIT. n_c is the minimum constant hole density at which the MIT occurs. Conversely, control of n_c makes the Mott insulator switch between insulator and metal. After the MIT, the local Mott insulators become strongly correlated local metals with κ = 1 in the BR picture. m* in Eq. (1) is an average of the true effective mass in the BR picture.
For inhomogeneous superconductors, the gradual increase of conductivity with doping [2] reflects that the local Mott insulators (Fig. 1(a), right) continuously change into metal with hole doping; σ ∝ (m*/m)^2, with the doping-density (ρ) dependence given in Eq. (1). The reason why the measured conductivity with doping is continuous is that the magnitude of a local metal conductivity after the MIT is very small, because the local area is within about 3 nm [1].
Furthermore, the Mott MIT [5-7] and the Peierls MIT with the SPT [8,9] have been controversial even in VO_2, a representative Mott insulator. This is due to the ambiguity of the relation between the MIT and the SPT. Fig. 1(d) clearly shows the relation. VO_2 was known to undergo the MIT and the SPT simultaneously near 68 °C (similar to the V = 1 case in Fig. 1(d)). Actually, even in this case, the MIT was not simultaneous with the SPT in our work [7]. When voltage is applied to a VO_2-based device [7], the MIT occurs between the T phase (monoclinic semiconductor, transient triclinic) [5,6] and the MCM (monoclinic and correlated metal) phase, with an increase of σ. The SPT happens between the MCM and R (rutile tetragonal metal) phases (SPT instability, red dashed line in Fig. 1(d)); this was confirmed by a micro-Raman experiment [10]. This indicates that the MIT is not related to the SPT. The MCM phase, as evidence of the Mott transition, clarifies the controversial problem. MCM differs from Pouget et al.'s M_2 Mott-Hubbard insulator phase [5,6]. We consider that T can be the paramagnetic Mott insulator with the equally spaced chain structure, because T and MCM have the same structure.
MCM is caused by n(E) = n_c(T, E) − n(T), where n(E) is the hole density excited by the electric field (voltage), n(T) is the hole density excited by temperature, and n_c(T, E) is the critical hole density at which the MIT occurs under electric-field and temperature excitation [7]. Hole carriers were confirmed by Hall measurement [7]. For constant n_c, n(T) decreases as n(E) increases; thus T_MIT decreases, which is evidence that the MIT is controlled by doped holes, as predicted by Eq. (1). Further, MCM is regarded as a non-equilibrium state because metal exists at the divergence in Eq. (1) (Fig. 2(b)). First-order MIT devices using the transition between T and MCM are called MoBRiK, after the Mott-Brinkman-Rice-Kim physicists who established and extended the MIT.
* This work was presented at the 8th International Conference on Materials and Mechanisms of Superconductivity and High Temperature Superconductors (M2S-HTSC-VIII), held in Dresden, Germany, July 9-14, 2006.
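As a toy numerical illustration of the hole-density bookkeeping above (all values hypothetical): if the MIT fires once n(E) + n(T) reaches a fixed critical density n_c, then raising the field-excited contribution lowers the thermal contribution still required, so T_MIT falls with applied voltage.

```python
# Toy illustration with hypothetical numbers: the MIT occurs once the
# field-excited holes n(E) plus thermally excited holes n(T) reach n_c.
N_C = 0.018  # hypothetical critical hole density n_c (arbitrary units)

def mit_occurs(n_field, n_thermal, n_c=N_C):
    """True if the total doped-hole density reaches the critical density."""
    return n_field + n_thermal >= n_c

# Raising n(E) lowers the n(T) still needed for the transition, i.e. the
# transition temperature T_MIT decreases with applied voltage.
for n_field in (0.000, 0.005, 0.010):
    print(f"n(E) = {n_field:.3f} -> n(T) needed = {max(N_C - n_field, 0.0):.3f}")
```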
Ensemble Semi-supervised Entity Alignment via Cycle-teaching
Entity alignment is to find identical entities in different knowledge graphs. Although embedding-based entity alignment has recently achieved remarkable progress, training data insufficiency remains a critical challenge. Conventional semi-supervised methods also suffer from the incorrect entity alignment in newly proposed training data. To resolve these issues, we design an iterative cycle-teaching framework for semi-supervised entity alignment. The key idea is to train multiple entity alignment models (called aligners) simultaneously and let each aligner iteratively teach its successor the proposed new entity alignment. We propose a diversity-aware alignment selection method to choose reliable entity alignment for each aligner. We also design a conflict resolution mechanism to resolve the alignment conflict when combining the new alignment of an aligner and that from its teacher. Besides, considering the influence of cycle-teaching order, we elaborately design a strategy to arrange the optimal order that can maximize the overall performance of multiple aligners. The cycle-teaching process can break the limitations of each model's learning capability and reduce the noise in new training data, leading to improved performance. Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed cycle-teaching framework, which significantly outperforms the state-of-the-art models when the training data is insufficient and the new entity alignment has much noise.
Introduction
Entity alignment seeks to find identical entities of different knowledge graphs (KGs) that refer to the same real-world object. Recently, embedding-based entity alignment approaches have achieved great progress (Sun, Hu, and Li 2017; Sun et al. 2018; Cao et al. 2019; Wu et al. 2019a; Sun et al. 2020a; Zeng et al. 2020; Mao et al. 2020). One prime advantage of embedding techniques lies in relieving the heavy reliance on hand-crafted features or rules. KG embeddings have also demonstrated their great strength in tackling the symbolic heterogeneity of different KGs (Wang et al. 2017). Especially for entity alignment, embedding-based approaches capture the similarity of entities in vector space. However, these approaches rely heavily on sufficient training data (i.e., seed entity alignment) to bridge the embedding spaces of different KGs for alignment learning. The training data insufficiency issue in real-world scenarios (Chen et al. 2017) prevents embedding-based approaches from effectively capturing entity similarities across different KGs.
The semi-supervised approach is an effective solution to the above issue. It iteratively proposes new "reliable" entity alignment to augment the training data (Zhu et al. 2017; Sun et al. 2018; Chen et al. 2018). However, there are several shortcomings in existing semi-supervised approaches, and not all semi-supervised strategies bring improved performance and stability to entity alignment in real scenarios (Sun et al. 2020b). For example, although the popular self-training approach BootEA (Sun et al. 2018) can alleviate the error-accumulation issue via a heuristic alignment editing method, the learning capability of its entity alignment model and the alignment selection bias still limit its performance. The co-training approach KDCoE (Chen et al. 2018) incorporates literal descriptions as side information to complement the structure view of entities. However, it requires high complementarity of the two independent feature views and specific prior knowledge for feature selection. It usually fails to bring improvement due to the limited availability of descriptions.
In summary, the shortcomings of existing semi-supervised entity alignment approaches lie in the following aspects. (i) Noisy alignment accumulation. This is the critical challenge for semi-supervised approaches: the newly proposed entity alignment inevitably contains much noisy data. Iteratively accumulating new entity alignment as training data also propagates errors, as incorrect entity alignment can spread to the following iterations with adverse effects on final performance. (ii) Biased alignment selection. A semi-supervised approach usually proposes the predicted high-confidence alignment to bootstrap itself, and such alignment will receive more training and higher confidence in the next iterations. The approach will then be increasingly inclined to propose the same, more and more "reliable" alignment, leading to a biased alignment selection. The performance improvement brought by retraining the entity alignment model with what it already knows is limited. (iii) Performance bottleneck of the aligner. Although the embedding-based entity alignment model (called the aligner) can receive better optimization with more training data during semi-supervised learning, the aligner has its own performance bottleneck due to its limited expressiveness in embedding learning. This is reflected by the variable performance of entity alignment approaches when employing different embedding models, e.g., TransE (Bordes et al. 2013) and GCN (Kipf and Welling 2017).
To address the above shortcomings, we elaborately design a novel Cycle-Teaching framework for Entity Alignment, named CycTEA, which enables multiple entity alignment models (called aligners) to teach each other. CycTEA lets each aligner teach its selected new entity alignment to its subsequent aligner for robust semi-supervised training. The subsequent aligner can filter noisy alignment via alignment conflict resolution and get more reliable entity alignment to augment training data. The motivation behind our work is that, as different aligners have different alignment capacities, the selected new entity alignment of an aligner can benefit other aligners and help them filter the noisy alignment introduced by the biased alignment selection (Han et al. 2018).
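To make the training loop concrete, the following sketch shows one way the cycle could be organised in code; the aligner interface (train, propose_alignment, resolve_conflicts) is a hypothetical stand-in, not CycTEA's published implementation.

```python
# Schematic cycle-teaching loop: every aligner trains on its own data and
# proposes new alignment; aligner i is then taught by its predecessor i-1
# (cyclically), resolving conflicts before augmenting its training set.
def cycle_teach(aligners, seed_alignment, iterations=5):
    train_sets = [set(seed_alignment) for _ in aligners]
    for _ in range(iterations):
        proposals = []
        for aligner, train in zip(aligners, train_sets):
            aligner.train(train)
            proposals.append(aligner.propose_alignment())
        for i, aligner in enumerate(aligners):
            taught = proposals[i - 1]            # the teacher's proposals
            merged = aligner.resolve_conflicts(proposals[i], taught)
            train_sets[i] |= merged              # augment the training data
    return aligners
```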
Cycle-teaching possesses some critical advantages over the traditional ensemble semi-supervised method, e.g., Tri-Training (Zhou and Li 2005), which integrates three models in a "majority-teach-minority" way (i.e., majority vote). First, cycle-teaching can help each aligner break its performance bottleneck. It can produce more diverse and complementary entity alignment since the aligners have different capacities and are trained on their own training data. Taught by the new "knowledge" from others, each aligner can overcome the ceiling of entity alignment performance. Second, cycle-teaching can reduce the risk of noisy training data. In cycle-teaching, as different aligners have different learning abilities, they can filter different types of incorrect entity alignment through the proposed diversity-aware alignment selection and conflict resolution. The error flow from one aligner to its successor can be reduced during the iterations. Third, cycle-teaching can be easily extended to multiple aligners (more than three, and even numbers of aligners as well). It can avoid the problem that multiple models fail to reach an agreement by majority vote. Our contributions are summarized as follows:
• We propose a novel semi-supervised learning framework, i.e., cycle-teaching, for entity alignment. It seeks to build a strong and robust entity alignment approach by integrating multiple simple aligners. It does not require sufficient feature views of entities or high performance of each aligner, and is able to achieve better generalization ability.
• To guarantee the quality of new entity alignment, we propose a diversity-aware alignment selection method and resolve alignment conflicts by re-matching. We determine the cycle-teaching order based on the complementarity and performance difference of neighboring aligners. The cycle-teaching paradigm helps the multiple aligners combat noisy alignment during iterative training. For each aligner, its new entity alignment combined with the new knowledge learned from others can bring significant performance gains.
• We show that conventional semi-supervised methods, e.g., self-training and co-training, can be regarded as special cases of cycle-teaching. The advantages of cycle-teaching lie in reducing noisy alignment accumulation and markedly boosting each aligner by teaching it unknown alignment.
• Our framework can integrate any entity alignment models, including relation-based models such as AlignE (Sun et al. 2018), RSN4EA (Guo, Sun, and Hu 2019) and AliNet (Sun et al. 2020a). Extensive experiments on the benchmark entity alignment datasets OpenEA (Sun et al. 2020b) demonstrate the effectiveness of our framework.
Related Work
Structure-based Entity Alignment. The assumption behind structure-based entity alignment is that similar entities should have similar relational structures. Early studies such as MTransE (Chen et al. 2017), AlignE (Sun et al. 2018) and SEA (Pei et al. 2019) exploit TransE (Bordes et al. 2013) as the base embedding model for relational structure learning. To capture entity alignment across different KGs, the two KGs are either merged into one graph for joint embedding or embedded separately along with a linear mapping. Recent studies such as GCN-Align (Wang et al. 2018), AliNet (Sun et al. 2020a) and others (Zhu et al. 2019; Li et al. 2019; Fey et al. 2020; Ye et al. 2019; Xu et al. 2019) design various graph neural networks (GNNs) for neighborhood structure learning and alignment learning. Approaches that exploit long-term relational dependencies of entities, such as IPTransE (Zhu et al. 2017) and RSN4EA (Guo, Sun, and Hu 2019), have also made great progress.
Attribute-enhanced Entity Alignment. Other approaches enhance entity alignment by learning from side information such as attribute correlations (Sun, Hu, and Li 2017), attribute values (Trisedya, Qi, and Zhang 2019), entity names (Wu et al. 2019a,b, 2020; Liu et al. 2020) and distant supervision from pre-trained language models (Yang et al. 2019; Tang et al. 2020). Although these models achieve high performance, a major problem is their limited generalizability, since the side information is not always available in different KGs.
Semi-supervised Entity Alignment. As seed entity alignment is usually limited in real scenarios, some approaches iteratively label new alignment to augment the training data. IPTransE (Zhu et al. 2017) conducts self-training to propose new alignment, but fails to achieve satisfying performance because it accumulates much noisy data during the iterations. Other work (Chen et al. 2018; Yang et al. 2020) uses a co-training mechanism to propagate new alignment from two orthogonal views (e.g., relational structures and attributes), but the improvement is also limited because some entities do not have attributes. BootEA (Sun et al. 2018) implements a heuristic editing method to mitigate the error-propagation issue, bringing significant improvement. However, its new seed selection is still limited by the model's own performance: once the accumulated new pairs can be aligned successfully by the embedding module itself, the improvement becomes smaller. Therefore, we aim to design an approach that iteratively labels reliable entity alignment as training data and, based on cycle-teaching, accumulates new entity alignment that one model can hardly find by itself.
Embedding-based Entity Alignment
We define a KG as a 3-tuple K = (E, R, T), where E and R denote the entity and relation sets, respectively, and T ⊆ E × R × E is the set of relational triples. Following (Sun et al. 2018), we consider entity alignment between a source KG K_1 = (E_1, R_1, T_1) and a target KG K_2 = (E_2, R_2, T_2). Given a small set of seed entity alignment A_train = {(e_1, e_2) ∈ E_1 × E_2 | e_1 ≡ e_2} as training data, the task seeks to find the remaining entity alignment. For embedding-based approaches, the typical inference process is a nearest neighbor search in the embedding space, i.e., for an aligned entity pair (x, y), the approach seeks to satisfy

y = argmax_{y' ∈ E_2} π(x, y'),    (1)

where π(x, y) is a similarity measure that serves as the alignment confidence of the entity pair, and we use cosine similarity in this paper.
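As a concrete illustration of this inference step, the following minimal Python sketch (not from the paper; the random embeddings and function names are illustrative assumptions) performs nearest-neighbor alignment retrieval with cosine similarity.

```python
import numpy as np

def nn_alignment(src_emb, tgt_emb):
    """Nearest-neighbor retrieval: for each source entity, return the index
    of the most similar target entity by cosine similarity."""
    # L2-normalize so that the dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                 # pairwise cosine similarities
    return sim.argmax(axis=1)         # counterpart index for each source entity

# toy usage with random embeddings
rng = np.random.default_rng(0)
src_emb, tgt_emb = rng.normal(size=(5, 8)), rng.normal(size=(7, 8))
print(nn_alignment(src_emb, tgt_emb))
```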
Hereafter, we use bold-faced letters to denote embeddings, e.g., x and y are the embeddings of entities x and y, respectively. To achieve the goal in Eq. (1), an entity alignment framework usually employs two basic modules: knowledge embedding and alignment learning (Sun et al. 2020b).
Knowledge Embedding
This module seeks to learn an embedding function f that maps an entity to its embedding, i.e., f(x) = x. TransE (Bordes et al. 2013), RSN4EA (Guo, Sun, and Hu 2019) and GNNs (Kipf and Welling 2017) are three popular KG embedding techniques. In TransE, the embeddings are learned by minimizing an energy function over each triple (h, r, t):

E(h, r, t) = ||h + r − t||,

where ||·|| denotes the L1 or L2 vector norm. KG embeddings can be learned by jointly optimizing TransE's objective and the alignment learning objective (described in the next section). RSN4EA (Guo, Sun, and Hu 2019) proposes a recurrent skipping mechanism to capture long-term semantic information. It uses biased random walks to generate relation paths such as (x_1, x_2, ..., x_T), in which entities and relations appear in alternating order. It encodes a path with an RNN whose hidden state at step i is o_i = tanh(W_1 o_{i−1} + W_2 x_i + b), where W_1 and W_2 are weight matrices and b is the bias. A skipping connection, parameterized by weight matrices S_1 and S_2 for entities and relations respectively, is added on top of the hidden states to strengthen the semantic interaction between entities and relations. For a GNN, f is an aggregation function that combines the representations of a central entity and its neighbors:

f(e) = aggregate(e, N_e),

where N_e are the embeddings of entity e's neighbors. Different aggregation strategies lead to different GNN variants. GNNs output entity representations for alignment learning.
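To make the TransE component concrete, here is a small sketch (our own illustration, not the authors' implementation; the embedding tables and dimensions are assumptions) that scores triples with the energy ||h + r − t||.

```python
import numpy as np

def transe_energy(ent_emb, rel_emb, triples, norm=1):
    """TransE energy ||h + r - t|| for an array of (head, relation, tail) index triples.
    Lower energy means the triple is considered more plausible."""
    h = ent_emb[triples[:, 0]]
    r = rel_emb[triples[:, 1]]
    t = ent_emb[triples[:, 2]]
    return np.linalg.norm(h + r - t, ord=norm, axis=1)

rng = np.random.default_rng(0)
ent_emb = rng.normal(size=(10, 16))   # 10 entities, 16-dim embeddings
rel_emb = rng.normal(size=(4, 16))    # 4 relations
triples = np.array([[0, 1, 2], [3, 0, 4]])
print(transe_energy(ent_emb, rel_emb, triples))
```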
All existing knowledge embedding techniques can be applied within our cycle-teaching framework. Specifically, TransE (e.g., AlignE (Sun et al. 2018)) captures the local semantics of relation triples, GNNs (e.g., AliNet (Sun et al. 2020a)) model the global structure of KGs, and RSN4EA (Guo, Sun, and Hu 2019) learns long-term semantic knowledge. In addition, other side information can be considered by incorporating attribute-enhanced aligners into the cycle-teaching framework, which is left for future work.
Alignment Learning
To capture the alignment information, some models directly maximize the embedding similarities of pre-aligned entities, which can be formulated as minimizing the embedding distance over the seed alignment:

L_align = Σ_{(x, y) ∈ A_train} ||x − y||.

Augmenting the training data is the focus of this paper.
4 Cycle Teaching for Entity Alignment

Figure 1 illustrates the cycle-teaching framework. At each iteration, if training has not terminated, our framework automatically computes an optimal cycle-teaching order (Sect. 4.1).
Cycle-teaching Order Arrangement
There are multiple aligners in CycTEA. Intuitively, adjacent aligners should have high complementarity, so that the successor can receive more reliable alignment beyond its own capacity. Moreover, it is better to let an aligner with higher performance teach weaker aligners, so that the student aligner (successor) can be promoted by a more capable teacher aligner (predecessor). To this end, we formalize the order arrangement problem as a Travelling Salesman Problem (TSP). We first build a directed complete graph in which each aligner is a node, and the edge weight reflects how beneficial it is to connect the two nodes. The task is then to find the route that starts from an aligner, ends at the same one, and has the highest sum of edge weights. The resulting route gives the order arrangement. The key step is defining the edge weight, for which we consider two factors. The first factor is the complementarity of the alignment selection from M_i to M_j, denoted f_com(M_i, M_j), where A_i and A_j denote the new reliable alignment sets of M_i and M_j at the current iteration. Note that f_com(M_i, M_j) ≠ f_com(M_j, M_i), as we measure complementarity in an asymmetric way that reflects the new alignment brought by aligner M_i to M_j. We also want the stronger aligner to teach the weaker one. Therefore, we define the performance factor between aligners M_i and M_j as the current Hits@1 difference on the validation data, f_perf(M_i, M_j) = Hits@1(M_i) − Hits@1(M_j). Note that f_perf(M_i, M_j) ≠ f_perf(M_j, M_i), since subtraction is not symmetric. The final edge weight from aligner M_i to M_j is a weighted combination of the two factors, controlled by a combination weight. After calculating each edge weight in the aligner graph, we aim to find an optimal path that traverses the whole graph and covers the maximum total edge weight. This TSP problem is NP-hard, as there are (k − 1)! possible paths in total. Existing TSP approximation algorithms can be used to derive sub-optimal solutions when the number of aligners is very large. In practice, we can simply enumerate all paths and pick the optimal one, since the number of aligners is usually small.
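Since the number of aligners is small, the order arrangement can be found by brute force. The sketch below (an illustration under the assumption that the pairwise edge weights have already been computed from complementarity and performance difference) enumerates all cyclic orders and returns the one with the largest total weight.

```python
from itertools import permutations

def best_cycle_order(weight):
    """Enumerate all cyclic orders of k aligners and return the one whose directed
    cycle has the largest total edge weight. weight[i][j] is the benefit of
    letting aligner i teach aligner j."""
    k = len(weight)
    best_order, best_score = None, float("-inf")
    # Fix aligner 0 as the start to avoid counting rotations of the same cycle.
    for perm in permutations(range(1, k)):
        order = (0,) + perm
        score = sum(weight[order[i]][order[(i + 1) % k]] for i in range(k))
        if score > best_score:
            best_order, best_score = order, score
    return best_order, best_score

# toy weight matrix for 3 aligners (diagonal unused)
w = [[0.0, 0.6, 0.2],
     [0.3, 0.0, 0.8],
     [0.7, 0.1, 0.0]]
print(best_cycle_order(w))
```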
Diversity-aware Alignment Selection
To pick out reliable entity alignment as new training data, we propose diversity-aware alignment selection, which considers both the embedding similarity and the match diversity.
Match Diversity. Entity alignment is a 1:1 matching task, i.e., a source entity can be matched with at most one target entity, and vice versa (Suchanek, Abiteboul, and Senellart 2011). We expect a source entity to have a very high embedding similarity with its counterpart in the target KG. Existing methods use nearest neighbor (NN) search to retrieve entity alignment, ignoring the similarity distribution over other entities. In contrast, we consider the match diversity (Gal, Roitman, and Sagi 2016), which measures how much a predicted alignment pair (x, y) deviates from its competing pairs such as (x', y) and (x, y'). We compute the average similarity of all competing pairs as

μ(x, y) = ( Σ_{y' ∈ N_x} π(x, y') + Σ_{x' ∈ N_y} π(x', y) ) / ( |N_x| + |N_y| ),

where N_x denotes the set of all candidate target entities for the source entity x (including y), and N_y the set of all candidate source entities for the target entity y (including x). Given the average similarity, we define the match diversity of (x, y) as the difference between its similarity and the average:

div(x, y) = π(x, y) − μ(x, y).

We expect a correct entity alignment pair to have high match diversity, which indicates that the pair has a high embedding similarity while its competing pairs have low similarity.
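The following sketch illustrates one plausible reading of the match diversity computation on a dense similarity matrix, where the candidate sets N_x and N_y are taken to be all entities of the other KG; treat the exact averaging as an assumption.

```python
import numpy as np

def match_diversity(sim):
    """Match diversity of every pair (x, y): the pair's similarity minus the
    average similarity of its competing pairs (same row and same column)."""
    n_src, n_tgt = sim.shape
    row_sum = sim.sum(axis=1, keepdims=True)   # sum over candidates of each source entity
    col_sum = sim.sum(axis=0, keepdims=True)   # sum over candidates of each target entity
    # Average over the |N_x| + |N_y| competing similarities (full row plus full column).
    avg = (row_sum + col_sum) / (n_tgt + n_src)
    return sim - avg

sim = np.array([[0.9, 0.2, 0.1],
                [0.3, 0.8, 0.2]])
print(match_diversity(sim))
```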
Alignment Selection via Stable Matching. We use the match diversity as the alignment confidence to select new alignment. To guarantee the quality of the selected entity alignment and satisfy the 1:1 matching constraint, we model alignment selection as a stable matching problem (SMP). We generate a sorted candidate list for each entity based on the alignment confidence. The SMP can then be solved by the Gale-Shapley algorithm, which produces a stable matching for all entities in O(n^2) time, where n is the number of source entities.
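A minimal Gale-Shapley sketch for this selection step is given below; it is our own illustration, with source entities proposing in decreasing order of alignment confidence.

```python
def gale_shapley(conf):
    """Stable 1:1 matching from a confidence matrix conf[src][tgt].
    Source entities 'propose' in decreasing order of confidence; a target
    entity keeps only the best proposal it has seen so far."""
    n_src, n_tgt = len(conf), len(conf[0])
    # Each source entity's preference list, most confident target first.
    prefs = [sorted(range(n_tgt), key=lambda j: conf[i][j], reverse=True)
             for i in range(n_src)]
    next_choice = [0] * n_src          # next target each source will propose to
    match_of_tgt = [None] * n_tgt      # current partner of each target
    free = list(range(n_src))
    while free:
        i = free.pop()
        while next_choice[i] < n_tgt:
            j = prefs[i][next_choice[i]]
            next_choice[i] += 1
            cur = match_of_tgt[j]
            if cur is None:
                match_of_tgt[j] = i
                break
            if conf[i][j] > conf[cur][j]:
                match_of_tgt[j] = i      # target j prefers the new proposer
                free.append(cur)         # the displaced source becomes free again
                break
    return [(i, j) for j, i in enumerate(match_of_tgt) if i is not None]

conf = [[0.9, 0.2, 0.1],
        [0.8, 0.7, 0.2]]
print(gale_shapley(conf))
```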
Conflict Resolution via Re-matching
Each aligner may have conflicting selections between its predecessor and itself. For example, a source entity x may have two different "counterparts" y_1 and y_2, predicted by the aligner itself and by its predecessor, respectively. Our solution is to let the two aligners cooperate to resolve the alignment conflicts. For the selected reliable entity alignment sets of the two aligners, A_1 and A_2, let C denote the set of conflicting alignment. We collect the entities appearing in C and conduct a re-matching process. Specifically, for the involved source-KG entities X = {x | (x, y_1, y_2) ∈ C, x ∈ E_1, y_1, y_2 ∈ E_2} and the target-KG entities that have not yet been matched, Y = {y | y ∈ E_2, y ∉ A_1 ∪ A_2}, we build a weighted bipartite graph and conduct the alignment selection of Sect. 4.2 to select more reliable alignment pairs. Since the conflicting pairs are difficult, we combine the similarities of the two aligners to serve as the bipartite graph edge weights:

π(x, y) = α · π_1(x, y) + (1 − α) · π_2(x, y),

where α = valid_1 / (valid_1 + valid_2) is a balance weight based on the two aligners' validation performance (Hits@1). Compared with other possible conflict resolution strategies, such as majority vote and ensemble training, our method has the following advantages. The majority vote is limited to odd numbers of aligners, and it does not consider the similarity distribution, so the final selection is limited to the choices of the aligners. Ensemble training outputs the same selection for all the aligners, and the aligners become increasingly similar as training continues, resulting in lower robustness to alignment noise. In contrast, our re-matching strategy considers the similarity distribution to repair incorrect alignment pairs, and the cycle propagation prevents all aligners from rapidly becoming similar.
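The sketch below illustrates the re-matching idea under simplifying assumptions: the two aligners' similarity matrices are fused with the validation-based weight α, and the conflicting source entities are greedily re-matched against the still-unmatched target entities (a greedy stand-in for the stable-matching selection described above).

```python
import numpy as np

def resolve_conflicts(sim1, sim2, hits1, hits2, conflict_src, free_tgt):
    """Re-match conflicting source entities against unmatched target entities
    using a validation-weighted combination of the two aligners' similarities."""
    alpha = hits1 / (hits1 + hits2)                 # balance weight from validation Hits@1
    combined = alpha * sim1 + (1 - alpha) * sim2    # fused similarity matrix
    pairs, taken = [], set()
    for x in conflict_src:
        # Greedy pick among the remaining free targets (stand-in for stable matching).
        candidates = [y for y in free_tgt if y not in taken]
        if not candidates:
            break
        y = max(candidates, key=lambda j: combined[x, j])
        pairs.append((x, y))
        taken.add(y)
    return pairs

rng = np.random.default_rng(0)
sim1, sim2 = rng.random((4, 5)), rng.random((4, 5))
print(resolve_conflicts(sim1, sim2, hits1=0.6, hits2=0.4,
                        conflict_src=[1, 3], free_tgt=[0, 2, 4]))
```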
Ensemble Entity Alignment Retrieval
To benefit from all aligners, we combine their embedding similarities to generate the final alignment results. We first assign weights {α_1, α_2, ..., α_k} to the aligners {M_1, M_2, ..., M_k} based on their Hits@1 on the validation data, i.e., each α_i is proportional to the validation Hits@1 of M_i. Then, the final similarity between entities is defined as the weighted sum of the similarities of all aligners:

π(x, y) = Σ_{i ∈ {1,...,k}} α_i · π_i(x, y).

Given the ensemble entity similarities, we obtain the candidate counterpart list for each source entity by NN search.
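A minimal sketch of this ensemble retrieval step follows; normalizing the weights by the sum of validation Hits@1 scores is an assumption consistent with the description above.

```python
import numpy as np

def ensemble_retrieval(sim_list, hits_list):
    """Weight each aligner's similarity matrix by its (normalized) validation
    Hits@1, sum them, and retrieve counterparts by nearest-neighbor search."""
    weights = np.array(hits_list, dtype=float)
    weights /= weights.sum()
    combined = sum(w * s for w, s in zip(weights, sim_list))
    return combined.argmax(axis=1), combined

rng = np.random.default_rng(0)
sims = [rng.random((4, 6)) for _ in range(3)]      # one similarity matrix per aligner
preds, _ = ensemble_retrieval(sims, hits_list=[0.55, 0.60, 0.50])
print(preds)
```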
Discussions
We compare CycTEA with two semi-supervised approaches in Figure 2. The self-training approach BootEA can be regarded as a special case of cycle-teaching with only one aligner. It directly feeds the selected entity alignment back to itself, so the noise is also transferred back to the aligner. The co-training approach KDCoE utilizes two independent aligners to propose new alignment, but the noisy data from both aligners is accumulated together. Therefore, it still suffers from the problem that each aligner's noisy information is transferred back to itself. By contrast, in our framework, a large part of an aligner's noisy alignment is not directly sent back to itself, thanks to the alignment cycle propagation. Instead, the noisy pairs are fed into the other models. As different aligners generate embeddings from different perspectives, one aligner's noisy pairs may be easily handled by another. In addition, we carefully design the training data accumulation procedure as a fine-tuning step: we remove negative sampling from the alignment learning over the selected new pairs and use a smaller number of semi-supervised training epochs. Therefore, each aligner first adapts to the correct pairs from the other models, and the influence of noisy data is reduced.
Experiment
We build our framework on top of the OpenEA library (Sun et al. 2020b). We will release our source code on GitHub.
Experimental Settings
Datasets. Existing datasets such as DBP/DWY (Sun, Hu, and Li 2017) are quite different from real-world KGs (Sun et al. 2020b). Hence, we use the benchmark dataset released with the OpenEA library (Sun et al. 2020b) for evaluation, which follows the data distribution of real KGs. It contains two cross-lingual settings (English-to-French and English-to-German) and two monolingual settings (DBpedia-to-Wikidata and DBpedia-to-YAGO). Each setting has two sizes, with 15K and 100K pairs of reference entity alignment, respectively. We follow the dataset splits in OpenEA, where 20% of the reference alignment is used for training, 10% for validation and 70% for testing.
Implementation Details. CycTEA can incorporate any number of aligners (k ≥ 2). We choose three structure-based models as the aligners, i.e., AlignE, AliNet and RSN4EA, and follow their implementation settings in OpenEA for a fair comparison. The order arrangement combination weight is set to 0.2. Performance is tested with 5-fold cross-validation to ensure an unbiased evaluation. Following convention, we use Hits@1, Hits@5 and MRR as evaluation metrics; higher scores indicate better performance.
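For reference, the following sketch shows how Hits@k and MRR are typically computed from a similarity matrix and gold counterparts; it is an illustration, not the OpenEA evaluation code.

```python
import numpy as np

def hits_and_mrr(sim, gold, ks=(1, 5)):
    """Compute Hits@k and MRR given a similarity matrix sim[i, j] and gold[i],
    the index of the true counterpart of source entity i."""
    # Rank of the gold target = number of candidates with strictly higher similarity, plus one.
    gold_sim = sim[np.arange(len(gold)), gold]
    ranks = (sim > gold_sim[:, None]).sum(axis=1) + 1
    hits = {k: float((ranks <= k).mean()) for k in ks}
    mrr = float((1.0 / ranks).mean())
    return hits, mrr

rng = np.random.default_rng(0)
sim = rng.random((5, 10))
gold = np.array([3, 1, 7, 0, 9])
print(hits_and_mrr(sim, gold))
```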
Baselines. We compare with structure-based entity alignment models for a fair comparison:
• Triple-based models that capture the local semantics of relation triples based on TransE, including MTransE (Chen et al. 2017), AlignE and BootEA (Sun et al. 2018), as well as SEA (Pei et al. 2019).
• Neighborhood-based models that use GNNs to exploit subgraph structures for entity alignment, including GCN-Align (Wang et al. 2018), AliNet (Sun et al. 2020a), HyperKA (Sun et al. 2020a) and KE-GCN (Yu et al. 2021).
• Path-based models that explore the long-term dependency across relation paths, including IPTransE (Zhu et al. 2017) and RSN4EA (Guo, Sun, and Hu 2019).
We do not compare with some recent attribute-based models (Chen et al. 2018; Wu et al. 2019a) since they require side information that our framework, as well as the other baselines, does not use. In addition, as RREA (Mao et al. 2020) failed on the OpenEA 100K datasets (Ge et al. 2021), we do not include it in the baselines.
Main Results
Tables 1 and 2 present the entity alignment results. CycTEA outperforms all baselines, and is 4% to 11% higher than the strongest baseline BootEA on Hits@1; e.g., CycTEA outperforms BootEA by 11.5% on EN-FR-15K. BootEA achieves the second-best results thanks to its bootstrapping strategy, but the limited ability of self-training prevents further improvement. In the supervised setting, KE-GCN, AliNet and RSN4EA all obtain satisfactory results, but the lack of training data limits their performance. On the 100K datasets, many baselines fail to achieve promising results due to the more complex KG structures and the larger alignment space, but CycTEA maintains the best performance, demonstrating its practicability. The variant of an aligner in CycTEA is denoted as "aligner+" (e.g., "AlignE+" refers to AlignE in CycTEA).
Further Analyses
Effectiveness of Cycle-teaching. Table 3 compares CycTEA with other semi-supervised strategies. The first three model variants use self-training. CycTEA significantly outperforms the self-training models because it integrates knowledge from all aligners. Compared with other alignment selection strategies, i.e., keeping only the alignment supported by all aligners (Intersection), combining the alignment from all aligners (Union), or selecting the alignment supported by the largest number of aligners (Majority vote), CycTEA still achieves the best performance. The improvement achieved by "Intersection" is the smallest because entity pairs supported by all aligners are limited; it nevertheless gives the best result on EN-FR by coincidence, as the noisy alignment proposed by all aligners is higher there than on the other datasets, and "Intersection" filters this noise thoroughly. "Union" achieves limited improvement since it introduces much noise. "Majority vote" performs better than both but still falls short of CycTEA, as it inevitably removes some correct entity alignment pairs.
Effectiveness of Re-matching. Table 4 evaluates the effect of our re-matching strategy for conflict resolution (cr). We can see that re-matching significantly improves the precision of the new alignment by 4% to 9%, with a slight decrease in recall. This is because the filtering performed by re-matching inevitably breaks some correct entity pairs. Overall, re-matching brings a 1% to 3% improvement in F1-score for all aligners and further improves the overall performance.
Robustness to Noisy Accumulated Entity Alignment. To evaluate the robustness of CycTEA to the accumulated noisy alignment, we train the framework on different proportions of seed entity alignment, from 10% to 20%, since less training data leads to worse performance and a larger ratio of noise in the proposed new alignment. Figure 3 depicts the Hits@1 of the three aligners in self-training (denoted as "aligner (semi)"). They all achieve better performance as the ratio increases. AliNet (semi) is more sensitive to the training data ratio, as its performance drops drastically when the training data size decreases: since GNNs capture global structure information, the useful information they can capture reduces exponentially when the training data is meager. RSN4EA (semi) also has poor robustness, as it gives much worse results under heavy noise. CycTEA maintains promising performance over all training data ratios, which validates its superiority.
Effectiveness of Dynamic Order Arrangement. Figure 4 represents the performance with different cycle-teaching orders. There are two static orders in total given three aligners.
We can see that the first order is superior to the second, and our dynamic order exceeds both static orders. In particular, the performance with the order "AlignE-AliNet-RSN4EA" is only slightly worse than ours, because this order appears much more frequently than the other in our dynamic order arrangement during the iterations. This shows that our dynamic order arrangement can effectively capture the optimal order during training and thus yields better performance.
Effectiveness of Diversity-aware Alignment Selection. Table 5 gives the ablation study on our diversity-aware selection method. Due to space limitations, we report the results of AliNet in CycTEA, where "w/o daas" denotes the variant that does not use match diversity for alignment selection. We can see that our diversity-aware method improves the precision and F1-score of the selected alignment and reduces noise.
Different Numbers of Aligners in CycTEA. Our framework can be implemented with any number of aligners. Due to space limitations, we test its performance with k = 2, 3, 4 aligners. We choose AlignE and AliNet for k = 2, add RSN4EA for k = 3, and introduce KE-GCN for k = 4. These aligners are well-performing structure-based models, and these combinations are the optimal settings given the four aligners. Figure 5 indicates that the performance increases as more aligners are integrated. In the main setting, however, we choose AlignE, AliNet and RSN4EA, the structure-based aligners with high complementarity and good performance, as a trade-off between effectiveness and efficiency.
Conclusion
We present a novel and practical cycle-teaching framework for entity alignment. It utilizes multiple entity alignment models and iteratively cycle-teaches each model by transmitting its proposed reliable alignment to the next one. Cycle-teaching largely remedies the effect of noisy data in the accumulated new alignment and extends each model's alignment learning ability. Our diversity-aware alignment selection and re-matching based conflict resolution strategies further improve the quality of the new alignment. We also consider the effect of the teaching order and propose a dynamic order arrangement. Experiments on benchmark datasets show that our approach outperforms SOTA methods and achieves promising results in heavy noise-propagation scenarios. For future work, we plan to extend our framework to multi-view features.
Kernel Ridge Regression via Partitioning
In this paper, we investigate a divide and conquer approach to Kernel Ridge Regression (KRR). Given n samples, the division step involves separating the points based on some underlying disjoint partition of the input space (possibly via clustering), and then computing a KRR estimate for each partition. The conquering step is simple: for each partition, we only use its own local estimate for prediction. We establish conditions under which we can give generalization bounds for this estimator, as well as achieve optimal minimax rates. We also show that the approximation-error component of the generalization error is smaller than when a single KRR estimate is fit on the data, thus providing both statistical and computational advantages over a single KRR estimate on the entire data (or an averaging over random partitions as in other recent work [30]). Lastly, we provide experimental validation for our proposed estimator and our assumptions.
Introduction
Kernel methods find wide and varied applicability in machine learning. Kernelization of supervised/unsupervised learning algorithms allows an easy extension to operate them on implicitly infinite/high-dimensional feature representations. The use of kernel feature maps can also convert non-linearly separable data to be separable in the new feature space, thus resulting in good predictive performance. One such application of kernels is the problem of Kernel Ridge Regression (shortened as KRR). Given covariate-response pairs (x, y), the goal is to compute a kernel-based function f such that f(x) approximates y well on average. In this regard, several learning methods with different kernel classes have been shown to achieve good predictive performance. Despite their good generalization, kernel methods suffer from a computational drawback if the number of samples n is large, which is more so the case in modern settings. They require at least a computational cost of O(n^2), which is the time required to compute the kernel matrix, and O(n^3) time when the kernel matrix also has to be inverted, which is the case for KRR.
Several approaches have been proposed to mitigate this, including Nyström approximations [2,1,21], approximations via random features [16,17,7,27], and others [19,26]. While these approaches help computationally, they typically incur an error over-and-above the error incurred by a KRR estimate on the entire data. Another class of approaches that may not incur such an error are based on what we loosely characterize as divide-and-conquer approaches, wherein the data points are divided into smaller sets, and estimators trained on the divisions. These approaches may further be categorized into three main classes: division by uniform splitting [30], division by clustering [10,12] or division by partitioning [8]. The latter may also include local learning approaches, which are based on estimates using training points near a test point [3,28,22,11]. Given this considerable line of work, there is now an understanding that these divide-and-conquer approaches provide computational benefits, and yet have statistical performance that is either asymptotically equivalent, or at most slightly worse than that of the whole KRR estimator. Please see [30,12,8] and references therein for results reflecting this understanding for uniform splitting, clustering and partitioning respectively. However, these results have restrictive assumptions, applicability or other limitations, such as requiring the covariates/responses to be bounded [8], or only being applicable to specific kernels e.g. Gaussian [8] or linear [10], or only being targeted to classification [10,12], or providing error rates only on the training error [12]. Moreover, approaches based on uniform splitting, such as [30], can suffer from worse approximation error, as alluded to shortly.
In this paper, we consider a partitioning based divide-and-conquer approach to kernel ridge regression. We provide a refined analysis, applicable to general kernels, which leads us to this surprising conclusion: the partitioning based approach not only has computational benefits outlined in previous papers, but also has strong statistical benefits when compared to the whole KRR estimator. In other words, based on both a statistical and computational viewpoint, we are able to recommend the use of the partitioning based approach over the whole KRR approach.
The partitioning-based approach is as follows. Given n sample points, we divide them into m groups based on a fixed disjoint partitioning of the input space X that the samples are drawn from. One way to obtain this partition is via clustering; in principle, however, any partition that satisfies certain assumptions (detailed in Section 4.1) would be acceptable. A primary intuition for considering partitioning is that the distribution within each partition may be localized with thin tails. Equivalently, the eigenspectrum of the covariance conditioned on the partition may decay sharply enough that simply focusing on the local samples suffices to obtain a good approximation. This intuition is captured in our assumptions. Once the samples have been divided, we learn a kernel ridge regression estimate for each partition using only its own samples. The conquering step, i.e., computing the overall estimator f̂_C, is then simple: each individual estimator is applied to its respective partition. Thus, to perform prediction for a new point, we simply identify its partition and use the estimator for that partition. Partitioning has a clear computational advantage since each estimate is trained over only a fraction of the points. Moreover, partitioning may provide statistical advantages as well if there is an inherent approximation error in the problem, i.e., the true regression function f* lies outside the space of kernel-based functions. In this case, the KRR estimator on the whole data, say f̂_whole, or the KRR estimator based on uniform splitting, say f̂_avg, may both be viewed as estimating the best single kernel-based function that approximates f*. If we partition, however, then we are estimating the best m-piece-wise kernel-based function to approximate f*. Indeed, we can show that the approximation error for f̂_C is smaller than for f̂_avg, and we corroborate this experimentally. The residual error terms, on the other hand, are typically of the same order, so that the overall generalization error for our method is lower. In addition, there is yet another potential computational advantage of partitioning: prediction is faster, since for a new point the kernel values must be computed with respect to only a fraction of the points (as opposed to all the points for f̂_whole or f̂_avg).
Related Work
We briefly review some of the earlier-mentioned work that provides theoretical analyses of divide and conquer approaches based on clustering, uniform splitting, and partitioning. [10] apply clustering to linear SVMs (instead of KRR, as in this work) with an additional global penalty to prevent overfitting, and derive simple generalization rates based on Rademacher complexity estimates. [12] consider clustering for kernel SVMs with a modified conquering step: solutions of the local SVM problems are combined to produce an initialization for a solver of the global SVM problem. Under this scheme, they analyze the so-called fixed design setting, i.e., they bound the error on the training data as a function of the block-diagonal approximation of the kernel. In contrast, in this work we consider the random design setting, i.e., we bound the generalization/prediction error, and for the slightly different (but related) problem of Kernel Ridge Regression. Perhaps the approaches most closely related to our work are [30, 8]. [30] analyze the uniform splitting approach where the samples are split uniformly at random, followed by an averaging of the KRR estimates of each split. The authors derive generalization rates for this estimator and match optimal rates as long as the number of splits is not large and the true function f* lies in the specified space of kernel-based functions. However, as mentioned previously, such an estimator can have worse approximation error than our estimator f̂_C when the true function f* lies outside the space of kernel-based functions. [8] analyze a partition-based approach as in our paper: their estimator works by partitioning the input space and predicting using KRR/SVM estimates over each partition individually. For this estimator, [8] derive generalization rates when using Gaussian kernels, under additional restrictions: they require bounded covariates, ||x|| ≤ B, a bounded response, |y| ≤ M, and that each partition be bounded by a ball of suitable radius R. Given these restrictions, they show suitable choices of R and the Gaussian kernel scale γ which yield optimal rates when the true function f* lies in a smooth Sobolev/Besov space. In contrast, we provide a more general analysis that does not enforce a bound on the covariates, the response, or the size of the partition. Moreover, we are able to apply it to kernels other than the Gaussian kernel and achieve minimax optimal rates when the true function f* lies in the space of kernel-based functions. When it does not, we provide an oracle inequality similar to [8], which could then be specialized to obtain similar rates for their specific setting. More importantly, our analysis also shows that, in general, the approximation component of this inequality is smaller than the approximation component of the whole KRR estimator, while the residual components can be of the same order.
Another line of work in the same spirit as this paper is that of local learning approaches, e.g., the early work of [3]. The main idea is to select training samples near a given test sample and use only those to train an SVM for that particular test sample's prediction. Several variants of this general scheme have been proposed [28, 22, 11]. However, since each test sample requires finding nearby training points and solving its own SVM, these approaches can be inefficient for prediction.
From a theoretical standpoint, the generalization error for KRR has been studied extensively; an incomplete list includes [6, 29, 23, 4, 25, 15, 13, 9]. While we shall not delve into the differences among the results derived in these articles, we refer the interested reader to [13, Section 2.5], [9, Section 3], and the references therein for a more detailed comparison. Of particular relevance to our analysis is the approach in [13], wherein the generalization error is broken down into contributions of regularization, bias due to random design and variance due to noise, and each of these is then controlled separately with high probability. Moreover, the analysis of the related approach in [30] may be viewed as a moment version of the same strategy. We adopt a similar strategy to control the expected error of our estimator f̂_C.
The rest of this paper is organized as follows. Section 2 describes the KRR problem, and sets up some notation and mathematical prerequisites. Section 3 details the DC-estimatorf C , our partitioning based estimator. Section 4 presents the bounds on the generalization error off C , and the assumptions required to achieve them. Section 5 instantiates these bounds for three specific and commonly studied kernel classes. Finally, Section 6 provides empirical performance results. All proofs are provided in the appendix.
Preliminaries and Problem Setup
Reproducing Kernel Hilbert Spaces. Consider any set X. In machine learning applications, X is typically the space of the input data. A function K : X × X → R is called a kernel function if it is continuous, symmetric, and positive definite. With any kernel function K, one can associate a unique Hilbert space called the Reproducing Kernel Hilbert Space of K (abbreviated as RKHS henceforth). For x ∈ X, let φ_x : X → R be the function φ_x(·) := K(x, ·). Then the unique RKHS corresponding to kernel K, denoted H, is the Hilbert space of functions from X to R obtained as the completion of the span of {φ_x : x ∈ X}. Thus, any f ∈ H has a representation of the form f = Σ_i α_i φ_{x_i}. The inner product on H is determined by ⟨φ_x, φ_y⟩_H = K(x, y), extended linearly, so that for f = Σ_i α_i φ_{x_i} and g = Σ_j β_j φ_{z_j} we have ⟨f, g⟩_H = Σ_i Σ_j α_i β_j K(x_i, z_j). The inner product also induces a norm on H, given as ||f||_H = √⟨f, f⟩_H.

Kernel Ridge Regression. We are given a training set of n samples, D = {(x_1, y_1), ..., (x_n, y_n)}, of the tuple (x, y) drawn i.i.d. from an unknown distribution P on X × Y. x (and x_i) is a random variable in the input space X, also called the covariate. y (and y_i) is a random variable in the output space Y, also called the response. We consider Y ⊆ R and assume an additive noise model relating the response to the covariate:

y = f*(x) + η,

where η is the random noise variable and f* : X → R is an unknown mapping of covariates in X to responses in R. The goal of regression is to compute the function f* (or an approximation to it). We also assume that the noise has zero mean and bounded variance, E[η | x] = 0 and E[η² | x] ≤ σ², and that f* is square integrable with respect to the measure on X, i.e., f* ∈ L²(X, P). A Kernel Ridge Regression (KRR) estimator approximates f* by a function in the RKHS H (corresponding to kernel K). We require that H ⊂ L²(X, P), which means E_{y∼P}[K(x, y)²] < ∞ for all x; this is always true for several kernel classes, including Gaussian, Laplacian, or any trace-class kernel w.r.t. P. The KRR estimate f̂_λ ∈ H is obtained by solving the following optimization problem:

f̂_λ = argmin_{f ∈ H} (1/n) Σ_{i=1}^n (f(x_i) − y_i)² + λ ||f||²_H,    (3)

where λ > 0 is the regularization penalty. This is tractable since, by the representer theorem, we have the relation f̂_λ = Σ_{i=1}^n α_i φ_{x_i}, with α ∈ R^n being the solution of the problem

α = argmin_{α ∈ R^n} (1/n) ||y − Gα||² + λ αᵀGα,    (4)

where G ∈ R^{n×n} is the kernel matrix, with G_{jk} = K(x_j, x_k). Eq. (4) has a closed-form solution, given as α = (G + nλI)^{−1} y.
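The closed-form solution makes a whole-data KRR implementation very short. The sketch below (our own illustration with a Gaussian kernel; the toy data and parameter values are assumptions) fits α = (G + nλI)^{−1} y and predicts via f(x) = Σ_i α_i K(x_i, x).

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit(X, y, lam, gamma=1.0):
    """Closed-form KRR coefficients: alpha = (G + n*lam*I)^{-1} y."""
    n = X.shape[0]
    G = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(G + n * lam * np.eye(n), y)

def krr_predict(X_train, alpha, X_test, gamma=1.0):
    """Prediction f(x) = sum_i alpha_i K(x_i, x)."""
    return gaussian_kernel(X_test, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
alpha = krr_fit(X, y, lam=1.0 / 200)
print(krr_predict(X, alpha, np.array([[0.5], [2.0]])))
```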
Generalization/Prediction Error. For any estimator f̂ : X → R, the generalization error provides a measure of closeness to f*, by quantifying the average squared error in prediction using f̂. It is defined as

Err(f̂) = E_x[(f̂(x) − f*(x))²].

If Err(f̂) is small, then f̂ is a good approximation to f*. When the estimator is random, for example the KRR estimate f̂_λ in Eq. (3), which depends on the random samples, we may quantify the average error over this randomness, i.e., bound E_D[Err(f̂_λ)], where the expectation is taken over the samples D.
In this paper, we provide bounds on the quantity E_D[Err(f̂_C)], where f̂_C is the Divide-and-Conquer estimator (DC-estimator) described in Section 3.
Partition-specific notation. Since our estimator f̂_C is based on partitioning, we set up some notation for partition-specific quantities that play a role throughout the analysis. We say that the input space X has a disjoint partition {C_1, ..., C_m} if X = ∪_{i=1}^m C_i and C_i ∩ C_j = ∅ for i ≠ j. Given data D = {(x_1, y_1), ..., (x_n, y_n)}, we define a partition-based empirical covariance operator as

Σ̂_i = (1/n) Σ_{j=1}^n 1(x_j ∈ C_i) φ_{x_j} ⊗ φ_{x_j},

where 1(·) denotes the indicator function and φ_x ⊗ φ_x denotes the operator φ_x ⟨φ_x, ·⟩_H. We define its population counterpart as

Σ_i = E[1(x ∈ C_i) φ_x ⊗ φ_x].

Note the relation Σ = Σ_{i=1}^m Σ_i, where Σ = E[φ_x ⊗ φ_x] is the overall covariance operator.
We let {λ_j^i, v_j^i}_{j=1}^∞ denote the collection of eigenvalue-eigenfunction pairs of Σ_i. For any λ > 0, we define a spectral sum for Σ_i:

S_i(λ) = Σ_{j=1}^∞ λ_j^i / (λ_j^i + λ).

Similarly, letting {λ_j, v_j}_{j=1}^∞ be the eigenvalue-eigenfunction pairs of the overall covariance Σ, the corresponding sum for Σ is

S(λ) = Σ_{j=1}^∞ λ_j / (λ_j + λ).

The quantity S(λ) has appeared in previous work on KRR [29, 13, 30], and is called the effective dimensionality of the kernel K (at scale λ). It typically plays the same role as the dimension does in finite-dimensional ridge regression. We refer to S_i(λ) as the effective dimensionality of partition C_i. Finally, we let p_i = P(x ∈ C_i) denote the probability mass of partition C_i.
3 The DC-estimator: f̂_C

When the number of samples n is large, solving Eq. (3) (through Eq. (4)) may be computationally prohibitive, requiring O(n^3) time in the worst case. A simple strategy to tackle this is to divide the samples D into disjoint partitions and compute an estimate separately for each partition.
In this work, we consider partitions of D which adhere to an underlying disjoint partition of the input space X. Suppose that the input space X has a disjoint partition {C_1, ..., C_m}; note that m denotes the number of partitions. Also, suppose that given any point x ∈ X, we can determine which partition in {C_1, ..., C_m} it belongs to.
Now, we divide the data set D in agreement with this partitioning of X, i.e., we split D = D_1 ∪ ... ∪ D_m with D_i = {(x_j, y_j) ∈ D : x_j ∈ C_i}. Then, for any partition i ∈ [m], we compute a local estimator using only the points in its partition:

f̂_{i,λ} = argmin_{f ∈ H} (1/n_i) Σ_{(x_j, y_j) ∈ D_i} (f(x_j) − y_j)² + λ ||f||²_H,    (11)

where n_i = |D_i| and λ > 0 is the regularization penalty. Finally, the overall estimator f̂_C consists of the local estimators applied to their corresponding partitions:

f̂_C(x) = Σ_{i=1}^m 1(x ∈ C_i) f̂_{i,λ}(x).    (12)

In practice, one can use a clustering algorithm to cluster the points in D, as well as to determine the partition membership of new points x.
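For concreteness, here is a minimal sketch of the DC-estimator using scikit-learn's KMeans and KernelRidge as stand-ins for the clustering and local KRR steps; the particular kernel, parameters and toy data are assumptions, and KernelRidge's penalty is scaled by the local sample size to match the normalization in Eq. (11).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_ridge import KernelRidge

class DCKRR:
    """Divide-and-conquer KRR: partition the input space with k-means, fit one
    kernel ridge estimate per partition, and predict with the local estimate."""
    def __init__(self, m=3, lam=1e-3, gamma=1.0):
        self.m, self.lam, self.gamma = m, lam, gamma

    def fit(self, X, y):
        self.km = KMeans(n_clusters=self.m, n_init=10, random_state=0).fit(X)
        labels = self.km.labels_
        self.models = {}
        for i in range(self.m):
            mask = labels == i
            # sklearn's alpha = n_i * lam reproduces the (1/n_i)-normalized objective.
            model = KernelRidge(alpha=mask.sum() * self.lam, kernel="rbf", gamma=self.gamma)
            self.models[i] = model.fit(X[mask], y[mask])
        return self

    def predict(self, X):
        labels = self.km.predict(X)
        y_hat = np.empty(len(X))
        for i, model in self.models.items():
            mask = labels == i
            if mask.any():
                y_hat[mask] = model.predict(X[mask])
        return y_hat

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=600)
pred = DCKRR(m=3, lam=1.0 / 600).fit(X, y).predict(X)
print(np.sqrt(np.mean((pred - y) ** 2)))
```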
Generalization Error of f̂_C
In this section we quantify the error E_D[Err(f̂_C)], where f̂_C is the DC-estimator from Eq. (12). The analysis follows an integral-operator approach that has frequently been employed in deriving such bounds in learning theory, e.g., in [23, 13].
First, we observe that Err(f̂_C) can be decomposed as a sum of the errors of the local estimators f̂_{i,λ} on their corresponding partitions C_i, i ∈ [m]. We have

Err(f̂_C) = Σ_{i=1}^m E_x[1(x ∈ C_i)(f̂_{i,λ}(x) − f*(x))²] = Σ_{i=1}^m Err_i(f̂_{i,λ}),

where 1(·) denotes the indicator function, and we have defined the partition-wise error Err_i(f̂_{i,λ}) := E_x[1(x ∈ C_i)(f̂_{i,λ}(x) − f*(x))²]. By linearity of expectation, E_D[Err(f̂_C)] = Σ_i E_D[Err_i(f̂_{i,λ})]. Therefore, to obtain a bound on E_D[Err(f̂_C)], it suffices to bound each E_D[Err_i(f̂_{i,λ})]. To this end, we introduce the following quantities: f_{i,λ} and f_{i,λ̄} are the optimal population KRR estimates for partition C_i (the population counterparts of Eq. (11), cf. Eq. (15)), with regularization penalties λ and λ̄ respectively; f̄_{i,λ} is the expected value of the empirical KRR estimate from Eq. (11), with the expectation taken over the samples D. Note that there is no source of randomness in any of these quantities, whereas f̂_{i,λ} is a random quantity due to its dependence on the random samples D. Based on the above estimates, we define the following error terms (Definition 1):
Approximation Error: Approx_i(λ̄) = E[1(x ∈ C_i)(f*(x) − f_{i,λ̄}(x))²];
Regularization Error: Reg_i(λ, λ̄), the error due to using the more strongly regularized population estimate f_{i,λ} in place of f_{i,λ̄};
Bias: Bias_i(λ, n), the error between the averaged empirical estimate f̄_{i,λ} and the population estimate f_{i,λ};
Variance: Var_i(λ, D), the fluctuation of the empirical estimate f̂_{i,λ} around its average f̄_{i,λ}.
The intent of f_{i,λ̄} in the above definition is to be the best kernel function that approximates f* in partition C_i. The choice of λ̄, which determines f_{i,λ̄}, can be viewed as a small regularization penalty that trades off the approximation error Approx_i(λ̄) against ||f_{i,λ̄}||_H (which influences the remaining terms in Definition 1). Ideally, if the unknown regression function f* lies in the RKHS H, then the choice λ̄ = 0 suffices; in that case f_{i,λ̄} agrees with f* on C_i and Approx_i(λ̄) = 0. The following lemma describes the decomposition of E_D[Err_i(f̂_{i,λ})] in terms of the quantities in Definition 1.
Lemma 1 (Error Decomposition). For each partition i ∈ [m], E_D[Err_i(f̂_{i,λ})] can be bounded in terms of Approx_i(λ̄), Reg_i(λ, λ̄), Bias_i(λ, n) and Var_i(λ, D).
Thus, the overall error E_D[Err(f̂_C)] can be decomposed, for any λ̄ ∈ [0, λ], as the sum over partitions of the four error components above. We note that while in this decomposition we have considered the same regularization penalties λ (and λ̄) for all partitions i ∈ [m], a similar decomposition holds even if we choose a different λ (and λ̄) for each partition.
To summarize, Lemma 1 decomposes the overall error of our estimator, E_D[Err(f̂_C)], as a sum of individual errors for each partition. Furthermore, the individual error for each partition is decomposed into four components: Approximation, Regularization, Bias and Variance.
The rest of this section is devoted to bounding these terms for any partition. First, however, we require certain assumptions on the partitions. These are detailed in Section 4.1. Additionally, we need a supporting bound that controls the operator norm of the sample covariance error of each partition, under a suitable whitening. This is provided in Section 4.2. Finally, Section 4.3 presents the bounds on the component terms for each partition.
Assumptions
In this section, we describe three assumptions needed to bound the terms in Lemma 1. It may be useful at this point to recall the partition-specific definitions from Section 2. We also remark that two of these assumptions are fairly standard (Assumption 1 and Assumption 2), and analogous versions have appeared in earlier work [13, 30, 8]. The last assumption, Assumption 3, is novel; however, we have validated it extensively on both real and synthetic data sets (see Section 6). Our first assumption concerns the existence of higher-order moments of the eigenfunctions v_j^i.
Assumption 1. For some k ≥ 2, the 2k-th moments of the eigenfunctions v_j^i exist and are uniformly bounded, where a_1 is the constant controlling these moments.
Note that the second moments of the eigenfunctions always exist; Assumption 1 requires sufficiently many higher moments to exist. This assumption can also be interpreted as requiring partition-wise sub-Gaussian behaviour (up to 2k moments) in the RKHS space, given its primary application in the bounds (see Section 8.2 in the Appendix for more details). Finally, we note that this assumption is similar to [30, Assumption A], but applied to each partition.
Our next assumption concerns the approximation variable (f*(x) − f_{i,λ̄}(x)), requiring its fourth moment to be bounded.
Assumption 2. For every partition i ∈ [m] and every λ̄ ≥ 0, the fourth moment of the approximation variable on C_i is bounded by a constant A_i(λ̄), where f_{i,λ̄} is the solution of the optimization problem in Eq. (15).
We remark that while we have specified A_i(λ̄) above to be a constant, a slowly growing function of λ̄ would also work in our bounds. Also, while this assumption is stated for any λ̄, we only care about the actual λ̄ used in Eq. (22). Thus, for example, if f* ∈ H, then as noted earlier, the choice λ̄ = 0 suffices; consequently Assumption 2 trivially holds with A_i(λ̄) = 0 at λ̄ = 0, since the approximation variable is then identically zero on C_i. Our final assumption enforces that the sum of the effective dimensionalities over all partitions be bounded in terms of the overall effective dimensionality. For this purpose, we define the goodness measure of a partition {C_1, ..., C_m} as

g(λ) = ( Σ_{i=1}^m S_i(λ p_i) ) / S(λ),

and Assumption 3 requires g(λ) to be bounded by a constant at the relevant scale of λ. In Section 5, we show that if g(λ) = O(1) for a λ decaying suitably in terms of n, the DC-estimator achieves optimal minimax rates. In other words, if the partitioning preserves the overall effective dimensionality, then there is no loss in the generalization error. We validate this assumption (at suitable λ) by estimating g(λ) on real and synthetic data sets (see Section 6). From a practitioner's perspective, g(λ) may be viewed as a surrogate for the suitability of a partition for the DC-estimator, and can help guide the choice of partition.
Covariance Control
A key component in establishing bounds on the terms in Lemma 1 is controlling the moments of the operator norm of the sample covariance error, under a suitable whitening. Specifically, for any i ∈ [m] and some k ≥ 2, we need a bound on the k-th moment of the whitened covariance error

|| Σ_{i,λp_i}^{−1/2} (Σ̂_i − Σ_i) Σ_{i,λp_i}^{−1/2} ||,

where we use the shorthand Σ_{i,λp_i} = (Σ_i + λp_i I), and Σ̂_i and Σ_i are the partition-wise empirical and population covariance operators, respectively, as defined in Eqs. (7) and (8). We denote the resulting bound by CovErr_i(λp_i, n, k). This quantity appears throughout the bounds for Bias_i(λ, n) and Var_i(λ, D), and a general bound on it can be found in Lemma 5 in the Appendix. While the expression in Lemma 5 is complicated, it can be specialized for specific kernels to obtain meaningful expressions. We state these specializations for three cases below; their derivation can be found in Section 8.3.1 in the Appendix.
Finite Rank Kernels. Suppose kernel K has finite rank r; examples include the linear and polynomial kernels. Then, for any i ∈ [m] and k > 2, we obtain a corresponding bound on CovErr_i(λp_i, n, k).
Kernels with polynomial decay in eigenvalues. Suppose kernel K has polynomially decaying eigenvalues, λ_j ≤ c j^{−v} (for all j, and constants c > 0, v > 2); examples include Sobolev kernels of different orders. Then, for any i ∈ [m], k > 2, and λp_i ≥ n^{−α} for some constant α < v/2 − 1, we obtain a corresponding bound.
Kernels with exponential decay in eigenvalues. Suppose kernel K has exponentially decaying eigenvalues, λ_j ≤ c_1 exp(−c_2 j²) (for all j, and constants c_1, c_2 > 0); an example is the Gaussian kernel. Then, for any i ∈ [m], k > 2 and λp_i ≥ poly(1/n), we obtain a corresponding bound.
Overall, it is useful to think of CovErr_i(λp_i, n, k) as scaling as O(n^{−1/2}). Consequently, in the bounds to follow, there are terms of the form CovErr_i(λp_i, n, k)^k, which scale as O(n^{−k/2}) and become negligible for sufficiently large k.
Bounds on Reg_i, Bias_i and Var_i
We are now ready to provide bounds on the terms involved in Lemma 1. The following lemmas bound the Regularization error, Bias and Variance for any partition i ∈ [m], as given in Definition 1. We only state the lemmas here using O(·) notation; precise statements can be found in the Appendix. Recall that p_i = P(x ∈ C_i) and that f_{i,λ̄} is the solution of Eq. (15). Additionally, we use the shorthand CE_i to denote CovErr_i(λp_i, n, k).
Lemma 2 (Regularization Loss). Consider any partition i ∈ [m]. Then Reg_i(λ, λ̄) admits a bound whose precise form is given in the Appendix.
Lemma 3 (Bias Loss). Let k ≥ 2 be such that Assumption 1 holds for this k (with constant a_1). Then, for any i ∈ [m], Bias_i(λ, n) admits a bound whose precise form is given in the Appendix.
Lemma 4 (Variance Loss). Let k ≥ 2 be such that Assumption 1 holds for this k (with constant a_1), and suppose Assumption 2 holds (with A_i(λ̄) ≥ 0). Also, suppose p_i satisfies p_i = Ω(log n / n) for any i ∈ [m]. Then, for any i ∈ [m], Var_i(λ, D) admits a bound whose precise form is given in the Appendix.
Note that Lemma 3 and Lemma 4 have a minimum requirement on p_i, namely p_i = Ω(log n / n). However, this is minor, since it essentially corresponds to each partition having Ω(log n) samples.
We also remark that this requirement can be avoided under certain restrictions, e.g., if the unknown regression function f* is uniformly bounded, i.e., |f*(x)| ≤ M for all x. To interpret the above bounds, recall from Section 4.2 that CE_i = CovErr_i(λp_i, n, k) can scale as O(n^{−1/2}). Therefore, terms of the form CovErr_i(λp_i, n, k)^k, which scale as O(n^{−k/2}), are of lower order for large enough k. Also note that the overall bias term is multiplied by an O(n^{−1}) factor. Indeed, in most cases the bias term (Eq. (32)) turns out to be of much lower order than the variance term (Eq. (34)). Moreover, the first two terms in the variance bound (Eq. (34)) and the bound for Reg_i (Eq. (31)) become the overall dominating terms. Consequently, using Lemma 1, we obtain the overall scaling of the generalization error.
Bounds under Specific Cases
In this section, using the bounds on the regularization error, bias and variance from Section 4.3, we instantiate the overall error bounds for the kernel classes discussed in Section 4.2. We do this under the assumption that f* ∈ H. When f* ∉ H, we provide an oracle inequality for the error and contrast it with a similar inequality derived in [30]. Throughout this section, we assume that the conditions of Lemma 3 and Lemma 4 are satisfied.
Theorem 1 (Finite Rank Kernels). Let f* ∈ H and suppose kernel K has finite rank r. Let m denote the number of partitions, and let k ≥ 2 be such that Assumption 1 holds for this k. Then the overall error E_D[Err(f̂_C)] for the DC-estimator f̂_C can be bounded accordingly. If, moreover, m = O(n^{k−4} / (r² log r)^k) and Assumption 3 holds at λ = r/n, then the DC-estimator f̂_C achieves the optimal rate E_D[Err(f̂_C)] = O(r/n) at λ = r/n.
Note that the requirement m = O(n^{k−4} / (r² log r)^k) in the above theorem is only meaningful for k ≥ 4, i.e., we require at least 4 moments of the quantity in Assumption 1 to exist. If this is true, and if Assumption 3 holds, Theorem 1 gives the rate E_D[Err(f̂_C)] = O(r/n), which is known to be minimax optimal [30, 18].
Theorem 2 (Kernels with polynomial eigenvalue decay). Let f* ∈ H and suppose kernel K has polynomially decaying eigenvalues, λ_j ≤ c j^{−v} (for all j, and constants c > 0, v > 2). Let m denote the number of partitions, and let k ≥ 2 be such that Assumption 1 holds for this k. Also, suppose λp_i ≥ n^{−α} for some constant 0 < α < v/2 − 1 and for all i ∈ [m]. Then the overall error E_D[Err(f̂_C)] for the DC-estimator f̂_C can be bounded accordingly; under an additional requirement on m and suitable conditions, f̂_C achieves the optimal rate.
Note that the requirement in the latter part of the above theorem implicitly entails α > v/(v+1). This, coupled with the requirement α < v/2 − 1 from the former part, can only be meaningful for v > 1 + √2 ≈ 2.41. Therefore, the latter part of Theorem 2 applies only to slightly stronger polynomial decays than the former part (which holds for v > 2). Assuming v > 1 + √2, the additional requirement on m is only meaningful for sufficiently large k, in particular for k ≥ 4v² / ((v+1)(v − 2(α+1))). When this holds, Theorem 2 guarantees the optimal rate for E_D[Err(f̂_C)].
Theorem 3 (Kernels with exponential eigenvalue decay). Let f* ∈ H and suppose kernel K has eigenvalues that decay as λ_j ≤ c_1 exp(−c_2 j²). Let m denote the number of partitions, and let k ≥ 2 be such that Assumption 1 holds for this k. Then the overall error E_D[Err(f̂_C)] for the DC-estimator f̂_C can be bounded accordingly. If, moreover, the requirement on m is satisfied and Assumption 3 holds at λ = 1/n, then the DC-estimator f̂_C achieves the optimal rate E_D[Err(f̂_C)] = O(√(log n) / n) at λ = 1/n.
Here, as in the earlier two cases, the requirement on m is only meaningful for sufficiently large k, in particular k ≥ 4. In this case, Theorem 3 gives the minimax optimal rate E_D[Err(f̂_C)] = O(√(log n) / n).
f* ∉ H: with approximation error
When f* ∉ H, we need not have Approx_i(λ̄) = 0 for any λ̄ > 0, i ∈ [m]. At λ̄ = 0 we always have Approx_i(λ̄) = 0; however, f_{i,λ̄} may then not be bounded (in other words, no element of H achieves this approximation). One situation where we can still have Approx_i(λ̄) = 0 with λ̄ = 0, while f_{i,λ̄} remains bounded, is when f* is a piece-wise kernel function over our chosen partitions, i.e., f* restricted to each C_i coincides with some element of H. This case is then analogous to the previous section. In general, however, without further assumptions on f*, it is hard to give meaningful bounds on Approx_i(·). While we can still proceed as in the previous section to obtain exact expressions for the regularization, bias and variance terms of E_D[Err(f̂_C)], in this situation it is more instructive to compare our bounds with the bounds for the averaging estimator in [30]. Let us denote this estimator by f̂_avg. To compute f̂_avg, the n samples are randomly split into m groups, a KRR estimate is computed for each group, and f̂_avg is simply the average of the estimates over all groups. In this case, we have from [30] (for any λ̄ ∈ [0, λ]) a bound of the form

E_D[Err(f̂_avg)] ≤ Approx(λ̄) + E(R, n, m, λ),    (39)

where Approx(λ̄) corresponds to the overall approximation term and E(R, n, m, λ) is the residual error term. In particular, Approx(λ̄) = E[(f*(x) − f_λ̄(x))²], with f_λ̄ being the overall population KRR estimate (Eq. (40)). Also, under certain conditions and restrictions on the number of partitions m, [30] establish a precise scaling for this residual term. In comparison, for our DC-estimator we can use a (potentially) different λ̄_i for each partition, and obtain a decomposition similar to Eq. (23), in which the overall approximation term is Σ_i Approx_i(λ̄_i) (Eq. (42)). Before comparing the bound in Eq. (39) with that in Eq. (42), we require an additional definition. For any partition C_i, i ∈ [m], define

ApproxError_i(f_λ̄) = E[1(x ∈ C_i)(f*(x) − f_λ̄(x))²],

i.e., the error incurred by the global estimate f_λ̄ (Eq. (40)) in the i-th partition. Note that Σ_{i=1}^m ApproxError_i(f_λ̄) = Approx(λ̄). To avoid confusion, we emphasize the distinction between ApproxError_i(f_λ̄) and Approx_i(λ̄_i): the former is the local error (in the i-th partition) incurred by solving a global problem with regularization λ̄, while the latter is the local error incurred by solving a local problem with regularization λ̄_i (as defined in Eq. (18)). To simplify the presentation in the sequel, let us assume that CE_i = CovErr_i(λp_i, n, k) = O(n^{−1/2}), which was the case for the kernels discussed in Section 4.2. Also, suppose the tuple (n, m, k, p_i, λ, λ̄), for any i ∈ [m], satisfies the restriction in Eq. (44); this restriction essentially guarantees that all terms involving CE_i^k in Lemma 3 and Lemma 4 are of lower order. Then, we have the following theorem.
Theorem 4. Suppose f* ∈ L²(X, P), Assumption 1 holds, Assumption 2 holds (for all λ̄_i), Assumption 3 holds, and the tuple (n, m, k, p_i, λ, λ̄) satisfies the restriction in Eq. (44). Then, for each partition i ∈ [m], Approx_i(λ̄_i) ≤ ApproxError_i(f_λ̄), while the residual estimation error terms are of the same order.
The above theorem shows that, in each partition, the approximation error term of our estimator in E_D[Err(f̂_C)] is lower than its counterpart in E_D[Err(f̂_avg)]; consequently, the overall approximation term is also lower. The residual estimation error terms, on the other hand, can be of the same order. Intuitively, this makes sense: by partitioning the space, we fit piece-wise kernel functions, as opposed to a single kernel function in the averaging case. We demonstrate this through experiments in the next section.
Experiments
In this section, we present experimental results for our DC-estimator (denoted DC-KRR) on both real and toy data sets. For comparison, we tested against the random splitting approach of [30] (denoted Random-KRR) and the single KRR estimate on all the data (denoted Whole-KRR).
Fig 1 shows a comparison of the functions obtained using DC-KRR (run with 3 partitions) and
Whole-KRR. We see that DC-KRR approximates the true underlying function better than Whole-KRR, while being computationally more efficient. Fig 2 shows the test RMSE with a varying number of partitions for DC-KRR, Whole-KRR and Random-KRR. We observe that while Random-KRR performs similarly to Whole-KRR, DC-KRR achieves lower error than both. This can be attributed to the lower approximation error of piece-wise estimates.
Real Data sets:
We performed experiments on 6 real data sets from the UCI repository [14]. Data set statistics are presented in Table 1. The data was normalized to have standard deviation 1. In all cases, we utilized a Gaussian kernel with the kernel parameter γ chosen using cross-validation, as shown in Table 1. We varied the number of partitions, m, and the number of training points, n. When running DC-KRR, the partitions were determined using clustering, and we tested with k-means and kernel k-means. Kernel k-means was run on a sub-sampled set of points for the larger data sets. The regularization penalty for KRR was chosen as λ = 1/n. The results of these experiments are presented in Table 2 and Fig 3. In all cases, DC-KRR achieved lower test error than Random-KRR, while being comparable to Whole-KRR. Moreover, the training time for DC-KRR, when run via k-means, was similar to Random-KRR (due to the small overhead of clustering), but much lower than for Whole-KRR. Interestingly, in two cases (Fig 3(a) and Fig 3(b)), we found that DC-KRR also achieved lower test error than Whole-KRR. This, too, can be attributed to the lower approximation error of piece-wise estimates.
Testing Goodness of Partitioning: We also estimated g(λ) (Eq. (26)) vs. a varying number of partitions, on both our real and toy data sets (shown in Fig 5 and Fig 4 respectively) to verify the validity of Assumption 3.
To estimate S(λ) and S_i(λp_i), i ∈ [m] (which comprise g(λ)), we used an SVD to compute the eigenvalues of the kernel matrix on the training samples (respectively, the kernel matrix of the training samples in partition i) and normalized them by n, the training size (respectively, n_i, the training size in partition i). In the case of larger data sets, we did this on a sub-sampled version of the data set. It is known that the eigenvalues of K_D/n, with K_D being the kernel matrix on randomly sampled points D, converge to the eigenvalues of the covariance operator in the associated RKHS [20]. We used a Gaussian kernel and set λ = 1/n, the same as in our experiments, with n being the total training size (or the sub-sample size where sub-sampling was used).
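A sketch of this estimation procedure is below; since Eq. (26) is not reproduced in this excerpt, we take the ratio of the summed partitionwise terms S_i(λp_i) to the global S(λ) as an illustrative stand-in for the exact combination defining g(λ).

```python
import numpy as np

def spectral_sum(X, gamma, lam):
    """S(lam) = sum_j mu_j / (mu_j + lam), with mu_j the eigenvalues of K/n."""
    n = len(X)
    K = np.exp(-gamma * np.square(X[:, None, :] - X[None, :, :]).sum(-1))
    mu = np.clip(np.linalg.eigvalsh(K / n), 0.0, None)   # clip round-off negatives
    return float(np.sum(mu / (mu + lam)))

def g_estimate(X, labels, gamma, lam):
    """Stand-in for g(lam): partitionwise sums S_i(lam * p_i) against S(lam)."""
    n = len(X)
    total = sum(spectral_sum(X[labels == i], gamma, lam * np.sum(labels == i) / n)
                for i in np.unique(labels))
    return total / spectral_sum(X, gamma, lam)
```

Using eigvalsh on the symmetric kernel matrix is equivalent to the SVD mentioned above, and the same sub-sampling trick applies for large data sets.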
On real data sets, we found that while g(λ) increases as the number of partitions increases, it remains a small constant even for a large number of partitions in several cases, thereby justifying Assumption 3. On synthetic data sets, it seemed to grow at a somewhat faster rate. However, this could be attributed to weaker clustering structure, since the true number of clusters was only 3, at which point g(λ) is still a small constant.
Comparison with [8]: We also performed additional empirical comparisons between the approach in [8] (denoted VP-KRR), DC-KRR (with k-means and kernel k-means) and Random-KRR, on the cpusmall data set (see Table 1). The main algorithmic difference between DC-KRR and VP-KRR is that the latter obtains bounded partitions using a Voronoi partitioning of the input space, while in DC-KRR we use a clustering algorithm to obtain the partitions. The results of our tests are shown in Table 3. We see that DC-KRR (with kernel k-means) was slightly better than VP-KRR in terms of test RMSE, and DC-KRR also required much less training time than VP-KRR. A reason for this is that Voronoi partitioning tends to produce a very unbalanced clustering. For example, when using Voronoi partitioning to generate 9 clusters, we found that the first cluster had 6484 of the 6553 data points in the data set, while the remaining clusters had very few data points. Consequently, the training time for that one cluster was almost as large as the time it would take to train Whole-KRR.
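The imbalance is easy to observe with the following sketch, which contrasts cluster sizes under a simple Voronoi partitioning (random data points as sites, nearest-site assignment; [8]'s actual site-selection rule may differ) with k-means. Since each local KRR solve costs O(n_i^3), the largest cell dominates the training time.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def voronoi_sizes(X, m, seed=0):
    """Pick m random data points as Voronoi sites; count points per cell."""
    rng = np.random.default_rng(seed)
    sites = X[rng.choice(len(X), size=m, replace=False)]
    return np.bincount(cdist(X, sites).argmin(axis=1), minlength=m)

def kmeans_sizes(X, m, seed=0):
    labels = KMeans(n_clusters=m, n_init=10, random_state=seed).fit_predict(X)
    return np.bincount(labels, minlength=m)

# On a data set like cpusmall, voronoi_sizes(X, 9) typically shows one cell
# holding almost all points, while kmeans_sizes(X, 9) is far more balanced.
```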
Conclusion
In this paper, we have provided conditions under which we can give generalization rates (and match minimax rates) for a partitioning based approach to Kernel Ridge Regression. Moreover, we have demonstrated potential statistical advantages as well for such an approach, as it allows for lower approximation error. We hope that this would encourage further investigation into partitioning based extensions of other kernel methods, both from a computational and statistical perspective.
Appendix
This section contains the proofs of all theorems, lemmas and corollaries presented in this paper, as well as some figures and tables. First, we summarize some definitions and notations in the following subsection.
Definitions and Notation
We are given n samples D = {(x_1, y_1), ..., (x_n, y_n)} of the tuple (x, y), drawn i.i.d. from a distribution P on X × Y. x (and x_i) is a random vector in the input space X, also called the covariate; y (and y_i) is a random variable in the output space Y, also called the response. The collection of sets {C_1, ..., C_m} denotes a disjoint partition of the covariate space: X = ∪_{i=1}^m C_i, with C_i ∩ C_j = ∅ for i ≠ j. Additionally, we restrict Y ⊆ R and assume an additive noise model relating the response to the covariate, i.e. for each i ∈ [n]: y_i = f*(x_i) + η_i, where f*: X → R is an unknown mapping of covariates in X to responses in R, and η_i is the random noise corresponding to sample i. We assume that f* is square integrable with respect to the marginal of P on X; equivalently, f* ∈ L²(X, P), where P (with slight abuse of notation) denotes the marginal of P on the input space X. The random noise is assumed to be zero mean with bounded variance, i.e. E[η_i | x_i] = 0 and E[η_i² | x_i] ≤ σ² < ∞. We are given a continuous, symmetric, positive definite kernel K: X × X → R. For any x ∈ X, we define φ_x := K(x, ·). Then, the Reproducing Kernel Hilbert Space (RKHS) corresponding to kernel K is given as the closure of span{φ_x, x ∈ X}, with inner product defined so that ⟨φ_x, φ_{x'}⟩_H = K(x, x') (the reproducing property). We require that the RKHS H ⊂ L²(X, P), which means ∀x, E_{y∼P}[K(x, y)²] < ∞, a condition which always holds for several kernel classes, including Gaussian, Laplacian, or any trace-class kernel w.r.t. P.
The partition-based empirical and population covariance operators are defined as (for partition C_i): Σ̂_i = (1/n) Σ_{j=1}^n (φ_{x_j} ⊗ φ_{x_j}) 1(x_j ∈ C_i) and Σ_i = E[(φ_x ⊗ φ_x) 1(x ∈ C_i)], where φ_x ⊗ φ_x denotes the operator φ_x ⟨φ_x, ·⟩_H, and 1(·) denotes the indicator function. Note that we have the relation Σ = Σ_{i=1}^m Σ_i, where Σ = E[φ_x ⊗ φ_x] is the overall covariance operator.
We let {λ_j^i, v_j^i}_{j=1,...,∞} be the collection of eigenvalue-eigenfunction pairs for Σ_i, so that Σ_i = Σ_{j≥1} λ_j^i (v_j^i ⊗ v_j^i). For any d ∈ N, d ≥ 1, we define P_d as the projection operator onto the first d eigenfunctions of Σ_i: P_d = Σ_{j=1}^d v_j^i ⊗ v_j^i. We denote by Σ̂_i^d and Σ_i^d the projected low-rank empirical and population covariances (with rank d), obtained using the operator P_d: Σ̂_i^d = P_d Σ̂_i P_d and Σ_i^d = P_d Σ_i P_d. For any λ > 0, we define the spectral sums S_i(λ) := Σ_{j≥1} λ_j^i / (λ_j^i + λ), and analogously S(λ) from the eigenvalues of Σ. Finally, we also introduce the shorthand Σ_{i,λ} := Σ_i + λI (so that, e.g., Σ_{i,λp_i} = Σ_i + λp_i I).
Bound implied by Assumption 1
In this section, we show how Assumption 1 guarantees a moment bound used repeatedly in the proofs. Consider any i ∈ [m], and let us assume that Assumption 1 holds with parameters a_1 and k (≥ 2). Now, note that for any x ∈ X, we have the chain of inequalities below, where (a) uses Jensen's inequality. Thus, we obtain the required bound, where we have used Assumption 1 in the last step.
Moments of the operator norm for Covariance operators
In this section, we state a lemma providing a bound on the k-th moment E‖Σ_{i,λ}^{-1/2}(Σ̂_i − Σ_i)Σ_{i,λ}^{-1/2}‖^k, for some constant k ≥ 2; note that the norm here, ‖·‖, corresponds to the operator norm. This quantity appears repeatedly in other bounds, and therefore it is useful to have a lemma recording its bound, as stated below; the proof can be found in Section 8.10. First, we introduce the following notion of truncated spectral sums for Σ_i. For any d ≥ 1, we let U_i(d, λ) := Σ_{j=1}^d λ_j^i/(λ_j^i + λ) and L_i(d, λ) := Σ_{j>d} λ_j^i/(λ_j^i + λ). Note that for any d ≥ 1, we have U_i(d, λ) + L_i(d, λ) = S_i(λ). Now, we have the following lemma providing the required bound.
Lemma 5. Consider any d ∈ N, d ≥ 1. Also, let k ≥ 2 be such that Assumption 1 holds for this k (with constant a_1). Then, we have the bound stated below, where the expression for CovErr_i(d, λ, n, k) is given in Eq. (64). Using the above lemma and applying Markov's inequality, we get the following simple corollary.
Corollary 1. Consider any d ∈ N, d ≥ 1, and let k ≥ 2 such that Assumption 1 holds for this k (with constant a 1 ). Then, we have
Bounds on CovErr i (d, λp i , n, k) for specific cases
While the expression in Eq. (64) may seem complicated, it is possible to obtain concrete expressions for specific kernels through an appropriate choice of d, similar to the approach in [30]. The idea is to choose a d which makes the L_i(d, λp_i) terms negligible in Eq. (64). We do this for a few cases below.
Finite rank kernels. Suppose kernel K has finite rank r; examples include the linear and polynomial kernels. Then, for any i ∈ [m], the partitionwise covariance operator Σ_i is also of finite rank. Thus, we can pick d = r (in Eq. (64)), which gives L_i(d, λp_i) = 0 and λ_{d+1}^i = 0. Also, U_i(d, λp_i) = S_i(λp_i) ≤ r. Plugging these into Eq. (64), we get: Kernels with polynomial decay in eigenvalues. Suppose kernel K has polynomially decaying eigenvalues, λ_j ≤ c·j^{-v} (∀j, with constants c > 0, v > 2); examples here include Sobolev kernels of different orders. Now, since Σ = Σ_{i=1}^m Σ_i is a sum of psd operators, the minimax characterization of eigenvalues yields λ_j^i ≤ λ_j for all j and any i ∈ [m]. As a consequence, we have L_i(d, λ) ≤ Σ_{j>d} λ_j/(λ_j + λ) and S_i(λ) ≤ S(λ). Then, following the same approach as [30], i.e. choosing d = n^{C/(v-1)} for some constant C > 0, we get: and U_i(d, λp_i) ≤ d = n^{C/(v-1)}. Consequently, for v > 2 and λp_i ≥ (1/n)^{Cv/(v-1)-1}, we get: Kernels with exponential decay in eigenvalues. Suppose kernel K has exponentially decaying eigenvalues, λ_j ≤ c_1·exp(-c_2·j²) (∀j, with constants c_1, c_2 > 0); an example here is the Gaussian kernel. Again, since Σ = Σ_{i=1}^m Σ_i, the minimax characterization of eigenvalues yields λ_j^i ≤ λ_j for all j and any i ∈ [m]. Thus L_i(d, λ) ≤ Σ_{j>d} λ_j/(λ_j + λ) and S_i(λ) ≤ S(λ). Choosing d = C·√(log n)/√c_2 for some constant C, we get: Consequently, as long as λp_i ≥ poly(1/n), we can choose a sufficiently large C to make the terms involving λ_{d+1}^i and L_i(d, λp_i) negligible. Thus, we get:
Proof of Lemma 1
The proof is as follows: where we have (a) since (a + b + c) 2 ≤ 2(a 2 + 2b 2 + 2c 2 ). Now, following a standard bias-variance decomposition, we have: Combining the above expressions, we get:
Proof of Theorems 1 and 3
The theorems are a simple consequence of combining Lemmas 2, 3, and 4 via Lemma 1, plugging in Approx_i(λ) = 0 with λ = 0, ignoring the bias terms, which are of a lower order, and using the expressions for CE_i = CovErr_i(λp_i, n, k) discussed in Section 4.2.
Proof of Theorem 4
Consider any λ > 0, and let f_λ be the solution of Eq. (40). Now, for any partition i ∈ [m], consider the following optimization problem (its minimizer being the solution of Eq. (15)). Now, by the optimality of f_{i,λ_i}, we have the two inequalities below. Then, we know that Approx_i(λ) ≤ ApproxError_i(f_λ), since decreasing the regularization penalty from λ_i to λ would only decrease the approximation error. Moreover, using the fact that the following function is a monotonically increasing function of λ [24], we have: Therefore, the result holds with λ_i = min(λ, λ_i).
The bound on the estimation error E_C is a simple consequence of the fact that, under the conditions assumed, all terms in Lemma 3 and Lemma 4 involving CE_i^k are of a lower order, and that the condition Approx(λ) = O(λ‖f_λ‖²_H) guarantees that: Then, combining Lemma 2, Lemma 3 and Lemma 4 via Lemma 1 gives us the required scaling.
Regularization Bound
In this section we provide a proof of Lemma 2. The lemma is restated below for convenience.
Proof of Lemma 2
Proof. We want to bound the regularization term. Using first-order conditions for the optimality of f_{i,λ} and f_{i,λ_i}, we have
Bias Bound
In this section we provide a proof of Lemma 3. The lemma is restated below.
where we let
Proof of Lemma 3
Proof. We want to bound Bias_i(λ, n), defined above. Combining the above, we get: Rearranging and multiplying by Σ_{i,λp_i}^{-1/2}, where we let X denote the set {x_1, ..., x_n}, i.e. the covariates in the data D, we get: So, where we have (a) using the fact that ⟨u, Σ_i u⟩_H < ⟨u, (Σ_i + λp_i I) u⟩_H for all u ∈ H, (b) by Jensen's inequality, (c) by the definition of the operator norm, and (d) by the Cauchy-Schwarz inequality. Thus: Now, Lemma 5 provides a bound for the first expected operator-norm factor. For the remainder of the proof, we provide the bound for the second; combining these bounds will yield the main statement of the lemma.
From first-order conditions again (Eq. (87)), we have: Multiplying by Σ_{i,λp_i}^{-1/2} on both sides and rewriting, we get: Let us define the event E_cov as below, which we use to control the operator-norm term. Overall Bound. Combining the above bounds with the terms in Eq. (95), we have: and: Now, where we have (a) using Cauchy-Schwarz, and (b) using n_i = Σ_{j=1}^n 1(x_j ∈ C_i) together with the independence of the samples. Consequently, we have: where we let
Variance Bound
In this section we provide a proof of Lemma 4. First, we restate the lemma below.
Lemma. Consider any d ∈ N, d ≥ 1, and k ≥ 2. Suppose Assumption 1 holds for this k (with constant a 1 ), and Assumption 2 holds. Also, suppose ∀i ∈ [m], p i satisfies: p i = Ω (log n/n). Then we have where we let for f ∈ H, we can get: where f i,λ is the solution of (15).
Now, from the first-order optimality conditions for Eq. (3), we have: Subtracting (Σ + λp_i I) f_{i,λ} from the above, we get: Thus: Now, we can control each of the component terms in the above inequality as follows, where we have (a) using the independence of {x_1, ..., x_n} together with the first-order optimality conditions for f_{i,λ} (which make the cross term zero in expectation), (b) using Cauchy-Schwarz and ignoring the negative quantity, and (c) using Assumption 2 together with the operator-norm bound of Lemma 5. Plugging the above back into Eq. (119), we get the stated bound, where (a) uses the same sequence of inequalities employed in Eq. (101).
Proof of Lemma 5
Proof. Using the triangle inequality, we obtain the decomposition into the terms T_1, T_2 and T_3 below. Bound on T_1. Consider the term involving Σ̂_i − Σ̂_i^d. Using the definitions of Σ̂_i and Σ̂_i^d from Eqs. (51) and (56), and then applying the triangle inequality, we have: Now, recall that for any x ∈ X, we let φ̃_x = Σ_{i,λ}^{-1/2} φ_x, where we have (a) using u ⊗ v + v ⊗ u = u⟨v, ·⟩_H + v⟨u, ·⟩_H, and (b) using the triangle inequality.
Plugging this back into Eq. (123), we get Taking expectation of the k th power on both sides, and using the triangle inequality again, we get where we have (a) using the Cauchy-Schwarz inequality. Now, as a consequence of the reproducing property of kernels, we note that φ x , for any x ∈ X , has the representation: Thus, where we have (a) using Jensen's inequality.
Therefore, using Assumption 1, we get the first bound; similarly, we can obtain the second. Combining these bounds gives the bound on T_1. Bound on T_2. We want to bound the quantity E‖Σ̂_i^d − Σ_i^d‖^k (in the whitened norm), where φ̃_x = Σ_{i,λ}^{-1/2} φ_x for any x ∈ X. Now, as seen in Eq. (128), we have the representation above. Also, using the definition of Σ_i^d from Eq. (57), we have the relation below. Now, let A_j ∈ R^{d×d} be the matrix with entries A_j(m, n) = v_m^i(x_j) v_n^i(x_j) 1(x_j ∈ C_i) / √((λ_m^i + λ)(λ_n^i + λ)), for m, n ∈ [d]. Also, let B = Σ_{j=1}^n A_j / n. Then, we get: where ‖·‖_2 corresponds to the usual spectral norm for finite-dimensional matrices, and where we have (a) using the triangle inequality for the spectral norm together with D = diag{λ_m^i/(λ_m^i + λ)}_{m=1}^d, (b) using the inequality (a + b)^k ≤ 2^k(a^k + b^k), and (c) using Jensen's inequality and Assumption 1. Thus, plugging these bounds into Eq. (140), we finally have the bound on T_2. Bound on T_3. We wish to bound the term involving Σ_i − Σ_i^d. Using the definition of Σ_i^d from Eq. (57), we can get: Thus: Overall Bound. Combining the bounds on the terms T_1, T_2 and T_3, we get the final bound in the lemma.
The Effect of the Sol-gel Spincoating Deposition Technique on the Memristive Behaviour of ZnO-based Memristive Device
This paper presents the memristive behavior of zinc oxide thin films deposited on ITO substrates by the sol-gel spin coating technique. The spin-coating speed was varied from 1000 rpm to 5000 rpm to study its effect on the fabricated memristive devices. The electrical properties were characterized using a two-point-probe I-V (current-voltage) measurement system (Keithley 2400). The thicknesses of the thin films were measured by a Veeco Dektak 150 surface profiler, and the thickness decreased with increasing spin-coating speed; the lowest thickness, 17.47 nm, was obtained for the film deposited at 5000 rpm. The highest Roff/Ron resistance ratio, 1.346, was obtained for the film spin coated at 3000 rpm, with visible ZnO nanoparticles characterized by FESEM (JEOL JSM 6701F). This indicates that the optimum spin-coating speed for the zinc oxide-based memristive device is 3000 rpm, as it exhibited the best switching behavior.
Introduction
A memristor is a two-terminal, memory-dependent and non-volatile device, meaning it has the ability to store a resistance value indefinitely. It was proposed by Professor Leon Chua back in 1971 [1]. Since the thickness of the active layer needs to be on the nanometer scale, its physical realization was only demonstrated at HP Labs in 2008, once advancing nanotechnology enabled the fabrication of nanoscale devices [2]. It has also been reported to be viable at the microscale [3]. The memristor is considered the fourth fundamental circuit element, extending the set of fundamental passive circuit elements built around capacitors, resistors and inductors. A memristive device is defined by its I-V characteristic, which exhibits a bow-tie-shaped hysteresis loop that necessarily crosses the origin [4].
The most commonly used material for the active layer of memristive devices is titanium dioxide (TiO2), since it is compatible with standard complementary metal-oxide-semiconductor (CMOS) technology. For this study, zinc oxide is chosen instead as the active layer. It has a band gap similar to that of TiO2, but it is a direct-gap semiconductor. ZnO has been shown to be a promising material for memristive devices, having been widely used in other applications such as blue and ultraviolet light emitters, solar cell windows, photovoltaic devices [5], flexible memory (RRAM) [6] and surface acoustic wave devices [7]. In this paper, a ZnO solution was prepared and deposited onto ITO substrates using the sol-gel spin coating technique. The spin-coating speed was varied between 1000 rpm, 3000 rpm and 5000 rpm to study its effect on the memristive behavior. The sol-gel route was chosen as it is cheap and can easily coat a substrate of whatever shape and area [8].
Memristor Fabrication
The ITO substrate, serving as the bottom electrode, first went through a cleaning process. Acetone, methanol and deionized water were used to clean the substrate, followed by drying with nitrogen gas. This cleaning process is crucial to remove all contamination that could affect the properties of the thin films. The sol-gel was prepared by mixing zinc acetate dihydrate (Zn(CH3COO)2·2H2O) as the precursor material, 2-methoxyethanol (C3H8O2) as the solvent and monoethanolamine (C2H7NO, MEA) as the stabilizer. The solution was sonicated for 30 minutes at 50 °C and then stirred on a hot-plate stirrer for 3 hours at 80 °C. It was then left to age for 24 hours to produce a clear, homogeneous solution. Thin-film deposition was then carried out by the spin coating technique; this is where the spin-coater speed was varied. The thin films were spin coated at 1000 rpm, 3000 rpm and 5000 rpm for 1 minute each, with 10 drops of ZnO solution dispensed. After the spin coating process, the thin films were dried for 10 minutes at 150 °C to evaporate the solvent and remove residuals. Finally, the thin films were annealed in a furnace for 1 hour at 350 °C.
Memristor Characterization
The memristive device was characterized for both electrical and physical properties. For the physical properties, the thin-film thickness was measured with a surface profiler and the surface morphology was characterized by FESEM. The electrical current-voltage characterization was done using a two-point-probe Keithley 4200 semiconductor characterization system connected to a probe station. The main purpose of this characterization was to obtain the memristive behavior of the ZnO thin films by sweeping the voltage from 0 V to −5 V, −5 V to 5 V, and 5 V back to 0 V. Figure 1 shows the memristive device structure and the I-V measurement configuration used in this work. For the thin film spin coated at 1000 rpm (Figure 2(a)), the I-V characteristic is asymmetrical, with a hysteresis loop present only under negative bias; this can be observed clearly in the semilog I-V characteristic shown in Figure 3(a).
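As an illustration, the sweep can be generated as below; the 0.05 V step size is our assumption, since the paper does not state the increment used.

```python
import numpy as np

def sweep_voltages(v_max=5.0, step=0.05):
    """Build the 0 -> -Vmax -> +Vmax -> 0 V staircase used for the I-V sweep."""
    down = np.arange(0.0, -v_max - step, -step)        # 0 V down to -5 V
    up = np.arange(-v_max + step, v_max + step, step)  # -5 V up to +5 V
    back = np.arange(v_max - step, -step, -step)       # +5 V back to 0 V
    return np.concatenate([down, up, back])
```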
The results show that the ZnO films spin coated at 3000 rpm and 5000 rpm exhibit memristive behavior with resistance ratios of 1.346 and 1.125, respectively, as tabulated in Table 1. This switching behavior may result from filament formation within the oxide layer, in which the movement of oxygen vacancies allows electrons to drift towards the cathode [9]. The film in Figure 2(b) shows increased conductivity compared to the film in Figure 2(a), a result of more oxygen vacancies created in the ZnO active layer [10]. A further increase of the spin-coating speed to 5000 rpm (Figure 2(c)) resulted in thinner hysteresis loops and a resistance ratio reduced to 1.125. The symmetrical semilog I-V characteristics in Figures 3(b) and 3(c) show hysteresis loops under both negative and positive bias. The spin speed thus contributes to the change in film thickness. For the sample spin coated at 5000 rpm, the formation of metallic gold filaments passing through the thin ZnO layer contributed to ohmic resistance switching, as shown in Figure 3(c); these filaments can be disrupted by Joule heating.
Gold has also been reported to give a fuse-like behaviour, which contributes to poor memristive behaviour [11].
The higher the spin-coating speed, the thinner the film gets, causing the ZnO particles to be scattered across the substrate; this is evident in Table 1. A thinner film does not automatically give a better resistance ratio, however. As seen from Figure 2, although the 5000 rpm sample has the lowest resistance ratio compared to the samples spin coated at 1000 rpm and 3000 rpm, it shows symmetrical memristive behavior with loops on both the negative and positive sides. The sample spin coated at 1000 rpm is not symmetrical; Figure 3(a) shows a loop only on the negative side, since this sample has the thickest film. It has been reported that an increase in film thickness contributes to oxygen-related defects [12]. The fall in resistance is suspected to be due to disruption of the gold bridge [11]. The resistance ratios were calculated from the I-V characteristic graphs shown in Figure 4 using formula (1), and the results are tabulated in Table 1. Based on the thin-film thicknesses tabulated in Table 1, it can be seen that the thickness decreased as the spin-coating speed increased [13]: as the spin-coating speed increases, more ZnO particles are scattered away from the ITO substrate, leaving thinner films [14]. As shown in Figure 5(b), the ZnO thin film spin coated at 3000 rpm shows a less dense structure compared to the film spin coated at 1000 rpm. However, the grains become more uniform and denser, with reduced grain size, for the ZnO thin film spin coated at 5000 rpm, which has the thinnest layer, as shown in Figure 5(c).
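Since formula (1) is not reproduced in this excerpt, the sketch below assumes a common convention: Roff/Ron with R = V/I read at a fixed voltage on the two branches of the hysteresis loop.

```python
import numpy as np

def resistance_ratio(v, i, v_read=1.0, atol=0.05):
    """Estimate Roff/Ron from one full I-V sweep (arrays in sweep order).

    The sweep passes the read voltage once on each branch of the loop;
    R = V/I at those crossings gives the high- and low-resistance states.
    """
    near = np.where(np.isclose(v, v_read, atol=atol))[0]
    if len(near) < 2:
        raise ValueError("sweep does not cross the read voltage on both branches")
    r = v[near] / i[near]
    return float(r.max() / r.min())
```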
Conclusion
This work was carried out to study the effect of spin-coating speed on the resistive switching behavior of ZnO thin films. The film thickness decreases as the spin-coating speed increases. The higher spin-coating speed also improves the ZnO nanoparticle grains, reducing their size and making them more uniform. The film spin coated at 3000 rpm shows memristive behavior with the highest resistance ratio, 1.346, making 3000 rpm the optimum spin-coating speed in this study. For the thin film spin coated at 5000 rpm, the resistance ratio dropped slightly, to 1.125, due to metallic gold filaments forming through the thin ZnO layer; the surface morphology characterization shows this film to have the thinnest, densest structure of ZnO nanoparticles.
Analysis toward relationship between mathematical literacy and creative thinking abilities of students
The skills that teachers, especially mathematics teachers, need in the Industry 4.0 era to deal with 21st-century students include being critical, creative, collaborative and communicative. Another necessary skill is mathematical literacy. The aim of this research is to examine the relationship between mathematical literacy and students' creative thinking abilities. In this study, mathematical literacy has three aspects, namely interpreting problems, formulating problems, and using mathematics to solve problems; creative thinking ability has four aspects, namely fluency, flexibility, originality and elaboration. The research is of the correlational ex-post-facto type, in which the researcher gives no treatment to the respondents, so the study only reveals the variables as they are, without manipulating them. The population comprised all first-semester students of the 2018/2019 academic year at Ahmad Dahlan University, numbering 315 people, with a total sample of 206 people. Data on mathematical literacy and students' creative thinking abilities were collected by distributing test instruments and conducting interviews; the test consisted of five essay items covering mathematical literacy and creative thinking ability. Data were analyzed with regression analysis and a correlation coefficient test using the Pearson product-moment correlation technique. The results of the study indicate that there is a significant relationship between mathematical literacy skills and students' creative thinking.
Introduction
Mathematics is one of the subjects students learn from elementary school to college, and in daily life students are faced with problems related to the application of mathematics [1]. The National Research Council asserted that students learn mathematics well only when they construct their own mathematical understanding, and that this understanding requires them to examine, represent, transform, solve, apply, prove, and communicate [2][3]. To solve mathematical problems, students must first master mathematical concepts [4]. More and more of the problems faced in everyday life require some level of mathematical understanding and mathematical reasoning before they can be fully understood and handled. Mathematics is an important tool for students when they face problems and challenges in various aspects of life [5][6][7].
Learning mathematics demands that students have not only numeracy skills but also logical and critical reasoning skills in problem solving. The problems to be solved are not merely routine exercises but problems faced daily [8][9]. Such mathematical abilities are known as mathematical literacy. Furthermore, there is a positive relationship between attitude and mathematical literacy [10][11].
In addition to mathematical literacy skills, the ability to think creatively in mathematics is part of the life skills that need to be developed, especially in facing the information age and increasingly fierce competition. The development of creative activities involves imagination, intuition and discovery through divergent, original thinking, curiosity, making predictions and estimates, and experimenting [10]. The problem is that the mathematical literacy abilities of Indonesian students, judging from international mathematics rankings, are very poor compared to other countries. This was revealed by PISA (the Programme for International Student Assessment), which divides students' literacy achievement into six skill levels, from level 1 (lowest) to level 6 (highest) for mathematics. In addition to low literacy skills, students' mathematical creative thinking ability is relatively low: based on the results of the Trends in International Mathematics and Science Study (TIMSS), the level of creative thinking ability of students in Indonesia is relatively low, because only 2% of Indonesian students can work on high and advanced categories of questions, which require creative thinking skills to solve [12].
Method
This is correlational research, of the ex-post-facto type, in which the researcher provides no treatment and the study only reveals the variables as they are, without manipulating them [13]. The research aims to find the relationship between mathematical literacy skills and creative thinking. The research instrument was a test built around indicators of mathematical literacy skills and creative thinking, followed by interviews. The subjects in this study were first-semester students of the 2018/2019 academic year in the Primary School Teacher Education Study Program at Ahmad Dahlan University, numbering 310 people, with a total sample of 206 people. The sampling technique was simple random sampling, that is, taking sample members from the population at random regardless of the strata in the population. The instrument used was a five-item test containing aspects of creative thinking and mathematical literacy.
Result and Discussion
Efforts to develop abilities and skills in learning activities have been carried out by improving the quality of learning, in terms of mastery of the material, the use of methods, the use of media, and conducive classroom management [14][15][16]. Learning in schools must be interactive, inspiring, fun, challenging, and motivate students to participate actively, and it must provide sufficient space for initiative, creativity, and independence in accordance with learners' talents, interests, and physical and psychological development [17][18].
Creative thinking skills must also be developed, one avenue being learning activities at school. The ability to think creatively is an important competency that students must possess, because it helps students make sensible decisions in their lives [19][20]. Creative thinking is also not a hereditary trait, so it can be developed and taught with certain learning methods and strategies that support its development. Students' creative thinking ability cannot develop well if, in the learning process, the teacher does not actively involve students in concept formation and the learning methods used in schools remain conventional, that is, still centered on the teacher [21]. Such learning can inhibit the development of students' creativity and activity, for instance in communicating ideas, so this situation is no longer appropriate to the targets and objectives of learning mathematics [22].
The descriptive statistics for creative thinking ability and mathematical literacy in this research are presented in Table 1. Based on Table 1, there is no notable difference between mathematical literacy and creative thinking ability: mathematical literacy obtained an average value of 2.59 and creative thinking ability an average value of 2.86. This is in accordance with previous research, which also found no difference between mathematical literacy and creative thinking [23][24].
The hypothesis in this study was tested using a simple regression test; simple regression is used to measure the relationship of one variable with another. The hypothesis testing with simple regression analysis was computed using the SPSS Statistics 24.0 for Windows program, and the results can be seen in Table 2. Based on Table 2, the relationship between mathematical literacy and creative thinking ability is significant and positive, because the correlation coefficient (r count) between x and y is r_xy = 0.671; since the correlation coefficient is positive, there is a positive correlation of 0.671. Similar work has shown the two to be significantly and positively related [24][25][26].
Moreover, based on Table 2, the coefficient of determination is r²_xy = 0.465. This means that mathematical literacy can account for 46.5% of the variation in creative thinking skills; the r²_xy value also shows that 53.5% is still due to other factors or variables, beyond mathematical literacy, that influence creative thinking. The creative thinking process is one of the factors in student learning achievement [27][28].
The significance test aims to establish the significance of the relationship between creative thinking and mathematical literacy. The significance of the research hypothesis is determined by a t test: if the p value is smaller than the significance level (0.05), the independent variable has a significant effect on the dependent variable. Based on Table 2, the p value for the t test meets this criterion, so it can be concluded that there is a significant relationship between creative thinking and mathematical literacy [29].
The regression coefficient for mathematical literacy is 0.546 and the constant is 7.891, so the regression equation can be written as y = 7.891 + 0.546x. The equation shows that the coefficient of x is 0.546, meaning that if mathematical literacy (x) increases by one point, creative thinking ability (y) will increase by 0.546.
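As an illustration of the analysis pipeline (the study's raw scores are not published here, so synthetic stand-in data are used), the Pearson correlation, r², regression equation, and slope significance can all be obtained from a single call:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 20, size=206)                 # stand-in mathematical literacy scores
y = 7.891 + 0.546 * x + rng.normal(0, 2, 206)    # creative thinking, per the fitted model

res = stats.linregress(x, y)                     # simple (one-predictor) regression
print(f"r   = {res.rvalue:.3f}")                 # Pearson product-moment correlation
print(f"r^2 = {res.rvalue**2:.3f}")              # coefficient of determination
print(f"y   = {res.intercept:.3f} + {res.slope:.3f} x")
print(f"p   = {res.pvalue:.3g}")                 # t-test significance of the slope
```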
Based on the simple regression results described above, it can be concluded that mathematical literacy has a significant influence on creative thinking ability. Creative thinking occurs when learners engage with what they know in such a way as to change it, meaning that students are able to transform and create from the knowledge they have and produce something new. Through creative thinking, students are able to distinguish ideas clearly, argue well, solve problems, construct explanations, form hypotheses, and make complex things clearer; this ability clearly shows how students reason. As with literacy in general, mathematical literacy and creative thinking abilities are not limited to numeracy skills; they also involve applying mathematics in everyday life to solve a problem and communicating it, from which the students' mathematical thinking processes can be seen.
Conclusion
Based on the results of the research and discussion, it can be concluded that there is a significant relationship between mathematical literacy and students' creative thinking abilities, with mathematical literacy accounting for 46.5% of creative thinking skills and the rest influenced by other factors not measured in this study. Students who have high mathematical literacy relatively also have a high ability to think creatively.
Solar Cells on the Base of Semiconductor-Insulator-Semiconductor Structures
Introduction
The conventional energy production is not based on sustainable methods, hence exhausting the existing natural resources of oil, gas, coal and nuclear fuel. The conventional energy systems also cause the majority of environmental problems. Only renewable energy systems can meet, in a sustainable way, the growing energy demands without detriment to the environment. The photovoltaic conversion of solar energy, which is a direct conversion of radiation energy into electricity, is one of the main ways to solve the above-mentioned problem. The first PV cells were fabricated in 1954 at Bell Telephone Laboratories (Chapin et al., 1954); the first applications for space exploration were made in the USA and the former USSR in 1956. The first commercial applications for terrestrial use of PV cells came ten years later. The oil crisis of 1972 stimulated research programs on PV all over the world, and by 1975 the terrestrial market exceeded the space market tenfold. Besides classical solar cells (SC) based on p-n junctions, new types of SC were elaborated and investigated: photoelectrochemical cells, SC based on Schottky diodes or MIS structures and semiconductor-insulator-semiconductor (SIS) structures, SC for concentrated radiation, and bifacial SC. Currently, researchers are focusing their attention on lowering the cost of electrical energy produced by PV modules. In this regard, SC on the base of SIS structures are very promising, and recently SIS structures have been recommended as low-cost photovoltaic solar energy converters. For their fabrication, it is not necessary to obtain a p-n junction, because the separation of the charge carriers generated by solar radiation is realized by an electric field at the insulator-semiconductor interface. Such SIS structures are obtained by the deposition of thin films of transparent conductive oxides (TCO) on the oxidized silicon surface. An overview of this subject was presented in (Malik et al., 2009). Basic investigations of the ITO/Si SIS structures have been carried out and published in the USA (DuBow et al., 1976; Mizrah et al., 1976; Shewchun et al., 1978; Shewchun et al., 1979). Theoretical and experimental aspects of the processes that take place in these structures are examined in those papers. Later on, the investigations of SC based on SIS structures using Si, InP and other semiconductor materials as the absorber component were continued in Japan (Nagatomo et al., 1982; Kobayashi et al., 1991), India (Vasu & Subrahmanyam, 1992; Vasu et al., 1993), France (Manifacier & Szepessy, 1977; Calderer et al., 1979), Ukraine (Malik et al., 1979; Malik et al., 1980), Russia (Untila et al., 1998), the USA (Shewchun et al., 1980; Gessert et al., 1990; Gessert et al., 1991), Brazil (Marques & Chambouleyron, 1986) and the Republic of Moldova (Adeeb et al., 1987; Botnariuc et al., 1990; Gagara et al., 1996; Simashkevich et al., 1999). The results of SIS structure fabrication by different methods, especially by pyrolytic pulverization and radiofrequency sputtering, are discussed in those papers. The investigation of the electrical and photoelectrical properties of Si-based SIS structures shows that their efficiency is of the order of 10% for laboratory-produced samples with an active area that does not exceed a few square centimeters. The spray deposition of an ITO layer onto the silicon crystal surface results in an efficient junction only in the case of n-type Si crystals, whereas in the case of p-type silicon crystals radiofrequency sputtering must be used to obtain good results.
Bifacial solar cells (BSC) are promising devices because they are able to convert solar energy coming from both sides of the cell, thus increasing its efficiency. Different constructions of BSC have been proposed and investigated. In the framework of the classification suggested in (Cuevas, 2005), the BSC structures can be divided into groups according to the number of junctions: a) two p-n junctions, b) one p-n junction and one high-low junction, and c) just one p-n junction. All those types of BSC are based on a heteropolar p-n junction. In this case, it is necessary to obtain two junctions: a heteropolar p-n junction at the frontal side of the silicon wafer and a homopolar n/n+ or p/p+ junction at its rear side. Usually these junctions are fabricated by impurity diffusion into the silicon wafer. The diffusion takes place at temperatures higher than 800 °C and requires special conditions and strict control. In the case of back surface field (BSF) fabrication, these difficulties increase, since it is necessary to carry out the simultaneous diffusion of impurities that have opposite influences on the silicon properties. Therefore, the problem arises of protecting the silicon surface from undesirable impurities. The main purpose of this overview is to demonstrate the possibility of manufacturing, on the base of nSi, monofacial as well as a novel type of bifacial solar cells with efficiencies over 10%, containing only homopolar junctions and having an enlarged active area, using the spray pyrolysis technique, the simplest method of obtaining SIS structures with a shallow junction. The utilization of such structures removes a considerable part of the above-mentioned problems in BSC fabrication. The results of investigations of ITO/pInP SC obtained by spray pyrolysis are also discussed.
The history of semiconductor-insulator-semiconductor solar cells
First, it must be noted that SC obtained on the base of MIS and SIS structures are practically the same type of SC, even though they are sometimes considered to be different devices. The similarity of these structures was demonstrated experimentally and theoretically for two of the most common systems, Al/SiOx/pSi and ITO/SiOx/pSi (Shewchun et al., 1980). The tunnel current through the insulator layer at the interface is the transport mechanism between the metal or oxide semiconductor and the radiation-absorbing semiconductor, silicon in this case. One of the main advantages of SIS-based SC is the elimination of the high-temperature diffusion process from the technological chain, the maximum temperature during SIS structure fabrication being no higher than 450 °C. The films can be deposited by a variety of techniques, among which the spray deposition method is particularly attractive since it is simple, relatively fast, and vacuumless (Chopra et al., 1983). Besides, the superficial layer of the silicon wafer, where the electric field is localized, is not affected by impurity diffusion. TCO films, with a band gap of the order of 3.3-3.7 eV, are transparent over the whole solar spectrum, especially in the blue and ultraviolet regions, which increases the photoresponse compared to traditional SC. The TCO layer assists with the collection of the separated charge carriers and at the same time acts as an antireflection coating. In SC fabrication, the most utilized TCO materials are tin oxide, indium oxide and their mixture, known as indium tin oxide (ITO). Thin ITO layers have been deposited onto different semiconductors to obtain SIS structures: Si (Malik et al., 1979), InP (Botnariuc et al., 1990), CdTe (Adeeb et al., 1987), GaAs (Simashkevich et al., 1992). Therefore, solar cells fabricated on the base of SIS structures have been recommended as low-cost photovoltaic solar energy converters; the reduction in cost of such solar cells is due to the simple technology used for the junction fabrication. The separation of light-generated carriers is achieved by a space-charge region located in the basic semiconductor near the insulator layer. The number of publications concerning the fabrication and investigation of SIS structures is very large; therefore, we limit our consideration of these structures to those on the base of the most widespread solar materials, silicon and indium phosphide. To be exact, main attention will be focused on SC on the base of ITO/nSi and ITO/pInP.
SIS structures on the base of silicon crystals
As shown above, one of the ways to reduce the cost of the electrical energy provided by SC is to use SIS structures. The first publications regarding the fabrication and investigation of ITO/nSi structures appeared in 1976 (Mizrah & Adler, 1976). Power conversion efficiencies of 1% were reported for an ITO/nSi cell, obtained by magnetron sputtering of ITO layers onto the surface of nSi crystals, with an active area of 0.13 cm². The data obtained from the investigated dark I-V characteristics, together with the known band gaps and work functions of ITO and Si, allow the band diagram of these structures to be constructed (Fig. 1). An efficiency of 10% was observed for ITO/nSi cells obtained by the spray deposition of ITO layers onto nSi crystals with an area of 0.1 cm² (Manifacier & Szepessy, 1977; Calderer et al., 1979). ITO/nSi SC with power conversion efficiencies of 10% were fabricated by deposition onto n-type Si crystals by the electron-beam evaporation of a 90:10 molar % In2O3:SnO2 powder mixture (Feng et al., 1979). The results of those works have been analyzed in detail (Shewchun et al., 1978; Shewchun et al., 1979) from both experimental and theoretical points of view. From the standpoint of the general theory of heterojunctions, it is hard to understand how structures formed by materials with different crystalline types and lattice constants can work as effective SC, when an intermediate layer with many defects appears at the interface. It is intriguing to note here that various authors have obtained quite contradictory results. Examining these data, the authors of those works concluded that the performance of those SC depends on the intermediate thin insulator layer. Its main function is the compensation of the defects due to the mismatch of the crystalline lattices. Its thickness is not greater than 30 Å, which ensures the tunnel transport of the carriers through the barrier. The theoretical analysis of ITO/nSi solar cells has shown that they are similar to MIS structures: their parameters depend on the thickness of the insulating layer at the interface, the substrate doping level, the concentration of surface states, the oxide electric charge and the temperature. The optimization of these parameters could provide 20% efficiency. This issue was also examined in terms of energy losses during the conversion of sunlight into electricity; the different mechanisms of energy loss that limit ITO/nSi solar cell efficiency are probably valid for other SIS structures too. Dark current-voltage characteristics were used as the experimental material, and it was shown that, above a certain threshold of direct voltage, these characteristics do not differ from similar characteristics of p-n junctions in silicon, the current being controlled by diffusion processes in the silicon volume. The different mechanisms of energy loss that limit ITO/nSi solar cell efficiency are presented in Table 1. An increase of the conversion efficiency of SC based on ITO/nSi structures can be achieved by the optimization of the thickness of the frontal ITO layer and of the insulator SiO2 layer, the optimization of the concentration of electrons in the absorbing Si wafers, and the texturing of the Si wafer surface. The thickness of the frontal ITO layer is a very important factor because it affects the quantity of the absorbed solar radiation through both absorption and reflection. It is necessary to select an ITO layer thickness that produces a deep reflection minimum in the region of maximum sensitivity of the n+ITO/SiO2/nSi SC.
At the same time, the thickness of the frontal ITO layer determines its electrical resistance and, therefore, the value of the photocurrent; however, increasing the ITO layer thickness has an opposing effect on the solar cell efficiency, since a thicker film absorbs more of the incident light even as its lower sheet resistance favors photocurrent collection. The thickness of the frontal ITO layer likewise determines the efficiency of this layer as an antireflection coating. The properties of the SIS structures also depend largely on the thickness of the SiO2 insulator layer at the ITO/Si interface. This SiO2 layer increases the height of the junction potential barrier and diminishes the saturation current. Besides, the insulator SiO2 layer must be tunnel-transparent for charge carrier transport; the optimal SiO2 insulator layer thickness must be no more than some tens of Å. All silicon wafers must be oriented in the (100) plane, because only this crystallographic orientation yields a potential barrier upon ITO spray deposition. Single-crystalline Si wafers with different carrier concentrations from 10^15 cm^-3 up to 10^18 cm^-3 have been used to fabricate ITO/nSi SIS structures by spray deposition. The influence of the structural state of the single-crystalline Si wafers on the conversion efficiency will be discussed in the next section of this overview. The paper (Feng et al., 1979) studied the current transport mechanism of ITO/Si structures, the TCO layer being obtained by electron-beam evaporation. Pretreatment of the Si crystals with Cl2 increased the efficiency from 2.3% to 5.5%. In this case, the current transport mechanism was dominated by recombination in the space-charge layer, while thermionic emission over the potential barrier dominated in the absence of Cl2. Systematic studies of the properties of ITO/nSi structures obtained by spray pyrolysis were carried out in 1980 (Ashok et al., 1980). The optical and electrical characteristics of the ITO layer, as well as the thickness of the insulator layer, were optimized to yield the following photovoltaic parameters on 0.5 Ohm·cm nSi: Voc = 0.52 V, Jsc = 31.5 mA/cm², FF = 0.70, and a conversion efficiency of 11.5%. The dark I-V and C-V characteristics were also evaluated to identify the mechanisms of barrier formation and current flow. The C-V data indicate an abrupt heterojunction, while the dark I-V characteristics are suggestive of a tunneling process determining the current flow in these devices, in conformity with the Riben and Feucht model (Riben & Feucht, 1966). A comparison of spray-deposited ITO/nSi and SnO2/nSi was presented by Japanese researchers (Nagatomo et al., 1982). The diode and photovoltaic properties of these structures are very similar, but the conversion efficiency of ITO/nSi is higher, up to 11-13%, whereas for SnO2/nSi these values do not exceed 7.2% (Nagatomo et al., 1979). As reported in (Malik et al., 2008; Malik et al., 2009), the authors fabricated ITO/nSi solar cells using n-type single-crystalline silicon wafers with a 10 Ohm·cm resistivity and an 80 nm thick ITO film with a sheet resistance of 30 Ohm/□, deposited by spray pyrolysis on a silicon substrate treated in H2O2 solution. This ITO thickness was chosen in order to obtain an effective antireflection action of the film. The cells obtained in such a way can be considered as structures presenting an inverted p-n junction (Fig. 2).
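The antireflection role of the ITO film can be illustrated with the single-layer, normal-incidence reflectance model sketched below; the constant refractive indices (ITO n ≈ 2.0, Si n ≈ 3.9) are assumed values, since real indices disperse with wavelength.

```python
import numpy as np

def reflectance(wavelength_nm, d_nm, n_film=2.0, n_sub=3.9, n0=1.0):
    """Normal-incidence reflectance of a non-absorbing film on a thick substrate."""
    r1 = (n0 - n_film) / (n0 + n_film)         # air/film Fresnel coefficient
    r2 = (n_film - n_sub) / (n_film + n_sub)   # film/substrate Fresnel coefficient
    beta = 2 * np.pi * n_film * d_nm / wavelength_nm
    r = (r1 + r2 * np.exp(-2j * beta)) / (1 + r1 * r2 * np.exp(-2j * beta))
    return np.abs(r) ** 2

# Quarter-wave condition d = lambda / (4 * n_film): an ~80 nm ITO film puts the
# reflection minimum near 640 nm, in the region of maximum cell sensitivity.
lam = np.linspace(400, 1100, 200)
R = reflectance(lam, 80)
print(f"minimum R = {R.min():.3f} at {lam[R.argmin()]:.0f} nm")
```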
Under AM0 and AM1.5 solar illumination conditions, the efficiency is 10.8% and 12.2%, respectively. Theoretical modeling based on p-n junction solar cells shows excellent agreement between the theoretical and experimental results. It is also shown that using 1 Ω·cm silicon substrates is a promising alternative for obtaining solar cells with 14% efficiency under AM1.5 illumination conditions. Various models for the energy band diagrams and the carrier transport mechanism in SIS ITO/nSi cells have been proposed so far. Among them are thermionic emission as the dominant charge transport mechanism in SC obtained by spray deposition of SnO2 onto nSi crystals (Kato et al., 1975), and recombination current in the depletion layer for the CVD-deposited ITO/nSi junction (Varma et al., 1984). The majority of authors suggested that trap-assisted multi-step tunneling through the depletion layer is the dominant current flow mechanism (Ashok et al., 1980; Saim & Campbell, 1987; Kobayashi et al., 1990; Simashkevich et al., 2009).
The mechanism of current transport through the potential barrier is determined by the energy band diagram and the height of the barrier. When the latter is very high, a physical p-n junction is formed in the Si crystal near the surface (Fig. 2); otherwise, the ITO/nSi SC operate as MIS structures or Schottky diodes (Fig. 1). Some data on the efficiencies of ITO/nSi SC are presented in Table 2. The analysis of the works referred to shows that the conversion efficiency of ITO/nSi solar cells obtained by various methods is about 10% and in some cases reaches 12%. Their active area is no more than a few square centimeters, which is not enough for practical application.
ITO/nSi solar cells with textured surface of Si crystals
As can be seen from Table 1, the optical losses of ITO/nSi solar cells are up to 8%; other estimates show that they can exceed 10% (Garcia et al., 1982). Those losses depend on the surface state of the silicon wafers and can be minimized by creating a textured surface on the light-absorbing semiconductor material, thus reducing the reflection and increasing the absorption. The texturization leads to an enlargement of the junction area of a photovoltaic cell and to an increase of the conversion efficiency. The enlargement of the junction area in the case of silicon crystals is usually achieved by means of selective chemical etching in KOH (Bobeico et al., 2001; Dikusar et al., 2008; Simashkevich et al., 2011). As a result, pyramids or truncated cones with base dimensions of 5 μm × 5 μm, or with a diameter of 10 μm, are formed on the Si surface. An efficiency of 12.6% under AM1 simulated irradiation was obtained for SnO2:P/SiO2/nSi SC with an active area of 2 cm² (Wishwakarma et al., 1993). Those cells were fabricated by CVD deposition of P-doped SnO2 layers on the textured surface of Si crystals with a resistivity of 0.1 Ohm·cm; the SiO2 insulating layer was obtained by chemical methods. The textured surface of the Si crystals reduces the frontal reflectivity and consequently increases the short-circuit current by around 10%. ITO/nSi structures obtained by spray deposition of ITO layers onto nSi wafers oriented in the (100) plane were reported in Japan (Kobayashi et al., 1990); the final size of the active area of the cell was 0.9 cm × 0.9 cm. Mat-textured Si surfaces were produced by immersion of the Si wafers in NaOH solution at 85 °C. For specimens treated in this way, a solar energy conversion efficiency of 13% was attained under AM1 illumination. The paper (Simashkevich et al., 2011) studied the properties of ITO/nSi SC with improved parameters. The performed optimization consists in the following: the optimization of the thickness of the frontal ITO layer and of the insulator SiO2 layer; the optimization of the electron concentration in the absorbing Si wafers; and the texturing of the Si wafer surface. The performed investigations make it possible to come to the following conclusions. The optimum thickness of the frontal ITO layer was determined experimentally from photoelectric investigations and is equal to 0.5 μm. The SiO2 layer can be obtained by different methods; in the case of fabricating n+ITO/SiO2/nSi solar cells by spray pyrolysis, the optimal SiO2 layer thickness was obtained by a combined thermo-chemical method, selecting the temperature regime and the speed of the gas flow during ITO layer deposition. The optimal SiO2 insulator layer thickness, measured by the ellipsometric method, is about 30-40 Å.
To determine the optimal electron concentration, ITO/nSi SIS structures were investigated that were obtained by ITO spray deposition on the surface of phosphorus- and antimony-doped single-crystalline Si wafers with different carrier concentrations: 10^15 cm^-3, 5·10^15 cm^-3, 6·10^16 cm^-3, and 2·10^18 cm^-3, produced in Russia (STB Telecom) and Germany (Siltronix, Semirep). The investigation of the electrical properties of n+ITO/SiO2/nSi SC shows that the optimum values of the barrier height, equal to 0.53 eV, and of the space-charge region thickness, equal to W = 0.36 μm, have been obtained in the case of Si crystals with an electron concentration of 5·10^15 cm^-3. The carrier diffusion length (L) is one of the main parameters for bifacial solar cells. For this silicon crystal L is about 200 μm. The BSF region at the rear side of the cell was obtained by phosphorus diffusion.
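The quoted barrier height and space-charge width for the optimum doping can be cross-checked with the standard abrupt-junction depletion formula; the sketch below assumes zero applied bias and takes the built-in potential as roughly equal to the barrier height, which is an approximation.

```python
import math

# Zero-bias depletion width of an abrupt barrier in Si,
# taking the built-in potential ~ barrier height 0.53 eV (assumption).
q = 1.602e-19              # C
eps_si = 11.7 * 8.854e-12  # F/m, permittivity of silicon
Vbi = 0.53                 # V, ~ barrier height quoted in the text
Nd = 5e15 * 1e6            # m^-3, electron concentration 5e15 cm^-3

W = math.sqrt(2.0 * eps_si * Vbi / (q * Nd))
print(f"W = {W * 1e6:.2f} um")   # ~0.37 um, close to the 0.36 um quoted
```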
To enlarge the active area and reduce the optical losses due to radiation reflection, the active area of the Si wafer, oriented in the (100) plane, was exposed to anisotropic etching. The etching was carried out in two ways, to obtain an irregular and a regular relief. In both cases, a boiling 50% aqueous solution of KOH was used as the etching agent. The processing time was 60-80 s. In the first case, the etching was performed without initially patterning the silicon wafer surface to guide the subsequent etching process. Fig. 3a shows that the relief of the silicon surface is irregular and unequal in depth. The depth of the etch pits is within the limits of 2-3 μm.
In the second case, a method of producing an ordered relief in the form of inverse pyramids was applied. The chemical microstructuring of the silicon wafer surface was carried out in the following order: the deposition of a SiO2 thin film of 0.1 μm thickness by the electron-beam method; the deposition on the SiO2 thin film of a photoresist layer and its exposure to ultraviolet radiation through a special mask; removal of the irradiated photoresist and etching of SiO2 with HF through the formed windows; removal of the remaining photoresist thin film. The anisotropic etching of the silicon surface was then carried out through the windows in the SiO2 thin film. The result of this type of etching is shown in Fig. 3b. It is evident that the microstructured surface represents a plane with a hexagonal ornament formed by inverse quadrangular pyramids with a 4 μm base and 2-3 μm depth. After the deposition of ITO layers on both types of textured surfaces of the silicon wafers (Fig. 3), a Cu evaporated grid on the frontal side and a continuous Cu layer on the rear side, two types of optimized structures have been fabricated (Fig. 4). For samples obtained on textured Si wafers with an irregular relief (Fig. 5), the efficiency and other photoelectric parameters increased in comparison with the SC described earlier (Gagara et al., 1996; Simashkevich et al., 1999). Moreover, the results improved further when Si wafers with a regular relief (Fig. 3b) were used for ITO/SiO2/nSi solar cell fabrication. The respective load I-V characteristic is presented in Fig. 6. The summary data regarding the methods of ITO layer deposition onto the textured silicon wafers and the obtained efficiencies are presented in Table 3.
SIS structures on the base of InP and other crystals
Indium phosphide is known to be one of the most preferable materials for the fabrication of solar cells due to its optimum band gap; therefore, it is possible to obtain solar energy conversion into electric power with high efficiency. On the base of InP, SC have been fabricated with an efficiency of more than 20% (Gessert et al., 1990). In addition, InP-based SC are stable under harsh radiation conditions. It was shown (Botnaryuk, Gorchiak et al., 1990; Yamamoto et al., 1984; Horvath et al., 1998) that the efficiency of these SC after proton and electron irradiation decreases less than in the case of Si or GaAs based SC. However, due to the high price of InP wafers, in terrestrial applications indium phosphide based SC could not be competitive with SC fabricated on other existing semiconductor solar materials such as silicon.
Fabrication of ITO/InP photovoltaic devices
Let us consider the fabrication process of ITO/InP photovoltaic devices. Two main methods of ITO layer deposition onto InP crystals are used. The first method consists in the utilization of an ion-beam sputtering system (Aharoni et al., 1986). The fabrication process of InP photovoltaic devices using this method and the obtained results are described in detail elsewhere (Gessert et al., 1990; Aharoni et al., 1999). A schematic diagram of the ITO/InP solar cell fabricated by the above-mentioned method is presented in Fig. 7. The operation of the solar cells shown in Fig. 7 can be attributed to two possible mechanisms. One is that the conductive ITO and the substrate form an nITO/pInP Schottky-type barrier junction. The second is the formation of a homojunction due to the formation of a "dead" layer (thickness d) at the top of the InP substrate. This "dead" layer is caused by the crystal damage which results from the impingement of the particles sputtered from the target on the InP top surface. The "dead" layer volume is characterized by extremely short free-carrier lifetimes, i.e. high carrier recombination rates, with respect to the underlying InP crystal. Accordingly, it forms the "n" side of a homojunction with the "p"-type underlying InP. The formation of an n-p junction in InP may be due to tin diffusion from the ITO into the InP, where tin acts as a substitutional donor on In sites. The record efficiency of 18.9% was obtained in (Li et al., 1989) for ITO/InP structures, when the ITO layer was deposited by magnetron sputtering on p+/p InP treated preliminarily in Ar/O2 plasma. Using the above-described sputtering process, a small-scale production of 4 cm^2 ITO/InP photovoltaic solar cells was organized at the Solar Energy Research Institute (now the National Renewable Energy Laboratory), Golden, Colorado, USA (Gessert et al., 1991). Although only a small number of the 4 cm^2 ITO/InP cells (approximately 10 cells in total) were fabricated, the average cell efficiency was determined to be 15.5%, the highest cell performance being 16.1% AM0. Dark I-V data analysis indicates that the cells demonstrate near-ideal characteristics, with a diode ideality factor and reverse saturation current density of 1.02 and 1.1·10^-12 mA/cm^2, respectively (Gessert et al., 1990). The second, simpler, method of ITO/InP photovoltaic device fabrication consists in spray-pyrolytic deposition of ITO layers onto InP substrates (Andronic et al., 1998; Simashkevich et al., 1999; Gagara et al., 1986; Vasu et al., 1993). ITO layers were deposited on the surface of InP wafers by spraying an alcoholic solution of InCl3 and SnCl4 in different proportions. The following chemical reactions took place on the heated substrate: SnCl4 + O2 = SnO2 + 2Cl2 (1) and 4InCl3 + 3O2 = 2In2O3 + 6Cl2 (2). ITO thin films with a thickness of 150-250 nm were deposited by the above-mentioned spray method in various gaseous environments: O2, Ar, or air atmosphere. When an inert carrier gas was used, the installation could be completely isolated from the environment, which allowed obtaining the structures in an oxygen-free atmosphere. A thin insulator layer with a thickness of up to 10 nm is formed on the InP surface due to the oxidation of the substrate during spraying. The oxidation of the InP wafers in HNO3 for 20-30 s was applied in the case of an inert gas atmosphere. In the case of InP crystals, a thin insulator P2O5 layer with a thickness of 3-4 nm was formed on the InP wafer surface during the ITO layer deposition.
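From the reported ideality factor and reverse saturation current density, the open-circuit voltage implied by an ideal-diode description can be estimated as below; the AM0 short-circuit current density used here is an assumed, illustrative value.

```python
import math

# Open-circuit voltage implied by the reported dark-diode parameters
# (ideal-diode estimate; the short-circuit current density is assumed).
k_T_over_q = 0.02585            # V at ~300 K
n = 1.02                        # ideality factor from the text
J0 = 1.1e-12 * 1e-3             # A/cm^2 (1.1e-12 mA/cm^2 from the text)
Jsc = 30e-3                     # A/cm^2, assumed AM0 photocurrent

Voc = n * k_T_over_q * math.log(Jsc / J0 + 1.0)
print(f"estimated Voc ~ {Voc:.2f} V")
```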
Ohmic contacts to pInP were obtained by thermal vacuum evaporation of 95 % Ag and 5 % Zn alloy on the previously polished rear surface of the wafer.
Structures with different crystallographic orientations and hole concentrations in the InP substrates were obtained. The optimum concentration of the charge carriers in pInP substrates was 10^16 cm^-3, but the InP wafers with these carrier concentrations and a thickness of 400 μm had a high resistance. For this reason, p/p+InP substrates were used in order to obtain efficient solar cells with a low series resistance. In some cases a pInP layer with a thickness of up to 4 μm and a concentration p = (3...30)·10^16 cm^-3 was deposited by the gas epitaxy method from the In-PCl3-H2 system on the (100)-oriented surface of a heavily doped InP substrate with a concentration p+ = (1...3)·10^18 cm^-3 for the fabrication of ITO/pInP/p+InP structures. An Ag and 5% Zn alloy evaporated in vacuum through a special mask was used as the ohmic contact to the ITO and to the InP crystal. A schematic diagram of the ITO/p/p+InP structure obtained by the spray-pyrolytic method is presented in Fig. 8.
Fig. 8. Schematic diagram of the ITO/InP structure obtained by the spray-pyrolytic method.
Electrical properties of ITO/InP solar cells
The energy band diagram of the ITO/pInP structure proposed in (Botnariuc et al., 1990) is presented in Fig. 9. The current flow mechanism of the ITO/InP structures, obtained in different fabrication conditions, was clarified in (Andronic et al., 1998) on the basis of the energy band diagram below. One can suppose the existence of two channels of carrier transport through the structure interface (inset in Fig. 10a). The first channel is the following: the majority carriers from InP tunnel through the barrier at the interface and then recombine step by step with electrons from the ITO conduction band (Riben & Feucht, 1966). According to this model, the I-V characteristic slope should not depend on temperature. The second channel appears at a direct bias of more than 0.6 V and is determined by the emission of electrons from the ITO conduction band to the InP conduction band. This emission should manifest itself as a change of the I-V curve slope at different temperatures. As one can see from the experimental data, these two channels are displayed as two segments on the I-V characteristics. Fig. 10b shows the I-V characteristics of the ITO/InP structures obtained in an oxygen environment or with substrate oxidation. In this case, the presence of an insulator layer at the interface could be expected. The capacitance-voltage measurements of the ITO/InP structures confirm this supposition. During the fabrication of the ITO/InP structure in an oxygen atmosphere, a thin insulator layer at the interface is obtained. The change of segment II of the ITO/InP structure I-V characteristics shows the presence of a thin insulator layer. The presence of the insulator changes the process of electron emission from the ITO conduction band to the InP conduction band into a tunneling process through this insulator layer. Thus, the form of segment II on the I-V characteristics becomes similar to that of segment I.
Photoelectric properties of ITO/InP solar cells
The load I-V characteristic of the illuminated cell is described by the usual relation I = I_L − I_s[exp(qU/kT) − 1], where I_L is the light-induced current, I_s the saturation current, and T the temperature. The dependence of the ITO/InP cell parameters under AM0 conditions on the InP substrate orientation and hole concentration was studied; the best parameters were obtained for InP wafers with the (100) orientation. The photosensitivity spectral distribution of the p+/pInP(100) structure is presented in Fig. 11. The spectral sensitivity region of the Cu/nITO/pInP/Ag:Zn structure is situated between 400 and 950 nm.
The minimum efficiency was observed when the solar cells were obtained by deposition of ITO layers onto InP wafers oriented in the (111)A direction. To increase the efficiency, those solar cells were thermally treated in an H2 atmosphere at a temperature of 350 °C for 10 minutes to reduce the series resistance. It was shown that before the thermal treatment the following parameters had been obtained under AM1.5 illumination conditions: Uoc = 0.651 V, Isc = 18.12 mA/cm^2, FF = 58%, Eff. = 6.84% (Fig. 12, curve 1). After the thermal treatment the parameters were: Uoc = 0.658 V, Isc = 20.13 mA/cm^2, FF = 58%, Eff. = 7.68% (Fig. 12, curve 2). The photoelectric parameters of the SC obtained on InP wafers with the concentration p = 3·10^17 cm^-3 after the thermal treatment were Uoc = 0.626 V, Isc = 22.72 mA/cm^2, FF = 71%, Eff. = 10.1% (Fig. 12, curve 3), which is better than for analogous SC without treatment in H2. The thermal treatment in H2 leads to an undesirable decrease of the photosensitivity in the short-wave region of the spectrum (Fig. 13). The highest sensitivity is observed at 870 nm, which indicates that the maximum contribution to the photosensitivity is due to the absorption in InP. ITO/InP structures grown by spray pyrolysis were also investigated in the Semiconductor Laboratory of the Indian Institute of Technology, Madras (Vasu & Subrahmanyam, 1992; Vasu et al., 1993). The maximum efficiency of 10.7% was achieved under 100 mW/cm^2 illumination for junctions having 5% by weight of tin in the ITO films. The texturing of the InP crystal surface in ITO/InP SC (Jenkins et al., 1992) reduces the surface reflection. These cells showed improvement in both short-circuit current and fill factor, and the efficiency could be increased by 6.74%. The texturing reduces the need for an optimum antireflection coating.
SIS structures for SC fabrication were also obtained on the base of other semiconductor materials besides Si and InP. ITO/CdTe (Adeeb et al., 1987) and ITO/GaAs (Simashkevich et al., 1992) structures were obtained by spray pyrolysis of ITO layers on pCdTe and pGaAs crystals. For ITO/CdTe the efficiency was 6%, for ITO/GaAs SC it did not exceed 2.5%.
Degradation of photoelectric parameters of ITO/InP solar cells exposed to ionizing radiation
The degradation of the photoelectric parameters of ITO/InP solar cells after their irradiation by protons with energies Ep = 20.6 MeV and flux densities up to Fp = 10^13 cm^-2 and by electrons with Ee = 1 MeV and Fe ≤ 10^15 cm^-2 was investigated (Andronic et al., 1998). The results of the photoelectric parameter measurements at AM0 conditions after the irradiation are presented in Fig. 14.
A higher efficiency of 11.6% is obtained if the InP substrate is oriented in the [100] plane. We notice that after the irradiation of ITO/InP solar cells with an integral proton flux of 10^13 cm^-2, their efficiency decreases by 26%, which is less than in the case of Si and GaAs based solar cells. In the spectral characteristics of ITO/pInP solar cells after proton irradiation, a small decrease of the photosensitivity in the long-wavelength region of the spectrum was observed due to the decrease of the diffusion length.
Comparing the results of the radiation stability study of ITO/InP SC fabricated by spray pyrolysis with the results of similar investigations of other InP-based structures, it is possible to conclude that in this case the radiation stability is also determined by the low efficiency of radiation defect generation and, hence, by the low concentration of deep recombination centers, which reduce the efficiency of solar energy conversion into electric power.
Fabrication of ITO/nSi solar cells with enlarged area by spray pyrolysis
From the brief discussion above it can be concluded that the deposition of ITO layers by spray pyrolysis on the surface of different semiconductor materials allows manufacturing SC through a simple and less expensive technology. The most effective are ITO/InP SC, but because of the very high cost of InP crystals they cannot be widely used in terrestrial applications. For this purpose, ITO/nSi SC with an efficiency higher than 10% may be used, but it is necessary to develop a technology for SC fabrication with the active area enlarged up to 70-80 cm^2, as in the case of traditional silicon SC with p-n junctions.
Deposition of ITO layers on enlarged silicon wafers
ITO layers are deposited on the nSi crystal surface using a specially designed installation (Simashkevich et al., 2004; Simashkevich et al., 2005) (Fig. 15) that has four main units: the spraying system (7), the system of displacement and rotation of the support on which the substrate is fixed (4, 5), the system of heating the substrate, and the system of evacuation of the residual products of the pyrolysis (8). The heating system consists of an electric furnace (2) and a device for automatic regulation of the substrate temperature with a thermocouple (3). The rest of the installation parts are: the power unit (1), the cover (10), and the shielding plate (12). Silicon wafers (11) are located on the support (9) and are moved with the displacement mechanism into the deposition zone of the electric furnace (6). The construction of this mechanism provides the rotation of the support with a velocity of 60 rotations per minute, the speed necessary for obtaining thin films with uniform thickness over the whole wafer surface. The alcoholic solution of the SnCl4 + InCl3 mixture is sprayed with compressed oxygen into the furnace onto the silicon wafer substrate, where the ITO thin film is formed due to the thermal decomposition of the solution and the oxidation reaction. On the heated substrate the chemical reactions described above in formulas (1) and (2) take place. The BSF n/n+ junction was fabricated on the rear side of the wafer by a diffusion process starting from a POCl3 gas mixture. The junction formation ended with a wet chemical etching of the POCl3 residual in a 10% HF bath. A junction depth of 1 μm was chosen in order to minimize recombination. To reduce the surface recombination velocity, the wafers were thermally oxidized at a temperature of 850 °C. The main steps of the fabrication of BSC are schematized in Fig. 16.
Properties of ITO layers
The properties of the thus obtained ITO films depend on the concentration of indium chloride and tin chloride in the solution, the temperature of the substrate, the time of spraying and the deposition speed. ITO films had a microcrystalline structure that was influenced by the crystal lattice of the support, as the X-ray analysis showed. They had a cubic structure with the lattice constant 10.14 Å. The SEM image of such an ITO film is presented in Fig. 17.
Fig. 15. Schematic (a) and real (b) view of the installation for ITO thin film deposition.
ITO/SiO2/nSi solar cells with active areas of 8.1 cm^2 and 48.6 cm^2 were fabricated. In some cases a BSF region was obtained at the rear contact by phosphorus diffusion. From Fig. 17 it is clear that the ITO film with a thickness of 400 nm has a columnar structure, the column height being about 300 nm and the width 50-100 nm. ITO films with the maximum conductivity 4.7·10^3 Ω^-1·cm^-1, the electron concentration (3.5÷15)·10^21 cm^-3, the mobility (15÷30) cm^2/(V·s) and the maximum transmission coefficient in the visible range of the spectrum (87%) were obtained from solutions containing 90% InCl3 and 10% SnCl4 at a substrate temperature of 450 °C, a deposition rate of 100 Å/min and a spraying time of 45 s. ITO layers with thicknesses of 0.2 μm to 0.7 μm and uniform properties over surfaces of up to 75 cm^2 were obtained. The dependence of the electrical parameters of ITO layers as a function of their composition is given in Table 5.
Parameters      |  Ratio of InCl3:SnCl4:C2H5OH components in the solution
                |  10:0:10     9.5:0.5:10    9:1:10      8.5:1.5:10    8:2:10      0:10:10
σ, S·cm^-1      |  2.6·10^2    2.6·10^3      4.7·10^3    2.6·10^3      1.3·10^3    42.4
n, cm^-3        |  1.1·10^20   5.5·10^20     1.1·10^21   6.5·10^20     5.8·10^20   5.3·10^19
μ, cm^2/(V·s)   |  15          29            27          25            14          5
Table 5. The dependence of the electrical parameters of ITO layers as a function of their composition.
The band gap width determined from the spectral dependence of the transmission coefficient is equal to 3.90 eV and changes only for a content of 90-100% of InCl3 in the spraying solution. If the content of InCl3 is less than 90%, the band gap remains constant and equal to 3.44 eV. The optical transmission and reflectance spectra of ITO thin films deposited on glass substrates (Simashkevich et al., 2004) show that the transparency in the visible range of the spectrum is about 80%, while 20% of the incident radiation is reflected. The ITO thin film thickness was varied by changing the quantity of the sprayed solution and was evaluated from the reflectance spectrum (Simashkevich et al., 2004). The thickness of the layer was determined using the relationship (Moss et al., 1973) d = λ1·λ2/[4n(λ1 − λ2)], where n is the refraction index, equal to 1.8 for ITO (Chopra et al., 1983); λ1 and λ2 are the wavelengths of two neighboring maximum and minimum; and d is the thickness of the ITO layer. Using this relation, the thickness of the ITO layers deposited on the nSi wafer surface was determined as a function of the quantity of the pulverized solution. This relation is linear and the layer thickness varies from 0.35 μm up to 0.5 μm.
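The tabulated conductivity, carrier concentration and mobility are mutually consistent through σ = q·n·μ, as the short check below (values copied from Table 5) confirms.

```python
# Consistency check of Table 5: conductivity sigma = q * n * mu
q = 1.602e-19  # C
rows = {              # composition ratio: (n in cm^-3, mu in cm^2/(V*s))
    "10:0:10":    (1.1e20, 15),
    "9.5:0.5:10": (5.5e20, 29),
    "9:1:10":     (1.1e21, 27),
    "8.5:1.5:10": (6.5e20, 25),
    "8:2:10":     (5.8e20, 14),
    "0:10:10":    (5.3e19, 5),
}
for ratio, (n, mu) in rows.items():
    sigma = q * n * mu          # S/cm
    print(f"{ratio:>10}: sigma = {sigma:9.3g} S/cm")
```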
Obtaining of ITO/nSi structures
The nSi wafers oriented in the (100) plane with resistivities of 1.0 Ω·cm and 4.5 Ω·cm (concentrations 5·10^15 cm^-3 and 1·10^15 cm^-3) were used for the fabrication of SIS structures. Insulator layers were obtained on the wafer surface by different methods: anodic, thermal or chemical oxidation. The best results have been obtained with the latter two methods. The chemical oxidation of the silicon surface was realized by immersing the silicon wafer into concentrated nitric acid for 15 seconds. Insulator layers that are tunnel-transparent for minority carriers have been obtained thermally at the ITO/Si interface when the deposition occurs in an oxygen-containing atmosphere. Ellipsometric measurement showed that the thickness of the SiO2 insulator layer varies from 30 Å to 60 Å. The frontal grid was obtained by Cu vacuum evaporation. The investigation of the electrical properties of the obtained SIS structures demonstrates that these insulator layers are tunnel-transparent for the current carriers. Thereby the obtained ITO/nSi SIS structures represent asymmetrically doped barrier structures in which a wide band gap oxide semiconductor plays the role of the transparent metal.
Electric properties
Current-voltage characteristics in the temperature range 293-413 K were studied. The general behavior of the I-V curves of directly biased devices in Fig. 18 is characterized by the presence of two straight-line regions with different slopes (Simashkevich et al., 2009). Two regions with different behavior could be observed from this figure. In the first region, at external voltages lower than 0.3 V, the I-V curves are parallel, i.e., the angle of their inclination is constant. In this case, according to (Riben & Feucht, 1966), the charge carrier transport through the potential barrier is implemented through tunnel-recombination processes in the space-charge region, and the current-voltage dependence could be described by the relation I = I0·exp(BT)·exp(AV), where A and B are constants and do not depend on voltage and temperature, respectively. The numerical value of the constant A, determined from the dependences presented in Fig. 18, is equal to 15 V^-1. The value of the constant B, which is equal to 0.045 K^-1, was calculated from the same dependences re-plotted as lnI = f(T). In (Riben & Feucht, 1966) the constant A is expressed by a relation in which m*e is the electron effective mass (in Si in the case considered), εs is the dielectric permittivity of the silicon, and S represents the relative change of the electron energy after each step of the tunneling process. Note that 1/S represents the number of tunneling steps.
Fig. 19. The energy band diagram for: a) biases lower than 0.3 V (region 1 in Fig. 18), b) biases higher than 0.3 V (region 2 in Fig. 18).
The numerical value of A is easily calculated, since the other parameters in the respective expression represent fundamental constants or Si physical parameters. Hence, the mechanism of the charge carrier transport at direct biases of less than 0.3 V could be interpreted as multi-step tunnel-recombination transitions of electrons from the silicon conduction band into the ITO conduction band (see the energy band diagram in Fig. 19a), the number of steps being about 100. At voltages higher than 0.3 V (see the region with a different slope in Fig. 18) the current flow mechanism through the ITO/nSi structure changes. The slopes of the I-V curves become temperature dependent, which is confirmed by the constant value (about 1.6) of the parameter n in the relation I = C·exp(qV/nkT), where C is a constant depending on the current flow model (emission or diffusion) (Milnes & Feucht, 1972).
Such an I-V dependence, expressed by relations (7) and (8), is typical for transport mechanisms involving the emission of electrons over potential barriers (Fig. 19b). Thus, at temperatures higher than 20 °C, the initial voltage that stimulates the electron emission from Si into ITO over the potential barrier at the Si/ITO interface in n+ITO/SiO2/nSi structures is about 0.3 V. From lnI = f(1/kT) it is possible to determine the height of the potential barrier φB in ITO/nSi structures, because the slope of the above-mentioned dependence is equal to φB − qVa. The calculated value of φB is 0.65 eV, which is in correlation with the experimental data. A close value of the potential barrier height φB, equal to 0.68 eV, was also determined from relation (8) (Simashkevich et al., 2009). To sum up, in n+ITO/SiO2/nSi structures two mechanisms of the direct current flow are observed: (i) tunneling recombination at direct voltages of less than 0.3 V and (ii) over-barrier emission at voltages higher than 0.3 V. In the former case, the direct current flow could be interpreted as multi-step tunnel-recombination transitions of electrons from the silicon conduction band into the ITO conduction band, the number of steps being about 100. The reduction of the influence of the former, as well as a fine adjustment of the SiO2 thickness in the investigated structures, will lead to an increased efficiency of converting solar energy into electric energy.
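A schematic numerical illustration of the two current regimes is given below; the pre-exponential factors are arbitrary values chosen only so that the two branches cross near 0.3 V, and are not fitted to the measured curves.

```python
import numpy as np

# Sketch of the two direct-current regimes discussed above.
q_over_kT = 1.0 / 0.02525      # 1/V at 293 K
A = 15.0                       # 1/V, tunnelling-recombination slope from the text
n = 1.6                        # ideality factor from the text

V = np.linspace(0.05, 0.6, 12)
I_tunnel = 1e-7 * np.exp(A * V)                    # region 1: multi-step tunnel recombination
I_emission = 5e-9 * np.exp(q_over_kT * V / n)      # region 2: over-barrier emission
for v, it, ie in zip(V, I_tunnel, I_emission):
    print(f"V = {v:.2f} V   I_tunnel = {it:.2e} A   I_emission = {ie:.2e} A")
```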
Photoelectric properties
The spectral distribution of the quantum efficiency as well as the photosensitivity of the obtained PV cells have been studied (Simashkevich et al., 2004). The monochromatic light from the spectrograph falls on a semitransparent mirror and is divided into two equal fluxes. One flux falls on the surface of a calibrated solar cell for the determination of the incident flux energy and the number (N) of incident photons. The second flux falls on the surface of the analyzed sample and the short-circuit current Jsc is measured, thus permitting the calculation of the number of charge carriers generated by the light and separated by the junction, and then the quantum efficiency for each wavelength (Fig. 20). The reproducibility of the process and the performance of the devices were checked within each batch of samples as well as batch-to-batch. The enlargement of the area of the solar cells up to 48.6 cm^2 leads to an increase of the series resistance and to a decrease of the efficiency down to 7%. Thus, the method of obtaining n+ITO/SiO2/nSi structures based on thin In2O3:Sn layers, formed by spraying chemical solutions of indium and tin chloride on the surface of Si wafers that are traditionally chemically treated, passivated and heated to a temperature of 450 °C, was elaborated. Solar cells based on n+ITO/SiO2/nSi structures with an active surface of up to 48.6 cm^2 have been fabricated. A maximum efficiency of 10.52% is obtained in the case of the (100) crystallographic orientation of the Si wafer with a BSF region at the rear surface and an active area of 8.1 cm^2, an ITO thickness of 0.3 μm, a SiO2 thickness of 30 Å and a concentration of charge carriers (electrons) in silicon of (1-5)·10^15 cm^-3 (Fig. 21). The developed technology demonstrates the viability of manufacturing solar cells based on n+ITO/SiO2/nSi junctions; two 15 W and two 30 W solar panels have been assembled from such cells (Fig. 22) (Usatii, 2011).
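The quantum efficiency evaluation described above (carriers collected per incident photon at each wavelength) can be written compactly as in the sketch below; the numerical example uses assumed, not measured, values of the incident power and short-circuit current.

```python
# External quantum efficiency from the measured short-circuit current and the
# photon flux derived from the calibrated cell (illustrative numbers only).
q = 1.602e-19       # C
h = 6.626e-34       # J*s
c = 2.998e8         # m/s

def eqe(jsc_a_cm2, power_w_cm2, wavelength_nm):
    """Carriers collected per incident photon at one wavelength."""
    lam = wavelength_nm * 1e-9
    photon_flux = power_w_cm2 * lam / (h * c)     # photons / (s*cm^2)
    carrier_flux = jsc_a_cm2 / q                  # electrons / (s*cm^2)
    return carrier_flux / photon_flux

# e.g. 1 mW/cm^2 of 600 nm light giving 0.35 mA/cm^2 (assumed values)
print(f"EQE ~ {eqe(0.35e-3, 1e-3, 600):.2f}")
```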
Bifacial n + Si/nSi/SiO 2 /n + ITO solar cells
For the first time, BSC that are able to convert the solar radiation incident on both sides of the cell into electric power were produced and investigated fifty years ago (Mori, 1960). This type of SC has potential advantages over traditional monofacial SC. First, there is the possibility of producing more electric power due to the absorption of solar energy by the frontal and rear sides of the device; next, they do not have a continuous metallic rear contact and are therefore transparent to the infrared radiation which warms the monofacial SC and reduces their efficiency. As was presented in (Cuevas, 2005), different types of BSC have been fabricated since then, but all those BSC are based on p-n junctions fabricated by impurity diffusion in the silicon wafer. In the case of BSF fabrication, these difficulties increase since it is necessary to realize the simultaneous diffusion of different impurities, which have an adverse influence on the silicon properties. Therefore, the problem of protecting the silicon surface from the undesirable impurities appears. A novel type of BSC formed only by isotype junctions was proposed in , where the possibility was demonstrated to build BSC on the base of nSi crystals and indium tin oxide (ITO) layers obtained by spraying, containing only homopolar junctions with an n+/n/n+ structure. The utilization of such structures removes a considerable part of the above-mentioned problems of BSC fabrication because a single diffusion process is carried out.
Si bifacial solar cells
In this work, the results of producing and investigating silicon-based BSC operating only on majority carriers are presented. The first, frontal junction is a SIS structure formed by an ITO layer deposited on the surface of an n-type silicon crystal. The starting material is an n-type doped (0.7-4.5 Ω·cm) single-crystalline (100)-oriented Cz-silicon nSi wafer, 375 μm thick, with a diameter of 4 inches. The electron concentrations were 10^15 cm^-3 - 10^17 cm^-3. A usual BSF structure consisting of a highly doped nSi layer obtained by phosphorus diffusion was fabricated on the rear side of the wafer by a diffusion process starting from a POCl3 gas mixture. The rear n/n+ junction formation ends with a wet chemical etching of the POCl3 residual in a 10% HF bath. A junction depth of 1 μm has been chosen in order to minimize recombination. To reduce the surface recombination velocity, the wafers have been thermally oxidized at a temperature of 850 °C. Grids obtained by Cu evaporation in vacuum were deposited on the frontal and back surfaces for BSC fabrication. The schematic view of the bifacial ITO/nSi solar cell is presented in Fig. 23. The photoelectric parameters of the elaborated BSC have been determined in standard AM1.5 conditions: for the frontal side Voc = 0.425 V, Jsc = 32.63 mA/cm^2, FF = 68.29%, Eff. = 9.47%, Rser = 2.08 Ω, Rsh = 6.7·10^3 Ω; for the back side Voc = 0.392 V, Jsc = 13.23 mA/cm^2, FF = 69.28%, Eff. = 3.6%, Rser = 3.40 Ω, Rsh = 1.26·10^4 Ω. The summary efficiency of the BSC is equal to 13.07%.
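The quoted cell figures can be re-derived from Voc, Jsc and FF; the sketch below assumes the standard AM1.5 input power of 100 mW/cm^2 and reproduces the 9.47%/3.6% efficiencies, the 13.07% summary efficiency and a bifaciality ratio of about 0.38.

```python
# Re-computing the bifacial-cell figures from Voc, Jsc and FF
# (standard AM1.5 input power of 100 mW/cm^2 is assumed).
P_in = 100.0   # mW/cm^2

def efficiency(voc_v, jsc_ma_cm2, ff_percent):
    return voc_v * jsc_ma_cm2 * (ff_percent / 100.0) / P_in * 100.0  # percent

front = efficiency(0.425, 32.63, 68.29)   # -> ~9.47 %
back = efficiency(0.392, 13.23, 69.28)    # -> ~3.6 %
print(f"front {front:.2f} %, back {back:.2f} %, summary {front + back:.2f} %")
print(f"bifaciality ratio ~ {back / front:.2f}")
```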
Si bifacial solar cells with textured surface of Si crystals
Using the method of n+ITO/SiO2/n/n+Si bifacial solar cell fabrication described in , with parameters improved in conformity with point 2 of this communication, two types of bifacial solar cells with different profiles of the silicon wafer surface have been obtained in (Simashkevich et al., 2011) (Fig. 26 and Fig. 27). It is seen from these data that the performed technology optimization allows increasing the summary efficiency from 13.07% to 15.73% in the case of irregular etching of the silicon surface and to 20.89% in the case of regular etching. The bifaciality ratio also increases from 0.38 up to 0.75. On the basis of the physical parameters of the silicon wafer and the ITO layers and of the results of our experiments, the energy band diagram of the n+Si/nSi/SiO2/n+ITO structure was proposed. The light-generated carriers are separated by the nSi/SiO2/ITO junction. The BSF of the n+Si/nSi junction facilitates the transport of the carriers to the back contact. The same processes take place at illumination through the rear contact.
Conclusion
SC fabricated on the basis of semiconductor-insulator-semiconductor structures, obtained by the deposition of TCO films on the surface of different semiconductor solar materials (Si, InP, CdTe, etc.), are promising devices for solar energy conversion due to the simplicity of their fabrication and relatively low cost. One of the main advantages of SIS-based SC is the elimination of the high-temperature diffusion process from the technological chain, which is otherwise necessary for obtaining p-n junctions; the maximum temperature during SIS structure fabrication is not higher than 450 °C. The TCO films can be deposited by a variety of techniques, among which the spray deposition method is particularly attractive since it is simple, relatively fast and does not require vacuum. Among different TCO materials, the ITO layers are the most suitable for the fabrication of SIS-structure-based solar cells. Silicon remains the most utilized absorbing semiconductor material for the fabrication of such SC by spray pyrolysis. The maximum efficiency of ITO/nSi SC is 10-12%, but in the case of a textured surface of the Si crystals the efficiency reaches more than 15%. ITO/nSi SC with an area enlarged up to 48 cm^2 have been obtained by the spray method; the efficiency is 10.58% for cells with an area of 8.1 cm^2.
InP-based SIS structures fabricated by the deposition of ITO layers onto pInP crystal surfaces have high efficiencies; at the same time, they are simpler to fabricate in comparison with diffusion-junction cells. The efficiency of ITO/InP solar cells obtained by spray pyrolysis depends on the crystallographic orientation of the InP wafers. The maximum efficiency of 11.6% was obtained in the case of fabrication of ITO/pInP/p+InP structures using InP wafers oriented in the (110) plane. ITO/InP SC obtained by spray pyrolysis demonstrate radiation stability. After the irradiation of ITO/InP solar cells with an integral proton flux of 10^13 cm^-2, their efficiency decreases by 26%, which is less than in the case of Si and GaAs based solar cells. A new type of bifacial solar cells, n+Si/nSi/SiO2/n+ITO, based only on isotype junctions was elaborated and fabricated. It was demonstrated that the simultaneous illumination of both the frontal and the rear surfaces of the structures allows obtaining a summary current. The technological process of manufacturing such solar cells does not require sophisticated equipment. Bifacial solar cells with a summary efficiency of 21% and a 65% bifaciality coefficient have been obtained using single-crystalline silicon with a textured surface as the absorbing material.
Investigations on the Formation of 4-Aminobicyclo[2.2.2]-octanones
Benzylidene acetone reacts with thiocyanates derived from secondary amines in a one-pot reaction to give 4-aminobicyclo[2.2.2]octan-2-ones. The reaction mixture was investigated for the presence of possible intermediates using GC-MS. These intermediates – diketones and enamines – were prepared and exposed to the same reaction conditions to examine the reaction mechanism. The reaction of ethyl styryl ketone with thiocyanates of secondary amines yielded cyclohexanone derivatives instead of the expected bicyclo-octanones. Their structures were established by means of a single crystal structure analysis.
Introduction
Ammonium thiocyanates and benzylidene acetone have already been cyclized to products having the bicyclo[2.2.2]octan-2-one structure, which are useful precursors for compounds with antimalarial or antitrypanosomal activity [1]. In order to confirm the reaction mechanism, authentic samples of proposed intermediates - diketones and enamines - were synthesized and the reaction mixtures then analyzed for their presence using GC-MS methods. In addition, the reaction of ammonium thiocyanates with ethyl styryl ketone was investigated.
Results and Discussion
We have previously reported [2] the formation of 4-aminobicyclo [2.2.2]octan-2-ones from ammonium thiocyanates and benzylidene acetone in a one-pot reaction, proposing the following mechanism for this transformation: benzylidene acetone (1) reacts with dialkylammonium thiocyanates 2a-d in an initial step to give the enammonium salts 3a-d, followed by a Diels-Alder reaction of the latter with unreacted 1. The thus formed 4-acetylcyclohex-1-enylammonium salts 4a-d then cyclize to give the bicyclic compounds 5a-d (Scheme 1).
Scheme 1.
Recently, Ramachary et al. [3] have reported amine-catalyzed self-Diels-Alder reactions of α,β-unsaturated ketones giving cyclohexanones. Therefore, the following alternative mechanism also ought to be considered to account for the formation of compounds 5a-d: the first step in this case is an amine-catalyzed Diels-Alder reaction to give cyclohexanone 6, followed by the formation of enamine salts 4a-d which subsequently cyclize as mentioned above to afford the final bicyclo[2.2.2]octan-2-one products 5a-d (Scheme 2). We started our investigations with the synthesis of diketone 6 via an amine-catalyzed Diels-Alder reaction [3] giving selectively the symmetric diketone 7. Its diastereoisomer 8 was obtained by the reaction of 1 with 2-trimethylsilyloxy-4-phenyl-1,3-butadiene (9) [4]. The regioselective formation of enamines 10b-d, 11b and 12b was observed for the reaction of both diketones 7 and 8 with secondary amines by standard methods. In the case of diketone 8, an inseparable mixture of compounds 11b and 12b was produced (Scheme 3).
Scheme 2.
The enamine 10b and the mixture of 11b and 12b were exposed to the reaction conditions which are described by Morita and Kobayashi for the formation of 1-methyl-4-morpholinobicyclo[2.2.2]octan-2-one from 4-acetyl-4-methyl-1-morpholino-1-cyclohexene [5]. However, the enamine 10b decomposed to the diketone 7, whereas the mixture of enamines 11b and 12b decomposed to an inseparable mixture of products. No 4-aminobicyclooctanone derivatives were detectable in these reaction mixtures by NMR experiments.
The diketone 7 was next refluxed with dimethylammonium thiocyanate in toluene at 160°C or in dimethylformamide at 200°C at a water separator but no reaction was observed. For the reaction of 7 with morpholinium thiocyanate in refluxing toluene we observed no formation of a bicyclic compound either. Diketone 8 reacts with morpholinium thiocyanate under the same conditions to give small but detectable amounts of 5b. The reaction of diketone 8 with morpholine in refluxing benzene under the catalysis of 4-toluenesulfonic acid yielded a mixture of 11b and 12b accompanied by small amounts of 5b. We monitored the reaction of benzylidene acetone with morpholinium thiocyanate in toluene using GC-MS methods. Each hour, we took a sample which was extracted with 2N NaOH and water to remove salts. The concentrations of benzylidene acetone and 5b formed during the progress of the reaction were calculated as the areas under the respective curves and are shown in Figure 1.
Scheme 3
Obviously, the concentration of 5b increases in parallel with the decrease of the concentration of 1. After 3.5 hours the maximum amount of 5b is reached, and after this point decomposition of 5b takes place. In addition to that, we found only very small amounts of 7, 8, 10b and 11b/12b, and no significant change of concentrations was observed for these compounds during the course of the reaction. From these results, we assume that a formation of 5b via diketone 8 is possible.
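The time profile observed for 5b (growth in parallel with the consumption of 1, a maximum near 3.5 h, then decay) is qualitatively that of an intermediate in consecutive first-order reactions. The sketch below is only an illustration of this behaviour; the rate constants are assumed values, not fitted to the GC-MS data.

```python
import numpy as np

# Illustrative first-order consecutive kinetics 1 -> 5b -> decomposition products,
# showing why an intermediate such as 5b passes through a concentration maximum.
k1, k2 = 0.5, 0.15            # 1/h, assumed rate constants
t = np.linspace(0.0, 8.0, 9)  # hours
conc_1 = np.exp(-k1 * t)                                          # benzylidene acetone (normalised)
conc_5b = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))    # intermediate (e.g. 5b)
t_max = np.log(k1 / k2) / (k1 - k2)
print(f"intermediate maximum predicted at ~{t_max:.1f} h")
for ti, a, b in zip(t, conc_1, conc_5b):
    print(f"t = {ti:.0f} h   [1] = {a:.2f}   [5b] = {b:.2f}")
```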
Besides the diketone 8, the ammonium salts 3a-d might be key intermediates during the formation of the bicyclo-octanones. 4-Phenyl-3-buten-2-one-N-phenylimine was prepared by Brady et al. [6] by refluxing benzylidene acetone with aniline in benzene catalyzed by zinc chloride. The formation of cyclic products was not reported. However, when we replaced aniline by morpholine, we detected moderate amounts of 5b and small amounts of the diketones 7 and 8 instead of the expected imine. When ethyl styryl ketone (13) is used instead of benzylidene acetone (1), the reaction with morpholine under the same conditions gives compound 14, which is not stable, especially in an alkaline medium. The reaction of 13 with dimethylammonium thiocyanate in refluxing toluene yielded compounds 15 and 16, which were isolated by sequential crystallization from ethanol (Scheme 5). The structure of 16 was elucidated with the aid of a single crystal structure analysis (Figures 2 and 3). The probability ellipsoids are drawn at the 50% probability level. In addition, 10b-d were reduced regioselectively with Pd on charcoal to a mixture of 4-aminocyclohexanones 17b-d and 18b-d. Only compounds 17b-d were isolated in pure form by crystallization from ethanol (Scheme 3). The structure of 17b-d was determined with the aid of NMR spectroscopy. Small coupling constants (3 Hz) of H-5ax and H-3ax to H-4 in the 1H spectrum of 17b indicate the equatorial position of H-4. Furthermore, a NOE of 8.6% was observed from H-1 to H-3ax and H-5ax, indicating the equatorial position of the acetyl group (Figure 4).
Conclusions
Usually 4-aminobicyclo[2.2.2]octan-2-ones are prepared from benzylidene acetone and dialkylammonium thiocyanates in a one-pot reaction. During our investigations of the reaction mechanism, we synthesized possible intermediates which were detected in the reaction mixtures by GC-MS methods. When one of them, a cyclic diketone, was used as starting material instead of benzylidene acetone, the synthesis of the corresponding 4-aminobicyclo[2.2.2]octan-2-one was successful, but since only small amounts of the bicyclic compound were found we assume that this is not the main reaction path.
Acknowledgments
This work was supported by the Fonds zur Förderung der wissenschaftlichen Forschung (Austrian Science Fund, grant no. P-15928).
General
Melting points were obtained on an Electrothermal IA 9200 digital melting point apparatus and are uncorrected. IR spectra: Infrared Spectrometer System 2000 FT (Perkin Elmer). UV/VIS: Lambda 17 UV/VIS spectrometer (Perkin Elmer). NMR spectra: Varian Inova 400 (300 K), 5 mm tubes, TMS resonance as internal standard. 1H and 13C resonances were assigned using 1H,1H- and 1H,13C-correlation spectra. HMBC spectra were optimized for 8 Hz. For NOE measurements, oxygen was carefully removed by bubbling Ar through the solutions. Benzylidene acetone (1, 46 g, 0.31 mol) and morpholine (27.4 g, 0.31 mol) were dissolved in benzene (125 mL) and zinc chloride (200 mg) was added. The mixture was refluxed at a water separator at 140 °C overnight, cooled to room temperature and filtered. The solvent was evaporated in vacuo giving a residue which was further purified by CC (eluent: 8:8:1 benzene/chloroform/ethanol), affording 5b (15.4 g, 13.5%) as a yellowish resin. Spectral data corresponded well with those reported [2]. (7).
X-ray diffraction data of 16
All the measurements were performed using graphite-monochromatized Mo Kα radiation at 95(2) K: C24H30NO+ SCN-, Mr = 406.57, orthorhombic, space group Pbca, a = 9.777(2) Å, b = 15.746(3) Å, c = 28.318(5) Å, V = 4359.5(14) Å^3, Z = 8, dcalc = 1.239 g cm^-3, µ = 0.167 mm^-1. A total of 4771 reflections were collected (Θmax = 26.0°), from which 4272 were unique (Rint = 0.0360), with 2847 having I > 2σ(I). The structure was solved by direct methods (SHELXS-97) [10] and refined by full-matrix least-squares techniques against F^2 (SHELXL-97) [11]. The non-hydrogen atoms were refined with anisotropic displacement parameters without any constraints. The H atoms were refined with common isotropic displacement parameters for the H atoms bonded to the same acyclic C atom or to the same ring. The H atoms of the tertiary C-H groups were refined with all X-C-H angles equal at a C-H distance of 1.00 Å. The H atoms of the CH2 groups were refined with idealized geometry with approximately tetrahedral angles and C-H distances of 0.99 Å. The H atoms of the methyl groups were refined with idealized geometry with tetrahedral angles, enabling rotation around the X-C bond, and C-H distances of 0.98 Å. The H atoms of the phenyl rings were put at the external bisector of the C-C-C angle at a C-H distance of 0.95 Å. For 274 parameters, final R indices of R = 0.0652 and wR2 = 0.1325 (GOF = 1.050) were obtained. The largest peak in a difference Fourier map was 0.234 eÅ^-3. The final atomic parameters, as well as bond lengths and angles, are deposited at the Cambridge Crystallographic Data Centre (CCDC 231557). These data can be obtained free of charge from the Director, CCDC, 12 Union Road, Cambridge CB2 1EZ, UK (Fax: +44-1223-336033; e-mail: deposit@ccdc.cam.ac.uk or www: http://www.ccdc.cam.ac.uk).
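The reported calculated density can be cross-checked directly from Z, the formula weight and the cell volume, as in the short calculation below.

```python
# Cross-check of the calculated density from the crystallographic data of 16:
# d_calc = Z * M / (N_A * V)
N_A = 6.022e23            # 1/mol
M = 406.57                # g/mol, formula weight
Z = 8                     # formula units per cell
V = 4359.5e-24            # cm^3 (4359.5 A^3)

d_calc = Z * M / (N_A * V)
print(f"d_calc = {d_calc:.3f} g/cm^3")   # ~1.239, as reported
```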
Image analysis data for the study of the reactivity of the phases in Nd-Fe-B magnets etched with HCl-saturated Cyphos IL 101
Three phases can be distinguished in Nd‒Fe‒B permanent magnets: a Nd2Fe14B matrix grain phase, a Nd-rich grain boundary phase and Nd-oxide phases. Common reaction models for leaching, such as the shrinking-particle model, cannot simply be applied to composite Nd‒Fe‒B permanent magnets because of the different chemical reactivities of the crystalline phases mentioned above. Etching the surface of a Nd‒Fe‒B magnet to expose its microstructure to electron microscopy is a necessary practice to correlate the microstructure itself to the specific properties of the magnets. Aqueous solutions of mineral acids are often used for etching purposes. However, these solutions have too low a viscosity to easily control the etching front and they show little selectivity in the etching process. In our work, the ionic liquid Cyphos IL 101 was used to etch bulk magnets instead of aqueous HCl solutions. The bulk Nd‒Fe‒B magnets were first polished, then exposed to a solution of 3 M HCl in Cyphos IL 101 for different times and at different temperatures. Afterwards, the etched Nd‒Fe‒B magnets were washed with ethanol and acetone. The results were examined via scanning-electron microscopy and image analysis. A commercial software, ImageJ®, was employed for image analysis. The latter technique was used to correlate the etched area (%area) or the grain and oxide size to the etching temperature or the etching time. The grain and oxide sizes were calculated as the Feret diameter. Image analysis proved to be a necessary tool to support and correct the findings first suggested by scanning-electron microscopy alone. The data presented in this article might be reused to corroborate a new reactivity order of the three Nd‒Fe‒B phases, different from that traditionally reported in the literature, which is – from the most to the least reactive – grain boundary > oxides > the Nd2Fe14B grain phase.
Value of the Data
• These data give a new insight into the reactivity of the three phases of Nd-Fe-B magnets: the stoichiometric grain phase, the oxide phase and the grain boundary phase.
• Materials scientists and engineers involved in the production of Nd-Fe-B magnets can benefit from these data for the design of new magnets and/or for the study of Nd-Fe-B magnet corrosion.
• These data can be integrated with similar ones from the corrosion of Nd-Fe-B magnets in other media, such as nitric acid or aqueous saline solutions, to give a general scheme for the reactivity of the three phases of Nd-Fe-B magnets.
• A technique cheaper than those commonly used to reveal the Nd-Fe-B magnet surface has been proposed in this paper.
Data Description
A simple scheme of the microstructure of Nd-Fe-B magnets is shown in Fig. 1: the grain boundary η (red), the Nd-rich oxides n (white) and the stoichiometric Nd 2 Fe 14 B phase ϕ (grey).
The elemental composition of the treated Nd-Fe-B magnets has been previously obtained [1] . Table 1 reports the composition per major elements of the phases ϕ (Fe, Nd and O -the latter being a contamination) and n (Nd, Dy and O), obtained via EDS; Fig. 2 is a back-scattered electron image of an unetched Nd-Fe-B magnet.
• Chemicals
The ionic liquid trihexyl(tetradecyl)phosphonium chloride (Cyphos® IL 101, [C101][Cl]) was purchased from Cytec (Brussels, Belgium). Concentrated HCl (37 wt.%) was purchased from VWR (Haasrode, Belgium), while D2O (99.99%) was purchased from Sigma-Aldrich (Diegem, Belgium). Technical grade ethanol, propanol and acetone were purchased from Merck (Darmstadt, Germany). Nd-Fe-B magnets were kindly provided by Magneti Ljubljana. They were never magnetized. The magnets were produced as described in [2], with the exception that isostatic pressing is not applied and that the sintering is carried out together with a heating treatment.
• Etching tests
Cyphos IL 101 saturated with HCl served as the etching agent. The solution was prepared by contacting 50 mL of HCl (37%) with 5 g of Cyphos IL 101. This mixture was biphasic and part of the HCl was extracted into the ionic liquid (IL), with a final concentration of 3.0 M acid in the IL phase. This phase was separated and used as the etching agent. The effect of the acid concentration was studied by also testing Cyphos IL 101 containing 1.2 M HCl. In the latter case, the etching agent was prepared by volumetrically diluting 37 wt.% HCl ten times in the IL. In this way, a homogeneous mixture is obtained, which is applied as the etching agent. Besides the effect of the acid concentration, the effects of the time and of the temperature on the etching mechanism were also investigated.
Before etching, the magnets were polished using standard metallographic methods to expose the microstructure to the etching agent and to the microscope. The surface was first scratched with a Struers Rotopol-15 machine using Struers waterproof SiC polishing papers with grit (in order of use): P#220, P#600, P#1000, P#2400 and P#4000. A step of finer polishing was then performed, using a Struers Labopol-5 and a diamond DP-paste P of grain size (in order of use) 3, 1 and ¼ μm. The conditions applied with both devices were: water-off polishing, ω = 150 rpm and propanol as lubricant. Whole magnets were polished without any further treatment, since they are big enough to be easily handled. To minimize the oxidation of the surface by air in between the different steps, the samples were kept under vacuum.
For the etching tests, the polished surface of a magnet was put into contact with a droplet of the etching agent, Cyphos IL 101 saturated with HCl, in a Petri dish for a fixed time (from 1.5 min to 4 h, at room temperature) and then washed with acetone. For the temperature tests, the Petri dish was heated to 40 or 60 °C for 3 min, controlled by a thermometer. The samples were analyzed via electron microscopy and image analysis. Scanning electron microscope (SEM) pictures and energy-dispersive spectra (EDS) were collected with a JEOL JSM 5800 microscope, operating at 20 kV. The polished samples were made conductive by spraying a carbon layer on them using a Balzer SCD 050 sputter coater. The EDS analyses were collected as an average of at least 3 points per SEM picture.
• Image analysis
Image analysis supported the mere observation of SEM pictures and the EDS analysis to understand the reactivity of the three phases η, n and ϕ of the Nd-Fe-B magnets. Interestingly, the literature never considered image analysis to validate or discuss the data about the corrosion of Nd-Fe-B magnets obtained from electron microscopy. A commercial software, ImageJ®, was used for the image analysis [3]. Two quantities were analysed: the Feret diameter and the percentage of etched area, %area. The Feret diameter is defined as the distance between the two parallel planes that delimit the object and are perpendicular to a specific direction, with respect to which the measure is given. It is determined using the projections of a three-dimensional object on a two-dimensional plane. A graphical explanation of the Feret diameter is reported in Fig. 10. The SEM images were converted to 8-bit grayscale, with 256 grey levels (0 to 255). Simple linear scaling was applied. A threshold was set to define the area to analyse: when the Feret diameter was measured, the threshold referred to the particles, whereas when the %area was measured the threshold referred to the void spaces. In the latter case, dimples that might have originated from the polishing rather than from the etching were unavoidably considered as well. The resulting error was estimated to be mostly ≤5%. However, for the uniformity of the measurements, the dimple area was excluded using the "Set Measurements" options. Three SEM pictures for each set of temperature or time conditions were examined to minimize errors due to the quality of the pictures or to the sensitivity of the operator. Moreover, two operators carried out the image analysis and their results were averaged.
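A rough, open-source analogue of the ImageJ® workflow described above (8-bit conversion, thresholding, %area and Feret diameter extraction) is sketched below; the file name, the use of an Otsu threshold and the pixel scale are assumptions, not the settings actually used in this work.

```python
import numpy as np
from skimage import io, measure, filters, util

# Convert an SEM image to 8-bit grayscale, threshold it, and extract the
# etched-area fraction and the maximum Feret diameter of the segmented objects.
img = util.img_as_ubyte(io.imread("sem_etched_magnet.tif", as_gray=True))  # assumed file name

thr = filters.threshold_otsu(img)        # ImageJ workflow used a manually set threshold
voids = img < thr                        # dark pixels taken as etched (void) area
area_percent = 100.0 * voids.mean()

labels = measure.label(voids)
props = measure.regionprops(labels)
ferets_px = [p.feret_diameter_max for p in props if p.area > 20]  # ignore tiny specks

px_per_um = 10.0                         # assumed scale, set from the SEM scale bar
print(f"etched area = {area_percent:.1f} %")
print(f"mean Feret diameter = {np.mean(ferets_px) / px_per_um:.2f} um")
```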
The scale set for the measurement was based on the scale of the SEM analysis, paying attention to the fact that the images were in 1:1 aspect ratio. The area fraction in threshold mode is the fraction, in percentage, of pixels highlighted by the threshold command. The fitting of the curves of Feret diameter or %area vs time or temperature was done using the Origin® software. The curves were split into two fragments, based also on the SEM images, to better fit them with a super-linear (power) law or a linear law.
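The two-fragment fitting can be reproduced, for instance, with SciPy as sketched below; the data points are placeholders and the split index is arbitrary, so the sketch only illustrates the procedure, not the reported fits.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-fragment fit: a power law for the early part of the %area-vs-time curve
# and a linear law for the later part (placeholder data, not measured values).
t = np.array([1.5, 3, 5, 10, 30, 60, 120, 240], dtype=float)      # min
area = np.array([2.0, 3.1, 4.2, 6.0, 10.5, 14.0, 19.0, 30.0])     # %area (placeholder)

def power(x, a, b):
    return a * np.power(x, b)

def linear(x, m, c):
    return m * x + c

split = 4                                # index separating the two fragments (assumed)
(pa, pb), _ = curve_fit(power, t[:split + 1], area[:split + 1], p0=(1.0, 0.5))
(m, c), _ = curve_fit(linear, t[split:], area[split:])

print(f"early fragment:  %area ~ {pa:.2f} * t^{pb:.2f}")
print(f"late fragment:   %area ~ {m:.3f} * t + {c:.2f}")
```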
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article.
On Sustainable Development of Rural Tourism in Jiangxi Province from the Perspective of All-for-one Tourism
A good ecological environment is the greatest advantage and most precious asset of rural areas. Vigorously developing rural tourism can promote not only the development of the rural economy but also the transformation and upgrading of the rural industrial structure. At present, against the background of all-for-one tourism development and policy support, the countryside has become the best carrier for meeting urban residents' demand for quality leisure and vacation, and developing rural tourism has become an important way to promote all-for-one tourism. On the basis of reviewing and analyzing the present state of rural tourism development in Jiangxi Province, and in the context of all-for-one tourism, this essay puts forward paths and countermeasures for rural tourism development in Jiangxi Province, aiming to provide insight and reference for the transformation and upgrading of rural tourism in Jiangxi.
Introduction
Jiangxi Province, a traditional agricultural province with rich agricultural resources and a profound humanistic heritage, has great prospects and room for development. In recent years, Jiangxi Province has consistently pursued the goal of accelerating the development of modern agriculture and revitalizing the rural economy, continually optimizing the agricultural industrial structure. Leisure agriculture and the rural tourism industry present a favorable development pattern of expanding scale, multiple types, optimized layout and improved quality, which contributes significantly to rural revitalization and to increasing farmers' income. The scale of rural tourism in Jiangxi has kept expanding and has formed several types, such as Agritainment Tourism, Rural Sightseeing Tourism, Ancient Town and Village Cultural Tourism, Fishing Sightseeing Tourism, New Countryside Leisure Tourism, TCM Health Tourism and Sports Recreation Tourism. Against the background of all-for-one tourism development, Jiangxi Province should seize the opportunity to find a suitable development path, further promote the high-quality development of leisure agriculture and rural tourism, and work out a path of rural tourism development with Jiangxi characteristics.
All-for-one Tourism
All-for-one tourism is a new concept and model that takes a whole region as the tourism destination to be developed. Within a given region, taking mass leisure tourism as the background and relying on the tourism industry, it carries out an all-round, systematic optimization and upgrading of the region's economic and social resources, especially its tourism resources, industrial management, ecological environment, public services, institutional mechanisms, policies and regulations, and level of civic culture. All-for-one tourism aims to realize the organic integration of regional resources, the integrated development of industries, and social co-construction and sharing, and to use tourism to drive and promote coordinated economic and social development.

Developing all-for-one tourism means taking a specific region as a complete tourist destination for overall planning and layout, comprehensive management and integrated marketing, so as to promote tourism development across the whole region, all factors of production and the whole industry chain, and to realize a development model of joint construction, region-wide integration, region-wide sharing and universal participation that comprehensively meets the needs of tourists. Its purpose is to move the tourism industry in a society-wide, multi-field and comprehensive direction, and to integrate tourism into the overall picture of economic and social development.
Rural Tourism
Rural tourism refers to a mode of tourism that is based in the countryside, takes farmers as the main actors, and takes the unique natural environment, pastoral scenery, ecological forms, folk customs, farming culture and rural settlements as its main attractions, with the purpose of satisfying tourists' needs for sightseeing, leisure, vacation, experience, fitness, entertainment, accommodation and shopping. It is a new tourism model and industrial form with the typical characteristics of rurality, participation and distinctiveness.

Rural tourism allows urban residents to return to nature: in addition to enjoying the pastoral scenery, they can also experience farmers' activities. At the same time, the content of rural tourism varies with climate conditions, natural resources and traditional customs. Rural tourism mainly takes place in rural areas, where agricultural tourism resources are rich, so agricultural resources can be put directly to use with little investment and rapid returns.
Rural Tourism Resources and its Development Status in Jiangxi Province
The ecological environment of Jiangxi is beautiful, and the ecological environment is the basis of regional sustainable development. Ecology is Jiangxi's biggest advantage, and green is its largest brand. The proportion of days with excellent or good air quality reaches 83.9% across the province, and the proportion of water bodies with excellent or good water quality reaches 92%. The forest coverage rate is stable at 63.1%, and 910,000 hectares of wetland are under conservation. The quality of Jiangxi Province's ecological environment consistently ranks at the forefront of the country. There are 191 nature reserves, 182 forest parks and 93 wetland parks. Poyang Lake is the largest freshwater lake in China, one of the most important wetlands in the world and the largest wintering habitat for migratory birds in Asia.

Jiangxi Province has a large number of tourism resources of complete variety, high quality and profound cultural heritage. It possesses 153 of the 155 basic landscape types in the eight categories of the national tourism resources classification standard. There are not only mountain landscapes such as Mt. Lushan, Mt. Jinggang and Mt. Sanqing, but also Danxia landform landscapes such as Peak Gui and Mt. Longhu, as well as river and lake landscapes such as Poyang Lake, Fairy Lake and Zhelin Lake. Jiangxi also has obvious advantages in agricultural resources. It is a major agricultural province, with the rural population accounting for 80% of the province's total. Jiangxi has rich rural tourism resources, including idyllic agricultural scenery, modern agro-industrial tourism destinations, residential buildings, folk culture and agricultural products, which lay a good foundation for the development of leisure agriculture and rural tourism.

In recent years, Jiangxi Province has made full use of its available tourism resources and given full play to its capacity for innovation, and rural tourism products with local character, differentiation and personalization have emerged continually. In 2018, rural tourism in Jiangxi received 360 million tourists, with total tourism revenue of more than 340 billion yuan, accounting for 52.2% of the total number of tourists and 41.7% of the total tourism revenue of the province respectively. To date, Jiangxi has 15 AAAAA rural tourism spots, 139 AAAA rural tourism spots, 45 tourist towns, 20 National Leisure Agriculture and Rural Tourism Demonstration Sites, and 64 national- and provincial-level Beautiful Villages.
Suggestions and Countermeasures for Development of Rural Tourism in Jiangxi Province in the All-for-one Tourism Era
With the transformation and upgrading of the tourism industry from traditional sightseeing to leisure vacationing, and with development requirements shifting from individual scenic spots to whole tourist destinations, all-for-one tourism represents a new concept and orientation for regional tourism development, which gives rural tourism development a new driving force.
Strengthen Characteristics and Enhance Attractiveness of Rural Tourism Products
In developing local tourism, the key is to take advantage of local resources and give full play to local tourism characteristics, so as to attract tourists' attention and further expand the appeal of the destination and the scale of visitation. In developing rural tourism, Jiangxi Province should give full play to the advantages of its local natural resources and make good use of the beautiful natural environment and rich ecological resources of its rural areas. At the same time, it should fully draw on its long history, profound cultural deposits and unique folk customs to reflect regional characteristics. Jiangxi must further develop the distinctiveness of its tourism products, combine them closely with culture, and design cultural tourism products with rich connotations, diverse forms and distinct regional characteristics. Rural areas should also build on their own characteristics and integrate into the overall environment of regional tourism development.
Fully Integrate Resources and Build Rural Tourism Industry System
Tourism resources are the core of tourism development. The development of local tourism is affected by many factors and should therefore be analyzed from a systematic perspective; only through industrial cooperation can a win-win situation be achieved. In developing rural tourism in Jiangxi Province, it is essential to fully integrate all kinds of tourism resources, deepen "rural tourism+", and fully integrate rural tourism with agriculture, sports, health, culture and industry, so as to cultivate characteristic forms such as sightseeing, leisure, experience, holidays and festivals and to extend the industry chain.
Strengthen Facilities Construction and Improve Quality of Rural Tourism
Infrastructure construction provides the basic conditions for the further development of tourism and helps attract more tourists. With the continuous development of rural tourism in Jiangxi Province, the demand for infrastructure is increasing. According to the actual situation of each region in Jiangxi Province, the development of rural tourism resources should be accompanied by continuous improvement of infrastructure such as transportation and accommodation, so as to provide convenient transport for tourists and smooth sales channels for tourism products. The construction of other facilities should be carried out simultaneously, such as renovating farmers' own houses and providing attractive, comfortable guest rooms so that tourists can experience rural folk customs at a leisurely pace. Only with improved infrastructure will tourists be motivated to carry out tourism activities in rural areas, thereby effectively driving economic development.
Care for Ecological and Environmental Protection to Achieve the Balance between Development and Protection
Tourism resources are the foundation of tourism development and utilization, yet they are affected and damaged to varying degrees in the course of development, which weakens their attraction to the tourism market and to tourists. Therefore, in developing rural tourism projects, the commitment to protecting the local ecological environment must always be upheld to ensure the sustainable development of rural tourism. The government should vigorously promote ecological protection, promulgate relevant laws and regulations, strengthen environmental-protection education among local residents, and strictly punish acts such as garbage dumping, sewage discharge, destruction of good farmland and damage to ancient buildings. Tourism developers must follow the principle of "no waste, no trend-chasing, no pollution and no destruction" and deepen their exploration of local characteristics and good social customs. Reckless demolition and reconstruction must be strictly prohibited, and the protection of the natural ecological environment and cultural landscape must be strengthened.
Conclusion
Jiangxi is rich in rural tourism resources, large in scale, and well endowed with natural advantages. In recent years, advantageous rural resources, a beautiful ecological environment, an extensive greenway network and a series of policy dividends have put rural tourism in Jiangxi on a fast track of development. Rural tourism development in Jiangxi should firmly grasp this opportunity, actively integrate into all-for-one tourism development, further integrate resources, optimize the industrial layout, promote the integrated development of rural tourism with other industries, and extend the industry chain. At the same time, rural tourism products should be developed according to local conditions, with a focus on culture, distinctiveness and differentiation, constantly enhancing the attraction of rural tourism. In addition, efforts should be made during development to protect the ecological environment, achieving both protection and development and a win-win between ecology and growth, so as to realize the healthy and sustainable development of rural tourism in Jiangxi Province.
Forensic analysis of mitochondrial DNA hypervariable regions HVII (encompassing nucleotide positions 37 to 340) and HVIII (encompassing nucleotide positions 438 to 574) and evaluation of the importance of these variable positions for forensic genetic purposes
The first objective of this study was the detection of variation in the mitochondrial hypervariable HVII and HVIII regions. Secondly, the study evaluates the importance of these positions for forensic genetic purposes and establishes the degree of variation characteristic of each fragment. Blood samples were collected from 270 healthy unrelated males living in the middle and south of Iraq. FTA® technology was utilized to extract DNA. A portion of the noncoding region encompassing positions 37 to 340 for HVII and positions 438 to 574 for HVIII was amplified in accordance with the Anderson reference sequence. The PCR products were purified using EZ-10 spin columns, sequenced, and detected using the ABI 3730xL Genetic Analyzer. New polymorphic positions 57, 63, 101, 469 and 482 are described that may be very important for forensic identification purposes in the future. This study shows the importance of adopting mitochondrial DNA analysis in forensic medicine and criminal investigation, and distinctive variation was found in the Iraqi population at the study sites. Further study on a larger number of samples from different Iraqi ethnic groups is suggested to confirm the results obtained here.
INTRODUCTION
The introduction of DNA fingerprinting by the English scientist Sir Alec Jeffreys in 1985 has had an enormous impact on forensic science (Jeffreys et al., 1985). Mammalian cells possess two different, interdependent genomes: the nuclear genome and the mitochondrial genome. Human DNA is basically composed of coding and non-coding regions. The coding region makes up only about 3% of human genomic DNA. Mitochondria are semi-autonomously functioning organelles containing a resident genome that undergoes replication, transcription and translation of its own DNA. Mitochondrial DNA is a small circle of DNA comprising about 37 genes, coding for 22 tRNAs, two rRNAs and 13 mRNAs (Helgason et al., 2003). MtDNA is passed on only from mother to child; it does not recombine and therefore, unlike nuclear DNA, does not change between parent and child (Ingman and Gyllensten, 2003; Ukhee et al., 2005; Imad et al., 2014a). There is more sequence divergence in mitochondrial than in nuclear DNA (Brown et al., 1993; Giulietta et al., 2000).
Each mitochondrion contains its own DNA, with many copies of the circular mitochondrial DNA in every cell. It is thought that each mitochondrion contains between 1 and 15 copies of the DNA, with an average of 4 to 5 (Reynolds, 2000), and there are hundreds, sometimes thousands, of mitochondria per cell. The result is that there are many thousands of copies of the mitochondrial DNA in every cell, compared with only two copies of nuclear DNA. The mitochondrion also has a strong protein coat that protects the mitochondrial DNA from degradation by bacterial enzymes, whereas the nuclear envelope is relatively weak and liable to degradation. DNA alterations (mutations) occur in a number of ways. One of the most common is during DNA replication, when an incorrect DNA base may be added; for example, a C is added instead of a G. This creates a single base change, or polymorphism, resulting in a new form. These single base mutations are rare, occurring about once every 1,200 bases in the human genome. The result is that the rate of change, or evolutionary rate, of mitochondrial DNA is about five times greater than that of nuclear DNA (Bar, 2000; Imad et al., 2014b). This is important in species testing, as even species thought to be closely related may in time accumulate differences in mitochondrial DNA while showing little difference in nuclear DNA. A further reason for the use of mitochondrial DNA in species testing, and in forensic science, is its mode of inheritance. Mitochondria exist within the cytoplasm of cells, including egg cells.
Spermatozoa do not normally pass on mitochondria and only pass on their nuclear DNA; the resulting embryo inherits all its mitochondria from its mother (Brown, 2002a,b; Tully, 2004; Imad et al., 2014c). This polymorphism allows scientists to compare mtDNA from crime scenes with mtDNA from given individuals to ascertain whether the tested individuals belong to the maternal line (or another coincidentally matching maternal line) of people who could have been the source of the trace evidence.
Genetic studies of the middle and south of Iraq using molecular markers of mitochondrial DNA (mtDNA) have attracted the interest of population geneticists (Al-Zahery et al., 2003; Nadia et al., 2011). Sequence analysis of the HV1 and HV2 fragments of mtDNA is today a routine method applied to forensic identification in cases where evidence specimens are not suitable for STR analysis.
Population
Two hundred and seventy (270) healthy, randomly chosen individuals were recruited from provinces in the middle and south of Iraq (Baghdad, Babil, Diwania and Basrah). The number and ethnicity of the individuals were chosen to obtain a population sample with the highest possible representation of the major ethnoreligious and tribal groups of the country living in these central and southern areas.
DNA extraction and PCR primers
DNA was extracted from all dried blood samples on FTA cards following the manufacturer's procedure as described in Whatman FTA Protocol BD01, except that the volume of Whatman FTA purification reagent was halved (Dobbs et al., 2002). A 1.2 mm diameter disc was punched from each FTA card with a puncher. The discs were transferred to new Eppendorf tubes and washed three times in 100 μl Whatman FTA purification reagent. Each wash was incubated for 5 min at room temperature with moderate manual mixing, and the reagent was discarded between washing steps. The discs were then washed twice in 200 μl TE buffer (10 mM Tris-HCl, 0.1 mM EDTA, pH 8.0), the buffer was discarded and the discs were left to dry at room temperature for 1 h.
The primers were designed manually using the Cambridge Reference Sequence. Each primer was diluted to a final concentration of 100 pmol/μl and kept at -20°C for long-term storage. A portion of the noncoding region encompassing positions 37 to 340 for HVII was amplified in accordance with the Anderson reference sequence (Anderson et al., 1981), GenBank: J01415, using two primers: HVII-F (37-58), 5'-CATTCTCATAATCGCCCACGG-3', and HVII-R (320-340), 5'-CCCCCCATCCTTACCACCCTC-3'. A portion of the noncoding region encompassing positions 438 to 574 for HVIII was amplified in accordance with the same reference sequence using two primers: HVIII-F (438-459), 5'-CAACTAACACATTATTTTCCCC-3', and HVIII-R (574-555), 5'-AACCCCAAAGACACCCCCCA-3'. The PCR reaction was carried out in 0.2 ml PCR tubes with the following mixture: 1 μl of each forward and reverse primer (10 pmol/μl), 2 μl of DNA template (5 ng/μl) and 46 μl of PCR ReddyRun™ Master Mix. The following PCR conditions were used: 94°C for 5 min; 30 cycles of 94°C for 30 s, 54°C for 30 s and 72°C for 45 s; and a final extension step at 72°C for 7 min. PCR products were kept at 4°C in a separate fridge from the pre-PCR components to avoid contamination.
Sequencing reaction of the PCR product
Purification of the PCR products was performed with the EZ-10 spin column DNA cleanup kit (100 preps). The PCR fragments were sequenced using the ABI Prism BigDye® Terminator Cycle Sequencing Kit on an ABI 377 sequencer. Each sequence obtained was then aligned with the Cambridge Reference Sequence.
Statistical analysis
The pattern of inheritance makes statistical analysis of mtDNA types much easier than for any other genetic marker.
Since mtDNA is present in each human being as a haploid genome, determination of the mtDNA type does not require the prerequisite of Hardy-Weinberg equilibrium for statistical analysis. Genetic diversity was calculated according to the formula

D = n(1 − Σ xᵢ²)/(n − 1),

where n is the sample size and xᵢ is the frequency of the i-th mtDNA type (Gu et al., 2001). The probability that two randomly selected individuals from the population have identical mtDNA types (the random match probability) was calculated as

P = Σ pᵢ²,

where the pᵢ are the frequencies of the observed haplotypes (Jones, 1972).
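As a concrete illustration of these two statistics, the following short Python sketch computes both from a table of haplotype counts; it is not part of the original study, and the counts below are invented.

```python
# Genetic diversity D and random match probability P from haplotype counts.
# Counts below are invented for illustration only.
from collections import Counter

counts = Counter({"H1": 12, "H2": 7, "H3": 5, "H4": 2})  # haplotype -> count
n = sum(counts.values())                                  # sample size
freqs = [c / n for c in counts.values()]                  # haplotype frequencies p_i

P = sum(p**2 for p in freqs)        # random match probability, P = sum(p_i^2)
D = n * (1 - P) / (n - 1)           # genetic diversity, D = n(1 - sum(x_i^2))/(n - 1)
print(f"D = {D:.3f}, P = {P:.3f}")
```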
Hypervariable region (HVIII) sequence variance and mtDNA haplotypes
The study enabled identification of 86 different haplotypes and 16 polymorphic nucleotide positions in HVIII (Table 3). Among these 16 variations, 11 (69%) were between T and C, 4 (25%) were between A and G, and just one (6%) was between T and A. Three polymorphic positions, 447, 453 and 469, showed transversion substitutions (Table 4). Genetic diversity for the analysed DNA fragments was calculated according to the formula D = 1 − Σp² and was 0.950 and 0.965 for HVII and HVIII respectively. A relatively high gene diversity and a relatively low random match probability were observed in this study.
Comparative analysis of our results with previously published Iraqi data (Pastore, 1994; Yang and Yoder, 1999; Muhanned et al., 2015) revealed significant differences in SNP patterns. The haplotypes detected in this study group were compared with those of other global populations: German (n = 200) (Lutz et al., 1998), US Caucasian (n = 604) and African (n = 111) (Budowle et al., 1999), Malaysian (n = 195) (Budowle et al., 1999) and Indian (n = 98) (Mountain et al., 1995) (Table 5). Walsh et al. (1991) and Tang (2002) showed that the polymorphism of the mtDNA coding region is lower than that of the mtDNA control region. Therefore, more efficient polymorphic sites should be used to provide improved discrimination power for forensic mtDNA testing (Imad et al., 2014c).
However, mtDNA data on the Iraqi population are very limited, which has restricted the application of mtDNA in forensic cases and the study of mtDNA population genetics in Iraq. In future, the development of more multiplexes targeting mtDNA polymorphisms within the control and coding regions might reduce the matching probability of mtDNA types and increase the utility of mtDNA in forensic casework.
Table 2. Types of mutations in variable positions for HVII.

Table 4. Types of mutations in variable positions for HVIII.

Table 5. Comparisons of the characteristics across the D-loop region in different human population groups.
Cultural sexual selection in monogamous human populations
In humans, both sexes sometimes show peculiar mating preferences that do not appear to increase their fitness either directly or indirectly. As humans may transmit their preferences and targets culturally, and these may be artificially modifiable, I develop theoretical models where a preference and/or a trait are culturally transmitted with a restriction on trait modification. I assume a monogamous population where some individuals fail to find a mate, and this affects the preference and the trait in the next time step. I show that a strong aversion to, or high tolerance of, failed individuals is necessary for the evolution of irrational preferences that neither seek good genes nor any direct benefit. This evolution is more likely to occur when the preference and/or the trait are cultural rather than genetic. These results may partly explain why humans sometimes show mating preferences for exaggerated physical and cultural traits.
Introduction
Many mating behaviours and the associated physical traits are considered to have evolved through sexual selection. The most studied are the exaggerated male traits, such as the plumage of the peacock, as well as the female mating preferences for such traits. The theory of sexual selection was first proposed by Darwin [1], who argued that exaggerated male traits could evolve if female mating preferences led to more charming males, with exaggerated traits, having more mates and more offspring, but he failed to explain why such preferences were adaptive. Fisher [2] suggested that a joint evolution of female mating preference and male trait would occur because of a genetic correlation between the two; this is known as the 'runaway process' or the 'sexy son hypothesis' [3], where females who prefer attractive males may produce attractive sons with a higher fitness than non-attractive males, and thus the female preference for the attractive male trait increases in frequency. Zahavi [4] considered that males take the risk of displaying exaggerated traits to advertise their quality and that females prefer more exaggerated males, who will produce higher quality offspring; this is the 'handicap hypothesis'. Empirical studies and theoretical models have tested these hypotheses and confirmed that sexual selection may have played an important role in the evolution of many animal traits [5][6][7].
However, theoretical models of sexual selection for non-human animals do not apply to humans without modification because of the unique characteristics of human mating behaviour. Although many human behaviours are unique, this study focuses on only three characteristics: monogamy, male mating preference and cultural transmission. As some animals also display these characteristics, how each one affects the sexual selection process has partly been studied, in theory, but there are no models that simultaneously assume these three characteristics.
Although polygynous marriage is not totally prohibited in many (traditional) human societies, there are limited numbers of males with more than one wife, and the number of wives is often limited [8]. Strong monogamous marriage norms have spread across Westernized societies, probably because cultural evolution has favoured normative monogamy [9]. Humans have smaller testes than chimpanzees, which form promiscuous multi-male-multi-female groups [10], implying that human (hominid) societies have long been semi-monogamous [11,12]. In monogamous species, attractive (popular) males have little reproductive advantage because they mate with, at most, one female, thus limiting the number of offspring, and choosy females are maladaptive because they have to compete intensely with other females for attractive males who may have many candidates as a mate, suggesting that exaggerated male traits hardly evolve through sexual selection compared with polygynous species. In fact, theoretical models have shown that exaggerated male traits may evolve only when more attractive males can mate earlier, when there is a shortage of available female partners, or when variance in female quality is large, i.e. only when attractive males have a sufficiently large advantage [13,14]. In short, previous theoretical models have suggested that sexual selection by female choice has not strongly influenced the evolution of human-male traits compared with polygynous species.
As the number of offspring is limited in (mammalian) females (the 'Bateman's principle' [15]), attractive (popular) females have little reproductive advantage, and choosy males are maladaptive because of intense intrasexual competition for attractive females, resulting in exaggerated female traits hardly evolving through sexual selection compared with exaggerated male traits. In fact, males are generally more decorative than females in many polygynous species, although there is increasing evidence of male mate choice for female traits [16][17][18]. As the situation is similar to monogamous male traits, theoretical models have shown that exaggerated female traits may evolve (i.e. a runaway process may occur) when there is a shortage of available male partners or when the variance in male quality is large [14,19]. Moreover, there must be a disagreement between the fertility optimum and the viability optimum of the female trait for male choosiness to evolve, and then males tend to prefer more fertile females who have more feminine traits (less resemblance to males), because choosy males can obtain a direct benefit through mating with more fertile females [19]. These theoretical results may explain many human-male mating preferences for more feminine female traits such as larger breasts [20], a lower waistto-hip ratio [21], a more feminized face [22] and lighter-than-average skin colour [23], although whether these preferences are adaptive is controversial [24]. However, although most male mating preferences may be associated with female fertility, males sometimes show seemingly irrational preferences, which is difficult to explain based on previous theoretical models.
Human behaviour is often influenced by culture, which is one of our most unique characteristics and forms the basis of our great prosperity. It may be difficult to identify a human complex behaviour that is completely independent of culture or social learning. Human mating behaviour is no exception; e.g. we dress in fashionable clothes to attract the opposite sex. Foot binding in ancient China and neck rings in the Kayan are extreme examples of this. Since so-called modern behaviour is often characterized by artistic ornaments, such as shell beads and the use of ochre [25][26][27], mating behaviours of early modern humans and possibly Neanderthals may have been influenced by cultural transmission, although the evolutionary cause and actual uses of these cultural traits remain controversial [28][29][30][31][32][33][34]. As cultural preferences are affected by our social learning tendencies and cultural traits are artificially modifiable, their coevolutionary processes should differ greatly from the genetic processes. Although some theoretical models have assumed cultural preferences [35,36] and traits [37][38][39], they have considered the coevolution of female preference and male trait in polygynous species. Therefore, these models do not explain why male preferences for exaggerated female traits sometimes evolve in monogamous human populations.
Here, I formulate and analyse models of sexual selection in monogamous species where a trait and a preference are continuously distributed and culturally transmitted. I assume that some individuals do not have a mate during each breeding season because of a shortage of available mating partners, and this affects the expression of trait and preference in the next time step (generation). I also consider the case when a trait or a preference is genetically transmitted. I assume that the mating system of the population is completely monogamous to ensure that the model applies to both female preferences for male traits and male preferences for female traits (but the model may apply to polygynous mating systems with minor modifications; see Discussion). Although a monogamous human population is implicitly assumed, the model may also apply to other animals, such as monogamous birds with culturally transmitted songs that attract the opposite sex. The model may also apply to human fashion cycles.
Cultural trait and cultural preference
In the first model, we assume that both a mating preference and a target trait are culturally transmitted. The preference, y, is expressed only in the 'choosy sex', and the trait, z, is present only in the 'decorative sex'. Here, we do not specify which sex (male or female) is choosy or decorative because completely monogamous species are considered. The assumption of the mating system is explained later.
On an appropriate scale of measurement, the preference and the (matured) trait are assumed to have normal distributions, p(y) and t(z), with means ȳ and z̄ and variances τ² and σ², respectively. Formally, the distribution of the preference is

p(y) = exp(−(y − ȳ)²/(2τ²)) / √(2πτ²),

and that of the trait is

t(z) = exp(−(z − z̄)²/(2σ²)) / √(2πσ²).

The mating preference of a choosy individual with preference y for a decorative individual with trait z is denoted as ψ(z|y). In this model, we assume the absolute preference [40],

ψ(z|y) = exp(−(z − y)²/(2ν²)).

This implies that a choosy individual with preference y most prefers a decorative individual with trait value y, and the preference decreases as the deviation from y increases. The choosiness is milder when ν² is larger. The following mating system is assumed. In each breeding season, individuals of the decorative sex display their traits to attract individuals of the choosy sex. Each choosy individual observes the traits of decorative individuals and courts one individual depending on his/her mating preference. Each decorative individual randomly selects one mating partner from the choosy individuals who court him/her, but if no individuals court him/her, he/she has no mate during that breeding season. Changes seen in the results when some individuals can have multiple mates will be discussed later.
Then the probability that a choosy individual with preference y courts a decorative individual with trait z, relative to all the decorative individuals, is

φ(z|y) = ψ(z|y) t(z) / ∫ ψ(ζ|y) t(ζ) dζ,

and the probability that a z individual has a mate is denoted U(z). Assuming that decorative individuals differ little in their popularity, i.e. U(z) ≈ 1, we can approximate U(z); this approximation is justified in appendix A. Individuals of the decorative sex can be classified into two groups, those with and those without a mate in the breeding season. These are the 'popular' and 'unpopular' individuals, respectively. The average trait of popular individuals (popular trait) is

z̄_M = ∫ z t(z) U(z) dz / ∫ t(z) U(z) dz,

and that of unpopular individuals (unpopular trait) is

z̄_L = ∫ z t(z) (1 − U(z)) dz / ∫ t(z) (1 − U(z)) dz.   (2.11)

The proportion of popular individuals is

P_M = ∫ t(z) U(z) dz,

and that of unpopular individuals is P_L = 1 − P_M. Individuals of the decorative sex in the next generation (or the next breeding season) observe the traits of popular and unpopular individuals and form their 'ideal' traits. Assuming that the ideal trait of each individual is the weighted mean of the observed popular and unpopular traits, the ideal trait of individual i is

z_i^I = a z̄_i^M + (1 − a) z̄_i^L,

where z̄_i^M and z̄_i^L are the average popular and unpopular traits observed by individual i, respectively. Parameter a describes the effect of each average trait (a > 0). When 0 < a ≤ 1, as a increases, the ideal trait approaches the average popular trait from the average unpopular trait. When a > 1, as a increases, the ideal trait deviates from the average popular trait to the opposite side of the average unpopular trait. In other words, parameter a describes an aversion to unpopular individuals.
Although all individuals observe the traits in the same population, they may sample different individuals and incur observation errors. Therefore, the ideal traits of decorative individuals may be normally distributed as

I(z) = exp(−(z − z̄_I)²/(2κ²)) / √(2πκ²),

where z̄_I = a z̄_M + (1 − a) z̄_L and κ reflects the sampling and observation errors. A small κ also implies a strong conformist bias in social learning, because conformity should decrease the variance.
If there is a restriction on decorating (modifying) the trait, the difficulty increases as the deviation from the plain trait θ (the easiest trait to express) increases. Formally, the ease with which trait z can be expressed is

e(z) = exp(−(z − θ)²/(2ω²)).

The restriction is weaker when ω² is larger. In other words, each individual originally (genetically) expresses trait θ and must make an effort to attain the ideal trait (in §2.3, we consider a model in which the genetic trait also coevolves). This restriction is similar to viability selection in genetic sexual selection models. As shown in appendix B, assuming κ²ω²/(κ² + ω²) = σ², the average trait in the next generation (breeding season) is

z̄′ = (ω² z̄_I + κ² θ)/(κ² + ω²).   (2.18)

Note that the average ideal trait equals the average trait of decorative individuals when a = P_M = r/(r + R).
Similarly, individuals of the choosy sex in the next generation (breeding season) observe the traits of popular and unpopular individuals and form the ideal traits to court. Note that the preferences of choosy individuals are invisible, so only the traits of the decorative sex affect the preferences in the next generation. As humans sometimes show so-called mate choice copying [41], and a popular mate is sometimes regarded as a status symbol, choosy individuals may prefer popular traits rather than unpopular ones. Here, we assume that choosy individual i has the ideal trait to court

ȳ_i^I = b z̄_i^M + (1 − b) z̄_i^L.

In contrast with trait expression, there is no restriction on preference; e.g. choosy individuals can prefer 'Prince Charming'. Therefore, as shown in appendix B, assuming that sampling and observation errors entail variance τ², the average preference in the next generation (breeding season) is

ȳ′ = b z̄_M + (1 − b) z̄_L.   (2.20)

Note that the average trait of decorative individuals is ideal for choosy individuals when b = r/(r + R). From (2.17) and (2.20), a recursive matrix can be drawn. The equilibrium is (z̄, ȳ) = (θ, θ), i.e. average decorative individuals show the easiest trait to express, and this trait is the most preferred by average choosy individuals. The equilibrium is stable when and only when certain conditions on a and b are satisfied. Figure 1 shows the area where the equilibrium is stable in the (a, b)-parameter space, and figure 2 shows the evolutionary trajectories under several conditions in the (z̄, ȳ)-parameter space. When choosy individuals show a sufficiently strong aversion to unpopular individuals (large b), the equilibrium is unstable regardless of the properties of decorative individuals, and exaggerated traits and preferences eventually evolve. In this case, the equilibrium is more likely to be unstable as r increases when s ≤ 1/2. Even when 0 ≤ a ≤ 1 and 0 ≤ b ≤ 1, the equilibrium can be unstable.
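To make the coupled dynamics concrete, the following Python sketch iterates a discretized version of the recursions above. It is our own illustration, not the author's code: the Poisson form assumed for the mate probability U(z), the parameter values and the grid are all guesses filling gaps in the extracted text.

```python
# Numerical sketch of the coupled cultural recursion for the mean trait and the
# mean preference (illustrative only; not the author's code).
import numpy as np

theta = 0.0
sigma2, tau2, nu2, omega2 = 1.0, 1.0, 0.5, 4.0
kappa2 = sigma2 * omega2 / (omega2 - sigma2)  # so kappa2*omega2/(kappa2+omega2) = sigma2
a, b, r = 1.2, 1.4, 1.0                       # aversion parameters; r = relative number of choosy sex
z = np.linspace(-10.0, 10.0, 1201)
dz = z[1] - z[0]

def normal(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

zbar, ybar = 0.3, 0.0                         # start slightly perturbed from theta
for _ in range(50):
    t = normal(z, zbar, sigma2)               # trait distribution t(z)
    p = normal(z, ybar, tau2)                 # preference distribution p(y)
    psi = np.exp(-(z[None, :] - z[:, None]) ** 2 / (2.0 * nu2))  # psi[y, z]
    phi = psi * t[None, :]
    phi /= phi.sum(axis=1, keepdims=True)     # courting probability phi(z|y)
    court = (p[:, None] * phi).sum(axis=0) * dz                  # courtships per z bin
    U = 1.0 - np.exp(-r * court / np.maximum(t * dz, 1e-300))    # assumed Poisson mate probability
    wM, wL = t * U, t * (1.0 - U)
    zM = (z * wM).sum() / wM.sum()            # average popular trait
    zL = (z * wL).sum() / wL.sum()            # average unpopular trait
    zI = a * zM + (1.0 - a) * zL              # ideal trait of the decorative sex
    zbar = (omega2 * zI + kappa2 * theta) / (kappa2 + omega2)    # mean-trait recursion
    ybar = b * zM + (1.0 - b) * zL            # mean-preference recursion
print(f"after 50 steps: mean trait = {zbar:.3f}, mean preference = {ybar:.3f}")
```

With these aversion parameters (a = 1.2, b = 1.4) the pair (z̄, ȳ) drifts away from (θ, θ), illustrating the destabilization discussed above; values of a and b inside the unit interval with similar ideal traits keep the pair at the plain trait.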
Genetic trait and cultural preference
Next, we consider a cultural mating preference for a genetic trait. The fitness (relative number of offspring) of a decorative individual without a mate in the breeding season (initially) is 1 − c (0 < c ≤ 1); i.e., even if he/she acquires a mate later on, the number of offspring is smaller than for an individual with a mate acquired earlier, because of the late mating. Then, as shown in appendix D, the average trait of decorative individuals in the next generation before viability selection is a weighted mean of the average popular and unpopular traits, with weights depending on c and on the (narrow-sense) heritability h_t² of the trait (0 < h_t² ≤ 1). When the viability of a z individual is Gaussian around the plain trait θ, the model is equivalent to the previous model under a suitable redefinition of a (2.26). Then, as 1 − exp(−r) < a < 1, i.e. z̄ < z̄_I < z̄_M or z̄_M < z̄_I < z̄, the equilibrium (z̄, ȳ) = (θ, θ) is stable, provided choosy individuals have the ideal trait that lies between the average trait and the average popular trait, i.e. z̄ ≤ ȳ ≤ z̄_M or z̄_M ≤ ȳ ≤ z̄.
Genetic-cultural trait and cultural preference
Third, we consider a cultural mating preference for a trait that is affected by both genes and culture. For example, skin colour is genetically transmitted, but it can be (slightly) modified by avoiding sunlight or by applying powder/cream. In this model, we assume the following order of life-history events: (i) birth, (ii) genetic trait expression, (iii) viability selection, (iv) trait modification, (v) mating and (vi) reproduction. The genetic trait x (after viability selection) is assumed to be normally distributed (2.28). Assuming a restriction on modifying the trait, the ease of expressing trait z from genetic trait x is

e(z|x) = exp(−(z − x)²/(2ω²)).   (2.29)

Other assumptions remain the same as in the first and second models. Then, as shown in appendix E, the equilibrium (z̄, ȳ, x̄) = (θ, θ, θ) is stable, provided both choosy and decorative individuals have the ideal trait that lies between the average trait and the average popular trait, i.e. z̄ ≤ z̄_I ≤ z̄_M and z̄ ≤ ȳ ≤ z̄_M if z̄ ≤ z̄_M, or z̄_M ≤ z̄_I ≤ z̄ and z̄_M ≤ ȳ ≤ z̄ if z̄_M < z̄.
Cultural trait and genetic preference
Fourth, we consider a genetic mating preference for a cultural trait, with assumptions as in the second model. As shown in appendix F, the average preference in the next generation changes in proportion to the (narrow-sense) heritability h_p² of the preference (0 < h_p² ≤ 1), and the model is equivalent to the first model under a suitable redefinition of b.
Genetic trait and genetic preference
Fifth, we consider a genetic mating preference for a genetic trait, which is equivalent to [19]. In this case, although both 1 − exp(−r) < a < 1 and 1 − exp(−r) < b < (1 + sr)/(s(R + r)) are always satisfied, the equilibrium (z̄, ȳ) = (θ, θ) can be unstable because of the genetic covariance between the preference and the trait, which can be regarded as a runaway process. As shown in [19], a runaway process is more likely to occur when ω² is large, ν² is small, h_t² and h_p² are large and r is small, i.e. when viability selection is weak, choosiness is strong, the heritabilities of the trait and the preference are large, and the relative number of choosy individuals is small. Interestingly, the effect of the relative number of choosy individuals on the stability of the equilibrium is opposite to that in the cultural model.
Discussion
Ever since the time of Darwin [1], human sexual selection has been hotly debated but, unfortunately, many arguments are theoretically invalid and some are simply wrong. For example, although sexual selection should increase the sexual dimorphism of the target trait [19], sexual selection theory has been applied to many sexually monomorphic human traits (abilities), such as bipedalism, intelligence, creativity, language, art and music [42][43][44]. Many arguments have also implicitly assumed 'everlasting' mating preferences for target traits and have not explained the evolution of these preferences, i.e. the same mistake made by Darwin [1] initially. As mate choice cannot evolve if it is costly without compensating benefits [45], and both male and female choosiness should be costly in semi-monogamous human populations because of mate competition, a direct or indirect benefit of choosiness that counterbalances cost is necessary for the evolution of preferences. Although choosy individuals may obtain a benefit if the trait honestly signals the bearers' quality, costly preferences never evolve unless there is a mechanism that maintains the honesty of signals, such as resistance to parasites [46], mutation bias [45] and a disagreement between fertility and viability optimums of the trait [19]. Therefore, even if a human-specific trait seems to contribute to mate acquisition, it cannot be concluded that the trait evolved through sexual selection. Nevertheless, we cannot simply consider that sexual selection has had little effect on human evolution because human behaviour is often culturally transmitted, which may change the results of general sexual selection theories that assume genetic traits and preferences.
In this study, I have theoretically analysed sexual selection models where a trait and/or a preference are culturally transmitted. Although humans adopt various social learning strategies [47], the most often observed tendencies may be success bias (copying successful individuals) and conformist bias (copying the majority), which may have evolved as an adaptation to temporally and spatially varying environments [48,49]. As these biases are also observed in human mating preferences [41,50], it is assumed that each individual of the choosy and decorative sexes uses the information about who succeeded or failed to find a mate during the breeding season, and adopts the weighted average trait as the ideal trait. Success bias is strong when an aversion to unsuccessful individuals is strong (large a and b) and conformist bias is strong when the trait and the preference distribute narrowly (small σ 2 and τ 2 ). It has also been assumed that there is a restriction on modifying the plain (genetic) trait (i.e. the easiest trait to express). Note that when a trait is culturally transmitted, reproductive success of individuals does not directly affect the generational change of the trait, but we can regard the process as sexual selection because mating success of individuals has the main influence on the trait evolution.
The presence of a unique equilibrium has been demonstrated where both the average trait of the decorative sex and the most preferable trait for the choosy sex are plain. Therefore, when this equilibrium is stable, the trait never evolves to be exaggerated. In other words, exaggerated traits and peculiar preferences can be observed only when sexual selection destabilizes the equilibrium favoured by natural (non-sexual) selection. As expected, when the restriction of trait modification is weak (ω 2 is large) and choosiness is strong (ν 2 is small), the equilibrium is more likely to be unstable. However, the equilibrium is always stable, provided that both the choosy and decorative sexes have the ideal trait that lies between the average trait and the average popular trait (average trait of individuals with a mate). In other words, a strong aversion to, or high tolerance of, unsuccessful individuals is necessary for exaggerated traits and peculiar preferences to evolve through cultural sexual selection. This result is unchanged even if we assume that either the trait or the preference is genetically transmitted.
The most important difference between genetic and cultural transmission is the restriction on the range of trait and preference values in the next generation (time step). When the trait or the preference is genetically transmitted, it should lie within a certain range in the next generation, while there is no restriction in cultural transmission. Therefore, we can suppose that genetic traits and preferences are less likely to be exaggerated (peculiar) compared with cultural traits. In humans, peculiar mating preferences, such as those for small feet (foot binding) or long necks (neck rings), are sometimes observed, and these may be transmitted culturally. On the other hand, many globally observed mating preferences, such as male preferences for female traits indicating reproductive capacity, are probably genetic (instinctive) and can be regarded as the equilibrium in genetic models [19]. In short, irrational preferences may seldom evolve genetically but sometimes arise culturally and, therefore, it is important to dismiss the preconception that every human mating preference aims at gaining good genes or a direct benefit.
Humans often give greater weight to negative entities than positive ones, known as 'negativity bias' [51]. This bias may cause the coevolution of exaggerated traits and peculiar preferences because a strong aversion to unsuccessful individuals destabilizes the equilibrium. In other words, once a trait accidentally becomes unpopular, it would be more unpopular in the next time step because individuals dislike unpopular traits, which favours exaggerated traits that are dissimilar to the unpopular trait. It should be noted that this acceleration occurs regardless of any learning bias of the decorative sex if the choosy sex has a sufficiently strong aversion to unpopular traits.
When individuals of the choosy sex are realists and prefer the unpopular traits to avoid mate competition with other individuals (b is small), the equilibrium can also be unstable. In this case, the trait and the preference oscillate to become exaggerated (figure 2). A similar destabilization occurs when the decorative sex has sympathy for unsuccessful individuals and thus expresses similar traits to them (a is small). In these cases, the destabilization is more likely to occur when the opposite sex prefers popular traits, i.e. disagreement between male and female ideal traits is important for destabilization. Interestingly, the destabilization can occur even when both choosy and decorative sexes have the ideal trait that lies between the average popular and unpopular traits (i.e. 0 ≤ a ≤ 1 and 0 ≤ b ≤ 1). In other words, even when male and female ideal traits are not extreme, the joint acceleration of trait and preference can occur if there is a large sexual difference in ideal traits. In humans, male preferences are often incomprehensible to females and vice versa [52,53], which may lead to the emergence of exaggerated traits and peculiar preferences. This situation may also apply to human fashion cycles, where fluctuations in popularity of cultural traits are often observed and may be caused by positive and negative preferences for major cultural traits [54].
Although the model assumes a completely monogamous population, this assumption can easily be relaxed to apply the model to a polygynous mating system. When females are choosy, and assuming that each male accepts every female who courts him, the mating system becomes one of polygyny. Then, if males with at least one mate and males without a mate are regarded as popular and unpopular, respectively, the model does not essentially change, provided that both the trait and the preference are culturally transmitted. Even if the trait or the preference is genetically transmitted, the model is unchanged if males with multiple mates have a smaller number of offspring by one mate than males with only one mate (this affects only parameter c). When males are choosy, and assuming that each male courts multiple (a fixed number of) females and each female randomly selects a mate from the males who court her, this is a polygynous mating system, but the model is essentially unchanged, provided that every female with a mate has the same number of offspring (even this provision is unnecessary when both the trait and the preference are culturally transmitted). In this case, we may redefine parameter r as the product of the adult sex ratio (male/female) and the number of females each male courts. It should be noted that the number of females without a mate may be smaller than that of males without a mate in polygynous populations, provided that the adult sex ratio is almost one, i.e. r is larger when males are choosy than when females are choosy. Therefore, we may suppose that exaggerated female cultural traits are more likely to evolve through sexual selection than those of the male because the equilibrium is more likely to be unstable when r increases. This may explain cultural evolution of female costly body modifications such as foot binding and neck rings. In short, although the model assumes monogamy, it is applicable to polygynous populations if minor modifications are made.
In the model, exaggerated traits are more likely to evolve when success bias is strong (a and b are large) and conformist bias is weak (σ 2 is large), which is similar to cultural evolution models [30,32,55,56] where strong success bias and weak conformist bias accelerate cultural evolution. This implies that our social learning strategy has strongly influenced cultural evolution regardless of the role of cultural traits. Therefore, if we assume that the emergence of modern behaviour was caused by a change in hominid social learning strategies (abilities), we may explain why functional tools, such as microliths, and artistic ornaments, such as shell beads, were associated with modern behaviour. The present model also shows that a strong aversion to unsuccessful individuals leads to the emergence of exaggerated traits, which is also similar to previous cultural evolution models [30,32] that suggested that a strong aversion to Neanderthal culture may have caused the so-called 'artistic explosion' in European Upper Palaeolithic modern humans, i.e. certain artistic behaviours of modern humans emerged first in Europe rather than Africa [57]. In short, although the model considers cultural sexual selection, the results may apply to various cultural traits that are not associated with mating behaviours.
In conclusion, when the trait and the preference are culturally transmitted, they are more likely to be exaggerated (peculiar) compared with when they are genetically transmitted. However, when there is a strong restriction of trait modification, decorative individuals express the plain (genetic) trait, which is the most preferred by choosy individuals. A strong aversion to, or high tolerance of, individuals without a mate is necessary for exaggerated traits and peculiar preferences to coevolve. Although the model assumes cultural transmission of trait and preference in monogamous populations, it is applicable to genetic transmission and polygynous populations following the implementation of some minor modifications.
Data accessibility. This study does not use any data. Competing interests. I declare I have no competing interests. Funding. This research was supported in part by JSPS KAKENHI grant numbers JP16K07510 to W.N. and JP25118006 to Hisashi Ohtsuki.
Locally compact subgroup actions on topological groups
Let $X$ be a Hausdorff topological group and $G$ a locally compact subgroup of $X$. We show that $X$ admits a locally finite $\sigma$-discrete $G$-functionally open cover each member of which is $G$-homeomorphic to a twisted product $G\times_H S_i$, where $H$ is a compact large subgroup of $G$ (i.e., the quotient $G/H$ is a manifold). If, in addition, the space of connected components of $G$ is compact and $X$ is normal, then $X$ itself is $G$-homeomorphic to a twisted product $G\times_KS$, where $K$ is a maximal compact subgroup of $G$. This implies that $X$ is $K$-homeomorphic to the product $G/K\times S$, and in particular, $X$ is homeomorphic to the product $\Bbb R^n\times S$, where $n={\rm dim\,} G/K$. Using these results we prove the inequality $ {\rm dim}\, X\le {\rm dim}\, X/G + {\rm dim}\, G$ for every Hausdorff topological group $X$ and a locally compact subgroup $G$ of $X$.
Introduction
By a G-space we mean a completely regular Hausdorff space together with a fixed continuous action of a given Hausdorff topological group G on it.
The notion of a proper G-space was introduced in 1961 by R. Palais [22] with the purpose of extending a substantial portion of the theory of compact Lie group actions to the case of noncompact ones.
Recall that a G-space X is called proper (in the sense of Palais [22, Definition 1.2.2]), if each point of X has a so-called small neighborhood, i.e., a neighborhood V such that every point of X has a neighborhood U with the property that the set ⟨U, V⟩ = {g ∈ G | gU ∩ V ≠ ∅} has compact closure in G.
Clearly, if G is compact, then every G-space is proper. Important examples of proper G-spaces are the coset spaces G/H where H is a compact subgroup of a locally compact group G.
In [6] we have shown that if G is a locally compact subgroup of a Hausdorff topological group X, then X is a proper G-space with respect to the natural action g * x = xg⁻¹ (or, equivalently, g · x = gx) of G on X. Based on the theory of proper actions, we proved in [6] that many topological properties (among which are normality and paracompactness) are transferred from X to its quotient space X/G. This paper is a continuation of [6]; here we further apply the results and methods of the theory of proper actions in the study of topological groups with respect to the natural translation action of a locally compact subgroup.
Below all topological groups are assumed to satisfy at least the Hausdorff separation axiom.
All the notions involved in the formulations of the following main results are defined in the next section.

Theorem 1.1. Let X be a topological group and G a locally compact subgroup of X. Then there exists a compact subgroup H ⊂ G such that: (1) the quotient G/H is a manifold; (2) there exists a locally finite σ-discrete cover of X consisting of G-functionally open sets W i such that each closure W i is a G-tubular set with the slicing subgroup H (that is to say, there exists a G-equivariant map W i → G/H); (3) each W i is G-homeomorphic to a twisted product G × H S i .
We recall that a locally compact group is called almost connected if its space of connected components is compact.

Theorem 1.2. Let X be a topological group, G an almost connected subgroup of X, and K a maximal compact subgroup of G. Then there exists a locally finite σ-discrete cover of X consisting of G-functionally open sets W i such that the closure W i is a G-tubular set with the slicing subgroup K (that is to say, there exists a G-equivariant map W i → G/K). In particular, each W i is homeomorphic to a product G/K × S i .

Theorem 1.3. Let X be a normal topological group, G an almost connected subgroup of X, and K a maximal compact subgroup of G. Then there exists a global K-slice S in X (equivalently, there exists a G-equivariant map X → G/K). In particular, X is homeomorphic to a product G/K × S.
Combining Theorem 1.3 with a result of Abels [1, Theorem 2.1], we obtain the following:

Corollary 1.4. Let X be a normal topological group, G an almost connected subgroup of X, and K a maximal compact subgroup of G. Then there exists a K-invariant subset S ⊂ X such that X is K-homeomorphic to the product G/K × S endowed with the diagonal action of K defined as follows: k * (gK, s) = (kgK, ks). In particular, X is homeomorphic to R n × S, where n = dim G/K.

Corollary 1.5. Let X be a normal topological group, G an almost connected subgroup of X, and X/G the quotient space of all right cosets xG = {xg | g ∈ G}, x ∈ X. Then there exists a closed subset S ⊂ X such that the restriction p| S : S → X/G is a perfect, open, surjective map.

This fact has the following two immediate corollaries about transfer of properties from X to X/G and vice versa.

Corollary 1.6. Let P be a topological property stable under open perfect maps and also inherited by closed subsets. Assume that X is a normal topological group with the property P and let G be an almost connected subgroup of X. Then the quotient space X/G also has the property P.
Among properties stable under open perfect maps and also inherited by closed subsets we highlight strong paracompactness and realcompactness (see [12,Exercises 5.3.C(c), 5.3H(d), and Theorem 3.11.4 and Exercises 3.11.G]). Corollary 1.7. Let P be a topological property that is both invariant and inverse invariant under open perfect maps, and also stable under multiplication by a locally compact group. Assume that X is a normal topological group and let G be an almost connected subgroup of X such that the quotient space X/G has the property P. Then the group X also has the property P.
Among such properties we highlight realcompactness (see [12,Theorem 3.11.14 and Exercise 3.11.G, and also take into account that every locally compact group is realcompact]).
Note that A. V. Arhangel'skii [8] has studied properties which are transferred from X/G to X for an arbitrary topological group X and its locally compact subgroup G; see also [6, Corollary 1.10].

Theorems 1.1 and 1.3 are further applied to prove the following Hurewicz type formula in dimension theory which, for a paracompact group X and an almost connected subgroup G ⊂ X, was proved in [6, Theorem 1.12]:

Theorem 1.8. Let G be a locally compact subgroup of a topological group X. Then dim X ≤ dim X/G + dim G (cf. [24]).

Remark 1.9. If in this theorem X is a locally compact group then, in fact, equality holds: dim X = dim X/G + dim G.

All the proofs are given in Section 3.
Preliminaries
Unless otherwise stated, by a group we shall mean a topological group G satisfying the Hausdorff separation axiom; by e we shall denote the unity of G.
All topological spaces are assumed to be Tychonoff (= completely regular and Hausdorff). The basic ideas and facts of the theory of G-spaces or topological transformation groups can be found in G. Bredon [10] and in R. Palais [21]. Our basic reference on proper group actions is Palais' article [22] (see also [1], [2], [11]).
For the convenience of the reader we recall, however, some more special definitions and facts below.
By a G-space we mean a topological space X together with a fixed continuous action G × X → X of a topological group G on X. By gx we shall denote the image of the pair (g, x) ∈ G × X under the action.
If Y is another G-space, a continuous map f : X → Y is called a G-map or an equivariant map, if f (gx) = gf (x) for every x ∈ X and g ∈ G.
If X is a G-space, then for a subset S ⊂ X and for a subgroup H ⊂ G, the H-hull (or H-saturation) of S is defined as follows: H(S) = {hs | h ∈ H, s ∈ S}. If S is the one-point set {x}, then the G-hull G({x}) is usually denoted by G(x) and called the orbit of x. The orbit space X/G is always considered in its quotient topology.
A subset S ⊂ X is called H-invariant if it coincides with its H-hull, i.e., S = H(S). By an invariant set we shall mean a G-invariant set.
For any x ∈ X, the subgroup G x = {g ∈ G | gx = x} is called the stabilizer (or stationary subgroup) at x.
A compatible metric ρ on a metrizable G-space X is called invariant or G-invariant, if ρ(gx, gy) = ρ(x, y) for all g ∈ G and x, y ∈ X. If ρ is a G-invariant metric on a G-space X, then it is easy to verify that the formula ρ̃(G(x), G(y)) = inf{ρ(x, gy) | g ∈ G} defines a pseudometric ρ̃ on the orbit space X/G; in particular, X/G is pseudometrizable. For a closed subgroup H ⊂ G, by G/H we will denote the G-space of cosets {gH | g ∈ G} under the action induced by left translations.
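As a sketch of the verification that ρ̃ is indeed a pseudometric (using only the invariance of ρ): symmetry follows from

\[
  \tilde\rho\bigl(G(x),G(y)\bigr)=\inf_{g\in G}\rho(x,gy)
   =\inf_{g\in G}\rho(g^{-1}x,y)=\tilde\rho\bigl(G(y),G(x)\bigr),
\]

and for the triangle inequality one notes that, for all g, h ∈ G,

\[
  \rho(x,ghz)\le\rho(x,gy)+\rho(gy,ghz)=\rho(x,gy)+\rho(y,hz),
\]

so taking infima over g and h yields ρ̃(G(x), G(z)) ≤ ρ̃(G(x), G(y)) + ρ̃(G(y), G(z)).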
A locally compact group G is called almost connected, if the space of connected components of G is compact. Such a group has a maximal compact subgroup K, i.e., every compact subgroup of G is conjugate to a subgroup of K (see [26,Ch. H,Theorem 32.5]).
In what follows we shall also need the definition of a twisted product G × K S, where K is a closed subgroup of G, and S a K-space. G × K S is the orbit space of the K-space G × S on which K acts by the rule: k(g, s) = (gk⁻¹, ks); the K-orbit of a pair (g, s) is denoted by [g, s]. Furthermore, there is a natural action of G on G × K S given by g′[g, s] = [g′g, s]. The twisted products are of particular interest in the theory of transformation groups (see [10, Ch. II, § 2]). It turns out that every proper G-space locally is a twisted product. For a more precise formulation we need to recall the following well-known notion of a slice (see [22, p. 305]):

Definition 2.1. Let X be a G-space and K a closed subgroup of G. A K-invariant subset S ⊂ X is called a K-kernel if there exists a G-equivariant map f : G(S) → G/K, called the slicing map, such that S = f⁻¹(eK). The saturation G(S) is called a G-tubular or just a tubular set, and the subgroup K will be referred to as the slicing subgroup.
If in addition G(S) is open in X then we shall call S a K-slice in X.
It turns out that each tubular set G(S) with a compact slicing subgroup K is G-homeomorphic to the twisted product G × K S; namely, the map ξ : G × K S → G(S) defined by ξ[g, s] = gs is a G-homeomorphism. In what follows we will use this fact without a specific reference.
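For completeness, a one-line sketch of why ξ is well defined on K-orbits and G-equivariant (here [g, s] denotes the K-orbit of (g, s)):

\[
  \xi[gk^{-1},ks]=(gk^{-1})(ks)=gs=\xi[g,s]\quad(k\in K),
  \qquad
  \xi\bigl(g'[g,s]\bigr)=\xi[g'g,s]=(g'g)s=g'\,\xi[g,s]\quad(g'\in G).
\]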
One of the fundamental results in the theory of topological transformation groups states (see [22,Proposition 2.3.1]) that, if X is a proper G-space with G a Lie group, then for any point x ∈ X, there exists a G x -slice S in X such that x ∈ S. In general, when G is not a Lie group, it is no longer true that a G x -slice exists at each point of X (see [4] for a discussion). However, the following approximate version of Palais' slice theorem for non-Lie group actions holds true, which plays a key role in the proof of Theorem 1.8: Theorem 2.2 (Approximate slice theorem [7]). Let G be a locally compact group, X a proper G-space and x ∈ X. Then for any neighborhood O of x in X, there exist a large subgroup K ⊂ G with G x ⊂ K, and a K-slice S such that x ∈ S ⊂ O.
Recall that by a large subgroup here we mean a compact subgroup H ⊂ G such that the quotient space G/H is a manifold.
Thus, every proper G-space is covered by invariant open subsets each of which is a twisted product.
A version of this theorem, without requiring K to be a large subgroup, was obtained earlier in [2] (see also [4] for the case of compact non-Lie group actions, and [5] for the case of almost connected groups). We emphasize that it is precisely the property "K is a large subgroup" that is responsible for the applications of Theorem 2.2 in this paper. We refer the reader to [7] for a discussion of further properties of large subgroups.
On any group G one can define two natural (but equivalent) actions of G given by the formulas g · x = gx and g * x = xg⁻¹, respectively, where on the right-hand sides the group operations are used, with g, x ∈ G.
Throughout we shall use the second action only.
By U (G) we shall denote the Banach space of all right uniformly continuous bounded functions f : G → R endowed with the supremum norm. Recall that f is called right uniformly continuous, if for every ε > 0 there exists a neighborhood O of the unity in G such that |f (y) − f (x)| < ε whenever yx −1 ∈ O.
We shall consider the induced action of G on U (G), i.e., (gf )(x) = f (xg), for all g, x ∈ G.
It is easy to check that this action is continuous, linear and isometric (see, e.g., [3, Proposition 7]).

Proposition 2.3. Let G be a topological group. Then for every f ∈ U (G), the map f * : G → U (G) defined by f * (x)(g) = f (xg⁻¹), x, g ∈ G, is a right uniformly continuous G-map.
Proof. A simple verification.
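In the same spirit, the isometry assertion preceding Proposition 2.3 reduces to the observation that x ↦ xg is a bijection of G:

\[
  \|gf_1-gf_2\|_\infty=\sup_{x\in G}\bigl|f_1(xg)-f_2(xg)\bigr|
   =\sup_{y\in G}\bigl|f_1(y)-f_2(y)\bigr|=\|f_1-f_2\|_\infty .
\]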
The following lemma of H. Abels [1, Lemma 1.5] is used in the proof of Theorem 1.3:

Lemma 2.4. Suppose G is an almost connected group and K a maximal compact subgroup of G. Let X be any G-space, A 1 and A 2 two G-subsets of X such that X = A 1 ∪ A 2 and the intersection A 1 ∩ A 2 is closed in X, and let f 1 : A 1 → G/K be a G-map. Then the restriction f 1 | A 1 ∩A 2 extends to a G-map f 2 : A 2 → G/K.

In the proof of Theorem 1.8 the following result due to K. Morita [19] plays a key role:

Theorem 2.5. Let X be a Tychonoff space while Y is a locally compact paracompact space. Then dim(X × Y) ≤ dim X + dim Y, where dim stands for the covering dimension.
One should mention that the covering dimension of an arbitrary Tychonoff space is obtained by a slight modification of the usual definition of the dimension of a normal space; namely, one should just replace "open covering" by "functionally open covering". It coincides with the usual definition of covering dimension for normal spaces. This modification is due to Katětov [18] (see also Smirnov [25]).
An invariant subset U of a G-space X is called G-functionally open if U = f⁻¹((0, ∞)) for some continuous G-invariant function f : X → [0, ∞). Observe that if X is metrizable by a G-invariant metric ρ, then every invariant open subset U ⊂ X is G-functionally open: indeed, the function f(x) = ρ(x, X \ U) is continuous, G-invariant, and satisfies U = f⁻¹((0, ∞)). This observation will be used in the proof of Proposition 3.1 below.
In the proof of Theorem 1.8 we shall also use the following simple assertion: Proposition 2.6. Let G be a topological group, X a G-space and A a G-functionally open subset of X. Then A/G is a functionally open subset of X/G.
Proof. A simple verification.
Finally we recall that by a σ-discrete cover of a space X we mean a cover which is the countable union ∞ n=1 U n , where every family U n is discrete, i.e., for each point x ∈ X there is a neighborhood V of x such that V ∩ A = ∅ for at most one element A of U n .
Proofs
Recall that a cover ω of a G-space is called invariant if each member U ∈ ω is an invariant set, i.e., G(U ) = U .
A cover {U t | t ∈ T} of a G-space X is called G-functionally open whenever each U t is a G-functionally open subset of X. Putting here G = {e}, the trivial group, we obtain the corresponding notion of a functionally open cover.

Proposition 3.1. Let f : X → Z be a G-map of G-spaces, where Z is metrizable by a G-invariant metric. Let {U t | t ∈ T} be an invariant open cover of X and {O t | t ∈ T} an invariant open cover of Z such that f⁻¹(O t) ⊂ U t for every t ∈ T. Then {U t | t ∈ T} admits a G-functionally open refinement which is both locally finite and σ-discrete.

Proof. Consider the composition of f with the G-orbit map q : Z → Z/G. Since Z is metrizable by a G-invariant metric, the orbit space Z/G is pseudometrizable (see Preliminaries). Hence the open cover {q(O t) | t ∈ T} of Z/G admits a refinement, say {W i | i ∈ I}, which is both locally finite and σ-discrete (see [12, Theorem 4.4.1 and Remark 4.4.2]).
Then, clearly, {f⁻¹q⁻¹(W i) | i ∈ I} is a cover of X refining {U t | t ∈ T}: each W i is contained in some q(O t), whence f⁻¹q⁻¹(W i) ⊂ f⁻¹q⁻¹q(O t) = f⁻¹(O t) ⊂ U t, because O t is invariant. Further, since {W i | i ∈ I} is locally finite and σ-discrete and qf is continuous, we infer that {f⁻¹q⁻¹(W i) | i ∈ I} is also locally finite and σ-discrete.
Besides, each q⁻¹(W i) is G-functionally open because it is an invariant open subset of Z, which is metrizable by a G-invariant metric (see the observation preceding Proposition 2.6); hence each f⁻¹q⁻¹(W i) is G-functionally open as well. This completes the proof.

Proof of Theorem 1.1. For f ∈ U (X), define f * : X → U (X) by f * (x)(g) = f (xg⁻¹), x, g ∈ X. Denote by Z the image f * (X). Clearly, Z is the X-orbit of the point f * (e) in the X-space U (X), and the metric of U (X) induces an X-invariant metric on Z.
It follows from (3.1) and the X-equivariance of f * that Besides, since f * (x) ∈ Γ x,Q for every x ∈ X, we see that the sets Γ x,Q , x ∈ X, constitute a cover of Z.
In what follows we restrict ourselves only to the induced actions of the subgroup G ⊂ X, i.e., we will consider X and Z just as G-spaces. Note that Z may not be a proper G-space and we do not need this fact in the sequel.
It follows from (3.2) and from the G-equivariance of f * that the hypotheses of Proposition 3.1 are fulfilled for the G-map f * : X → Z and the G-invariant open covers {xP | x ∈ X} of X and {G(Γ x,Q) | x ∈ X} of Z. Then, by Proposition 3.1, {xP} x∈X admits a G-functionally open refinement {W i } i∈I which is both locally finite and σ-discrete. Hence, each W i is contained in some xP, which implies that W i ⊂ xP ⊂ xV. Now observe that each set xV is G-tubular; the corresponding slicing G-map ψ : xV → G/H is just the composition ψ = ϕη, where η : xV → V is given by η(t) = x⁻¹t and ϕ : V → G/H is the slicing map of V. Perhaps it is in order to emphasize here that V and xV are G-invariant subsets of X, which is equipped with the action g * t = tg⁻¹, where g ∈ G and t ∈ X; consequently η is a G-map, and hence the composition ψ = ϕη is also a G-map.
Consequently, each W i being a G-invariant subset of xV , is itself a G-tubular set with the slicing subgroup H, as required.
In conclusion, it remains to observe that statement (3) follows immediately from statement (2) (see Section 2, the first paragraph after Definition 2.1).
Proof of Theorem 1.2. By Theorem 1.1, X is covered by a locally finite σ-discrete family {W i } i∈I of G-functionally open sets W i such that the closure W i is a Gtubular set associated with a large slicing subgroup H ⊂ G.
Let ψ i : W i → G/H be the corresponding slicing map. Since, by the maximality of K, there exists an evident G-map ξ : G/H → G/K, the composition ϕ i = ξψ i is a G-map ϕ i : W i → G/K; thus W i is a G-tubular set with the slicing subgroup K (see Section 2). Furthermore, by a result of H. Abels [1, Theorem 2.1], W i is homeomorphic (in fact, K-equivariantly homeomorphic) to the cartesian product G/K × S i , as required.
Proof of Theorem 1.3. By Theorem 1.1, cover X by a locally finite collection {C i } i∈I of closed G-tubular sets. Let ψ i : C i → G/H, i ∈ I, be the corresponding slicing G-map, where H is a large subgroup of G. Since, by the maximality of K, there exists an evident G-map ξ : G/H → G/K, the composition ϕ i = ξψ i is a G-map ϕ i : C i → G/K.
Well-order the index set I, and for every i ∈ I put A i = ⋃ {C j | j < i}, which is closed in X by the local finiteness of the cover {C i } i∈I . We aim at constructing inductively a G-map φ : X → G/K as follows.
Assume that i ∈ I and the G-maps φ j : A j → G/K are defined for all j < i in such a way that φ j | A k = φ k whenever k < j.
Define φ i : A i → G/K as follows. If i is a limit ordinal, then A i = ⋃ {A j | j < i}. Setting φ i | A j = φ j for every j < i we get a well-defined G-map φ i : A i → G/K. The continuity of φ i follows from the local finiteness of the cover {C i } i∈I and from the continuity of all the maps φ j , j < i.
If i is the successor of j, then A i = A j ∪ C j . Observe that each C k , k ∈ I, is closed in X and hence is a normal space. Since A j ∩ C j is a closed G-invariant subset of C j , the restriction φ j | A j ∩C j extends to C j by Lemma 2.4, and hence we get a G-map φ i : A i → G/K. This completes the inductive step and the proof.
Proof of Corollary 1.5. Since G is almost connected, it has a maximal compact subgroup K (see Preliminaries). By Theorem 1.3, X admits a global K-slice S, and hence, it is G-homeomorphic to the twisted product G× K S (see Preliminaries).
Since the group K is compact, it then follows that the K-orbit map is open and perfect. This yields immediately that the restriction f = p| S : S → X/G of the G-orbit map p : X → X/G is an open and perfect surjection. Indeed, it suffices to observe that for every set A ⊂ S, one has f −1 f (A) = KA, which is open (respectively, closed or compact) whenever A is so. This completes the proof.
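The equality f⁻¹f(A) = KA used above can be sketched in one line: f⁻¹f(A) = S ∩ G(A), and if s ∈ S with s = ga for some a ∈ A ⊂ S, then applying the slicing map ϕ : X → G/K gives

\[
  eK=\varphi(s)=g\,\varphi(a)=gK,
\]

so g ∈ K and s ∈ KA; the reverse inclusion KA ⊂ S ∩ G(A) is obvious.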
Proof of Theorem 1.8. Consider two cases.
Case 1. Assume that G is connected. In this case G has a maximal compact subgroup, say, K (see Preliminaries). Due to Theorem 1.1, we can cover X by a locally finite collection {Φ i } i∈I of G-functionally open tubular sets. Let ψ i : Φ i → G/H be the corresponding slicing G-map, where the slicing subgroup H is a (compact) large subgroup of G. Since, by the maximality of K, there exists an evident G-map ξ : G/H → G/K, the composition ϕ i = ξψ i is a G-map ϕ i : Φ i → G/K. Since {Φ i } i∈I is a locally finite functionally open cover of X, by virtue of the locally finite sum theorem of Nagami [20,Theorem 2.5], it suffices to show that dim Φ ≤ dim X/G + dim G, for every member Φ of the cover {Φ i } i∈I .
Let ϕ : Φ → G/K be the slicing map corresponding to the tubular set Φ ∈ {Φ i } i∈I . In this case Φ is G-homeomorphic to the twisted product G × K S, where S = ϕ −1 (eK) (see Preliminaries). Furthermore, by a result of H. Abels [1, Theorem 2.1], Φ is homeomorphic (in fact, K-equivariantly homeomorphic) to the cartesian product G/K × S.
Since the group G is locally compact, and hence paracompact, we infer that the quotient space G/K is also locally compact and paracompact. Then, according to Theorem 2.5, we have:

(3.4) dim Φ = dim(G/K × S) ≤ dim G/K + dim S.

Since K is compact, according to a result of V. V. Filippov [14], one has the inequality:

(3.5) dim S ≤ dim S/K + dim q,

where q : S → S/K is the K-orbit projection and dim q = sup {dim q⁻¹(a) | a ∈ S/K}. Besides, since K acts freely on S, we see that each K-orbit q⁻¹(a) is homeomorphic to K, and hence, dim q = dim K.
Consequently, combining (3.4) and (3.5), one obtains:

(3.6) dim Φ ≤ dim G/K + dim S/K + dim K.

Next, since Φ/G ≅ (G × K S)/G ≅ S/K (see Preliminaries) and dim G/K + dim K = dim G (see Remark 1.9), it then follows from (3.6) that dim Φ ≤ dim Φ/G + dim G. Observe that the quotient X/G is a Tychonoff space; this follows from a standard fact about proper actions. Since Φ/G is a functionally open subset of X/G (see Proposition 2.6), we get dim Φ/G ≤ dim X/G, and hence dim Φ ≤ dim X/G + dim G, as required.

Case 2. The general case. By Case 1, dim X ≤ dim X/G 0 + dim G 0 ≤ dim X/G 0 + dim G, where G 0 is the unity component of G. Thus it suffices to show that dim X/G 0 ≤ dim X/G.
Due to Theorem 1.1, we can cover X by a locally finite collection {Φ i } i∈I of G-functionally open tubular sets. Then {Φ i /G 0 } i∈I constitutes a functionally open locally finite cover of X/G 0 (see Proposition 2.6).
Consequently, by virtue of the locally finite sum theorem of Nagami [20, Theorem 2.5], it suffices to show that dim Φ/G 0 ≤ dim X/G for every member Φ/G 0 of the cover {Φ i /G 0 } i∈I .
Let ψ : Φ → G/H be the slicing G-map corresponding to the tubular set Φ ∈ {Φ i } i∈I , where H is a large subgroup of G. Then the quotient G/H is locally connected (in fact, it is a manifold).
Since the natural map G/H → G/G 0 H is open and local connectedness is invariant under open maps, we infer that G/G 0 H is locally connected. On the other hand, the following natural homeomorphism holds:

(3.8) G/G 0 H ≅ (G/G 0)/(G 0 H/G 0).

Consequently, G/G 0 H, being a quotient space of the totally disconnected group G/G 0 , is itself totally disconnected (this follows from Remark 1.9 and from the fact that in the realm of locally compact spaces total disconnectedness is equivalent to zero-dimensionality [13, Theorem 1.4.5]). Hence, G/G 0 H, being locally connected and totally disconnected, must be discrete, implying that G 0 H is an open subgroup of G.
Since G/G 0 H is a discrete group, we infer that if we put S = f⁻¹(eG 0 H), then each gS is closed and open in Φ/G 0 , and Φ/G 0 is the disjoint union of the sets gS, where g runs through a set of representatives of the cosets in G/G 0 H. In other words, Φ/G 0 is homeomorphic to the product (G/G 0 H) × S.
Consider the natural continuous open homomorphism π : G → G/G 0 and denote L = G 0 H/G 0 . Since L = π(G 0 H) = π(H) while G 0 H is open and H is compact in G, we infer that L is an open compact subgroup of G/G 0 .
Next, observe that the composition of f with the isomorphism from (3.8) is just a G-map Ψ : Φ/G 0 → (G/G 0)/L. Since the quotient group G/G 0 acts on the spaces Φ/G 0 and (G/G 0)/L by the induced actions and since S = Ψ⁻¹(L), we conclude that S is a global L-slice of the G/G 0 -space Φ/G 0 .
This yields (see Preliminaries) that the G/G 0 -orbit space of Φ/G 0 is homeomorphic to the L-orbit space S/L, i.e., (Φ/G 0)/(G/G 0) ≅ S/L. In its turn, Φ/G ≅ (Φ/G 0)/(G/G 0) ≅ S/L.
Using a Low-Sodium, High-Potassium Salt Substitute to Reduce Blood Pressure among Tibetans with High Blood Pressure: A Patient-Blinded Randomized Controlled Trial
Objectives To evaluate the effects of a low-sodium and high-potassium salt substitute on lowering blood pressure (BP) among Tibetans living at high altitude (4300 meters). Method The study was a patient-blinded randomized controlled trial conducted between February and May 2009 in Dangxiong County, Tibet Autonomous Region, China. A total of 282 Tibetans aged 40 or older with known hypertension (systolic BP≥140 mmHg) were recruited and randomized to intervention (salt substitute: 65% sodium chloride, 25% potassium chloride and 10% magnesium sulfate) or control (100% sodium chloride) in a 1:1 allocation ratio with three months' supply. The primary outcome was defined as the change in BP levels measured from baseline to follow-up with an automated sphygmomanometer. Per protocol (PP) and intention to treat (ITT) analyses were conducted. Results After the three months' intervention period, the net reduction in SBP/DBP in the intervention group in comparison to the control group was −8.2/−3.4 mmHg (all p<0.05) in the PP analysis, after adjusting for baseline BP and other variables. The ITT analysis showed the net reduction in SBP/DBP at −7.6/−3.5 mmHg with multiple imputation (all p<0.05). Furthermore, the whole distribution of blood pressure showed an overall decline in SBP/DBP, and the proportion of patients with BP under control (SBP/DBP<140 mmHg) was significantly higher in the salt-substitute group in comparison to the regular salt group (19.2% vs. 8.8%, p = 0.027). Conclusion A low-sodium, high-potassium salt substitute is effective in lowering both systolic and diastolic blood pressure and offers a simple, low-cost approach for hypertension control among Tibetans in China. Trial Registration ClinicalTrials.gov NCT01429246
Introduction
The WHO predicts that cardiovascular disease (CVD) will become the leading cause of Disability Adjusted Life Years (DALYs) in 2020 [1] and, more significantly, over 80% of this global burden will occur in low and middle income countries [2]. Further, hypertension accounts for nearly 45% of the global burden of cardiovascular morbidity and mortality [3]. Hypertension is one of the most common modifiable risk factors for CVD, with a prevalence of nearly 57% in adults 40 years and older in Tibet [4], two times as high as the 2002 China national rate [5]. This preponderance of hypertension has been strongly associated with an extreme burden of stroke in Tibetans. The Tibetan age-standardized stroke incidence was 450.4 per 100,000 persons and stroke mortality 370.2 per 100,000 persons; both metrics were over four times the respective China national rates [6].
The high prevalence of hypertension in Tibetans is attributable to a very high level of daily dietary salt intake [7], [8]. In Tibetan adults the estimated dietary salt intake is nearly four to five times the WHO recommended amount of five grams daily [9][10][11], largely driven by the daily consumption of a traditional salty yak buttermilk tea [8], [9], [12], which has been reported to be as high as four liters per day in Tibetans [13]. Reduction in salt intake has been identified by the World Health Organization as a highly cost-effective strategy for cardiovascular disease prevention [14].
Low sodium salt substitute, as a low cost strategy to reduce sodium intake, has previously demonstrated marked reductions in systolic blood pressure among Han Chinese patients with high cardiovascular risk in the China Salt Substitute Study (CSSS). The magnitude of the effect was significantly associated with the baseline blood pressure of the CSSS cohort [15]. We set out to test the hypothesis that the low sodium dietary salt substitute could be more effective in reducing blood pressure in community hypertension programs in Tibetans living at high altitude, whose prevalence of hypertension as well as salt intake are markedly higher than those of Han Chinese [5], [16].
Of particular note, high altitude, as a special regional living condition, can cause significant difficulties for the local healthcare system in implementing hypertension prevention and control programs as well as other health programs, owing to a lack of health care workers and to remote, limited access to health care services and medications. Thus, rates of treatment and control of hypertension in Tibet are very low [5]. This highlights the huge demand for strategies like salt substitution that are not only effective but can also be delivered through non-medical or para-medical health systems.
Ethics statement
The trial was approved by the Ethics Committee of Peking University Health Science Center, Beijing, China (#IRB00001052-09003). All participants, and the patriarchs of their families on behalf of the family, provided informed consent. The protocol and CONSORT checklist are available as supporting information; see Protocol S1 (English), Protocol S2 (Chinese) and Checklist S1.
Design
A community based survey of hypertension was first conducted [5], and then all patients meeting the inclusion and exclusion criteria were invited to the trial, with the intervention group receiving free salt substitute and the control group receiving free regular salt, in quantities sufficient for a complete household's use during the three months' study period between February 2009 and May 2009. The study was conducted according to the principles of the Declaration of Helsinki and subsequent amendments, and was registered at ClinicalTrials.gov (NCT01429246) in September 2011. This retrospective registration was due to our unawareness, at the time, of the international requirement of prospective registration for such trials. However, we honestly affirm that the delay in registration has little bearing on the quality of the science and ethics of this study.
Study participants and settings
Local residents aged 40 years and above from two townships (Yangbajing and Gongtangxiang) in Dangxiong County in the Tibet Autonomous Region, China, were first invited to a hypertension screening; those with measured systolic blood pressure ≥140 mmHg were then invited to the trial within 3 months. If the invitee's systolic blood pressure was confirmed ≥140 mmHg again, regardless of use of anti-hypertension medications, he/she was invited to participate in this study. Of 287 patients who met our inclusion criteria, 282 were recruited after excluding 5 patients who met the exclusion criteria, including 1 patient who was too old and weak to travel, 1 patient who withdrew after informed consent, and 3 patients excluded by research staff because they lived too far away for follow-up. The townships are located 90 kilometers northwest of Lhasa, at an average altitude of 4300 meters above sea level.
The study was originally planned to enroll patients only in Yangbajing Township. However, not enough patients there met our inclusion criteria and agreed to participate, so recruitment had to be extended to the nearby township of Gongtangxiang in order to achieve the recruitment target. Due to resource restrictions, however, we reduced the data collected at baseline in Gongtangxiang. For details of the differences in data collection between the two townships, please refer to Table 1.
Randomization
Stratified randomization by township, gender, and baseline systolic BP (<160 or ≥160 mmHg) was performed immediately after eligibility assessment, using a computer generated randomization list. A random number generator provided a treatment allocation identification number for each enrolled patient. This assignment was secured in a password protected, encrypted digital registry. Treatment allocation was blinded to participants only, through the use of indistinguishable containers labeled with the study allocation number alone. There was also no way to tell salt substitute and regular salt apart by their physical appearance. The difference in taste between salt substitute and regular salt is minor: our previous study among young adults indicated that about 70%–80% of testers were unable to identify the difference in taste in a triangle food taste test (data unpublished). There was 1 patient from the control group but no patient from the intervention group in our study who quoted ''poor taste'' as the reason for dropping out, indicating that our blinding of patients was successful.
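A minimal sketch of the allocation scheme described above, assuming a permuted-block design; the block size, seed, and field names are illustrative assumptions, not details taken from the trial.

    import random

    random.seed(2009)  # any fixed seed, for reproducibility only

    def stratified_allocation(patients, block_size=4):
        # Permuted-block 1:1 allocation within (township, sex, SBP>=160) strata.
        strata = {}
        for p in patients:
            key = (p["township"], p["sex"], p["sbp"] >= 160)
            strata.setdefault(key, []).append(p)
        for members in strata.values():
            for i in range(0, len(members), block_size):
                block = ["substitute", "control"] * (block_size // 2)
                random.shuffle(block)
                # A final partial block simply uses a prefix of the shuffled block.
                for p, arm in zip(members[i:i + block_size], block):
                    p["arm"] = arm
        return patients

    demo = [{"township": "Yangbajing", "sex": "F", "sbp": 165},
            {"township": "Yangbajing", "sex": "F", "sbp": 172}]
    print(stratified_allocation(demo))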
Interventions
The anticipated allocation ratio between the intervention arm and the control arm was set at 1:1. The intervention arm received a three months' supply of salt substitute (65% sodium chloride, 25% potassium chloride, and 10% magnesium sulfate) [15], [17]. The control arm received regular salt (100% sodium chloride). The study salt was distributed free of charge to every household in an amount sufficient to cover three months' consumption for the whole household. Patients with preexisting anti-hypertensive medications were not directed to alter their prior regimen.
Baseline measurements
Once patients were confirmed eligible and informed consent was obtained, baseline measurements were conducted immediately. The screening, eligibility confirmation and baseline measurements were all conducted by trained research staff between December 2008 and February 2009. Blood pressure was measured with three consecutive measurements (with at least one minute's rest between each measurement) from the seated subject's right arm in a quiet room. A previously validated electronic sphygmomanometer (OMRON HEM-759P, Dalian, China) was used [18]. Weight and height were measured in a standardized way [5]. Personal information including age, sex, and current anti-hypertensive medication use was collected by structured questionnaire for all participants. Due to resource constraints, other variables such as educational level, occupation, smoking history, and drinking history were obtained only from participants in Yangbajing at the screening survey [5].
Follow up survey
After three months, all patients were asked to return for measurement of blood pressure according to the same protocol used at baseline. We also collected data on anti-hypertension medication use during the whole study period for all participants.
Outcomes
The primary outcomes were the changes in systolic and diastolic blood pressure from baseline to post-intervention over the three-month study period. The mean of the three systolic blood pressure measurements was used for analysis. The secondary outcome was the proportion with hypertension under control (both systolic <140 and diastolic <90 mmHg) at post-intervention.
Sample size
On the basis of our prior study, an estimated sample size of 230 patients in total would provide 90% power, with a one-tailed α = 0.05 and a standard deviation of 13 mmHg, to detect a 5.0 mmHg difference between the two arms in the change in systolic blood pressure from baseline to the 3-month follow-up, with a 1:1 allocation of 115 to each treatment arm. We assumed that 20% of participants would be lost to follow-up, and thus a total of 287 participants should be recruited.
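As a quick check, the stated numbers can be approximately reproduced with the usual normal-approximation formula for comparing two means; this is an illustrative sketch, not the authors' calculation, and the small differences come from rounding conventions.

    from math import ceil
    from statistics import NormalDist

    def n_per_arm(delta, sd, alpha=0.05, power=0.90, one_tailed=True):
        # Normal-approximation sample size for comparing two means, 1:1 allocation.
        z = NormalDist().inv_cdf
        z_a = z(1 - alpha) if one_tailed else z(1 - alpha / 2)
        z_b = z(power)
        return ceil(2 * (sd * (z_a + z_b) / delta) ** 2)

    n = n_per_arm(delta=5.0, sd=13.0)   # -> 116 per arm; the paper rounds to 115
    total = ceil(2 * n / (1 - 0.20))    # inflate for the assumed 20% loss -> 290
    print(n, total)                     # paper: 115 per arm, 287 recruited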
Statistical Analysis
Intention to treat (ITT) analysis was conducted and all 282 eligible subjects were included in it (Figure 1). We adopted a multiple imputation (MI) approach to impute the missing values of SBP and DBP at 3 months, since the baseline-observation-carried-forward approach has recently been criticized for many limitations and may lead to serious bias [19], [20]. The imputation model included baseline systolic/diastolic blood pressure, age, sex, township, BMI and use of blood pressure lowering agents at baseline. After excluding patients who were lost to follow-up, discontinued the intervention, used outside salt, or died during the follow-up period, a total of 213 patients (99 in the intervention arm, 114 in the control arm) completed the follow-up visit and were used in the per-protocol (PP) analysis. We used paired t-tests to test the difference in the primary outcome between intervention and control, and we also used a general linear model to adjust for the variables used for randomization stratification (township, baseline systolic blood pressure, and gender) plus age, baseline BMI, and use of blood pressure lowering agents. We were not able to adjust for other variables such as education, occupation, smoking and alcohol drinking, since they were not collected for all participants. For the secondary outcome, the chi-square test was used. The primary and secondary outcomes were analyzed in both ITT and PP analyses. Statistical analyses were carried out using SAS 9.3 (SAS Inc., Cary, NC, USA).
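A minimal sketch of the adjusted between-group comparison (a general linear model with the same covariates), in Python rather than SAS; the file name and column names are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical columns, one row per patient: sbp_3m, sbp_base,
    # group (1 = salt substitute, 0 = regular salt), age, sex, township,
    # bmi, bp_meds. "trial.csv" is a placeholder file name.
    df = pd.read_csv("trial.csv")

    model = smf.ols(
        "sbp_3m ~ sbp_base + C(group) + age + C(sex) + C(township) + bmi + C(bp_meds)",
        data=df,
    ).fit()

    # The coefficient on group is the adjusted net difference in follow-up SBP;
    # the paper reports about -8 mmHg in favour of the salt substitute.
    print(model.params["C(group)[T.1]"])
    print(model.conf_int().loc["C(group)[T.1]"])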
Baseline characteristics
The mean age of the 282 randomized subjects was 63.1 years at baseline; 58.9% were female, 94.5% were herdsmen, 92.7% reported never going to school, 9.0% were cigarette smokers, 11.7% were alcohol drinkers, and 47.0% had used anti-hypertension medication in the past month. Baseline BMI was 23.6 kg/m². All major baseline characteristics were comparable between the two randomized groups (all p-values > 0.05) (Table 1).
Effects on blood pressure reduction
Blood pressure was reduced in both groups, for both SBP and DBP, at follow-up. However, the reduction was significantly larger in the salt substitute group for both SBP and DBP, in both ITT and PP analyses (Table 2). The intergroup unadjusted net reduction was −9.1 mmHg in systolic (95% CI: −3.2 to −15.0; p = 0.002) and −3.4 mmHg in diastolic blood pressure (95% CI: −0.3 to −6.4; p = 0.03) in the PP analysis, but it was smaller at −7.7 mmHg in systolic (95% CI: −2.5 to −12.9; p = 0.004) and −3.0 mmHg in diastolic blood pressure (95% CI: −0.2 to −5.8; p = 0.035) in the ITT analysis with the multiple imputation approach (Figure 2). After adjusting for baseline blood pressure, sex, age, township, baseline BMI and use of blood pressure lowering agents, the net reduction was still significant, at −8.2/−3.4 mmHg (SBP/DBP) in the PP analysis and at −7.6/−3.5 mmHg (SBP/DBP) in the ITT analysis (Table 2).
Effects on proportion of patients with blood pressure under control
All participants enrolled had systolic BP equal to or greater than 140 mmHg at baseline. After the three months' intervention, the PP analysis revealed that 8.8% in the regular salt group and 19.2% in the salt-substitute group had their blood pressure well controlled (SBP/DBP <140/90 mmHg) (p<0.05). Although the peak remained around 160/95 mmHg (SBP/DBP), the overall blood pressure reduction in the salt-substitute group was reflected by the marked shift of the BP distribution to the left (Figure 3).
Anti-hypertension medication use
At the three months' follow-up, the proportion of patients who had taken antihypertensive medications in the past two weeks was lower in the salt-substitute group than in the regular salt group (34.7% vs. 46.4%); however, the difference was not significant (p = 0.081).
Severe Adverse Events
Three people died during the study period, two in control arm and one in the intervention arm. Two of the three died from cerebral hemorrhage, one in each group, and the third death was due to liver cancer.
Salt consumption estimation
Families located at the center of a group of participating families were selected on site by our staff, according to their geographic location, for the convenience of distributing study salt. Our study staff delivered study salt to these families and then called for the study participants living nearby to come and pick it up. This gave us the opportunity to weigh the salt containers in these families and to ask questions helping us estimate relatively accurately the amount of salt consumed in these families. For each selected household, we weighed the amount of study salt delivered at baseline and the amount of study salt remaining at follow-up to estimate average daily salt consumption per person. A total of 54 families (27 in each group) had complete and accurate salt consumption data and reported not using other salt (e.g. crude salt locally available, and/or salt bought from stores). The estimated average daily sodium chloride intake was 20.0±5.4 grams in the salt substitute group versus 26.9±8.1 grams in the regular salt group (p = 0.025). The additional potassium chloride intake was estimated at about 7.7±2.1 grams in the intervention group. Comparison between participants who did and did not have salt consumption data showed no significant difference in age, gender, blood pressure, education, smoking or alcohol drinking, except that those without salt consumption data had a significantly lower proportion of anti-hypertensive drug use (48.5% vs. 64.8%, p = 0.022).
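The per-person estimate reduces to simple arithmetic on the weighed amounts; a sketch follows, with made-up numbers for illustration.

    def daily_salt_per_person(delivered_g, remaining_g, days, household_size):
        # Average per-person daily salt intake from weighed study salt.
        # Ignores salt from other sources, matching the paper's rough estimate.
        return (delivered_g - remaining_g) / (days * household_size)

    # Hypothetical household: 12 kg delivered, 2.3 kg left after ~90 days, 4 people.
    print(round(daily_salt_per_person(12000, 2300, 90, 4), 1))  # -> 26.9 g/day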
Discussion
The present study demonstrated a significant reduction in both SBP and DBP in the salt-substitute group (by 7.6/3.5 mmHg in the ITT analysis), which, as expected, is even greater than what we found in the China Salt Substitute Study (CSSS) conducted in rural Han Chinese, where a 5.4 mmHg reduction in SBP and no significant reduction in DBP were shown [15]. This result is most likely attributable to the higher blood pressure at baseline, but also to the markedly higher dietary salt intake, simpler diet and lower access to medical treatment in the Tibetan cohort of the present study in comparison with the CSSS's Han Chinese cohort. The blood pressure reduction effect of an intervention is usually directly related to the baseline level of blood pressure; this has been shown in many blood pressure lowering drug trials as well as non-drug trials such as DASH and CSSS [15], [21][22][23][24]. Compared to CSSS, in which the baseline SBP/DBP was 159/93 mmHg, the baseline blood pressure in our study was much higher (177/105 mmHg). Furthermore, salt consumption for participants in our study was very high, at about 27 grams per day, versus about 15 grams in the CSSS study populations [15], [25]. The Tibetan diet is simpler, characterized by low consumption of fresh vegetables and fruits, and high meat and fat. There is evidence that a diet rich in fruits and vegetables can be beneficial for blood pressure lowering [26][27][28]. Compared with the diets of CSSS study participants, the diet of the Tibetan study participants was much simpler and lacked components that might be helpful in lowering blood pressure. Finally, CSSS found that the blood pressure reduction effect of salt substitute was lower among patients taking blood pressure lowering agents [15]. It is understandable that use of anti-hypertension medications would diminish the BP lowering effect of salt substitute through highly possible interactions. While 61% of study participants were taking blood pressure medications in CSSS [15], the figure was only 47.0% in the present study. With a smaller possible interaction with drugs, our study would hence show a greater blood pressure lowering effect than CSSS. In this study, significant reductions in both SBP and DBP were also found in the control arm receiving regular salt. These changes could be attributable to regression to the mean as well as to seasonal variation in blood pressure, since the present study was conducted from February to May, mirroring the expected temperature progression. This temperature-associated variance in BP has been previously reported from our group's experience in the CSSS and in many other prior reports [15], [29], [30].
In addition, we also found that use of blood pressure lowering agents was lower in the intervention group than in the control group, though not statistically significantly so. This suggests that the effect of salt substitute in lowering blood pressure demonstrated in the study is not attributable to differential use of anti-hypertension medications. We believe that the observed effect would have an even higher magnitude if the use of anti-hypertension medications had been the same in both groups. The salt consumption data showed that the intervention group consumed 7 grams less sodium chloride and 7.7 grams additional potassium chloride on average per day, which, we believe, drove the significantly larger blood pressure reduction in the intervention group.
Due to the imbalanced loss to follow-up between intervention and control, in terms of both quantity and quality, we chose multiple imputation rather than the baseline-observation-carried-forward approach to impute the missing values in our ITT analysis. In fact, the intervention group lost more patients than the control group (42 vs. 27 patients). Comparison of those lost in intervention and control showed that patients lost in the intervention group had higher blood pressure (180.9 vs. 170.3 mmHg in SBP), lower use of blood pressure lowering agents (42% vs. 50%), fewer women (29.6% vs. 70.4%), younger age (64.0 vs. 65.3 years) and higher BMI (23.6 vs. 22.8 kg/m²) at baseline. Although the results from the baseline-observation-carried-forward approach showed a smaller net reduction in both SBP (−5.9 mmHg) and DBP (−2.7 mmHg) and could significantly underestimate the real net reduction, the results of both approaches, each showing a significant net reduction in SBP/DBP, argue strongly for the validity of our study conclusion.
For a population-wide health intervention to be successful in a limited-resource setting such as rural Tibet, the intervention and its delivery should not require tremendous healthcare and financial capital. Furthermore, cultural considerations are of great importance in the design of the intervention, as dietary restriction or outright elimination of the cultural dietary staple, Tibetan yak buttermilk tea, from the study population's diet would have been a difficult intervention to implement. With the dietary salt substitute, however, sodium intake was successfully modified by dietary replacement of normal salt and population education. Thus, the dietary salt substitute intervention presented in this study has tremendous potential for public health benefit as a simple, low-cost, and effective interventional strategy for ameliorating the burden of hypertension-related maladies in both limited-resource settings (i.e. Tibet) and developed countries.
The present study was limited by the restricted financial resources available and the austere environmental conditions (remote and rugged terrain at an elevation of 4300 m with serious oxygen deficit), which led to a single follow-up at three months post randomization and precluded the ability to collect and analyze urinary samples to measure changes in sodium and potassium excretion. However, the present study had a randomized controlled trial design, which would minimize biases and provide reliable observed effects, yielding strong internal validity. Also due to the resource constraints, we were only able to collect baseline data on age, sex, SBP, use of blood pressure lowering agents and BMI. Other variables such as education, occupation, smoking and alcohol drinking were collected for only part of the study participants and thus were not included in the analysis for adjustment. However, the high comparability between the intervention and control groups in terms of these variables indicates that not having them adjusted would not significantly affect the internal validity. Thirdly, the salt consumption data collected in this study do not include sodium from natural foods and come from only a small fraction of the total study population; they should only be used for a rough estimation of sodium intake for local Tibetans. If possible, future studies in Tibet should consider the use of 24-hour urine collection or dietary recalls to obtain more reliable data. Fourthly, our definition of hypertension excluded those with isolated diastolic hypertension, and thus caution is needed in extending our findings to isolated diastolic hypertension patients. Lastly, mean blood pressure in the control group also dropped significantly after the intervention, due either to seasonal blood pressure change or to regression to the mean. The 'white coat hypertension' effect, if it exists, should have been included in the regression to the mean. The CSSS study showed a similar seasonal blood pressure change, in which mean blood pressure reached its peak in the coldest season and its low ebb in the hottest season. However, our randomized parallel-controlled design well protected the validity of the study conclusion from seasonal change and regression to the mean. Despite these limitations, this study was the first randomized controlled trial providing clear beneficial effects of salt substitute on blood pressure reduction in Tibet and in areas at high altitude.

Table 2. Blood pressure at baseline and follow-up, reduction of blood pressure in each group and net reduction of blood pressure in the salt substitute group in comparison with the regular salt group. All numbers shown are mean (SD) except for the net reduction (b), which is shown as mean (SE). a. Mean reduction of blood pressure in each group after intervention. b. Net reduction of blood pressure in the salt-substitute group in comparison with the regular salt group, adjusting for baseline blood pressure, sex, age, township, baseline BMI and use of blood pressure lowering agents.
In conclusion, this study confirmed the prior evidence from other trials of the blood pressure-lowering effects achieved with a reduced-sodium salt substitute. The strong blood pressure-lowering effect achieved in this trial shows the great potential of salt substitute in the prevention and control of cardiovascular disease in this area and in other similar settings in the world.
Supporting Information
Protocol S1 English translation of original protocol. A study to develop simple interventions for control of hypertension in Tibet highland.
(DOC)
Checklist S1 CONSORT 2010 checklist of information to include when reporting a randomized trial. (DOC)
Enhanced cytotoxicity of mitomycin C in human tumour cells with inducers of DT-diaphorase
DT-diaphorase is a two-electron reducing enzyme that activates the bioreductive anti-tumour agent, mitomycin C (MMC). Cell lines having elevated levels of DT-diaphorase are generally more sensitive to MMC. We have shown that DT-diaphorase can be induced in human tumour cells by a number of compounds, including 1,2-dithiole-3-thione. In this study, we investigated whether induction of DT-diaphorase could enhance the cytotoxic activity of MMC in six human tumour cell lines representing four tumour types. DT-diaphorase was induced by many dietary inducers, including propyl gallate, dimethyl maleate, dimethyl fumarate and sulforaphane. The cytotoxicity of MMC was significantly increased in four tumour lines with the increase ranging from 1.4- to threefold. In contrast, MMC activity was not increased in SK-MEL-28 human melanoma cells and AGS human gastric cancer cells, cell lines that have high base levels of DT-diaphorase activity. Toxicity to normal human marrow cells was increased by 50% when MMC was combined with 1,2-dithiole-3-thione, but this increase was small in comparison with the threefold increase in cytotoxicity to tumour cells. This study demonstrates that induction of DT-diaphorase can increase the cytotoxic activity of MMC in human tumour cell lines, and suggests that it may be possible to use non-toxic inducers of DT-diaphorase to enhance the efficacy of bioreductive anti-tumour agents. © 1999 Cancer Research Campaign
DT-diaphorase is a flavoprotein that catalyses two-electron reduction of quinones, quinone imines and nitrogen-oxides (Riley and Workman, 1992). It is also a phase II enzyme that detoxifies xenobiotics and carcinogens and protects cells from tumourigenesis (Beyer et al, 1988; Riley and Workman, 1992). Several diaphorases are known in humans, but the NQO1 gene is the most extensively studied and seems to be the most important for activation of bioreductive agents (Jaiswal et al, 1990; Jaiswal, 1991; Belinsky and Jaiswal, 1993). Tumour cells generally have higher enzyme activity than the corresponding normal cells, but enzyme levels in primary tumours are low compared to levels in some tumour cell lines (Belinsky and Jaiswal, 1993; Ross et al, 1994; Smitskamp-Wilms et al, 1995). DT-diaphorase can be induced in many tissues by a wide variety of chemicals, including 1,2-dithiole-3-thione (D3T) (Egner et al, 1994), quinones, isothiocyanates, diphenols and Michael reaction acceptors (Prestera et al, 1993), many of which are dietary components. Much attention has been focused on the use of inducers of DT-diaphorase and other phase II detoxifying enzymes in cancer prevention. Oltipraz, a D3T analogue, has been shown to induce phase II detoxifying enzymes and is currently undergoing clinical trials as a chemopreventive agent (Kensler and Helzlsouer, 1995). Isothiocyanates like sulforaphane, which is found in broccoli, can also induce phase II enzymes (Prestera et al, 1993; Manson et al, 1997), and have been shown to prevent the formation of carcinogen-induced tumours in animals (Zhang et al, 1994).
DT-diaphorase plays an important role in activating bioreductive anti-tumour agents like MMC (Begleiter et al, 1989; Ross et al, 1993; Mikami et al, 1996; Nishiyama et al, 1997), EO9 (Plumb et al, 1994), streptonigrin (Beall et al, 1996) and RH1 (Winski et al, 1998). Cell lines with high levels of DT-diaphorase are more sensitive to MMC (Begleiter et al, 1989; Ross et al, 1993; Mikami et al, 1996), and studies have shown a good correlation between the level of DT-diaphorase activity and the sensitivity to MMC in human tumour cell lines (Fitzsimmons et al, 1996). Transfecting the NQO1 gene into Chinese hamster ovary cells (Belcourt et al, 1996) and gastric carcinoma cells (Mikami et al, 1996) increased the cytotoxic activity of MMC in these cells.
Previously, we showed that D3T selectively increased the activity of DT-diaphorase in L5178Y murine lymphoma cells compared to mouse marrow cells. Pretreating these cells with D3T resulted in a twofold increase in MMC and a sevenfold increase in EO9 cytotoxicity with no effect on marrow toxicity (Begleiter et al, 1996). We also found that D3T induced DT-diaphorase activity in 28 of 38 human tumour cell lines representing ten tissue types (Doherty et al, 1998). Induction of DT-diaphorase activity in human tumour cells by D3T significantly increased the cytotoxicity of EO9 in these cells with no increase in toxicity to normal kidney cells (Doherty et al, 1998).
These findings indicate that inducers of DT-diaphorase could be used to enhance the anti-tumour efficacy of bioreductive agents. In this study, we investigated combination treatment with MMC and D3T in a variety of human tumour cell lines, and studied dietary and pharmaceutical inducers of DT-diaphorase in human tumour cells. Although these inducers have been extensively studied in chemoprevention, they have not been investigated in combination with bioreductive agents.
Induction of DT-diaphorase
Cells were incubated with, or without, DT-diaphorase inducers at 37°C in 5% carbon dioxide for 48 h. The concentrations of inducers used were not toxic to the cells during the incubation time. Following incubation, cells were washed with PBS, suspended in 200 µl of 0.25 M sucrose, sonicated and stored at −80°C. Protein concentration was measured using the Bio-Rad DC kit with γ-globulin as standard; DT-diaphorase activity was then measured spectrophotometrically by a modification of the procedure of Prochaska and Santamaria (Doherty et al, 1998; Prochaska et al, 1988) using menadione as the electron acceptor. DT-diaphorase activity was reported as dicoumarol-inhibitable activity and expressed as nmol min⁻¹ mg protein⁻¹. A dicoumarol concentration of 10 µM was used.
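Converting an absorbance slope into the reported units is a Beer-Lambert calculation; the sketch below is illustrative only, and the extinction coefficient, path length and example slopes are assumptions rather than values from the assay.

    def specific_activity(dA_per_min, protein_mg_per_ml, eps_mM_cm=11.3, path_cm=1.0):
        # Beer-Lambert: A/min divided by (epsilon * path) gives mM/min, i.e.
        # 1000 nmol mL^-1 min^-1; dividing by protein (mg/mL) gives the
        # reported units. eps_mM_cm = 11.3 is a placeholder, not a value
        # taken from the paper.
        rate_mM_per_min = dA_per_min / (eps_mM_cm * path_cm)
        return rate_mM_per_min * 1000.0 / protein_mg_per_ml

    # Dicoumarol-inhibitable activity = total rate - dicoumarol-resistant rate.
    total = specific_activity(0.090, protein_mg_per_ml=0.25)      # made-up slope
    resistant = specific_activity(0.012, protein_mg_per_ml=0.25)  # made-up slope
    print(round(total - resistant, 1), "nmol min^-1 (mg protein)^-1")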
Enzyme assays
T47D cells were incubated with, or without, 50 µM DMM for 48 h. Glutathione S-transferase (GST) activity was measured in supernatant from cell sonicates by a spectrophotometric procedure using 1-chloro-2,4-dinitrobenzene as substrate (Habig et al, 1974). NADPH:cytochrome P450 reductase activity was measured in supernatant from cell sonicates by a spectrophotometric assay using cytochrome c as the electron acceptor (Strobel and Digman, 1978). NADH:cytochrome b5 reductase activity was determined by a previously described spectrophotometric procedure (Barham et al, 1996). Xanthine dehydrogenase activity was measured using a spectrophotometric method, in which the xanthine dehydrogenase and xanthine oxidase forms of the enzyme were distinguished by the formation of uric acid from xanthine in the presence and absence of NAD⁺ (Gustafson et al, 1992).
Cytotoxicity studies
Tumour cells were incubated with, or without, inducers for 48 h and then were treated with various concentrations of MMC for 1 h at 37°C. The surviving cell fraction was determined by MTT assay (Kirkpatrick et al, 1990; Johnston et al, 1994) after 4–9 days; this length of time was sufficient to allow at least three cell doublings. Normal human marrow cells were treated with D3T for 48 h, and then incubated with various concentrations of MMC for 1 h. The surviving cell fraction was determined by methylcellulose clonogenic assay (Begleiter et al, 1995). The D0 (concentration of drug required to reduce the surviving cell fraction to 0.37) was calculated from the linear regression line of the surviving cell fraction versus drug concentration curve.
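A sketch of the D0 calculation, assuming the regression is performed on the log of the surviving fraction (so that D0 is the dose reducing survival by a factor of e, i.e. to 0.37); the dose-response points below are made up.

    import numpy as np

    def d0(concentrations, surviving_fractions):
        # Fit ln(SF) = slope * dose + intercept; D0 = -1 / slope.
        slope, _ = np.polyfit(np.asarray(concentrations),
                              np.log(np.asarray(surviving_fractions)), 1)
        return -1.0 / slope

    conc = [0.0, 0.5, 1.0, 2.0, 4.0]       # made-up MMC concentrations (uM)
    sf = [1.0, 0.61, 0.36, 0.14, 0.02]     # made-up surviving fractions
    print(d0(conc, sf))                    # ~1 uM for these illustrative data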
Combination treatment with D3T and MMC
H661 human non-small-cell lung cancer cells were incubated with, or without, 50 µM D3T and then were treated with MMC.
Induction of DT-diaphorase by dietary inducers in T47D human tumour cells
The ability of dietary components and pharmaceuticals to induce DT-diaphorase was examined in T47D human breast cancer cells. The base level of DT-diaphorase in this cell line was 27.8 ± 1.2 nmol min⁻¹ mg protein⁻¹, and eight of 14 inducers showed significant induction of enzyme activity (Table 2). The induced enzyme levels ranged from 40.8 ± 1.2 to 128.5 ± 5.6 nmol min⁻¹ mg protein⁻¹. The best inducers were DMM and DMF, with induced enzyme levels of 121.0 ± 3.9 and 128.5 ± 5.6 nmol min⁻¹ mg protein⁻¹, respectively.
Effect of DMM on NADPH:cytochrome P450 reductase, GST, NADH:cytochrome b5 reductase and xanthine dehydrogenase activity
Incubation with 50 µM DMM at 37°C for 48 h had no significant effect on the levels of NADPH:cytochrome P450 reductase and NADH:cytochrome b5 reductase activity in T47D cells. The activity of xanthine dehydrogenase was too low to be detected in this cell line. There was a small increase in GST activity after incubation with DMM, with GST levels increasing from 20.2 ± 0.6 to 24.0 ± 1.1 nmol min⁻¹ mg protein⁻¹ (P < 0.05).
DISCUSSION
DT-diaphorase is a highly inducible enzyme that plays an important role in activation of bioreductive anti-tumour drugs. It is also a phase II detoxifying enzyme, which prevents carcinogenesis by detoxifying reactive carcinogens. Elevated levels of DT-diaphorase activity have been shown to increase the cytotoxicity of MMC (Begleiter et al, 1989; Ross et al, 1993). We have previously shown that pretreatment with D3T, an inducer of DT-diaphorase, significantly increased the cytotoxic activity of EO9 in mouse (Begleiter et al, 1996) and human tumour cells (Doherty et al, 1998), with little or no effect on normal mouse marrow (Begleiter et al, 1996) or normal human kidney cells (Doherty et al, 1998).
In this study, we extended these investigations to examine combination therapy with MMC and D3T in six human tumour cell lines. D3T increased DT-diaphorase activity in all six cell lines, and pretreatment with the enzyme inducer significantly enhanced the cytotoxicity of MMC in four of the cell lines. Combination treatment with D3T and MMC increased the cytotoxic activity of MMC by 2.3-fold in H661 non-small-cell lung cancer cells, by 2.4-fold in T47D breast cancer cells, by 1.4-fold in HS578T breast cancer cells and by twofold in HCT116 human colon cancer cells. These results demonstrate that this combination treatment is effective in enhancing the cytotoxicity of MMC in different tumour types. D3T did not increase MMC cytotoxic activity in SK-MEL-28 melanoma cells or AGS stomach cancer cells. Both these cell lines have relatively high base levels of DT-diaphorase activity. This suggests that this combination therapy approach may be restricted by the level of DT-diaphorase in the cells. If the base or induced level of DT-diaphorase is above an upper threshold, further induction may not lead to an increase in MMC cytotoxicity. Our results, and a previous study (Beall et al, 1995), suggest that this upper threshold may be close to 300 nmol min -1 mg protein -1 . While this may limit the use of this combination therapy approach in some situations, it should not significantly impair the use of DT-diaphorase inducers to increase the anti-tumour activity of bioreductive agents in the clinic since primary tumours generally have base levels of DT-diaphorase that are <100 nmol min -1 mg protein -1 (Schlager and Powis, 1990; Malkinson et al, 1992; Ross et al, 1994; Smitskamp-Wilms et al, 1995; Marin et al, 1997).
To further improve the clinical potential of this treatment strategy for increasing the activity of bioreductive agents, we investigated dietary components and some pharmaceuticals as non-toxic inducers of DT-diaphorase. Fourteen compounds were tested for their ability to induce DT-diaphorase in T47D cells, a cell line that has been shown to be readily inducible and has a relatively low base level of DT-diaphorase. Eight of the 14 compounds significantly increased DT-diaphorase activity with some of the compounds proving to be better inducers than D3T. DMM and DMF were the best inducers studied and produced fivefold increases in enzyme activity. These compounds are metabolites of fumaric and maleic acid that are commonly found in foods. Sulforaphane, which is extracted from broccoli and has been extensively studied as a potent chemopreventive agent, showed good enzyme induction, and PG, an antioxidant added to foods, produced a 2.5-fold increase in enzyme activity. Although some vitamins have been shown to increase DT-diaphorase activity (Wang and Higuchi, 1995), only 13-cis-retinoic acid was able to induce DT-diaphorase in T47D cells. There has been great interest in non-steroidal anti-inflammatory drugs (NSAIDs) in cancer prevention, but the mechanisms through which these agents work are still not clear. Thus, we tested aspirin and ibuprofen in this study to see if they could increase DT-diaphorase activity in tumour cells. Aspirin produced a small but significant induction of DT-diaphorase, but ibuprofen had no effect. Soybeans have been shown to increase phase II enzymes, including DT-diaphorase, in rats (Appelt and Reicks, 1997); however, genistein, which is extracted from soybeans, did not increase DT-diaphorase activity in our study. Manson et al (1997) have shown that caffeic acid is a moderate inducer of phase II enzymes, but in our study caffeic acid did not increase DT-diaphorase activity.
To investigate if the level of enhancement of MMC activity is dependent on the level of DT-diaphorase induction, we carried out combination therapy studies with PG, D3T and DMM together with MMC in T47D cells. Pretreatment of cells with these enzyme inducers increased MMC activity in the order PG < D3T < DMM, and this paralleled the increase in DT-diaphorase activity. This suggests that there is a relationship between the level of induction in enzyme activity and the enhancement of MMC cytotoxicity, provided the DT-diaphorase activity does not exceed an upper threshold level. Since primary tumours usually have lower levels of DT-diaphorase activity than tumour cells grown in vitro, it may be possible to achieve greater enhancement of anti-tumour activity of bioreductive agents in the clinic by using more potent inducers of DT-diaphorase.
Since many enzymes can activate bioreductive anti-tumour agents, we examined the effect of DMM on enzymes that have been reported to be involved in MMC activation. When T47D cells were treated with 50 µM DMM, there were no changes in NADPH:cytochrome P450 reductase, or NADH:cytochrome b 5 reductase activity. Xanthine dehydrogenase activity was too low to be detected either before or after DMM treatment. GST is a phase II enzyme that can be induced coordinately with DT-diaphorase. This enzyme can also protect cells from toxins by removing them from the cell, and has been shown to play a role in resistance to a number of anti-tumour agents, including MMC (Xu et al, 1994). Although we did not see an increase in GST activity in HL-60 human leukaemia cells following treatment with 100 µM D3T (Doherty et al, 1998), in this study there was an increase in GST activity when T47D cells were treated with DMM. However, the effect was small and there was still a large enhancement of MMC cytotoxic activity in these cells.
The major toxicity associated with the use of MMC is bone marrow toxicity. We have previously shown that D3T did not increase the toxicity of MMC to mouse bone marrow (Begleiter et al, 1996). In this study, pretreatment with D3T produced a 1.5-fold increase in toxicity to human marrow cells; however, this was small compared with the enhancement of MMC cytotoxic activity in T47D, H661 and HCT116 cells. This result suggests that combination therapy with DT-diaphorase inducers and MMC may increase the therapeutic index for MMC for appropriate tumours. In addition, DT-diaphorase does not appear to be the major enzyme involved in activating MMC. Thus, we would expect a greater increase in therapeutic index if this approach were used with bioreductive agents that are selectively activated by DT-diaphorase, like EO9 (Doherty et al, 1998) or RH1 (Winski et al, 1998). Indeed, we did see a greater enhancement of EO9 cytotoxic activity compared with MMC in mouse lymphoma cells pretreated with D3T (Begleiter et al, 1996).
These studies support the hypothesis that inducers of DT-diaphorase could be used to increase the effectiveness of bioreductive agents in the clinic; however, a number of concerns must still be addressed. Activation of bioreductive agents by one-electron reducing enzymes like NADPH:cytochrome P450 reductase is increased in the absence of oxygen because redox cycling of the reduced intermediates is prevented. We have shown that DT-diaphorase does not contribute to the activation of MMC under hypoxic conditions (Begleiter et al, 1992), and there is some evidence that this enzyme may actually decrease the activity of bioreductive agents under these conditions (Plumb et al, 1994). Since bioreductive agents have often been used to target hypoxic cells in solid tumours, induction of DT-diaphorase might result in decreased anti-tumour activity. However, only a small proportion of tumour cells in solid tumours are actually anoxic, with most cells being exposed to at least some levels of oxygen. Marshall and Rauth (1986) demonstrated that oxygen levels of <1% were sufficient to allow redox cycling and reverse the enhanced MMC cytotoxic activity under hypoxic conditions. Thus, increasing the level of DT-diaphorase is likely to increase the overall anti-tumour effectiveness of MMC. In addition, the activity of bioreductive agents that were specifically activated by DT-diaphorase would be unaffected by the level of oxygen in the tumour cells and would be increased by induction of DT-diaphorase.
Our studies have demonstrated that induction of DT-diaphorase can increase the cytotoxic activity of bioreductive agents in many tumours in vitro (Begleiter et al, 1996;Doherty et al, 1998). However, Nishiyama et al (1993) found a negative correlation between DT-diaphorase activity and MMC anti-tumour activity in vivo. In contrast, Malkinson et al (1992) saw greater MMC activity in human tumour xenografts with higher DT-diaphorase activity. Thus, the ability of DT-diaphorase inducers to enhance the anti-tumour activity of bioreductive agents must still be demonstrated in vivo. In addition, our findings suggest that this strategy may not be effective with all tumours, and that for clinical application it will be necessary to measure the level of DT-diaphorase and the inducibility of the enzyme in vitro in tumour biopsy samples prior to the start of therapy.
In summary, this study has demonstrated that inducers of DT-diaphorase can selectively increase the cytotoxicity of MMC in human tumour cells of different tumour types. Enzyme inducers, including dietary components, that produced more induction of DT-diaphorase also produced a greater enhancement of MMC cytotoxic activity. Thus, it may be possible to use non-toxic inducers of DT-diaphorase to enhance the efficacy of bioreductive anti-tumour agents.
|
2014-10-01T00:00:00.000Z
|
1999-06-01T00:00:00.000
|
{
"year": 1999,
"sha1": "a07f10595646d556eae21a2234ae6c75f31309ac",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.nature.com/articles/6690489.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a07f10595646d556eae21a2234ae6c75f31309ac",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
257020087
|
pes2o/s2orc
|
v3-fos-license
|
Propagation of anisotropic gravitational and electromagnetic waves at very high energies
We analyze the dispersion relation for an anisotropic gravity-electromagnetic theory at very high energies, in particular for photons of very high energy. We start by introducing the anisotropic gravity-gauge vector field model. It is invariant under spacelike diffeomorphisms, time parametrization, and U(1) gauge transformations. It includes high-order spacelike derivatives as well as polynomial expressions of the Riemann and field strength tensor fields. It is based on the Hořava-Lifshitz anisotropic proposal. We show its consistency, and the stability of the Minkowski ground state. Finally, we determine the exact zone at which the physical degrees of freedom, i.e. the transverse-traceless tensorial degrees of freedom and the transverse vectorial degrees of freedom, propagate according to a linear wave equation. This is so, in spite of the fact that there exists in the zone a non-trivial Newtonian background of the same order. The wave equation contains spatial derivatives up to the sixth order; in the lowest order it exactly matches the relativistic wave equation. We then analyze the dispersion relation at very high energies in the context of recent experimental data. The qualitative predictions of the proposed model, concerning the propagation of highly energetic photons, are different from the ones obtained from the modified dispersion relation of the LIV models.
violation scale, characterized by the Lorentz violation parameter E LV , could be 3.6 × 10 17 GeV. It has been argued that the violation could be compatible with Superstring theory and M-Theory and also with Loop Quantum Gravity (relativistic theories) due to a spontaneous breakdown of Lorentz symmetry by the ground state of the theory. In this paper we consider a different approach and introduce a model inspired by the anisotropic scaling between space and the time proposed by Hořava [11], using previous ideas of Lifshitz, including the interaction terms involving the derivative of the lapse, in a foliated decomposition of the space and time, introduced in [12]. They allow us to have a consistent model, free of instabilities, describing anisotropic gravity coupled to a gauge vector field. Hořava's proposal describes gravity at very high energies in an anisotropic space and time. At very high energies the relativistic symmetry breaks down, and the potential of the Hamiltonian includes then all interactions with highorder spatial derivatives of the Riemann tensor, up to z = 3 terms (sixth order in spatial derivatives), which implies a dimensionless coupling constant of the action. This geometric structure ensures in principle the renormalizability by power counting of the theory, which in fact has been proven. If on the same ground, one would like to introduce other fundamental forces in nature, for example, the electromagnetic force, taking into account that space and time are now described by a foliation of spacelike manifolds parametrized by the time coordinate, one is naturally led to consider anisotropic electromagnetic interactions as well as gravitational ones. Under this assumption, one can start with an anisotropic Hořava action in 4 + 1 dimensions, for the electromagnetic interaction, and then introduce a dimensional reduction to 3 + 1 dimensions as proposed in [13][14][15] or, as will be proposed here, directly introduce in 3 + 1 dimensions all higher order spatial derivatives of the Riemann tensor and the electromagnetic field strength. In both cases the coupling must be determined to fit the experimental data, taking into account that quantum electrodynamics is a very well-established theory. In this sense, the new model has fewer restrictions between its couplings parameters compared to the former one. In any case, the relevant modifications to the relativistic theory should occur at very high energies, on the order of 10 17 GeV. Geometrically, the model we propose is formulated on a 3 + 1 foliated manifold, it is invariant under spacelike diffeomorphisms that preserve the foliation of the manifold, time reparametrizations, and under U (1) gauge transformations acting on the gauge vector field describing electrodynamics. This model predicts modified dispersion relations (MDR) for both the gravitational and the electromagnetic sectors with free coupling constants whose value should be restricted by conditions of consistency of the theory and existing experimental data as well as the data to be obtained in the following years from multimessenger astronomy. This MDR has different qualitative consequences, with respect to the attenuation of highly energetic photons interacting with background photons, than the ones arising from the LIV approach. Models that predict modified dispersion relations have been proposed in the context of String theories and D-Branes in [16] and on Loop Quantum Gravity in [17]. 
In the context of anisotropic gravity, as mentioned above, a model was obtained by performing a dimensional reduction of a Hořava-Lifshitz anisotropic gravity model in 4 + 1 dimensions. In that case, it was necessary to include up to z = 4 spacelike derivative terms in the potential to have a power counting renormalizable theory in 4+1 dimensions. The resulting 3+1 model has coupling constants for gravity and electromagnetism, not all independent [13,14]. We take in this paper a different approach by leaving free the coupling constants with the possibility that they can be adjusted subsequently as the experimental data emerges. This approach was also proposed in a different context in [18][19][20]. In section 2 we introduce the anisotropic model and obtain its Hamiltonian structure. The constrained system is completely consistent. To obtain a power counting renormalizable model we consider up to z = 3 spacelike terms in the potential. In section 3 we analyze the stability of the Minkowskian background and the existence of a wave zone, it requires a set of restrictions on the coupling constants. In section 4 we obtain the evolution equations for the physical degrees of freedom in the wave zone of the theory, i.e. the transverse-traceless tensorial modes and the transverse vectorial ones. These are wave equations with high spacelike derivative terms. We discuss the dispersion formula in the context of the known experimental data. In section 5 we give the conclusions of the work.
II. ANISOTROPIC GRAVITY-GAUGE FIELD COUPLING
In this section, we introduce the model describing the pure anisotropic gravity-gauge vector field coupling. Once this model and its properties, i.e., its symmetries and constraint structure, are presented, we check the stability of the reduced Hamiltonian. This analysis is relevant since the Hamiltonian must be positive definite in order to avoid exponential instabilities (ghost fields), rendering the theory well defined.
A. The model
The action of the model we propose is where S H-L is the Hořava-Lifshitz action where V g is the most general scalar constructed from the spacelike derivatives of the Riemann tensor of the leaves of the foliation and a i ≡ ∂ i ln N . The potential contains at most six spacelike derivatives in order to have an overall dimensionless coupling constant, which we take to be 1. And S EM is the following electromagnetic action where the potential V EM includes all scalars with spacelike derivatives up to sixth order, constructed from the contraction of the field strength F ij with the Riemann tensor, a i and itself. The first term in the bracket is the Lagrangian density for the electromagnetic interaction − 1 4 F µν F µν , without the term − 1 4 F ij F ij which is contained in the potential V EM , expressed in terms of the ADM metric for the foliated manifold. We will denote π ij and E i the conjugate momenta of g ij and A i respectively. The Hamiltonian density describing the pure anisotropic gravity-gauge field coupling at the KC point is given by At this stage some comments are pertinent. In obtaining (3) we have fixed λ to its critical point 1/3 in Hořava's Hamiltonian. The consequence is that only the transverse-traceless tensorial modes and the transverse gauge potential propagate, they are the physical degrees of freedom of the theory [13,14,21]. The potential of the theory V(g ij , N, A i ) contains then all scalar fields with high order derivatives, up to sixth order, of both sectors, the gravitational field, the vector a i and the gauge field. The action (1), is invariant under diffeomorphisms on the spacelike leaves of the foliation and under reparametrizations of the time variable. The infinitesimal generators of these symmetries are Besides, the theory is invariant under U (1) local gauge transformations. In fact, the infinitesimal transformations of the canonical variables {g ij , A i , N } given by Note that the potential scale as show that the 3-dimensional metric transforms as a tensor and scalar field under spacelike diffeomorphisms and time reparametrizations, respectively. Whereas, the lapse function N behaves as a scalar under spacelike diffeomorphisms and scalar density under time reparametrizations. Finally, the gauge vector-field A i transforms as a vector field under spacelike diffeomorphisms, a scalar field under time reparametrizations, and a gauge field under U (1) gauge group. Regarding this point, the last term of (7) represents the U (1) gauge transformation of the vector A i , whose infinitesimal generator is ζ. Under gauge transformation both g ij and N are invariant whilst A i transforms as a gauge field On the other hand, under the mentioned symmetry laws A 0 , N i , σ and µ transform as Lagrange multipliers.
Next, we shall discuss the form of the full potential V(g ij , N, A i ) of the theory. In the original Hořava's proposal, for pure anisotropic gravity, the complete potential of the theory up to z = 3 derivatives contains around 100 terms. However, this long list, for pure anisotropic gravity, is greatly reduced if one is interested in considering only those objects relevant to the propagator of the physical degrees of freedom which, indeed, are the only terms that contribute to the wave zone. Consequently, only those terms contributing to the wave zone will be taken into account. These terms are quadratic in the fields R ij , a i and F ij . So, the general form of the potential considering all possibilities up to z = 3 is given by the sum of At this stage, some comments are in order. First, all terms presented in (10)- (12) are invariant under symmetries of the theory and so is the full potential of the theory. Secondly, in principle, all coupling constants for the gravitational and gauge sector could be different (β s and κ s). However, taking into account both, theoretical arguments and recent experimental data, in the IR limit (z = 1 terms), the propagation speed of the gravitational and electromagnetic waves are exactly the same or they differ at most in one part of 10 −15 . So, we keep at z = 1 the same coupling constant β = 1 for both sectors, since in this anisotropic non-relativistic theory, β ends up being the propagation speed at low energies of both the gravitational and gauge vector field physical degrees of freedom. In this way, a comparison with the well-established Einstein-Maxwell theory can be easily performed. It turns out that, at low energies where only the z = 1 potential terms are relevant, for α = 0 and β = 1, the field equations of both models, the one obtain from a dimensional reduction and the model we propose here, exactly agree with the Einstein-Maxwell theory in a particular gauge [14]. Finally, we discuss the constraint structure of the theory. From the Hamiltonian (3), the theory posses four primary constraintsH, H j , P N and π. The former ones,H and H j , correspond to first-class constraints, generators of the U (1) gauge symmetry, and the spacelike diffeomorphisms, respectively. Specifically, these constraints are given bỹ It should be pointed out, that constraint (13) is equivalent to the Gauss law in the Maxwell theory. Concerning (14), the first term in the right-hand member generates spacelike diffeomorphisms on the pair {g ij , π ij }, while the second object generates the same symmetry on the gauge field A i . This term is important, in order to guarantee the correct transformation law (6) of the gauge vector field A i (and its conjugate momentum E i ). Additionally, this constraint (the so-called momentum constraint) can be supplemented by another extra piece N ∂ j P N , generator of the spacelike diffeomorphisms on the lapse function N and its conjugate momentum P N [22,23]. This can be done since P N vanishes on the phase space constrained surface. The remaining constraints, P N and π given by, are second-class constraints arising from the non-existence of time derivatives of N and by the fact that λ has been fixed to its critical point (the KC point λ = 1/3), respectively. The conservation in time of these constraints leads to another second-class constraintsṖ In the above expressions U and W correspond to and, where ∇ ij...k stands for ∇ i ∇ j . . . ∇ k .
B. Wave zone and stability of the model

In this section, we analyze the stability of the Minkowski metric as the background on which the gravity and electromagnetic waves propagate. We notice that the Minkowski metric is a solution of the field equations of the model (1). In fact, it is a solution with enhanced symmetries compared to the anisotropic formulation. In order to study the stability of the background we consider the quadratic Hamiltonian arising from (1). The stability requirement reduces to showing that the elliptic operators $\beta\nabla + \beta_1\nabla^2 - \beta_3\nabla^3$ and $\beta\nabla + 2\kappa_1\nabla^2 + 2\kappa_2\nabla^3$ are strictly positive definite. The stability conditions for $2\kappa_1$ and $2\kappa_2$ are given below in (31) and (32), and similar ones hold for $\beta_1$ and $\beta_3$, respectively. Under such assumptions, we can now analyze the existence of a wave zone. It is known that in both Einstein's General Relativity and in the Hořava-Lifshitz gravity theory, there exists a well-defined wave zone for asymptotically flat solutions [24][25][26]. In the wave zone, the dominant mode O(1/r) of the $g^{TT}_{ij}$ component, the transverse-traceless tensorial modes, satisfies a linear equation with constant coefficients. Although there exists a non-trivial Newtonian background of the same order as the dominant mode, it does not interfere with the propagation of the TT modes. In the low energy case, the modes satisfy a linear relativistic ($\beta = 1$) wave equation and, at high energies, these modes satisfy a generalized anisotropic wave equation where high-order derivative operators arise in a natural way in the Hořava-Lifshitz scenario. These equations admit plane-wave solutions whose dispersion relation, for the wave equation (21), obtained when the high-order derivative operators are considered, is [26] $\omega^2(k) = \beta k^2 - \beta_1 k^4 - \beta_3 k^6$.
In [13,14], a 3+1 dimensional anisotropic gravity model coupled to a gauge field was proposed. This model is obtained from 4+1 dimensional Hořava's gravity via a dimensional reduction. It was shown in [27] that a well-defined wave zone at low energies exists for this model. Under the same hypothesis, it is possible to show for the particular model considered in this work that a well-defined wave zone exists when all high-order derivative operators for both the gravitational and electromagnetic sectors are taken into account. The wave zone again has a dominant order O(1/r) and both the TT modes of the gravitational field and the T modes of the electromagnetic interaction satisfy a linear wave equation. We will impose the requirements given in [27], section (IIIA), on the wave zone to determine the behavior at high energy of the physical degrees of freedom. Given these requirements, we solve the field equations, show that there exists a zone where these requirements are satisfied, and then determine the evolution equations for the physical degrees of freedom: the wave equations. We notice from the interaction terms of the potential (10)-(12), the only ones that contribute to the wave zone, that there is no interaction term that couples the electromagnetic $F_{ij}$ to the pure gravity spacelike vector $a_i$ and the Ricci tensor $R_{ij}$ of the spacelike leaves of the foliation. Although there is a gravity-electromagnetic coupling through the metric, this property implies that at the leading order both wave equations are decoupled. Therefore, in the wave zone, the physical degrees of freedom of the gravitational field do not interact with the physical degrees of freedom of the electromagnetic field; furthermore, none of them interact with the Newtonian background, whose order again is O(1/r). This is analogous to what happens in GR coupled to Maxwell's theory [24,28]. Besides, the energy and momentum of the system are given by the contribution of both sectors, i.e., the gravitational and electromagnetic interactions. Taking into account the previous comment on the potential interaction terms, it follows that the estimates in orders of 1/r follow by similar arguments as in [25]. From the constraints (14) and $\pi = 0$, together with the gauge condition $g_{ij,j} = 0$ and the constraints (17) and (16), we notice that the corresponding quantities are of the order of O(1/r), but only in the non-oscillatory part. This is the main reason why they do not contribute to the equations describing the propagation of the physical degrees of freedom. Finally, from the equations of motion at order O(1/r) for the gravity degrees of freedom we obtain the behavior of $\pi^{ij\,TT}$, which is oscillatory of order O(1/r), and the wave equation for the TT part of the metric. For the electromagnetic sector, the constraint (13) implies that $A_i$ is a transverse mode. In the wave zone the equations of motion at dominant order O(1/r) for the gauge field likewise reduce to a linear wave equation. Both the gravitational and electromagnetic degrees of freedom have a spherical wave solution, with dispersion relation (for the gauge potential $A^T_i$, and analogous for $g^{TT}_{ij}$) in which $\hat\kappa_1 = 2\kappa_1$ and $\hat\kappa_2 = 2\kappa_2$.
This solution represents dispersive waves whose phase velocity, $v_f \equiv \omega/k$, is given by the corresponding relation. To guarantee the positivity of the squared phase velocity, and thus the stability of the solutions, we require condition (31), together with either a) $\hat\kappa_1 \le 0$, or b) $\hat\kappa_1 > 0$ and condition (32). It should be noted that the phase velocity in vacuum is a function of $k$, in contrast with its relativistic counterpart, which is constant. The dispersion relation may be rewritten in terms of the energy and momentum, from which the group velocity $v \equiv \partial\omega/\partial k$ follows. From (31) the fifth-power term in the momentum is positive; hence, if $\hat\kappa_1 < 0$ the group velocity is $v > \sqrt\beta$, while if $\hat\kappa_1 > 0$ the group velocity is $v < \sqrt\beta$ provided the third-power term in the momentum is much bigger than the fifth-power one. In what follows we take $\sqrt\beta = c = 1$.
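To make the k-dependence of the phase and group velocities concrete, here is a small numerical sketch. Only the generic form $\omega^2(k) = \beta k^2 + c_4 k^4 + c_6 k^6$ is assumed; the coefficient names c4 and c6, their signs and their numerical values are illustrative placeholders, since the explicit dispersion relation and coupling values are not reproduced above.

import numpy as np

# Assumed illustrative coefficients (units with beta = c = 1);
# signs and magnitudes are placeholders, not the model's fitted parameters.
beta, c4, c6 = 1.0, -1e-4, 1e-6

k = np.linspace(0.1, 10.0, 200)
omega = np.sqrt(beta * k**2 + c4 * k**4 + c6 * k**6)

v_phase = omega / k                     # v_f = omega / k
v_group = np.gradient(omega, k)         # v = d(omega)/dk, numerical derivative

# At small k both velocities approach sqrt(beta), the relativistic value;
# at large k they deviate, which is the anisotropic signature discussed above.
print(f"k = {k[0]:.1f}: v_phase = {v_phase[0]:.4f}, v_group = {v_group[0]:.4f}")
print(f"k = {k[-1]:.1f}: v_phase = {v_phase[-1]:.4f}, v_group = {v_group[-1]:.4f}")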
C. The dispersion relation
Recently LHAASO reported the detection of more than 5000 very high energy photons from gamma-ray burst GRB 221009A with energies above 500 GeV up to 18 TeV [29]. Very high energy photons $\gamma$ can interact with background photons $\gamma_b$, such as those from the cosmic microwave background (CMB) and the extragalactic background light (EBL), and produce an electron-positron pair, $\gamma\gamma_b \to e^-e^+$. According to relativistic physics, there is a threshold for this interaction, $E_{th} = m_e^2/E_b$, where $E$ is the energy of $\gamma$, $E_b$ the energy of $\gamma_b$, and $m_e$ is the electron mass. For photons with $E > E_{th}$ the gamma-ray is strongly attenuated. For CMB photons $E_{th} \cong 411$ TeV. Since the detected VHE photons have a maximum energy of 18 TeV, the CMB background is transparent to the $\gamma$ photons. However, this is not the case for the EBL, for which the threshold energy is 261 GeV to 261 TeV. So, according to relativistic physics, most of the photons should have been attenuated. If the high-energy photons travel extragalactic distances, it has been argued that they should not satisfy a relativistic dispersion relation. A MDR with subluminal photon velocities has been proposed as an explanation for the observation of these high energy photons [30]. The scale energy of the breakdown of the Lorentz symmetry according to a MDR has been estimated from Fermi laboratory data using the difference in arrival time of photons of different energies emitted from the same source. The accepted characteristic parameter of the MDR, for $n = 1$, is $\xi_1 = (E_{LV,1})^{-1}$ with $E_{LV,1} = 3.6 \times 10^{17}$ GeV [1], and for $n = 2$, $\xi_2 = (E_{LV,2})^{-2}$ with $E_{LV,2} = 6.8 \times 10^{9}$ GeV [2]. Under the same assumption used in [31], that is, energy-momentum conservation in the $\gamma\gamma_b \to e^-e^+$ interaction, one obtains the general relation (36). We will now use the dispersion relation (29) for photons arising from the model (1). We redefine the coupling parameters, introducing a new parameter $a$, in order to have a more direct comparison with the MDR in (36). We notice that this MDR is a closed relation; it is not an expansion into higher powers of $(E/E_{LV,n})^n$ which are suppressed at energies well below $E_{LV,n}$. We are going to consider $\xi > 0$; hence the positivity of the quadratic Hamiltonian, or equivalently the stability requirements, imposes $\xi^2 < \xi^2/a^2$. Consequently, we must have $a^2 < 1$; it is the only requirement on $a^2$. We denote the corresponding momentum-dependent combination by $\tilde\xi(k)$ and assume $k^2|\tilde\xi(k)| \ll 1$, which can be checked to be valid in the following discussion. After replacing (38) in (37), we end up with a relation between $\tilde\xi(k)$ and $f(k) \equiv \frac{4}{k^4}(\varepsilon_b k - m_e^2)$, the same function which appears in the analysis of the $\gamma\gamma_b \to e^-e^+$ interaction using the MDR (36) with $n = 2$ [31]. In that case the function $\tilde\xi(k)$ reduces to a constant $\xi$. For a given value of $\xi$ the intersection with $f(k)$ defines two values of $k$. For any $k$ between these two values, the photon $\gamma$ is strongly attenuated. For $k$ outside this set the background becomes transparent. In FIG. 1 we show both functions $f(k)$ and $\tilde\xi(k)$, defined in (39). For the CMB background we also have $\xi > \xi_c$. The difference is that in this case, although we have approximated the calculations by a MDR (36) with $n = 2$, there exists a threshold beyond which the highly energetic photons will be absorbed by the background. This bound is, however, much bigger than the energies of the detected $\gamma$ photons. The MDR (38) is then compatible with the recently detected photons with very high energy, with respect to the CMB background. In the case of the EBL background, the estimates are not as precise as in the CMB.
According to the reported data, for $n = 2$, $\xi < \xi_c$, and in this case it has to be determined whether the energies of the detected photons are contained in the attenuation range or not. More information is needed to determine the compatibility of the MDR with the EBL background. We illustrate this situation in figure 2. FIG. 2 shows the function $f(k)$ for the MDR (38) under the assumption that the term with the highest power of momentum, $k^6$, is negligible with respect to the $k^4$ term. For the EBL background, we consider two cases, for the values of $\varepsilon_b$ indicated in the figure. For the CMB the relativistic threshold is 411 TeV. The critical value of $k$ at which $f(k)$ has its maximum value is $k_c = 548$ TeV and the maximum is $\xi_c = 3.85 \times 10^{-48}$ 1/(eV)$^2$; the value of $\xi$ is $1/(E_{LV,2})^2$ with $E_{LV,2} = 6.8 \times 10^{18}$ eV. Then $\xi > \xi_c$, and there is no attenuation from the background for any value of $k$. For the EBL background, the relativistic threshold varies between 261 GeV and 261 TeV, the critical value $k_c$ becomes 348 GeV $< k_c <$ 348 TeV, and the maximum of $f(k)$, $\xi_c$, varies from $2.3 \times 10^{-32}$ to $2.3 \times 10^{-43}$ 1/(eV)$^2$. The estimated value of $\xi$ is $2.1 \times 10^{-38}$ 1/(eV)$^2$; it belongs to region 2 as defined in [31]. Hence $\xi < \xi_c$, and there may be attenuation from the background depending on the values of $k$.
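A small numerical check of the relativistic threshold and of the critical momentum quoted above. The relation $k_c = (4/3)E_{th}$ used below follows from maximising a threshold function of the form $f(k) \propto (\varepsilon_b k - m_e^2)/k^4$ (our assumed reconstruction of the elided relation, for n = 2); this location of the maximum does not depend on the overall normalisation. The background photon energies are representative values chosen to reproduce the quoted thresholds, not fitted ones.

# Relativistic pair-production threshold E_th = m_e^2 / E_b (head-on collision)
# and the critical momentum k_c = (4/3) E_th at which f(k) is maximal (n = 2).
m_e = 0.511e6            # electron mass in eV

backgrounds = {
    "CMB (E_b ~ 6.4e-4 eV)":       6.35e-4,
    "EBL upper (E_b ~ 1 eV)":      1.0,
    "EBL lower (E_b ~ 1e-3 eV)":   1.0e-3,
}

for name, E_b in backgrounds.items():
    E_th = m_e**2 / E_b            # relativistic threshold (eV)
    k_c = 4.0 / 3.0 * E_th         # position of the maximum of f(k), n = 2
    print(f"{name}: E_th ~ {E_th/1e12:.3g} TeV, k_c ~ {k_c/1e12:.3g} TeV")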
III. CONCLUSIONS
We considered an anisotropic model describing the interaction of gravity and electromagnetic forces in the context of Hořava-Lifshitz framework. We showed the consistency of the formulation, the stability of the Minkowski space-time solution, and the existence of a wave zone where both interactions propagate satisfying independent wave equations in the presence of a nontrivial Newtonian background. Finally, we analyzed the propagation of highly energetic photons, which in this anisotropic model satisfy a modified dispersion relation compared to the relativistic one. We used the data from Fermi-LAT, Fermi-GBM and LHAASO on the recently observed highly energetic gamma-ray bursts.
|
2023-02-20T02:15:48.765Z
|
2023-01-21T00:00:00.000
|
{
"year": 2023,
"sha1": "22d6336b2e2daf8a2cd3f3a753ada7468bb16273",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "22d6336b2e2daf8a2cd3f3a753ada7468bb16273",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
39666961
|
pes2o/s2orc
|
v3-fos-license
|
Antifibrinolytic Agents in Traumatic Haemorrhage
Among trauma patients who survive to reach hospital, exsanguination is a common cause of death. Could antifibrinolytics reduce the death rate? Only a large randomized controlled trial can answer the question.
Introduction
This article is an invitation to doctors around the world to participate in the CRASH-2 trial (Clinical Randomisation of an Antifibrinolytic in Significant Haemorrhage), a large, multi-centre, randomised controlled trial of a simple and widely practicable treatment for traumatic haemorrhage. The rationale for the trial and contact details for those who would like to take part are given below.
Evidence from randomised controlled trials is essential for improving health care. In the case of widely practicable treatments for common health problems, even modest treatment effects can result in substantial health gains. However, to detect such modest effects requires large multi-centre randomised trials involving hundreds of collaborating health professionals internationally. Many health professionals would be pleased to collaborate in such trials if they knew that they were underway, but at present there is no easy way to bring these trials to their attention.
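As a rough illustration of why modest effects force trials of this scale, here is a standard two-proportion sample-size sketch; the baseline mortality, effect size, power and alpha below are illustrative assumptions chosen for this sketch, not the CRASH-2 design parameters.

from math import ceil
from statistics import NormalDist

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.90):
    """Approximate per-arm sample size for comparing two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return ceil((z_a + z_b) ** 2 * var / (p_control - p_treated) ** 2)

# Illustrative only: a 20% baseline mortality and a hoped-for 10% relative
# reduction (20% -> 18%) already call for roughly 8,000 patients per arm,
# i.e. about 16,000 patients in total.
print(n_per_arm(0.20, 0.18))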
Three years ago, in the context of the CRASH-1 trial (Corticosteroid Randomisation After Significant Head Injury), the trial investigators sent a message to the electronic mailing list of the World Association of Medical Editors, asking them to consider publishing an editorial about the CRASH-1 trial that invited doctors around the world to participate. In response to this request, many medical journals around the world published the CRASH-1 trial editorial in various different languages, and as a result, many more doctors joined the CRASH-1 trial. The trial was completed in May 2004 and involved around 400 hospitals in almost 50 countries, and because of its size provided a reliable answer to an important question (see www.crash.lshtm.ac.uk).
This current article is being published as the result of a similar such request to medical editors in the context of the CRASH-2 trial.
A Possible Role for Antifi brinolytics
For people at ages five to 45 years, trauma is second only to HIV/AIDS as a cause of death. Each year, worldwide, over three million people die as a result of trauma, many after reaching hospital [1]. Among trauma patients who do survive to reach hospital, exsanguination is a common cause of death, accounting for nearly half of in-hospital trauma deaths [2]. Central nervous system injury and multi-organ failure account for most of the remainder, both of which can be exacerbated by severe bleeding [3].
The haemostatic system helps to maintain the integrity of the circulatory system after severe vascular injury, whether traumatic or surgical in origin [4]. Major surgery and trauma trigger similar haemostatic responses, and any consequent massive blood loss presents an extreme challenge to the coagulation system. Part of the response to surgery and trauma, in any patient, is stimulation of clot breakdown (fibrinolysis) which may become pathological (hyper-fibrinolysis) in some [4]. Antifibrinolytic agents have been shown to reduce blood loss in patients with both normal and exaggerated fibrinolytic responses to surgery, and do so without apparently increasing the risk of post-operative complications; most notably there is no increased risk of venous thromboembolism [5].
Systemic antifibrinolytic agents are widely used in major surgery to prevent fibrinolysis and thus reduce surgical blood loss. A recent systematic review [6] of randomised controlled trials of antifibrinolytic agents (mainly aprotinin or tranexamic acid) in elective surgical patients identified 89 trials including 8,580 randomised patients (74 trials in cardiac, eight in orthopaedic, four in liver, and three in vascular surgery). The results showed that these treatments reduced the numbers needing transfusion by one third, reduced the volume needed per transfusion by one unit, and halved the need for further surgery to control bleeding. These differences were all highly statistically significant. There was also a statistically non-significant
reduction in the risk of death (relative risk = 0.85; 95% confidence interval, 0.63-1.14) in the antifibrinolytic-treated group.
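For readers who want to reproduce this kind of summary from a single 2x2 table, here is a minimal relative-risk sketch with an approximate log-normal 95% confidence interval; the event counts are invented for illustration and are not the numbers behind the review's pooled estimate.

from math import log, sqrt, exp

def relative_risk_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Relative risk and approximate 95% CI from a single 2x2 table."""
    rr = (events_t / n_t) / (events_c / n_c)
    se_log_rr = sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo, hi = exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr)
    return rr, lo, hi

# Invented counts for illustration only.
print(relative_risk_ci(events_t=60, n_t=1000, events_c=70, n_c=1000))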
Why a Large Trial Is Needed
Because the haemostatic abnormalities that occur after injury are similar to those after surgery, it is possible that antifibrinolytic agents might also reduce blood loss, the need for transfusion and mortality following trauma. However, to date there has been only one small randomised controlled trial (70 randomised patients, drug versus placebo: zero versus three deaths) of the effect of antifibrinolytic agents in major trauma [7]. As a result, there is insufficient evidence to either support or refute a clinically important treatment effect. Systemic antifibrinolytic agents have been used in the management of eye injuries where there is some evidence that they reduce the rate of secondary haemorrhage [8].
A simple and widely practicable treatment that reduces blood loss following trauma might prevent thousands of premature trauma deaths each year, and secondly, could reduce exposure to the risks of blood transfusion. Blood is a scarce and expensive resource, and major concerns remain about the risk of transfusion-transmitted infection. Trauma is common in parts of the world where the safety of blood transfusion is not assured. A recent study in Uganda estimated the population-attributable fraction of HIV acquisition as a result of blood transfusion to be around 2%, although some estimates are much higher [9,10]. Only 43% of the 191 WHO member states test blood for HIV, Hepatitis C, and Hepatitis B viruses. Every year, unsafe transfusion and injection practices are estimated to account for 8-16 million Hepatitis B infections, 2.3-4.7 million Hepatitis C infections, and 80,000-160,000 HIV infections [11]. A large randomised trial is therefore needed of the use of a simple, inexpensive, widely practicable antifibrinolytic treatment such as tranexamic acid (aprotinin is considerably more expensive and is a bovine product with consequent risk of allergic reaction and hypothetically transmission of disease), in a wide range of trauma patients, who when they reach hospital are thought to be at risk of major haemorrhage that could significantly affect their chances of survival.
A Call to Health Professionals
The CRASH-2 trial will be a large, international, placebo-controlled trial of the effects of the early administration of the antifibrinolytic agent tranexamic acid on death, vascular events and transfusion requirements (http://www.crash2.lshtm.ac.uk). The trial aims to recruit some 20,000 patients with trauma and will be one of the largest trauma trials ever conducted. However, it will only be possible to conduct such a trial if hundreds of health care professionals worldwide work together to recruit patients to the trial in order to make it a success. If you are interested in recruiting patients, please contact Ian Roberts at the CRASH-2 trial coordinating centre (Box 1).
|
2014-10-01T00:00:00.000Z
|
2005-03-01T00:00:00.000
|
{
"year": 2005,
"sha1": "8a854ba05c11387bc3dab2846a821e0ee7423ee0",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.0020064&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a854ba05c11387bc3dab2846a821e0ee7423ee0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
259239613
|
pes2o/s2orc
|
v3-fos-license
|
Identification of Negative Ion at m/z 20 Produced by Atmospheric Pressure Corona Discharge Ionization under Ambient Air
The negative ion at m/z 20 observed in atmospheric pressure corona discharge ionization mass spectra has been identified by supplying the vapors of deuterium oxide (D2O) and H218O. From the mass shifts of the ion at m/z 20 observed with D2O and H218O, it was suggested that the chemical composition of the ion at m/z 20 is H4O. A further mass shift from m/z 20 to 22 was observed by supplying the vapor of perfluorokerosene, suggesting the chemical composition H3F. The chemical compositions of the negative ions H4O− and H3F− were consistent with the dipole-bound complex states between hydrogen H2 and polar molecules such as H2O and hydrogen fluoride (HF) having dipole moments beyond a critical dipole moment of 1.625 D, theoretically proposed by Skurski and Simons. The ionic chemical compositions and structures of H4O− and H3F− obtained with density functional theory calculations implied that the dipole-bound complexes H2O−…H2 and HF−…H2 can both be formed by exothermic reactions in which an H2 molecule complexes with the negative ions H2O− and HF−, respectively.
INTRODUCTION
We and Nagato et al. have previously reported that atmospheric pressure corona discharge ionization (APCDI) of ambient air resulted in various kinds of negative ions Y − (Y=CO x , HCO x , NO x , HNO x , O x , HO x ) and water clusters Y − (H 2 O) n . [1][2][3][4] Although almost negative ions generated by APCDI have been identified, a small mass ion at m/z 20 remains unknown to date. 3) The negative ion at m/z 20 can be observed at high voltage conditions such as −2.7 and −3.5 kV applied to the corona needle, while at low voltage conditions such as −1.9 kV, the hydroxide HO − and its water clusters HO − (H 2 O) n can be observed by accompanying a magic cluster of HO − (H 2 O) 3 at m/z 71. 2,3) The hydroxide HO − can be formed by an ion molecule reaction between O − and H 2 O 1,5) or by the attachment of electron to hydroxyl radicals ·OH due to its positive electron affinity (EA, 1.83 eV), 6) while it is believed that the hydroxyl radicals ·OH are generated via dissociation of water molecules on the tip of needle. The dissociation of water molecules into hydroxyl and hydrogen radicals (·OH + H·) may occur on the tip with high electric field strength such as 10 8 -10 9 V/m resulting in over 100 eV kinetic energy of electrons. 3,7) Regarding the dissociation of water molecules on the steel surface, Takahashi et al. showed that water molecules attached to the steel surface heated easily dissociate into ·OH and H· radicals. 8) From this, it is expected that some kind of negative ions originated from hydrogen radical H· and/or hydride H − can be observed, because the hydrogen radical has a positive value of EA 0.75 eV. 6) Here we identify the negative ion at m/z 20 as a dipole-bound complex ion H 4 O − between hydrogen, water, and electron, proposed by Skurski and Simons. 9) Another negative ion at m/z 22 corresponding to H 3 F − produced by supplying hydrogen fluoride (HF) gas is also identified as the dipole-bound complex. The stability and structures of H 4 O − and H 3 F − are discussed from the point of quantum chemical calculations.
APCDI mass spectrometry
All the mass spectra were obtained with a reversed geometry double-focusing mass spectrometer JMS-BU30 (JEOL, Tokyo, Japan) attached to a home-built ion source of APCDI. The schematic illustration and main experimental conditions have been reported elsewhere. 2,3) The discharge gap d between the electrodes and the needle angle α with respect to the orifice axis were adjusted to 3 mm and 0 rad, respectively. An angle of 0 rad is defined as the needle being located on the orifice axis. The needle was located perpendicular to the orifice plate as a plane electrode and could be shifted parallel to the orifice plate by 0 or 1 mm. The DC voltage of −2.0 kV was applied to the needle relative to the orifice plate. It is of importance to recognize that the conditions of the angle at 0 rad and the center location at 0 mm of the needle give high electric field strength even at the DC voltage of −2.0 kV. 2,3,7) The orifice was heated at 40°C to generate hydrated clusters Y − (H 2 O) n . The room temperature of 298 K and relative humidity of 30-68% were controlled by a standard commercial air conditioner. For evaluating the correlations between the ion at m/z 20 and the negative ions at m/z 16 (O − ) and m/z 33 (HO 2 − ), the DC voltage of −2.0 to −3.4 kV and the needle location of 1 mm were employed under room temperature and 54% humidity. HF gas was generated by using a home-built reaction system made up of ultraviolet light, polytetrafluoroethylene (PTFE), and hydrogen gas. 10) Using the system, the HF gas was generated by hydrogen radicals abstracting fluorine from the surface of PTFE, owing to the difference in bond dissociation energies between HF and carbon-fluorine (C-F) bonds.
Calculations
All the calculations reported in this paper have been performed using the Gaussian 16 suite of programs, 11) and the initial molecular and ionic structures of the non-covalent complex ions (H 2 …H 2 O) − and (H 2 …HF) − were generated by means of visual inspection using the GaussView program 6.0. 11) The geometry optimization and vibrational frequency analysis of all mentioned species were performed at the M06-2X hybrid functional 12) level of theory with the 6-31+G(d,p) basis set.
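As a concrete illustration of the kind of job these calculations correspond to, the snippet below writes a Gaussian 16 input for an optimization plus frequency run on the (H2O…H2)− anion at the level of theory quoted above (keyword spelled M062X and basis 6-31+G(d,p) in Gaussian's route syntax). The starting geometry, the charge of −1 and the doublet multiplicity are our assumptions for a radical anion and serve only as a rough initial guess, not the optimized structure reported in the paper.

# Sketch: write a Gaussian 16 input for the (H2O...H2)- radical anion.
# Coordinates are a rough, hand-made starting guess in Angstrom.
gjf = """%chk=h2o_h2_anion.chk
#p M062X/6-31+G(d,p) opt freq

(H2O...H2)- dipole-bound anion, rough starting geometry

-1 2
O    0.000000   0.000000   0.000000
H    0.757000   0.000000   0.587000
H   -0.757000   0.000000   0.587000
H    0.000000   0.000000  -3.000000
H    0.000000   0.000000  -3.740000

"""

with open("h2o_h2_anion.gjf", "w") as f:
    f.write(gjf)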
Conditions for observing the definite ion peak at m/z 20
Figure 1 shows negative ion APCDI mass spectra of ambient air obtained with three different humidity conditions, at low electric field strength. The spectra obtained at 50 and 68% humidity showed ion peaks at m/z 20 and/or 38, as well as the peaks corresponding to hydroxide HO − at m/z 17 and its water clusters OH − (H 2 O) n (n=1-4) at m/z 35, 53, 71, and 89, while 30% humidity did not result in the ion at m/z 20. Figure 2 shows the spectra obtained at high electric field strength, under three different humidity conditions. As already reported, 3) (Fig. 2). Interestingly, the spectra with high electric field strength showed the definite ion at m/z 20 together with its water clusters at m/z 38, 56, and 74 (Fig. 2C). The results obtained above indicate that the conditions of high humidity and high electric field strength are favorable for the formation of the negative ion at m/z 20. Especially, the influence of the high humidity in Figs. 1C and 2C suggests that the negative ion at m/z 20 might be composed of a water molecule H 2 O, which would mean that the ion is made up of H 2 O and H 2 .
As was reported in the previous paper, 3) the ion at m/z 20 could be observed under relatively high electric field conditions. This may be because the high electric field results in efficient ionization of O 2 , abundant dissociation of O 2 − and H 2 O, and the sequential progress of the ion-molecule reactions. [2][3][4] In particular, the negative ions O − , HO − , and HO 2 − , and radical species such as HO 2 · and HO·, are generated by electrons of higher kinetic energy. 4) At the same time, the hydrogen radical H· and/or hydride H − may be generated from water molecules by the high kinetic energy electrons and/or by the dissociation of water molecules on the tip of the needle, although H· and H − could not be detected by the mass spectrometer used. As a result, it is considered that the ion at m/z 20 is generated when water molecules interact with H· and/or H − . Here we show the positive correlation of the formation of the ion at m/z 20 with the formation of the ions corresponding to O − at m/z 16 and HO 2 − at m/z 33 (Fig. 3). The correlations shown in Fig. 3 were derived from the numerical data (Table 1).
Influence of D 2 O and H 2 18 O on the mass shift of the ion at m/z 20
To examine the favorable conditions for appearance of the ion at m/z 20, the influence of the vapor of water on the spectral patterns was studied by using D 2 O and H 2 18 O. The application of the high humidity condition at 68% resulted in the definite peaks at m/z 20, 38, 56, and 74 (Fig. 2C), while lower humidity at 30% did not result in such ion peaks ( Fig. 2A). This suggests that the ion at m/z 20 and its water clusters 20 − (H 2 O) n (n = 1-3) are expedited by supplying the vapor of water. Therefore, it is expected that the mass shift of the ion at m/z 20 would be observed via the H/D or 16 14) 0.75 eV, 6) and 1.83 eV, 6) respectively, it is known that the negative ion H 2 − has short lifetimes 8-11 μs and rapidly dissociates into H − and H·. 15) The ion at m/z 20 observed in Figs. 1 and 2 9,19) although it is unclear whether the electron is localized on such unstable ions or on the polar molecules. Therefore, here we suppose that the anion at m/z 20 is produced as a dipole-bound complex between hydrogen H 2 , water H 2 O, and electron e − .
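The isotope-labelling argument above reduces to simple mass arithmetic. The sketch below lists the nominal m/z expected for the candidate composition H4O− and its D- and 18O-substituted analogues, plus H3F− (monoisotopic atomic masses, electron mass neglected), under the assumption that full H/D or 16O/18O exchange takes place; partial exchange would give intermediate masses.

# Monoisotopic atomic masses (u); electron mass neglected for m/z.
mass = {"H": 1.00783, "D": 2.01410, "O16": 15.99491, "O18": 17.99916, "F": 18.99840}

candidates = {
    "H4O-   (H2O...H2)":      4 * mass["H"] + mass["O16"],
    "H4 18O- (H2 18O...H2)":  4 * mass["H"] + mass["O18"],
    "D4O-   (D2O...D2)":      4 * mass["D"] + mass["O16"],
    "H3F-   (HF...H2)":       3 * mass["H"] + mass["F"],
}

for label, m in candidates.items():
    print(f"{label}: m/z ~ {round(m)} ({m:.3f})")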
The formation of dipole-bound complex ion at m/z 22 by supplying HF
To elucidate the proposed ionic composition of the ion H 4 O − at m/z 20, here we performed the experiments with adding another polar molecule HF in expectation of the formation of H 3 F − ion at m/z 22. The negative ion APCDI mass spectra were obtained with supplying HF gas under several conditions. Here we used an HF generator 10) and used another method for generating HF molecules, i.e., by supplying the vapor of a calibrant reagent PFK, because it is expected that hydrogen radicals generated by dissociation of water molecules on the tip of corona needle extract fluorine from PFK molecules. Figure 6 shows negative-ion APCDI mass spectra of ambient air obtained with supplying HF gas or the vapor of PFK molecules. It is noteworthy, as expected, that the spectra showed the ions at m/z 22 and its water clusters that may be assigned as H 3 ion at m/z 20, as shown in Fig. 6A and 6B, indicating that direct supply of the vapor of PFK seems to be more favorable for the formation of the ion at m/z 22 than the use of the HF generator. The high electric field combined with low humidity gave better conditions for the formation of the ion at m/z 22, although the ions at m/z 19 and 20 were slightly observed in Fig. 6C. The definite observation of the unusual ionic species H 4 O − and H 3 F − implies that unstable or metastable H 2 − ion might be stabilized by the dipole-bound complexing with polar molecules such as H 2 O and HF, as was theoretically proposed by the Skurski and Simons group. 9,16,19) Although the detailed processes for the formation of H 4 (Table 2). In case of the H 3 F − ion, the negative charge is distributed on the fluorine (F1) of HF and the tail-end hydrogen (H4) of H 2 , while the attached electron (spin) is merely localized on the fluorine (F1) of the HF molecule ( To estimate the stability of the dipole-bound complex ions described above, the free energy changes ΔG for the reactions of (H 2 + H 2 O) − → (H 2 O − …H 2 ) or (H 2 O…H 2 − ) and (H 2 + HF) − → (HF − …H 2 ) or (HF…H 2 − ) were calculated using the same functional level of theory and basis set. The results obtained are summarized in Table 4. The complexing reactions of neutral hydrogen H 2 with negative ions of H 2 O − and HF − showed slightly negative values of ΔG, which mean exothermic, while the reactions of negative hydrogen H 2 − with neutral molecules were largely endothermic. The exothermic reactions of neutral hydrogen with negative polar molecules indicate that the dipole-bound complex states of H 2 O − …H 2 and HF − …H 2 are thermodynamically stable and consistent with the calculated ionic structures shown in Fig. 7.
|
2023-06-25T05:06:58.757Z
|
2023-06-21T00:00:00.000
|
{
"year": 2023,
"sha1": "42a41f90a17b537da1d8f620c977bc000be6d01f",
"oa_license": "CCBYNC",
"oa_url": "https://www.jstage.jst.go.jp/article/massspectrometry/12/1/12_A0124/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "42a41f90a17b537da1d8f620c977bc000be6d01f",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
54753595
|
pes2o/s2orc
|
v3-fos-license
|
Copper Nanoparticles as Antibacterial Agents
Although antibiotics can treat most bacterial infections, development of microbial resistance restricts the advantages of the antibacterial agents in controlling infectious diseases. This is a major challenge that poses a serious threat, prompting the search for alternative strategies to treat bacterial infections. Nanotechnology as an emerging field has been extensively used to overcome microbial resistance due to specific properties of nanoparticles such as increased drug uptake and high surface area to volume ratio. The metallic particles in nanoscale have demonstrated antibacterial activity against various bacterial species, including Gram-positive and Gram-negative bacteria, and fungi. Recently, copper nanoparticles have been widely investigated for use in fighting microbial infections. This article tries to briefly summarize the current studies related to the antibacterial properties of the copper nanoparticles. The reviewed papers reveal that the copper nanoparticles possess potent antimicrobial activities and can be used for controlling and treating different infectious diseases in the future.
Introduction
Antibacterial agents are compounds that kill bacteria or slow down their growth without being generally toxic to the surrounding tissue. The word "antibiotics" was first introduced in 1941 by Selman Waksman to refer to antimicrobial agents produced by many microorganisms [1]. Antibacterial agents have been used in many fields, such as the textile industry, water disinfection, food packaging, and medicine [2]. Presently, the over-use of antibiotics has led to an increased occurrence of antibiotic resistance genes in various bacterial species. Resistance to many of the recognized antimicrobials has been shown by one species of microorganism or another [3]. Hence, a great deal of research has been performed to deal with this problem. "Nanoparticles" (NPs) have been defined by the Encyclopedia of Pharmaceutical Technology as solid colloidal particles ranging in size from one to 1000 nm (one micron) [4]. Indeed, NPs exhibit a range of potentially useful properties for pharmaceutical purposes: they can target the drug to the site of action, consequently reducing side effects and increasing drug uptake. Moreover, NPs are capable of interacting with mucosal surfaces and escaping endolysosomal compartments [5,6]. Kinetic profiles of drug release can also be modified by NPs [7]. Based on the Ostwald-Freundlich equation, the saturation solubility increases with decreasing particle size below approximately one μm. Thus, NPs demonstrate enhanced saturation solubility and increased surface area, which cause a further increase in dissolution rate according to the Noyes-Whitney equation. In contrast, the solubility of particles of conventional size (above one micron) is a compound-specific constant and depends only on the temperature and the solvent [8]. The emergence of nanotechnology in the last decades has elicited immense interest in evaluating the antimicrobial activity of nano-scale metals. The use of metallic NPs allows lower concentrations to be used as well as providing increased antibacterial and antifungal activity [9]. However, despite the unique set of properties of metallic NPs, there are environmental and human safety concerns regarding the release of metallic NPs; for example, release of silver causes environmental pollution [3]. The antimicrobial effects of different metallic NPs such as alumina [10][11][12], silver [13,14], iron [15][16][17][18][19], gold [20][21][22], magnesium [23][24][25], titanium [26,27], and zinc oxide [28,29] have been widely investigated. In spite of the tremendous efforts undertaken regarding the use of metallic NPs with antibacterial effects, at present we are far away from ideal metallic NPs with efficient activity. As a novelty discussion paper, this review will summarize the major findings for copper particles in nano-scale as antimicrobial agents.
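A short sketch of the particle-size dependence referred to here. Only the textbook forms of the Ostwald-Freundlich and Noyes-Whitney relations are assumed; the interfacial energy, molar volume and temperature below are generic placeholder values, not measured properties of any particular compound.

import numpy as np

R, T = 8.314, 310.0          # gas constant (J/mol/K), temperature (K)
gamma = 0.05                 # assumed solid-liquid interfacial energy (J/m^2)
V_m = 1.0e-4                 # assumed molar volume of the solid (m^3/mol)

r = np.array([1e-6, 100e-9, 50e-9, 10e-9])      # particle radii (m)

# Ostwald-Freundlich: S(r)/S(inf) = exp(2*gamma*V_m / (r*R*T))
solubility_ratio = np.exp(2 * gamma * V_m / (r * R * T))

# Noyes-Whitney: dissolution rate ~ surface area * (Cs - C); per unit mass the
# specific surface area scales as 1/r, so the relative rate goes as S(r)/r.
relative_rate = solubility_ratio / r
relative_rate /= relative_rate[0]   # normalize to the 1 um particle

for radius, s, k in zip(r, solubility_ratio, relative_rate):
    print(f"r = {radius*1e9:7.1f} nm  S/S_inf = {s:6.3f}  relative rate = {k:8.1f}")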
The Privileges of NPs as Antibacterial Agents
Nano-sized particles have received a great deal of attention due to their potential in biomedical and pharmaceutical applications. Particles at the nano-scale can easily interact with bacterial membranes [30]. Studies using various microscopy approaches, including atomic force microscopy, transmission electron microscopy and laser confocal microscopy, have shown that treatment with nanoparticles noticeably alters the integrity of bacterial cell membranes, causing bacterial cell death [31]. Advances in the preparation of metallic NPs have led to the development of a new class of antimicrobial materials. Highly ionic metallic NPs are of particular interest due to their extremely high surface areas and numerous reactive surface sites with unusual crystal morphologies [32].
Antibacterial Properties of Copper
Copper is a readily available metal and one of the essential trace elements in most living organisms. Copper particles at the nano-scale have many applications in industry, including their use in gas sensors, among other devices [33]. This metal has also been used as a potential antimicrobial agent since ancient times. Copper-containing compounds such as CuSO4 and Cu(OH)2 are used as traditional inorganic antibacterial agents [34]. Also, aqueous copper solutions, complex copper species or copper-containing polymers are used as antifungal compounds [34]. Moreover, the control of legionella in hospital water distribution systems via the copper and silver ionization method is one of the most common applications of this metal in the modern healthcare setting [35]. Copper ions have demonstrated antimicrobial activity against a wide range of microorganisms, such as Staphylococcus aureus, Salmonella enterica, Campylobacter jejuni, Escherichia coli, and Listeria monocytogenes [36]. Currently, copper is registered as the first and only metal with antimicrobial properties by the United States Environmental Protection Agency (EPA) [37]. This material kills 99.9% of most pathogens within 2 h of contact [38]. Also, in some cases this metal possesses properties superior to those of more expensive metals with antimicrobial activity, such as silver and gold [3]. For instance, Cu NPs showed a higher antibacterial effect than silver NPs against E. coli and Bacillus subtilis (B. subtilis) [39,40]. Copper surfaces can be used to kill bacteria, yeasts, and viruses, a phenomenon known as "contact killing" (contact-mediated killing). Contact killing by copper was reported to occur at a rate of at least seven to eight log reductions per hour and, in general, no live microorganisms were recovered from copper surfaces after extended incubation. This leads to the idea of using copper as a self-sanitizing material [41].
Copper Bacterial Resistance
As mentioned in previous sections, copper has been utilized as an antibacterial agent in medicine. Copper-resistant bacteria have most commonly been isolated from animal-, plant- and human-associated sources. Different mechanisms are involved in copper homeostasis in various bacteria. For example, in E. coli, two chromosomally encoded systems, cue and cus, are responsible for copper resistance. CueP, a periplasmic protein, is an extra component of the cue system and plays a critical role in copper resistance. The pco (plasmid-borne copper resistance) system is also responsible for the survival of some bacteria, such as E. coli, in copper-rich environments. Other Gram-negative bacteria, such as Yersinia pestis, Yersinia pseudotuberculosis, Yersinia enterocolitica, Citrobacter koseri, and Erwinia carotovora, possess CueP-like proteins [42].
The Toxicity Mechanisms of Cu NPs against Bacteria
One of the best-known toxicity mechanisms of NPs is the interaction between the bacterial cell membrane and the NPs, which disrupts the integrity of the membrane and finally results in the death of the microorganism. It has been shown that several factors, including temperature, pH, the concentrations of bacteria and NPs, as well as aeration, can promote the toxicity of Cu NPs [2,43]. Cu particles at the nano-scale have been shown to act on bacterial cell functions in multiple ways, including adhesion to the Gram-negative bacterial cell wall through electrostatic interactions (Figure 1), alteration of protein structure in the cell membrane, denaturation of intracellular proteins, and interaction with phosphorus- and sulphur-containing compounds such as DNA [34]. Also, in one comprehensive study, the mechanisms of the antibacterial activity of Cu NPs were investigated using E. coli as a biological tool [44]. The results showed that treatment of E. coli cells with Cu NPs at the minimum bactericidal concentration (MBC) resulted in a 2.5-fold overproduction of cellular reactive oxygen species (ROS). This NP-mediated increase in ROS level led to noticeable lipid peroxidation, protein oxidation, DNA degradation and, finally, cell killing.
Synthesis of Cu-based NPs
Cu-based NPs can be synthesized using five major techniques: chemical treatment, thermal treatment, electrochemical synthesis, photochemical methods, and sonochemical techniques. Chemical treatment is the most popular of these, and some of the more modern techniques build on this method for the synthesis of Cu-based NPs [45]. Lately, green synthesis of environmentally friendly NPs, which avoids toxic waste products during preparation, has been introduced. In this kind of preparation technique, termed green nanobiotechnology, safe biotechnological tools are used as an alternative to conventional physical and chemical synthesis. In this method, NPs are prepared using biological routes such as bacteria, fungi, plants and enzymes, or their byproducts such as proteins [46].
The Copper Particles in Nano-scale as Antibacterial Agents
In 2008, Ruparelia et al. investigated the antimicrobial properties of silver and Cu NPs against E. coli, B. subtilis and S. aureus [39]. Minimum inhibitory concentrations (MICs), minimum bactericidal concentrations (MBCs) and disk diffusion tests revealed that the Cu NPs were more efficient than the silver particles against B. subtilis, which was suggested to be due to the higher affinity of Cu NPs for the amine and carboxyl groups on the surface of B. subtilis. In contrast, silver NPs demonstrated a greater antimicrobial effect against E. coli and S. aureus relative to Cu NPs.
The use of copper oxide (CuO) NPs for antimicrobial applications was introduced by Ren et al. [47]. The metal oxide NPs were prepared using thermal plasma (Tesima TM) technology, which allows the continuous gas-phase production of bulk nano-powders. The prepared CuO NPs in suspension were active against a range of bacterial pathogens, including S. aureus, methicillin-resistant S. aureus, Staphylococcus epidermidis, E. coli, and Pseudomonas aeruginosa (P. aeruginosa), with MBCs ranging from 100 µg/ml to 5000 µg/ml.
The feasibility of using Cu NPs as antibacterial agents against E. coli was examined by Raffi and coworkers in 2010 [34]. In the same year, another research group assessed the toxicity of aggregated zero-valent Cu NPs (ZVCN) against E. coli using a centroid mixture design of experiment [48]. Five environmental parameters, including temperature, pH, aeration rate, NP concentration, and bacteria concentration, which were presumed to have major effects on the toxicity of the NPs against the bacteria, were assessed. According to their study, both the main effects of the tested parameters and their interactive effects influence the toxicity of Cu NPs.
On the whole, ZVCN showed the highest toxicity under acidic conditions, higher temperature, high aeration, and high concentrations of NPs and bacteria; when any one of these independent variables was changed, the toxicity of the NPs changed significantly.
In an effort to improve the antimicrobial properties of Cu NPs, Mohan et al. utilized carbon nanotubes [49]. The prepared Cu NPs were grafted onto the surface of multiwall carbon nanotubes (MWCNT). According to their research, the carbon nanotubes increased the surface area of the Cu NPs, and therefore the number of E. coli colonies was reduced in the Cu-MWCNT system compared to pure Cu NPs and MWCNT alone. The antimicrobial efficiency (% kill) of Cu-MWCNT was found to be 75 ± 0.8%, while pure Cu NPs showed a lower kill percentage against E. coli (52 ± 1.8%). The possible mechanism of the bactericidal effect of Cu-MWCNT is the release of Cu ions from Cu-MWCNT, their entrance into the bacterial cells, and the subsequent disruption of biochemical processes. Taken together, Cu-MWCNT can be used as a biocidal composite in biomedical devices and antibacterial systems.
In another study, Theivasanthi and Alagar showed that Cu NPs produced by an electrolysis technique possessed a stronger antibacterial effect against E. coli than those prepared through a chemical reduction process [50]. The use of electrical power in Cu NP preparation thus enhanced the NPs' antibacterial effects. As a whole, the authors proposed the feasibility of using this material in water purification, antibacterial packaging and air filtration.
A novel plastic antimicrobial agent consisting of polypropylene with embedded Cu metal or CuO NPs was examined by Delgado et al. [51]. In their study, the ability of the composites to kill bacteria depended on the type of Cu NPs. CuO NPs killed E. coli more effectively than metallic Cu NPs: in CuO NPs the metal is already in an oxidized state and dissolves readily, giving high ion release rates, whereas metallic Cu must first form an oxide layer before ions can be released.
In 2012, Chatterjee and coworkers introduced a simple, robust method for the synthesis of Cu NPs via reduction of CuCl2 in the presence of gelatin as a stabilizer [52]. Treatment with the NPs made E. coli cells filamentous, with filament lengths of 7 to 20 µm compared with a normal cell length of nearly 2.5 µm. The NPs were highly effective against E. coli at very low concentrations. The antibacterial effects of the produced NPs were also observed in an E. coli strain resistant to multiple antibiotics, as well as in Gram-positive B. subtilis and S. aureus.
An aqueous solution of starch-capped copper nanoparticles with a bactericidal effect against both Gram-negative and Gram-positive bacteria at nanomolar concentrations was produced using starch as a green capping agent [53]. In vitro studies on 3T3-L1 cells showed that the capped NPs exhibited cytotoxicity only at much higher concentrations relative to Cu ions. Based on these results, the starch-capped, water-soluble Cu NPs are promising candidates for different applications, for instance in photothermal therapy or cellular imaging.
In another interesting investigation, the antibacterial effect of CuO NPs was studied against Legionella pneumophila [54]. According to a whole-genome microarray, CuO NPs significantly affected the expression of genes involved in metabolism, transcription, translation, DNA replication and repair, virulence, and unknown/hypothetical proteins.
Another research group showed that the antibacterial effect of CuO NPs was dependent on the particle size and a significant increase in antibacterial activities against both Gram-positive and Gram-negative bacterial strains was achieved using the highly stable minimum-sized monodispersed CuO NPs [55].
Thekkae Padil et al. produced highly stable CuO NPs via green technology using gum karaya, a naturally occurring plant polysaccharide [56]. CuO NPs with smaller particle sizes demonstrated higher antibacterial activities. The authors noted that CuO NPs produced by this simple, mild, and environmentally friendly method may have promising applications, for instance in wound dressings, bed linings, active cotton bandages, and the medical and food industries.
Das and coworkers fabricated CuO NPs by thermal decomposition and investigated their antioxidant and antibacterial effects [57]. The produced NPs demonstrated free radical scavenging activity of up to 85% within 1 h, which is relatively high compared to other metal oxide NPs. Furthermore, the CuO NPs exhibited efficient antibacterial activity against E. coli and P. aeruginosa, and bacterial growth decreased significantly with increasing NP concentration.
Usman and coworkers synthesized pure Cu NPs using chitosan polymer as a stabilizer [3]. Their findings showed that the chitosan-stabilized NPs were effective against Gram-negative microorganisms, including Salmonella choleraesuis and P. aeruginosa, as well as Gram-positive bacteria such as methicillin-resistant S. aureus and B. subtilis, and also yeast species such as Candida albicans. Moreover, the prepared NPs showed stronger antimicrobial effects against Gram-negative microorganisms (such as P. aeruginosa) than against Gram-positive bacteria. Taken together, the researchers introduced a simple and cost-effective approach for the synthesis of Cu NPs with future potential for pharmaceutical and biomedical applications.
Another interesting preparation of Cu NPs was reported by Subhankari and Nayak, who presented a novel biological technique using ginger (Zingiber officinale) extract [58]. The Cu NPs prepared via this green synthesis approach were found to be more effective against E. coli than copper sulphate solution and pure ginger extract. The researchers noted that this green method involved cheap and non-toxic materials and could be useful in water purification, air quality management, and antibacterial packaging.
In an attempt to produce highly stable CuO NPs via a green chemistry approach, an aqueous extract of Acalypha indica leaf was used [59]. The resultant particles were found to be effective against E. coli, Pseudomonas fluorescens and Candida albicans. Moreover, based on the MTT assay, they possessed cytotoxic activity against MCF-7 breast cancer cell lines.
In a study carried out by Agarwala et al., the antibiofilm activity of CuO and iron oxide NPs was assessed against multidrug-resistant, biofilm-forming uropathogens [60]. CuO NPs were found to be more toxic than iron oxide NPs and also possessed dose-dependent antibiofilm properties.
Giannousi et al. produced Cu, Cu2O and Cu/Cu2O NPs using a hydrothermal procedure as a cost-effective and eco-friendly method [61]. The Cu-based NPs led to pDNA degradation in a dose-dependent manner and extensive degradation of double-stranded CT-DNA. Also, Cu2O NPs exhibited an increased antibacterial effect against the Gram-positive strains. The possible reaction pathway was therefore investigated, and the results confirmed ROS production and lipid peroxidation.
The use of green nanotechnology for the synthesis of Cu NPs has also been investigated by Parikh and coworkers [30]. For biosynthesis, Datura metel leaf extract, which can reduce metal ions to NPs, was used. The proposed method has several advantages: it is efficient, rapid, easy, inexpensive, and eco-friendly. The antibacterial activity of the Cu NPs against E. coli, Bacillus megaterium, and B. subtilis was found to be greater than that of the extract alone.
Another nano-structured copper material was developed and investigated by Tomasz and coworkers in 2015 [62]. The copper NPs showed a high antibacterial effect against Gram-positive bacteria such as clinical methicillin-resistant S. aureus strains. The antibacterial effects of the Cu NPs were found to be even higher than those of Ag NPs. The synthesized NPs also demonstrated antifungal activity against Candida species. Hence, the prepared copper NPs can be used as an alternative for preventing biofilm formation and reducing bacterial or fungal adhesion at a lower cost than silver.
The anti-biofilm activity of Cu NPs against P. aeruginosa was studied by Lewis Oscar et al. [63]. The authors reported that Cu NP treatments at 100 ng/ml led to a 94% reduction in biofilm formation and stated that their proposed NPs could be used as coating agents for controlling biofilm formation on surgical devices as well as medical implants.
In another study, the cytotoxicity of the synthesized CuO NPs in colon cancer cells was explored [64]. The researchers found that CuO NPs inhibited the cell proliferation in HT-29 human colon cancer cells via downregulation of Bcl-2 and Bcl-xL as the apoptosis regulatory proteins.
Taken together, in recent years researchers have paid much attention to Cu NPs due to their antibacterial activity against different microorganisms (summarized in Table 1). This highlights the potential of copper particles at the nano-scale as effective antibacterial agents in biomedical and industrial applications.
The Toxicity of Copper Nanoparticles
In spite of the commercial use of NPs, their release into the environment (e.g., soil and water) is one of the most important problems affecting public health, beneficial bacteria and microbial communities [2]. Information about the hazardous effects of CuO is limited. It has been shown that the toxicities of bulk and nano-sized CuO are mostly driven by soluble Cu ions. The accumulation of copper in the human body leads to the production of harmful radicals such as the hydroxyl radical [65].
Conclusion
There is a growing body of scientific evidence confirming the antibacterial properties of metallic NPs against various microorganisms, including Gram-positive and Gram-negative bacteria as well as fungi. Based on the literature, Cu NPs in some cases showed a higher antibacterial effect than other metallic NPs. However, the in vitro investigations should be confirmed by in vivo assays using animal models before Cu NPs are used in the clinical setting. Hopefully, in the not too distant future, Cu NPs will be used as effective antibacterial agents in biomedical and industrial applications to fight pathogenic microorganisms.
Declaration of Interest
All authors declare that they have no conflict of interest.
Fecal medicines used in traditional medical system of China: a systematic review of their names, original species, traditional uses, and modern investigations
In China, the medical use of fecal matter (fresh fecal suspension or dry feces) dates back to the fourth century, approximately 1700 years ago. In long-term clinical practice, Chinese doctors have accumulated unique and invaluable medical experience in the use of fecal materials. In view of their good curative effect and medicinal potential, fecal medicines deserve much attention. This study aimed to provide the first comprehensive data compilation of fecal medicines used in various Chinese traditional medical systems by bibliographic investigation of 31 medicine monographs and standards. A total of 54 fecal medicines were found to be used in 14 traditional Chinese medical systems. Their names, original species, medicinal forms, and traditional uses are described in detail. These fecal medicines are commonly used to treat gastrointestinal, nervous system, skin, and gynecological diseases. Commonly used fecal medicines include Wu-Ling-Zhi, Jiu-Fen and Hei-Bing-Pian. The information summarized in this study can provide a good reference for the development and utilization of fecal medicines. Further studies are necessary to prove their medicinal value, identify their active ingredients, and elucidate their mechanisms of action so that more people can accept these special medicines.
Background
Traditional medicines have been used for prevention and treatment of diseases for thousands of years in China. In recent decades, they have attracted worldwide attention due to their reliable therapeutic efficacy and low side effects. During the long-term struggle against diseases, ancient Chinese doctors found that some unexpected materials, such as human or animal feces, could also effectively treat diseases. In China, the medical use of fecal matter (fresh fecal suspension or dry feces) has a long history. During the Eastern Jin dynasty (ad 300-400 years), "Zhou Hou Bei Ji Fang", a well-known monograph of traditional Chinese medicine (TCM) written by Hong Ge, recorded a case of treating patients with food poisoning or severe diarrhea by ingesting human fecal suspension (known as yellow soup or Huang-Long decoction) [1]. During the Tang dynasty, Yutuo Ningma Yundan Gongbu compiled a world-famous book of Tibetan medicine called "The Four Medical Tantras", which recorded that digestive diseases can be treated with the processed product of the feces of Sus scrofa (Hei-Bing-Pian in Chinese) [2]. Later, the "Compendium of Materia Medica" (a masterpiece of herbalism written by Shizhen Li during the Ming dynasty) described a series of prescriptions for treating diarrhea, rheumatism, jaundice, fever, and pain using fresh fecal suspension or dry feces [3]. In addition, "Jing Zhu Materia Medica" written by Danzeng Pengcuo Dimaer in the nineteenth century recorded that Hei-Bing-Pian and the dry feces of Gypaetus barbatus or Aegypius monachus (Jiu-Fen in Chinese) are commonly used to treat dyspepsia and gastric ulcer [4]. These records indicate that fecal medicines are widely used and occupy an important position in Chinese traditional medical systems.
In long-term clinical practice, Chinese doctors have accumulated unique experience in the use of fecal medicines. For example, the dry feces of Trogopterus xanthipes (Wu-Ling-Zhi in Chinese) is often used to treat blood stasis, swelling and aching due to traumatic injury [5]. Jiu-Fen is good at treating gastrointestinal diseases, such as dyspepsia, weak gastrointestinal function and gastric ulcer. Hei-Bing-Pian can treat diseases, such as indigestion, diarrhea and distending pain in the stomach [6]. These traditional medication experiences are undoubtedly valuable assets and can provide a reference for modern drug development. However, documents on the traditional use of fecal medicines are scattered and lack systematic collation.
In this review, we comprehensively collect and summarize the names, origins, and treated diseases of fecal medicines that have been used in some Chinese traditional medical systems, including TCM, Tibetan ethnic medicine (EM), Oroqen EM, Kazak EM, Uygur EM, Mongolian EM, Nu EM, Yao EM, Wa EM, Tujia EM, Korean EM, Jino EM, Hani EM, and Dai EM. In addition, we review the most frequently used fecal medicines in terms of their origins, traditional uses, chemical constituents, and pharmacological activities. Such information can provide a good reference for their development and utilization. These fecal medicines may be a valuable gift from Chinese traditional medicine to the world and have potential as drug candidates for the treatment of some chronic diseases, such as gastrointestinal diseases.
Methods
We have manually searched 31 related medicine monographs and drug standards, such as "Zhou Hou Bei Ji Fang", "Compendium of Materia Medica", "Jing Zhu Materia Medica", "Dictionary of Chinese Ethnic Medicine", "Drug Standards of Tibetan Medicine", "Lan Liu Li", "Pharmacopoeia of the People's Republic of China", and "Chinese Tibetan Materia Medica", to obtain the information about the names, origins, traditional uses, and treated diseases of fecal medicines. In addition, we have searched the online Chinese literature databases (i.e., Wan fang and CNKI) and international English databases (i.e., PubMed, ISI Web of Science and Google Scholar), using their vernacular or Latin names as keywords, to obtain their chemical constituents and biological effects.
Results
This review recorded 54 fecal medicines that have been used in 14 Chinese traditional medical systems. Their names, original species, medicinal forms, and treated diseases are presented in Tables 1 and 2. These 54 medicines mainly originate from the feces of 56 animals. Among medicinal forms used, dry feces is the most frequently used (66.67%), followed by processed feces (29.63%) and fresh fecal juice (3.70%). In addition, we found that these 54 fecal medicines are mainly used to treat gastrointestinal (37.04%), nervous system (22.22%), skin (22.22%), ophthalmic (18.52%), and gynecological diseases (16.67%).
Fecal medicines used in the TCM system
Traditional Chinese medicine is the most representative traditional medical system in China. It has a long history of more than 2500 years. In recent decades, TCM has attracted global attention due to its reliable therapeutic efficacy. Generally, TCM uses herbs, animals or minerals to treat diseases. In long-term clinical practices, animal feces have been found to be effective in treating some specific diseases under the guidance of TCM theory. As early as the Eastern Jin dynasty, human fecal juice (i.e., yellow soup) has been used by TCM practitioners to treat severe diarrhea [1]. At present, some fecal medicines are still used in the clinical practice of TCM. In the 2015 edition of Chinese Pharmacopoeia [7], 18 preparations have been found to contain fecal medicines (Table 5). For example, "Shi-Xiang-Zhi-Tong Powder" and "Tong-Jing Pills" contain Wu-Ling-Zhi, and "Huang-Lian-Yang-Gan Pills" contains Ye-Ming-Sha (dry feces of some kinds of bats).
In the present study, a bibliographic investigation of TCM monographs and drug standards revealed 14 kinds of fecal medicines that are commonly used in the TCM system. They mainly come from the feces of 22 animals and are widely used to treat dysmenorrhea, amenorrhea, abdominal mass, diarrhea, and blurred vision. Additional information on these 14 medicines is provided in Table 1. Wu-Ling-Zhi is the most representative fecal medicine in the TCM system (Fig. 1). Therefore, its traditional uses, chemical constituents and pharmacological activities are described in detail in the subsequent sections.
The dry feces of Trogopterus xanthipes (Wu-Ling-Zhi in Chinese)
Wu-Ling-Zhi (Fig. 1b), also named as Goreishi or Trogopterorum faeces, is one of the commonly used fecal medicines. It derives from the dry feces of Trogopterus xanthipes. Wu-Ling-Zhi was first recorded in the classic Chinese medicinal book "Kai Bao Ben Cao" compiled in the Song Dynasty [8]. Its traditional uses were described in several TCM monographs and drug standards. For example, "Ben Cao Jing Shu" recorded that Wu-Ling-Zhi had a good therapeutic effect on stabbing pain caused by blood stasis [9]. In addition, the Chinese Pharmacopoeia 1990 edition recorded that Wu-Ling-Zhi had good effects of promoting blood circulation, removing blood stasis and relieving pain, and was usually used to treat the stabbing pain in the chest and hypochondrium, dysmenorrhea, amenorrhea, swelling and aching due to traumatic injury, postpartum blood stasis, and snake bites.
Moreover, it is worth pointing out that extracts or chemical constituents obtained from the Wu-Ling-Zhi have been proved to possess a wide spectrum of pharmacological activities, such as anti-inflammatory, anticerebral ischemia, anti-gastric ulcer, and antithrombin effects. The basic pharmacological data of Wu-Ling-Zhi extracts and some isolated compounds are shown in Tables 3 and 4. Kim et al. [24] reported that Wu-Ling-Zhi extract could reduce lipopolysaccharide-induced NO and cytokines production. Wang et al. [25] found that the ethyl acetate extract of Wu-Ling-Zhi showed obvious inhibitory effects on xylene-induced ear swelling in mice and carrageenan-induced paw swelling in rats (400 mg/kg, ip), and it could also significantly inhibit the proliferation of granulation tissue in mice (800 mg/kg, ip). These findings indicated that Wu-Ling-Zhi has obvious anti-inflammatory effect. Furthermore, the ethyl acetate extract of Wu-Ling-Zhi was also found to be able to protect gastric mucosa and prevent experimental gastric ulcer by inhibiting gastric acid secretion [26]. It was reported that the aqueous extract of Wu-Ling-Zhi could significantly prolong the survival time of mice with incomplete cerebral ischemia, reduce the brain water content, brain index and malondialdehyde (MDA) level, and increase superoxide dismutase (SOD) activity in rats, indicating that Wu-Ling-Zhi has good protective effect against cerebral ischemia [27]. Moreover, the aqueous extract of Wu-Ling-Zhi could down-regulate the expression of intercellular adhesion molecule-1 in experimental atherosclerotic rats and reduce the degree of vascular endothelial lesions, which may account for the anti-arteriosclerosis inflammatory effects of Wu-Ling-Zhi [28].
Fecal medicines used in other traditional ethnic medicine systems in China
In addition to the TCM system, there are other traditional medical systems in China, such as Tibetan, Mongolian, Uygur, Tujia, Kazak, Yao, Korean, and Dai ethnic medicines. These ethnic medical systems have their own unique theories in the use of natural medicines. Therefore, it is also important to collect information about fecal medicines from these ethnic medical systems.
Traditional Tibetan medicine (TTM) is a representative ethnic medicine in China, and it has a unique fundamental theory, namely three elements theory consisting of "rLung", "mKhris-pa" and "Badkan" [30]. In TTM system, the use of fecal medicines has a long history. The earliest Tibetan medicine monograph that recorded fecal medicines is "The Four Medical Tantras" [2]. Later, in the seventeenth century, famous "Tibetan Medical Thangka of The Four Medical Tantras" [31] was published by Sdesrid-sangs-rgyas-rgya-mtsho, which vividly depicted some commonly used fecal medicines in the form of wall chart (Fig. 3).
In this study, we found that the feces of 41 animals were used as medicines for the treatment of various diseases in 13 ethnic medical systems. More information on these medicines is provided in Table 2. Among them, the dry feces of Gypaetus barbatus or Aegypius monachus and the processed product of the feces of Sus scrofa are representative fecal medicines in Chinese ethnic medicine systems. Their traditional uses, chemical constituents and pharmacological activities have been described in detail in the following sections.
The dry feces of Gypaetus barbatus or Aegypius monachus (Jiu-Fen in Chinese)
The dry feces of G. barbatus or A. monachus, known as Jiu-Fen in Chinese, is a commonly used Tibetan medicine (Fig. 1d). It has the functions of strengthening the stomach and promoting digestion. Jiu-Fen has been used in the traditional Tibetan system of medicine over the past few decades for the treatment of dyspepsia, gastrointestinal dysfunction, gastric ulcer, and intestinal cancer [4,6]. In addition, the "Jing Zhu Materia Medica" recorded that Jiu-Fen can be used to treat mental illness [4]. Nowadays, Jiu-Fen is frequently used in the clinical practice of TTM in combination with other herbs. According to our statistics, there are 32 preparations containing Jiu-Fen in monographs and drug standards of Tibetan medicine [6,32-35]. The representative prescriptions include "Shi-Wei-Jiu-Fen Powder", "Er-Shi-Jiu-Wei-Neng-Xiao Powder" and "Jian-Hua-Mu-Xiang Pills" (Table 5). The "Tibetan Medicine Standards" recorded that "Shi-Wei-Jiu-Fen Powder" can strengthen the stomach and promote digestion [6]. Consequently, it is usually used to treat gastrointestinal diseases such as dyspepsia.
The use of Jiu-Fen in the traditional Tibetan system of medicine has a long history, but modern research on the chemical composition, quality control and pharmacodynamic evaluation of Jiu-Fen has not yet been carried out. Therefore, further studies are needed to prove its medicinal values in gastrointestinal diseases treatment, identify active compounds and elucidate the underlying mechanisms with the help of modern chemistry and pharmacology methods.
The processed product of the feces of Sus scrofa (Hei-Bing-Pian in Chinese)
The processed product of the feces of Sus scrofa (wild boar), known as Hei-Bing-Pian (Chinese name), is a widely used Tibetan and Mongolian medicine in China (Fig. 1e). Its processing method was recorded in the "Chinese Materia Medica for Tibetan Medicine": firstly, the dry feces of Sus scrofa is put into a ceramic jar, and yellow mud (adding a small amount of salt) is used to seal the ceramic jar. Secondly, the ceramic jar is calcined with fire until it turns gray outside. Then, the black matter is taken out from the ceramic jar, which is Hei-Bing-Pian [36]. In traditional Tibetan system of medicine, Hei-Bing-Pian is described as pungent in flavor and hot in nature. It is commonly used for the treatment of dyspepsia, gallbladder diseases, stomachache, and plague [6]. According to our statistics, there are 14 Tibetan medicine preparations containing Hei-Bing-Pian in official drug standards. The representative prescriptions include "Shi-Wei-Hei-Bing-Pian Powder", "Shi-Yi-Wei-Jin-Se Pills" and "Shi-Wu-Wei-Zhi-Xie-Mu Powder" ( Table 5). The "Drug Standards of Tibetan Medicines" recorded that "Shi-Wei-Hei-Bing-Pian Pills" is usually used to treat stomach and gallbladder diseases, such as dyspepsia, anorexia, jaundice, gallstones and nausea [35].
It has been reported that Hei-Bing-Pian contains a variety of inorganic elements, such as Fe, Ca, Zn, K, Cu, Mn, Co, Ti, and Mg. At present, the contents of these elements in Hei-Bing-Pian have been determined using atomic absorption spectrometry or spectrophotometry [37,38]. The elements with high levels are Ca (18,570 μg/g), K (11,625 μg/g), Mg (9975 μg/g), and Fe (7800 μg/g). Furthermore, two bile acids (i.e., cholic acid and taurocholic acid) were detected and quantified in Hei-Bing-Pian using an ultra-high-performance liquid chromatography-mass spectrometry method [21]. Besides, Chang et al. [39] developed a spectrophotometric method to determine the adsorption capacity of Hei-Bing-Pian for tartrazine, and used this adsorption capacity as an indicator to control the quality of Hei-Bing-Pian.
Modern pharmacological studies have demonstrated that Hei-Bing-Pian can prevent mucosal damage caused by experimental colitis in rats. Compared with the model group, both high and low doses of Hei-Bing-Pian significantly reduced colonic mucosal congestion, hyperplasia and ulceration (p < 0.05), and significantly increased the levels of superoxide dismutase (SOD) and glutathione (GSH) [40]. Cai et al. [41] reported the effect of Hei-Bing-Pian on intestinal smooth muscle function in animals. It was found that Hei-Bing-Pian had no obvious effect on normal isolated ileum, and could not antagonize the inhibitory effects of atropine, adrenaline and promethazine on isolated ileal smooth muscle. However, it could significantly inhibit histamine-induced ileal smooth muscle excitation, with an inhibition rate of 25%. Moreover, in vivo studies showed that Hei-Bing-Pian could inhibit the contractile effect of pilocarpine on ileal smooth muscle. These results indicate that the effect of Hei-Bing-Pian on intestinal smooth muscle is related to the cholinergic M receptor and the histamine receptor. Bai et al. [42] found that Hei-Bing-Pian could significantly accelerate gastric emptying in rats and promote the propulsive speed of activated carbon in the small intestine of mice. Moreover, a high dose of Hei-Bing-Pian could significantly promote the healing of chronic gastritis caused by acetic acid, and had an obvious protective effect against gastric mucosal injury induced by cold stress. Besides, in order to make better use of Hei-Bing-Pian, its long-term toxic effects have been studied by Li et al. The results showed that, after 12 weeks of administration of Hei-Bing-Pian, there were no significant changes in body weight, blood biochemical parameters, histopathology, or several organ indexes (e.g., heart, liver, spleen, kidney, and thymus) in rats compared with the control group, indicating that Hei-Bing-Pian has no obvious toxicity [43]. The basic pharmacological data of Hei-Bing-Pian and its ingredients are shown in Tables 3 and 4.
Similarities and differences of fecal medicines related to treated diseases in Chinese traditional medical systems
Every traditional medical system in China has its own unique theory or medication experience. Therefore, the same fecal medicines are used in different medical systems, and their therapeutic uses may be different. A detailed comparison of these differences would help researchers and traditional medical practitioners to better understand the indications of fecal medicines and promote their development and utilization. In this study, we compared the similarities and differences of therapeutic uses of Wu-Ling-Zhi and Hei-Bing-Pian in different traditional medical systems, including TCM, Tibetan EM, Korean EM, Dai EM, Yao EM, Tujia EM, Nu EM, and Mongolian EM. Additional details are provided in Table 6. The results indicate that Wu-Ling-Zhi is commonly used to treat amenorrhea and dysmenorrhea in most traditional medical systems. However, its therapeutic uses also have some obvious differences in different medical systems. For example, in the Tibetan EM, Wu-Ling-Zhi can be used to treat stomachache, whereas in the Mongolian EM, it is mainly used to treat diarrhea, gout and itching. Moreover, Wu-Ling-Zhi can treat cold, whooping cough and fever in the Nu EM. Hei-Bing-Pian has the same therapeutic use in TCM and Tibetan EM systems. It is widely used in both systems to treat dyspepsia, biliary diseases, plague and distending pain in the stomach. There is no difference in the therapeutic use of Hei-Bing-Pian in the two medical systems.
Conclusion and future perspectives
Chinese traditional medicine is an important part of the world's medical system. In long-term clinical practice, ancient Chinese doctors have accumulated invaluable experience in the use of fecal medicines. As shown in Tables 1 and 2, some fecal medicines have been found to be effective in treating amenorrhea, dysmenorrhea, dyspepsia, diarrhea, fever, and stomachache. This traditional medication knowledge is a valuable asset. Currently, some fecal medicines (e.g., Wu-Ling-Zhi, Jiu-Fen and Hei-Bing-Pian) are still used in clinical practice, and a total of 76 preparations containing fecal medicines are recorded in the latest official drug standards (Table 5).
Extensive clinical application demonstrates the role and value of fecal medicines in Chinese medical systems. Moreover, Wu-Ling-Zhi extracts and their chemical constituents have been proved to possess a wide spectrum of biological activities, such as anti-inflammatory, anticoagulant and antioxidant effects (Tables 3 and 4). Some possible molecular mechanisms have also been revealed. The results of these modern pharmacological studies provide some evidence for the scientific basis of fecal medicines. However, most fecal medicines still lack experimental evidence. For example, Jiu-Fen is a commonly used Tibetan medicine, yet so far no biological activities or active ingredients have been reported for this drug. Therefore, in order to better develop and utilize these fecal medicines, more in vivo pharmacological studies and even clinical evaluations should be performed to prove their scientific and medicinal value. Fortunately, in-depth studies of the gut microbiota have provided an opportunity to interpret the scientific basis of some traditional fecal medicines, such as the yellow soup. This soup is a fresh fecal suspension of Homo sapiens commonly used to treat food poisoning, severe diarrhea, heat toxins, and unconsciousness due to high fever. Zhang et al. [44,45] believe that the efficacy of yellow soup is mainly attributable to the gut microbiota from fresh fecal water, and that its principle for treating diseases is similar to the fecal microbiota transplantation method of modern medicine. Therefore, reconstructing the gut microbiota of patients may be the mechanism of action of fresh fecal medicines.
However, most fecal medicines are derived from dry feces (e.g., Wu-Ling-Zhi) or processed products (e.g., Hei-Bing-Pian). During drying and processing, these fecal medicines lose the living microbiota. Therefore, their mechanisms for treating diseases may be different from those of fresh feces. Feces are intestinal excretions of humans or animals. The chemical constituents in feces are mainly derived from the host or dietary metabolites. These metabolites may be the key active constituents of fecal medicinal materials. For example, the terpenoids, flavonoids and lignans contained in Wu-Ling-Zhi are closely related to the foods eaten by Trogopterus xanthipes (e.g., Platycladus orientalis leaves, Pinus tabulaeformis bark, and peach kernels). These diet-derived metabolites may be the pharmacologically active ingredients of Wu-Ling-Zhi. Moreover, some bile acids (e.g., deoxycholic acid, lithocholic acid and taurocholic acid) have been found in Wu-Ling-Zhi and Hei-Bing-Pian [21]. These bile acids are the final metabolites of cholesterol under the common metabolism of the liver and gut microbiota. Bile acids are endocrine-signaling molecules that regulate metabolic processes, including glucose, lipid and energy homeostasis, by regulating gut microbiota or activating bile acid receptors, such as the farnesoid X receptor (FXR) and G protein-coupled bile acid receptor 1 [46-48]. Distrutti et al. [49] reported that bile acid-activated receptors are targets for maintaining intestinal integrity. Gadaleta et al. [50] found that FXR activation could prevent chemically induced intestinal inflammation with an improvement of colitis symptoms and inhibition of epithelial permeability. In addition, bile acids can also regulate cardiovascular functions via receptor-dependent and -independent mechanisms [51]. These findings provide a rationale to explore the mechanisms of Hei-Bing-Pian and Wu-Ling-Zhi in the treatment of gastrointestinal and cardiovascular diseases, respectively.
With the continuous development of science and technology, some unique but sometimes incomprehensible drugs in traditional medical systems will gradually be recognized. In this study, we provide the first comprehensive data compilation of fecal medicines used in Chinese traditional medical systems. The information recorded in ancient monographs and drug standards, such as original species, traditional uses and indications, can provide a good reference for the development and utilization of fecal medicines. In view of the current research status of fecal medicines, future research may focus on the following aspects: (1) applying multidisciplinary methods to further prove their effectiveness and medicinal value, (2) revealing their active ingredients associated with clinical efficacy using phytochemical and pharmacodynamic methods, and (3) elucidating the mechanisms of action of fecal medicines based on gut microbiota or receptor-mediated signaling pathways.
Increased population epigenetic diversity of the clonal invasive species Alternanthera philoxeroides in response to salinity stress.
Epigenetic modification can change the pattern of gene expression without altering the underlying DNA sequence, which may be adaptive in clonal plant species. In this study, we used MSAP (methylation-sensitive amplification polymorphism) to examine epigenetic variation in Alternanthera philoxeroides, a clonal invasive species, in response to salinity stress. We found that salinity stress could significantly increase the level of epigenetic diversity within a population. This effect increased with increasing stress duration and was specific to particular genotypes. In addition, the epigenetic modification of young plants seems less sensitive to salinity than that of mature plants. This elevated epigenetic diversity in response to environmental stress may compensate for genetic impoverishment and contribute to evolutionary potential in clonal species.
INTRODUCTION
Epigenetic modification is the covalent modification of DNA and chromatin, and can influence gene expression without altering the underlying DNA sequence (Holliday, 2006;Richards, 2006;Jones, 2012). Genome-wide patterns of epigenetic modification change dynamically and coordinately at defined stages of development (Feng et al., 2010;Law and Jacobsen, 2010;Feil and Fraga, 2012;Hu et al., 2014). In natural populations, epigenetic variation among individuals is a significant source of phenotypic variance within and among populations (Medrano et al., 2014;Kooke et al., 2015;Liu et al., 2015;Foust et al., 2016). Recently, epigenetics has attracted considerable interest in plant ecological genetics because it can affect ecological-adaptive potential in plant species. Numerous studies have provided new insights into epigenetic responses to environmental stimuli (Labra et al., 2002;Aina et al., 2004;Dowen et al., 2012;Sani et al., 2013;Radford et al., 2014). It has been proposed that plants can adjust epigenetic modification in response to stress, which could be adaptive as it may generate phenotypic variation by changing gene expression (Castonguay and Angers, 2012;Douhovnikoff and Dodd, 2014).
DNA methylation is an important type of epigenetic modification, and frequently occurs at position 5 of the cytosine ring, converting cytosine to 5-methylcytosine (Sahu et al., 2013). Stress-induced methylation rearrangement has been proposed to contribute to coping with severe environmental stress in plants (Angers et al., 2010;Zhang et al., 2010;Dowen et al., 2012;Radford et al., 2014). For example, in the model species Arabidopsis thaliana, high salinity induced global hypermethylation at CG sites (Wibowo et al., 2016), and the methylation level at CHH sites (where H is A, C or T) was found to increase at higher temperature (Dubin et al., 2015). In non-model plants, environmental stimuli such as drought (Labra et al., 2002), heavy metals (Aina et al., 2004) and cold stress (Steward et al., 2002) also caused a change in methylation level throughout the genome and at specific loci. These stress-induced methylation changes may be targeted specifically to a certain stress-related signal pathway, or to stress-related genes, thus leading to a set of defense responses to resist environmental stress (Wada et al., 2004;Choi and Sano, 2007;Secco et al., 2015;Wibowo et al., 2016). Additionally, methylation changes may raise the probability of nonspecific epimutations and increase the epigenetic diversity among individuals under stress. Similar to DNA mutation, increased epimutations may have adaptive significance because they broaden the range of variation that natural selection can act upon (Verhoeven et al., 2010). As a result, environmentally induced methylation changes, whether or not they target functional genes, may play an important role in the plant stress response.
The complex interactions between DNA sequence and DNA methylation make it difficult to separate epigenetic from genetic variation in genetically diverse populations (Bossdorf et al., 2008). However, detecting independent epigenetic variation is less complicated in populations that lack genetic variation, or in asexual lineages (Verhoeven et al., 2010). For asexual lineages, their adaptive potential to environmental change tends to be severely limited because of the lack of genetic variation that is normally associated with sexual reproduction (Verhoeven et al., 2010). It is thus surprising that asexual reproduction is widespread in plants, some of which are successful invasive species that grow in broad geographic areas with heterogeneous environments and show considerable tolerance to environmental stress (Silvertown, 2008). There are indications that clonal plants, especially the clonal invasive species, may have compensatory mechanisms to generate phenotypic variation, for instance via epigenetic modification. In addition, epigenetic variation may accumulate rapidly across generations in asexual lineages due to the absence of genetic recombination. Stable epigenetic variation can generate heritable phenotypic variation, allowing plants to respond to environmental challenges (Castonguay and Angers, 2012;Douhovnikoff and Dodd, 2014;Verhoeven and Preite, 2014). Furthermore, epigenetic variation may be particularly important to the evolutionary potential of asexual species if the adaptive epigenetic variation can be stably transmitted across generations (Bossdorf et al., 2008;Kronholm and Collins, 2016).
Alligator weed, Alternanthera philoxeroides, is an invasive clonal plant and has been considered as an ideal species for ecological epigenetics, in which epigenetic variation can be studied independently from genetic variation. Alligator weed is native to South America (Julien, 1995) but has now become an invasive species in many countries, causing huge economic loss and ecological damage in the invaded areas (Wang and Wang, 1988). Natural populations in China are dominated by asexual reproduction (Sainty et al., 1997;Sosa et al., 2008) and are highly genetically uniform (Geng et al., 2016). Despite their uniform genetic background, alligator weed populations have developed high adaptability and vigorous expansion capability. Individuals not only can grow under different water gradients (Geng et al., 2007), but also are tolerant of extreme climatic variables (Julien, 1995), high salinity (Longstreth et al., 2004), heavy metals (Naqvi and Rizvi, 2000) and herbicides (Eberbach and Bowmer, 1995). It is therefore presumed that epigenetic variation plays an important role for alligator weed adapting to highly heterogeneous habitats and stress conditions.
In this study, we used alligator weed as a model to investigate epigenetic variation in response to salinity stress. Both seedlings and mature plants of different genotypes were exposed to high salinity to evaluate epigenetic modification in response to environmental stress. Amplified fragment length polymorphism (AFLP) and methylation-sensitive amplification polymorphism (MSAP) markers were used to assess genetic variation and methylation variation, respectively. Three key questions of this study are as follows. First, can salinity stress induce significant DNA methylation changes? Second, if so, are these changes consistent across different genotypes? Third, how are epigenetic changes affected by stress duration and plant developmental stage (i.e., young and mature individuals)?
MATERIALS AND METHODS
Plant material
Alligator weed is a clonal plant species and most individuals in China share one single multilocus genotype. However, there is considerable genetic diversity within natural populations in other areas (e.g., the USA and Argentina) (Geng et al., 2016). In this study, we used six genotypes, namely KM, N20, N23, N26, N28 and N25, of which KM was collected from Kunming, China and the other five from the USA (Geng et al., 2016). All six genotypes were cultivated in the laboratory.
Experimental design
The aim of this study was to examine the effect of salinity stress on epigenetic variation in different genotypes of alligator weed. We also wanted to know whether any such effect depends on plant developmental stage. Thus, we used a factorial design with three factors: genotype (six genotypes), salinity stress (treatment vs. control) and plant developmental age (young vs. mature plants). There were five replications for each treatment combination, giving 120 plant individuals in total. This experiment was performed at Yunnan University (E102°42'31", N25°3'37"), Kunming. First, clonal offspring for each genotype were derived from short stem-fragments of mother plants, and grown in plastic pots (21 cm diameter × 15 cm height) with a homogeneous mixture of vermiculite and sand (1:1). Each pot was fertilized with 4 g of commercial slow-release compound fertilizer containing 15:11:13 of N/P/K (Osmocote, Scotts Company, Marysville, OH, USA). The pots were labeled and then watered with tap water every two days. Next, when the young seedlings were established and had produced two or three pairs of leaves, 20 healthy individuals for each genotype (120 individuals in total) were randomly assigned to four different treatment groups (i.e., Control-Young, Stress-Young, Stress-Mature and Control-Mature). The plants in the Control groups were watered as above with tap water throughout the experiment, while the plants in the Stress-Young group were exposed to salinity stress from the very beginning of the experiment by adding 200 ml NaCl solution (0.2 M) every two days. In contrast, the plants in the Stress-Mature group were grown under control conditions for one month (i.e., until they became mature plants) and then were switched to salinity stress conditions. The salinity stress lasted for 30 days, and then all plants were switched to control conditions for one week. Newly produced and fully expanded leaves were collected for epigenetic analysis. After sampling, all plants were harvested at the end of the experiment. Each plant was separated into above-ground and below-ground parts, which were oven-dried at 80 °C for 48 h and then weighed for dry mass.
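For illustration only, the factorial layout described above can be written out as follows (a minimal R sketch; the object and factor labels are illustrative and not part of the original analysis scripts):

```r
# 6 genotypes x 2 salinity levels x 2 developmental ages x 5 replicates = 120 plants
design <- expand.grid(
  genotype  = c("KM", "N20", "N23", "N25", "N26", "N28"),
  salinity  = c("control", "stress"),
  age       = c("young", "mature"),
  replicate = 1:5
)
nrow(design)  # 120 experimental units
```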
Genetic and epigenetic DNA sampling
For genetic background analysis, two leaves were collected and dried in sealed plastic bags with silica gel. For epigenetic analyses, leaves were collected from the Young groups and Mature groups under different schedules. Specifically, for the Young groups, epigenetic samples were collected from newly produced leaves after 30 days of stress duration. For the Mature groups, epigenetic samples were collected at two time points (i.e., seven and 30 days after the beginning of stress treatment), resulting in two levels of stress duration. Altogether, we had 30 samples for genetic analysis (6 genotypes × 5 offspring replications) and 180 samples for epigenetic analysis (6 genotypes × 3 stress levels (Stress-Young-30-Day, Stress-Mature-7-Day, and Stress-Mature-30-Day) × 2 treatments (stress and control) × 5 replications). There were 210 DNA samples in total.
AFLP and MSAP analysis
Total genomic DNA of each sample was extracted with the TIANGEN (Beijing, China) Plant Genomic DNA Kit following the manufacturer's protocol. The AFLP technique was used to determine the genetic background information for each genotype. The process was performed according to the method of Vos et al. (1995) with some modifications (Gao et al., 2010). Nine EcoRI/MseI primer combinations were used for fragment amplification: AGG/CAA, AGC/CAA, AAC/CTT, ACA/CTA, CAA/CAT, AGC/CTT, AGC/CTA, AGG/CTT, AGG/CAT. The final PCR products were separated on 6% denaturing polyacrylamide gels, which were then silver-stained and scanned for further data scoring.
MSAP fingerprinting was used to examine the epigenetic variation of alligator weed in response to salinity stress. MSAP analysis followed the protocol described by Portis et al. (2004), which is similar to AFLP but the 'frequent cutter' MseI was replaced by the methylation-sensitive restriction enzymes HpaII and MspI. These isoschizomers both cleave 5'-CCGG sequences but have different sensitivities to the methylation patterns of cytosine (Schulz et al., 2013): HpaII can recognize the methylated external cytosine on a single strand but cannot cut sequences with methylated cytosines on both strands, whereas MspI can cleave the sequence with methylated internal cytosines but cannot cut if the external cytosine is methylated. Consequently, different patterns of fragments appear on the polyacrylamide gels, indicating the epigenetic variation among individuals.
Data analysis

For the AFLP and MSAP results, the sequencing gels were inspected visually and fragments were scored as present (1) or absent (0). Visually poor-quality samples were excluded from scoring and only reproducible fragments were scored. All scoring was performed by the same person without knowledge of sample identities. The AFLP results were scored as a binary matrix following a conventional protocol. The MSAP results were somewhat more complicated: the status of each MSAP site was determined by comparing the EcoRI/HpaII and EcoRI/MspI fragment profiles. Fragments present in both profiles are scored as epi-locus type 11; fragments present in the EcoRI/HpaII profile but absent from EcoRI/MspI as type 10; fragments present in EcoRI/MspI but absent from EcoRI/HpaII as type 01; and fragments absent from both profiles as type 00. The first three conditions represent different methylation states, while the last indicates a complicated state that could reflect either methylation variation or DNA sequence mutation. Because we used clonal offspring as plant material and presumed that there was no genetic variation within each genotype, we included the last fragment status (type 00) in the data sets and considered these fragments methylated. The raw MSAP data were transformed into a binary matrix for further analysis using the 'Mixed Scoring 2' method (Schulz et al., 2013), implemented in R (version 3.2.3, R Core Team, 2015, http://cran.r-project.org/src/base/R-3/R-3.2.3.tar.gz).
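To make the comparison logic concrete, the following is a minimal sketch in Python of the epi-locus classification and binary transformation described above; it is an illustrative stand-in for the 'Mixed Scoring 2' R scripts actually used, and all function names are ours, not from that package.

```python
# Illustrative sketch of MSAP epi-locus scoring (not the original
# 'Mixed Scoring 2' R pipeline): each locus is scored from the paired
# EcoRI/HpaII (h) and EcoRI/MspI (m) presence/absence calls.

def classify_epilocus(h: int, m: int) -> str:
    """Map a (HpaII, MspI) band pair to an epi-locus type."""
    if h == 1 and m == 1:
        return "11"  # band in both profiles
    if h == 1 and m == 0:
        return "10"  # external cytosine methylation
    if h == 0 and m == 1:
        return "01"  # internal cytosine methylation
    return "00"      # absent from both; treated here as methylated

def to_binary(h: int, m: int) -> int:
    """Binary methylation score: 1 = methylated, 0 = unmethylated.
    Following the text, type '00' is counted as methylated because the
    clonal offspring are assumed genetically identical."""
    return 0 if (h == 1 and m == 1) else 1

# Example: one individual scored at four loci
pairs = [(1, 1), (1, 0), (0, 1), (0, 0)]
print([classify_epilocus(h, m) for h, m in pairs])  # ['11', '10', '01', '00']
print([to_binary(h, m) for h, m in pairs])          # [0, 1, 1, 1]
```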
The binary data sets from AFLP and MSAP were analyzed in GenAlEx 6.59 (Peakall and Smouse, 2012). Genetic and epigenetic diversity within each genotype × treatment combination were calculated, including the percentage of polymorphic loci, Shannon's diversity index and Nei's unbiased gene diversity (uHe). Hierarchical AMOVA was performed to test the significance of epigenetic differentiation among treatments and genotypes in each group (Young-30-Day, Mature-7-Day and Mature-30-Day). In the AMOVA, the probability of non-differentiation (PhiPT = 0) was estimated over 9,999 permutations. To investigate the effect of stress on methylation status, fragment patterns were compared between stressed and control plants within and among genotypes based on the raw MSAP data sets.
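The diversity measures named above can be computed directly from a binary loci-by-individuals matrix. The sketch below uses the standard textbook formulas; GenAlEx's exact implementation may differ in detail, so this is illustrative only.

```python
# Hedged sketch of the diversity measures named in the text, computed from a
# binary matrix (rows = individuals of one genotype x treatment group,
# columns = MSAP loci). Standard formulas for binary (biallelic) data.
import numpy as np

def diversity(mat: np.ndarray):
    p = mat.mean(axis=0)                      # band frequency per locus
    q = 1.0 - p
    poly = np.mean((p > 0) & (p < 1)) * 100   # % polymorphic loci
    n = mat.shape[0]
    uhe = np.mean(2.0 * p * q * n / (n - 1))  # Nei's unbiased gene diversity
    # Shannon's diversity index, averaged over loci (0*log(0) := 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = -(p * np.log(p) + q * np.log(q))
    shannon = np.nanmean(np.where((p == 0) | (p == 1), 0.0, terms))
    return poly, uhe, shannon

mat = np.array([[1, 0, 1, 1],
                [1, 1, 0, 1],
                [1, 0, 0, 1],
                [1, 1, 1, 1],
                [1, 0, 1, 1]])
print(diversity(mat))  # (50.0, 0.3, 0.3365...)
```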
Shannon's diversity index of the MSAP data, and the above-ground and below-ground biomass data, were analyzed using two-way ANOVA with two main factors, salinity stress (two levels) and genotype (six levels), in order to test the influence of salinity stress on epigenetic diversity and plant growth. Treatment means were compared using Tukey's HSD tests at the 5% probability level. The biomass data were log-transformed where necessary to meet assumptions of homoscedasticity, and all data were analyzed with R statistical software (version 3.2.3, R Core Team, 2015).
RESULTS
Genetic variation within and among genotypes

A total of 563 AFLP bands were scored from the nine primer combinations. The AFLP results showed that each genotype had a unique multi-locus profile, whereas within each genotype the five clonal offspring were genetically uniform. Accordingly, Nei's unbiased gene diversity (uHe) and Shannon's diversity index within each genotype were close to zero (Supplementary Table S1). These data indicate that the clonal offspring of each genotype were genetically homogeneous, so that all MSAP polymorphisms within genotypes can be interpreted as true epigenetic variation rather than genetic mutation.
Effect of salinity stress

Salinity stress had a significant effect on plant growth (i.e., biomass) and methylation variation in alligator weed. Both above-ground and below-ground biomass were significantly decreased by salinity stress (Table 1, Fig. 1). The raw MSAP data were used to compare methylation variation between control and stress groups. We observed obvious changes in methylation status in all genotypes, with the proportion of changed loci ranging from 6.28% to 8.91%. We had expected that salinity stress might induce reproducible changes in certain epigenetic loci involved in the expression of stress resistance-related pathways. Both methylation and demethylation occurred in stress groups compared to control groups; however, we did not find consistent epigenetic modification changes at such loci across different individuals, even within the same genotype (Fig. 2). In contrast, we found significantly higher levels of epigenetic diversity within genotypes in stress groups (Fig. 3, Table 2). Similar results were found in the AMOVA analysis, which suggested that a large proportion of epigenetic variation resided among different treatments rather than among different genotypes within a treatment (Table 3). Additionally, the effect of salinity stress on epigenetic variation increased with increasing stress duration: across all genotypes, longer stress duration produced higher epigenetic diversity in the stressed plants. On average, plants that were treated for 30 days had higher epigenetic diversity indices (percentage of polymorphic loci, uHe and Shannon's diversity index) than those treated for seven days (Table 2).

Fig. 1. Above- and below-ground biomass of young plants and mature plants. Values represent means ± SE (n = 5). Different superscript letters indicate means that are significantly different at P < 0.05 based on Tukey's pair-wise comparisons; means that share a letter, or have no superscript letters, are not significantly different at the 5% level.
Effect of plant developmental stage

We found that epigenetic modification in young plants was less sensitive to salinity stress than in mature plants. AMOVA results suggested that salinity stress had no significant effect on the structuring of epigenetic diversity in young plants (Table 3). The percentage of polymorphic loci, Nei's diversity index and Shannon's diversity index of the stressed plants ranged from 0.20-4.45%, 0.001-0.015 and 0.001-0.020, respectively, similar to the controls (Table 2). Tukey's HSD tests on Shannon's diversity index showed that the salinity treatment affected only the plants of genotype N26 in the Young groups and had no effect on the other genotypes (Fig. 3A), indicating a significant salinity × genotype interaction (Table 1). In contrast, salinity stress for 30 days significantly increased Shannon's diversity index across all genotypes in the Mature groups, as described above.
Effect of genotypic diversity

In this study, we used six different genotypes to examine epigenetic change in response to salinity stress. We found evidence that different genotypes had contrasting sensitivity to salinity stress in both plant growth and epigenetic modification. For example, in the Mature groups, high salinity reduced above-ground biomass only in genotypes N26, N28, N25 and KM (Fig. 1B), and had no significant effect on below-ground biomass (P = 0.847) (Fig. 1D). Shannon's diversity indices of both the Stress-7-Day and the Stress-30-Day groups were significantly influenced by genotype and salinity treatment according to the two-way ANOVA (Table 1). Stress duration had the greatest effect on plants of genotype N28, while genotype N23 was the least affected among all genotypes (Fig. 4). Tukey's HSD tests showed that salinity stress had no effect on genotypes N26 and N23 in the Stress-7-Day group (Fig. 3B), whereas high salinity significantly affected all genotypes in the Stress-30-Day group (Fig. 3C). In addition, genotype N28 was the most variable, with 44 methylation sites showing changed patterns, while genotype KM changed the least, with 31 altered methylation sites (Table 4).
DISCUSSION
In this study, we examined the effect of salinity stress on epigenetic variation in the clonal plant alligator weed. Although the effect of salinity stress varied with the specific genotype, stress duration and plant developmental stage, a clear and consistent pattern is that salinity stress produced significantly higher epigenetic diversity among clonal offspring within genotypes. This effect was more obvious in mature plants than in young plants, and increased with prolonged stress duration. Given the limited genetic variation in this clonal plant, our results suggest that elevated epigenetic diversity in response to environmental stress compensates for genetic impoverishment and contributes to evolutionary potential in alligator weed.
Effect of salinity stress on epigenetic variation

We found that salinity stress could induce methylation alterations in alligator weed, consistent with previous studies in other plant species (Labra et al., 2002; Mason et al., 2008; Bonasio et al., 2010). In rice (Oryza sativa), salt-induced methylation changes were detected in roots of both salt-sensitive and salt-tolerant genotypes, and most methylation variation remained even after recovery, implying its stability within the present generation. After stressing Jatropha seedlings (Jatropha curcas L.) with different concentrations of NaCl solution, Mastan et al. (2012) found global hypermethylation in both roots and leaves, and an increased level of methylation polymorphism. In addition, methylation changes have been reported in response to heavy metals (Aina et al., 2004), cold temperature (Xie et al., 2015), viral infection (Choi and Sano, 2007; Mason et al., 2008) and water deficit (Verkest et al., 2015). Methylation alterations detected under stress may help plants cope with environmental changes, as stress-induced methylation differences may represent candidate epigenetic markers, which have previously been suggested to be linked to plastic phenotypic variation (Lukens and Zhan, 2007; Gao et al., 2010). Phenotypic differences between individuals may have adaptive significance by enabling a population to maintain itself in a heterogeneous environment.
A novel finding of our study is the significantly higher level of epigenetic diversity within genotypes in stressed plants. These epigenetic changes were not consistent across different individuals or genotypes; in other words, they did not occur at specific loci. It seems that the elevated epigenetic diversity may instead result from random perturbation. DNA methylation in plants is known to be maintained and modulated by molecular machinery including several enzymes such as MET, CMT, DNMT2 and DRM (Choi and Sano, 2007; Furner and Matzke, 2011). A possible explanation is that salinity stress interferes with the activity of these enzymes, resulting in a higher epimutation rate. This stress-epimutation hypothesis is supported by another finding in our data: the effect of salinity stress on epigenetic changes was reinforced by extended stress duration. The higher epigenetic diversity under long-term stress (i.e., 30 days) may result from the progressive accumulation of epimutations during DNA replication. These stress-induced random epigenetic modifications may persist or spread with somatic cell proliferation, resulting in higher levels of epigenetic diversity within genotypes under long-term stress. Researchers have reported that increasing genome methylation inhibits the expression of some genes and reduces the energy consumed to maintain growth and development (Kovalchuk et al., 2004). Although these random epigenetic modifications are not necessarily associated with salinity resistance-related pathways, they may act like random genetic mutations in adaptive evolution. As there is nearly no genetic differentiation in alligator weed in China, the observed epigenetic diversity in a stressful environment may compensate for genetic impoverishment, contributing to the species' evolutionary potential and thus underpinning the vigorous invasiveness of this invasive species (Schlichting, 1986; Sultan, 2004).
Effect of plant developmental stage

Our results confirmed that plants at different developmental stages have differential tolerance to stress. Generally, epigenetic diversity within genotypes seems less sensitive to salinity stress in young seedlings than in mature plants. This pattern can also be explained by the stress-epimutation hypothesis. Because the young seedlings were relatively small, salinity stress had a stronger effect on plant growth in young seedlings than in mature plants; for example, salinity stress greatly decreased both the above-ground and the below-ground biomass of young seedlings (Fig. 1A and 1C), whereas the growth of mature plants was more tolerant to stress. Thus, over the same stress duration (i.e., 30 days), the number of cell divisions would be much lower in young seedlings than in mature plants. It is therefore not surprising that we observed a lower accumulation of epigenetic change in young seedlings, producing different sensitivities to salinity stress at different plant developmental stages.
Different responses among genotypes

We observed different sensitivities to salinity stress among the six genotypes: stress-induced epigenetic diversity increased only in stressed seedlings of genotype N26, while plants of all genotypes showed a significant elevation of epigenetic diversity in the Stress-Mature group. The methylation status changed inconsistently at some loci across different individuals. Such differences are probably related to the genetic variation between genotypes. As the activity of DNA methyltransferase is usually regulated by gene expression, methyltransferases may have varied activities in different genotypes, leading to diverse methylation responses to stress. Furthermore, we found that methylation status differed between genotypes at certain loci. If these epigenetic differences can be stably maintained once the stress is removed (Verhoeven et al., 2010; Wang et al., 2015) and transmitted across generations (Crews et al., 2007; Boyko and Kovalchuk, 2008), they would represent candidate epigenetic alleles that could influence the evolutionary trajectories of closely related genotypes. For clonally reproducing species, epigenetic variation between genotypes is especially interesting because it potentially provides new and independent heritable phenotypic variation, opening a potential pathway of rapid evolution even without DNA sequence variation (Bossdorf et al., 2008).
Implications for adaptive evolution of alligator weed

Epigenetic variation is considered to play a vital role in population evolution because of its higher mutation rate compared with genetic variation (Ossowski et al., 2010; Becker et al., 2011; Schmitz et al., 2011) and its capacity to regulate gene expression. Populations established by asexual reproduction are often characterized by extremely low levels of genetic diversity within and among populations, which may limit their adaptive potential in fluctuating environments. Diverse epigenetic variation may compensate for low genetic variation and contribute to adaptation in asexually reproducing populations. If epigenetic variation accumulates rapidly in asexual populations, producing heritable phenotypic variation that is independent of genetic variation, the evolutionary constraints on genetically impoverished populations might be alleviated significantly. In other words, epigenetic variation may have important impacts on clonal populations. Our results on the epigenetic variation of alligator weed in response to different salinity treatments support this hypothesis. For alligator weed, flexible epigenetic variation is particularly important during plant development. The adaptive potential in novel and heterogeneous environments, which may be closely related to epigenetic variation, could be one of the reasons that alligator weed, despite its low genetic diversity, expanded rapidly in the areas it invaded.
New group of the Early Palaeozoic conodont-like fossils
The paper is devoted to Upper Cambrian and Tremadocian organophosphatic microfossils which were hitherto treated as conodonts and assigned mainly to the genera Coelocerodontus and Viirodus. Individual elements of these fossils, like the elements of conodonts, originally belonged to multi-element apparatuses. The present studies, based mainly on collections from Sweden, Poland (core sections), Estonia and Kazakhstan, show that despite the similarities of their individual elements to conodonts, they differ significantly from them in inner structure, as well as in the construction of the apparatuses composed of them. Elements of their apparatuses are matched in shape to each other and certainly functioned in conjunction, while those belonging to euconodont apparatuses are usually differentiated in shape and usually functioned in separation. All fossils of this group are provisionally named coelocerodonts in this paper. Their individual elements, as well as the apparatuses composed of them, are similar in construction to those of the genus Phakelodus, which is an ancestor of chaetognaths.
INTRODUCTION
Conodont-like microfossils occurring in the Cambrian-Ordovician transition beds are strongly diversified in structure. Excellent preservation of protoconodonts and paraconodonts in many localities was probably a result of their secondary phosphatization, which became very common in that period of time. As a result, even elements of originally organic composition, like the grasping spines of chaetognaths, are often well preserved (Szaniawski 1982, 2002). A structural study of these fossils is necessary to obtain a better knowledge and understanding of the early evolution of conodonts and chaetognaths.
In this paper the fossils collected from the uppermost Cambrian and early Tremadocian calcareous deposits of Sweden, Estonia, Poland (borehole cores) and Kazakhstan (Malyj Karatau) are discussed. The most important fossils for these studies are the very abundant and well-preserved specimens from the Tremadocian of Öland Island (Fig. 1B-P), which have previously been described briefly by Van Wamel (1974) and in detail by Andres (1988). Similar fossils from Malyj Karatau have been taxonomically described by Dubinina (2000) and briefly discussed by Szaniawski (2014). The present investigations are based on collections from the same localities and were conducted with a scanning electron microscope equipped with an energy-dispersive (EDS) detector for chemical characterization of the specimens. Longitudinal and cross sections of selected specimens were etched in 2% hydrochloric acid (Fig. 2). The investigated collection is housed in the Institute of Paleobiology of the Polish Academy of Sciences (institutional abbreviation ZPAL), Warszawa, Poland, under collection number ZPAL C.23; additional Arabic numerals indicate the number of the SEM stub and of the specimen on the stub.
RESULTS
Structural studies of very well-preserved Tremadocian conodonts from Öland Island show that some of the supposed conodonts, usually assigned to the genus Coelocerodontus Ethington, 1959 or Stenodontus Chen & Gong, 1986, those of the genus Viirodus Dubinina, 2000, as well as those determined by Dubinina (2000) as 'Proacontiodus' An, 1982, differ significantly from all hitherto known euconodonts (= true conodonts) and paraconodonts. The differences concern not only the inner structure of their elements but also the construction of the apparatuses composed of them. The elements consist of two layers. The outer one (Fig. 2B, C, E) is composed of calcium phosphate but, contrary to the crown of conodonts, it is very thin and not laminated. The layer covers the whole specimen, except the basal cavity. Longitudinal ridges occur on the surface of some elements (Fig. 1L, M). In some specimens the outer layer became partly separated from the whole element (Figs 1N, 2B). The EDS analyses show that the inner layer of the elements is thicker and much richer in organic matter than the outer one (Fig. 2C, E) and contains abundant carbon. Attempts to etch this layer did not yield good results. All the elements possess a large cavity reaching almost to the tip (Fig. 2E), which is usually filled with apatite and other material of secondary origin. The manner of growth of the elements is not well documented, but the diagonal striations visible in some specimens (Fig. 2A1, D, F) suggest that they grew similarly to the grasping spines of chaetognaths, by basal accretion of thin laminae (Szaniawski 2002).
Coelocerodonts differ from both euconodonts and paraconodonts (sensu Bengtson 1976) in the structure of their elements and the construction of the apparatuses composed of them. From the elements of euconodonts they differ mainly in lacking the thick, laminated calcium phosphate crown which grows by accretion of new lamellae from the outside. They also differ in possessing a much greater cavity and in lacking the 'basal body', which in euconodonts fills the cavity. The 'basal body' of euconodonts is composed of a mixture of organic matter and phosphate and, like the elements of coelocerodonts, grows in the basal direction; however, a detailed comparison of their manner of growth has not yet been completed. From paraconodont elements they differ in possessing the outer calcium phosphate layer and a much deeper central cavity. Both types of elements grow in the basal direction but in a different manner. As mentioned above, the elements of coelocerodonts most probably grew in the manner characteristic of the grasping spines of chaetognaths, by the addition of a new fibrous lamina to the base (see Szaniawski 2002). Paraconodonts, however, grew mainly basally but added part of the new lamella also to the inner side of the element, making the element thicker and partly filling the cavity. Euconodonts and coelocerodonts both possessed apparatuses composed of elements of different shapes, but the construction of these apparatuses is substantially different. Elements of euconodont apparatuses are usually diversified in shape and not fused together, while the elements of coelocerodonts are matched in shape (Fig. 1D, G, I) and often fused at the base (Fig. 1K); because of this they are often preserved as fused clusters. Moreover, elements of some coelocerodont apparatuses are very similar to each other (Fig. 1D, E, H-J). In this respect they are very similar to the apparatuses of Phakelodus Miller, 1980, and there are also transitional forms between them (Fig. 1E). Moreover, some elements of the coelocerodont apparatuses are slightly deformed in a manner suggestive of original flexibility (Fig. 1J). Such deformations are common among the elements and apparatuses of Phakelodus and are characteristic of fossils of originally organic composition. Phakelodus is treated as an ancestor of chaetognaths (Szaniawski 1982, 2002). This has recently been confirmed by the discovery of Cambrian chaetognath body fossils (Chen & Huang 2002) with grasping spines preserved, which is suggestive of an affiliation between coelocerodonts and chaetognaths.
Apparatuses of paraconodonts composed of strongly differentiated elements are not known.
CONCLUSIONS
Early Palaeozoic microfossils of the genera Coelocerodontus Ethington, 1959 and Viirodus Dubinina, 2000, hitherto treated as conodonts, in fact differ strongly from both euconodonts and paraconodonts. They should therefore be treated as a separate group of fossils, provisionally named coelocerodonts in this paper. Structurally, these fossils are most similar to the grasping apparatuses of Phakelodus, which is an ancestor of chaetognaths. This conclusion is consistent, to some extent, with the view of Sweet (1988), who included several taxa commonly treated as euconodonts in the separate class Cavidonti, to which he also assigned the genus Coelocerodontus.
Fig. 2. A-F, inner structure of the elements of coelocerodonts; Tremadocian, Sweden, Äleklinta; ZPAL C23/17-23. A, specimen without the outer layer; A1, higher magnification of the middle part showing diagonal striations. B, specimen partly crushed to show that the outer layer of some specimens can be easily separated. C, specimen specially fractured to show its inner structure, the arrow points to the magnified fragment; C1, higher magnification: 1, fracture of the outer layer; 2, surface of the inner layer; 3, inner surface of the outer layer. D, F, specimens showing diagonal striations. E, specimen longitudinally sectioned and etched, the arrow points to the magnified fragment; E1, higher magnification: 1, the outer layer; 2, the inner layer; 3, secondary filling of the specimen cavity.
Gender disparities in colorectal polyps
Abstract. Objective. To study the gender characteristics associated with colorectal dysplasia. There is convincing evidence that the risk of developing colorectal neoplasia is associated with sex differences, but the etiology of this difference remains unclear. It may be driven both by hormonal background and by genetic features of women and men. Materials and methods. A retrospective analysis was performed on a group of 100 patients. The study mainly included patients aged 19 to 65 years, among whom were 60 men and 40 women. Results. The findings on the positive relationship between female sex and the prevalence of colorectal neoplasia are based on colonoscopy and other extended clinical and laboratory investigations. Conclusions. As a result of the study, we would like to emphasize that colorectal polyps are more common in men than in women. Key words: gender differences; colorectal polyps; colorectal cancer; colonoscopy; neoplasia.
Abstract. Objective. The main objective of the research is the pathohistological assessment of colorectal adenomatous polyps and their gender-based evaluation. Gender and its connection with advanced colorectal dysplasia are the focus of the study. Materials and methods. A retrospective analysis was conducted on 100 patients, mainly aged 19 to 65 years. The gender distribution of the cases was 60 males and 40 females. Results. There is strong evidence for an association between gender and the risk of advanced colorectal neoplasia. The findings of a positive association between gender and advanced colorectal neoplasia are consistent with other large colonoscopy-based studies. Conclusions. As a result of the study, we would like to emphasize that colorectal polyps are more common in men than in women.
Colorectal adenomas are more commonly found in the developed Western countries. In this regard, nutritional factors, the environment, inflammatory diseases of the gastrointestinal tract, ulcerative colitis and Crohn's disease play an important role in the formation of colorectal polyps [1]. The reduction of proliferation and apoptosis in colon cells increases the risk of carcinogenesis against the background of inflammatory diseases. Polyps less than 1 cm in size have a 1% likelihood of being malignant, those of around 1 to 2 cm have a 10% chance of becoming cancerous, and those 2 cm or greater have a 40% chance of transforming into malignancy. Among villous adenomas, polyps less than 1 cm and those 1 to 2 cm have a 10% probability, and those more than 2 cm a 53% chance, of turning malignant. These lesions can recur, and the risk of transition to a neoplastic process varies depending mainly on their characteristics, including multiplicity, dimensions, histological structure and dysplasia. It has been estimated that 15% of all lesions larger than 1 cm are likely to progress to malignant neoplasms within 10 years [2-4].
Generally, colon polyps are a commonly encountered pathology and are mostly observed in patients over 50 years of age. Colorectal polyps are not only a pathology in their own right but also precursors of malignant neoplasms. Where polyps are detected, there is an elevated likelihood of cancer, which develops in around 5% of adenomatous polyps [5]. Studies have established that colorectal adenomas are often accompanied by epithelial dysplasia, which is considered the main cause of malignant neoplasms; such adenomas are 'sneak' lesions of colon (intestinal) cancer, playing a major role in the spread of colorectal polyps and the formation of malignant neoplasms. In this respect, adenomatous polyps are a significant cause of colorectal cancer [1,3,6,7].
Dysplasia can also be observed in colorectal adenomas, showing that it is a predecessor of malignant tumors. Accordingly, the development of colorectal adenomas in the colon is also a principal sign of progression toward malignancy. The number of polyps, their histological structure and the grade of dysplasia are essential for the transition into malignancy. In addition, complex glandular crowding and irregularity, prominent glandular budding, cribriform structures, vague luminal papillary extensions and back-to-back glands were observed in the analysis. Colon cancers encompass 30% to 50% of adenomas [2,8,9]. The formation of ectopic crypts leads to dysplasia, and abnormal differentiation of crypts in the basal layer causes the development of colorectal cancer [3,9,10].
Strong evidence has emerged for an association between gender and the risk of advanced colorectal neoplasia. The findings of a positive association between gender and advanced colorectal neoplasia are consistent with other large colonoscopy-based studies. The etiology of the gender difference remains uncertain, but it might be related to hormonal differences. Genetic differences between men and women might account for some of the differences in the rate of advanced neoplasia [11].
Aim of study. The objective of the chosen research was the pathohistological assessment of colorectal adenomatous polyps with respect to gender.
Materials and methods
Between 2011 and 2016, endoscopic polypectomy was performed in 118 of 1375 patients enrolled at the Endoscopy Department of the Central Customs Hospital in Baku, Azerbaijan. The polypectomy was performed with a squeezing ring and a biopsy clamp. A retrospective analysis was conducted in 100 of these patients, who were mainly aged 19 to 65 years; 18 patients were excluded from the study group for various reasons. Colonoscopic examination was performed in patients above 45 years of age who had gastrointestinal disorders, bleeding, bloody mucus or constipation. The patients underwent bowel preparation, and those with cardiac problems were referred to a cardiologist before the procedure. Colonoscopy was performed under intravenous sedation, and the patients were discharged home right after the procedure.
Results
The retrospective study included 100 paraffin blocks of polypectomy specimens. These blocks were used to examine 100 polyps with respect to gender disparities. Tissue preparation was done in the Central Customs Hospital Pathomorphology Department. Sections were taken from the paraffin-embedded tissues using a microtome, placed on slides and stained with hematoxylin and eosin (H&E). The investigation revealed that colorectal polyps are more common in men than in women.
Histological examination of the H&E-stained preparations detected 45 tubular polyps, 23 tubulovillous polyps, 16 inflammatory polyps, 13 hyperplastic polyps, 2 serrated polyps and one polyp of the villous form. The mean age of the patients was 56.78 ± 1.64 years. In this study, 38 patients were in the 50-69 age group and 5 patients were in the 40-49 age group. Overall, the male-to-female ratio for the detection of colorectal adenomas was about 1.5:1.
In terms of localization, colorectal adenomas in the distal colon outnumbered those in the proximal colon. The average size of the colorectal adenomas was 1.27 ± 0.11 cm (range 0.3 to 3.2 cm), and the average size of the villous adenomas was 1.14 cm. Tubular adenomas ranged from 0.8 to 3 cm with an average of 1.38 cm, and tubulovillous adenomas from 0.3 to 3.2 cm with an average of 1.4 cm. When the cases were inspected for the presence of dysplasia, dysplasia was present in 42 cases and absent in 58. When the 42 cases with dysplasia were followed, 26 (61.9%) progressed to malignancy over time, while 16 (38.1%) did not. Malignant transformation is considerably more likely in polyps exhibiting dysplasia, whether of high or low grade. Dysplasia was detected predominantly in large polyps, which agrees with the literature: in the present study, dysplasia was found in 26 (32.5%) of the 80 polyps smaller than 1 cm, in 12 of the 17 polyps (70.58%) between 1 and 3 cm in size, and in all 3 polyps (100%) larger than 3 cm.
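As a simple cross-check, the size-stratified dysplasia rates quoted above can be recomputed from the reported counts; the snippet below is illustrative arithmetic only, using exactly the counts given in the text.

```python
# Recompute the size-stratified dysplasia rates from the reported counts
# (illustrative only; counts are taken directly from the text above).
strata = {
    "< 1 cm": (26, 80),  # (polyps with dysplasia, total polyps)
    "1-3 cm": (12, 17),
    "> 3 cm": (3, 3),
}
for size, (dysplastic, total) in strata.items():
    print(f"{size}: {dysplastic}/{total} = {100 * dysplastic / total:.2f}%")
# < 1 cm: 26/80 = 32.50%
# 1-3 cm: 12/17 = 70.59%
# > 3 cm: 3/3 = 100.00%
```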
Discussion
The medical examination included evaluation of the bowel, and the detected polyps were treated depending mainly on their sizes. During the procedure, small polyps were removed by forceps biopsy, and polyps of 0.5 cm and larger were excised with a squeezing ring. All detected polyps were sent for pathological examination. The gender distribution of the cases was 60 males and 40 females. Histopathological evaluation revealed tubular adenoma in 46 cases (18 females and 28 males), tubulovillous adenoma in 23 cases (7 females and 16 males), inflammatory polyp in 15 cases (8 females and 7 males), hyperplastic polyp in 13 cases (4 females and 9 males), serrated polyp in 2 cases (1 female and 1 male) and villous adenoma in one case (1 female). In the cases we examined, the number of polyps per patient ranged from 1 to 3.
Studies have shown that tubular polyps occur on both sides of the intestine, tubulovillous polyps on the left side, and villous polyps mostly in the rectosigmoid region. The present study revealed that the size of the polyps is directly correlated with their pathomorphological structure. Patients with tubular adenomatous polyps were asked to undergo a follow-up examination once a year, and those with villous hyperplasia once every 6 months. Two patients with a high degree of dysplasia were treated surgically (Fig. 1).
In the study, 13 of the removed polyps were considered hyperplastic. Dysplasia of the polyps was assessed pathomorphologically and was found in 42 (14 females and 28 males) of the 100 polyps, while the remaining 58 (25 females and 33 males) polyps did not show dysplasia. On re-evaluation of the 42 polyps with dysplasia, 26 (61.9%) later developed malignancy, while malignancy was not detected in the other 16 (38.1%). None of the 58 polyps without dysplasia developed malignancy (Fig. 2).
Conclusions
As a result of the study, we would like to emphasize that colorectal polyps are more common in men than in women. Hence, gender and its connection with advanced colorectal dysplasia are the focus of the research. The gender distribution of the cases was 60 males and 40 females. A progressive risk of polyp or tumor formation is noted with aging, and colonoscopy is needed to correctly diagnose the increasing prevalence of this pathology in elderly people. The findings of a positive association between gender and advanced colorectal neoplasia are consistent with other large colonoscopy-based studies. The etiology of the gender difference remains uncertain, but it might be related to hormonal differences. Genetic differences between men and women might account for some of the differences in the rate of advanced neoplasia.
Funding. The study was not supported or funded by any drug company and did not receive financial support from any institution or funding body.
Application of quasi-optimal correlation algorithm for surface quality assessment
The article describes the application of a new approach to assessing the surface roughness of the gas turbine engine (GTE) blade profile after vibration contact polishing. The basis for calculating the microgeometry of the suction and pressure sides of the blades is the average amplitude of the variable component of the autocorrelation function, obtained by computer processing of a video image of the surface on the optoelectronic complex. Using an optoelectronic method to evaluate the microgeometry of compressor and turbine blade surfaces makes it possible to obtain the roughness distribution over the surface and the stress concentration coefficient, as well as a more in-depth analysis of the final processing technology.
Introduction
Currently, to ensure the necessary operational properties of gas turbine engine (GTE) parts, complex technologies based on various physical principles are used for both manufacturing and control.
Particular attention is given to the development of new research areas such as computer-based smart process environments and diagnostic and control systems for figurine-shaped parts. Progress in this area can substantially advance the methodology of GTE surface control.
Development of modern methods and means of technical control of surface quality using roughness data (surface microgeometric deviations) obtained with computer-based technologies is one of the great challenges of the modern machine-building industry.
Research objective
One of the most important characteristics of GTE blade surface quality is the microrelief of its working surface: the reliability and durability of many blades depend on the pattern of microgeometric deviations of their working surfaces [1]. It is known that the smoother the surface, the higher the fatigue resistance of the part; many studies show that fatigue fracture nuclei first appear on the part surface. Surface roughness areas are stress concentrators and are one of the reasons for fatigue resistance degradation: the stress at the bottom of a groove mark is 2-2.5 times higher than the average in the surface layer. Experimental results reveal that a roughness decrease from Ra = 0.74 µm to Ra = 0.22 µm increases the fatigue strength limit of a specimen by 14% on average and prolongs its service life more than three times.
Solution method
Creating modern systems of technical quality control, and a methodological basis for studying microgeometric deviations of figurine-shaped machine part surfaces in a production environment, through the development of an optoelectronic information measurement system (IMS) is a challenging issue.
This problem imposes a number of requirements that modern portable surface microrelief measurement devices [2-9] must meet. Such devices must be compact, produce real-time measurement data without physical contact in a production environment, allow digital processing of the produced data, and permit evaluation of the microrelief of figurine surfaces, such as inner cavities of small orifices or working surfaces of molds and dies [10,11]. Moreover, the metrological characteristics of such measurement devices must remain constant regardless of changes in the production environment; finally, the devices must be provided with an energy saving mode.
Existing methods and measurement means for the microrelief parameters Xi (i = 1, ..., r) do not allow reducing the measurement error below the order of a few tens of percent; therefore, the objective of this study is the development of quality control systems and a methodological basis for research on the microrelief of figurine surfaces in a production environment using an optoelectronic information measurement system (IMS).
Theoretical basis
The IMS evaluates the microrelief of a studied surface by comparing the surface with images of reference surfaces whose roughness parameters are known, having been determined by standard methods [12]. As a result, the studied surface is matched with a certain reference surface with a preset recognition probability.
In the original halftone frame of pixel format K1×K2, a strip of width N2 pixels is selected, starting from the first line. In the center of this strip a reference image of N1×N2 pixels is set. The reference, starting from the extreme left position, then moves along the selected strip in increments of 1 pixel. Each time the reference u(n1, n2) is superimposed on the current fragment x(n1, n2) of the halftone image, the correlation coefficient is calculated.
When the correlation coefficients in the first strip have been calculated, the next strip of the same format, shifted down by one pixel, is specified. In this strip a new reference with the same dimensions is set in the center and the same operations are performed, and so on. After processing the entire image, a matrix of correlation coefficients of size M1×M2, i.e. a two-dimensional correlation function, is formed.
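To illustrate the procedure, the sketch below reimplements the sliding-reference correlation in NumPy. It is not the software of the original optoelectronic complex: the window sizes, the Pearson correlation form, the strip geometry and the centered reference placement are our assumptions.

```python
# Illustrative sketch of the sliding-reference correlation procedure
# (not the original complex's software). For each strip, a reference
# window from the strip's center is correlated against every fragment.
import numpy as np

def correlation_surface(image: np.ndarray, n1: int, n2: int) -> np.ndarray:
    """Return the M1 x M2 matrix of Pearson correlation coefficients."""
    k1, k2 = image.shape
    m1, m2 = k1 - n1 + 1, k2 - n2 + 1
    out = np.zeros((m1, m2))
    for i in range(m1):                      # strip index (shifted by 1 px)
        strip = image[i:i + n1, :]
        ref = strip[:, (k2 - n2) // 2:(k2 - n2) // 2 + n2]  # central reference
        u = ref - ref.mean()
        for j in range(m2):                  # slide along the strip
            frag = strip[:, j:j + n2]
            x = frag - frag.mean()
            denom = np.sqrt((u * u).sum() * (x * x).sum())
            out[i, j] = (u * x).sum() / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(0)
surface = rng.random((64, 64))               # stand-in for a halftone frame
R = correlation_surface(surface, n1=16, n2=16)
print(R.shape, R.max())                      # peak where fragment == reference
```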
The problem in this formulation is solved optimally by a two-dimensional spatial filter matched to the signal. The output signal is then proportional to the correlation function of the two-dimensional input signal, and the maximum signal-to-noise ratio at the filter output is achieved.
To increase the speed of the calculation program, known quasi-optimal correlation algorithms and criterion functions were analyzed; this analysis showed the promise of using pair criterion functions and binary images.
Experimental results
Profile roughness of turbine first-stage blade airfoils made without allowance ("ЖС6ФУ" material) after vibratory polishing on a "ЛВП-4" machine was studied. The movement path of a blade during polishing results from the geometric addition of mutually perpendicular oscillations generated by two crank mechanisms; it is shaped as a mesh with controlled parameters, geometrically complex and practically non-reproducible. Such a movement path allows creating a uniform microgeometry over the surfaces of the pressure and suction sides of the blade. The vertical and horizontal oscillation modes directly influence the processing capacity and the dynamic loads appearing in the oscillating system. Usually the oscillation frequency is taken as 20-25 s⁻¹ and the amplitude as 5-10 mm, which results in a machining rate of 30-120 m/min. Simulation of the machining process has shown that the maximum machining capacity is provided by a discrepancy in the proportion of frequencies ωo = ωh/ωv, where ωh is the frequency of horizontal oscillations and ωv is the frequency of vertical oscillations. The range 1/2 ≤ ωo ≤ 1 was studied. The first part of the governing equation is a hyperbola, the second a tangensoid; by solving this equation for ωo, we obtain the following series of frequency proportions: 0.543, 0.617, 0.704, 0.763, 0.833, 0.917.
An optoelectronic unit was used to study the microgeometry of the blade suction and pressure sides. The unit [13] (see figure 1) includes the following components: 1, studied surface; 2, television camera with a CCD sensor; 3, analog-to-digital converter; 4, memory storage; 5, device for setting the window coordinates and sizes used to transform the input halftone surface image into a binary image, as well as for setting the reference image size in the binary image; 6, digital computer; 7, device for setting the coordinates of the current binary image fragment; 8, correlator and software for processing video imagery of the studied surfaces. Surface microgeometric parameters must be uniform over the entire blade surface. We take uniformity of the surface roughness parameters to imply an equiaxial structure and to exclude one-directional roughness. To check this, the electronic image was rotated by 90°, 180° and 270°, and the autocorrelation surface parameters were determined for each orientation. Agreement of their values within a 5% tolerance indicates the presence of such a structure on the blade surface.
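A minimal sketch of the rotation check is given below, assuming a scalar autocorrelation parameter (here, the mean absolute value of the variable component of the circular autocorrelation) as a stand-in for the complex's actual measure; the 5% tolerance follows the text.

```python
# Hedged sketch of the equiaxiality check: the image is rotated by
# 90/180/270 degrees, a scalar autocorrelation parameter is computed for
# each orientation, and the values are compared to a 5% tolerance. The
# parameter used here is an illustrative stand-in, not the paper's measure.
import numpy as np

def autocorr_parameter(img: np.ndarray) -> float:
    f = img - img.mean()
    ac = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real  # circular autocorrelation
    ac /= ac.flat[0]                                      # normalize zero lag to 1
    return float(np.abs(ac[1:, 1:]).mean())               # variable component

def is_equiaxial(img: np.ndarray, tol: float = 0.05) -> bool:
    vals = [autocorr_parameter(np.rot90(img, k)) for k in range(4)]
    return (max(vals) - min(vals)) / max(vals) <= tol

rng = np.random.default_rng(1)
print(is_equiaxial(rng.random((128, 128))))  # isotropic noise -> typically True
```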
Blade surface spots after vibratory polishing were analyzed. Figure 2 shows half-tone and binary images of the selected surface area, a correlation surface and the correlation coefficient change diagram for this area [14,15].
The surface area format stored in computer memory was 320×240 pixels in this case. Processing of the experimental results showed that the average of the variable component of the correlation function, calculated over 30 images, is Ucp = 23.1 rel. units.
If the recognition probability for the studied surface structure (roughness) is set to P = 0.99, the following expression is obtained for the confidence interval:
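The expression itself is not recoverable here, but as a generic illustration of the kind of interval involved, the sketch below computes a two-sided normal-theory confidence interval for the mean of the variable component at P = 0.99; the standard deviation s is a hypothetical value, not taken from the paper.

```python
# Generic illustration (not the paper's formula): a 99% normal-theory
# confidence interval for the mean variable component Ucp over 30 images.
import math

u_mean, n = 23.1, 30   # reported mean (rel. units) and sample size
s = 2.0                # hypothetical sample standard deviation (assumption)
z = 2.576              # two-sided normal quantile for P = 0.99
half_width = z * s / math.sqrt(n)
print(f"Ucp = {u_mean} +/- {half_width:.2f} rel. units")
```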
Conclusion
The study shows that the optoelectronic method for evaluating turbine blade surface quality allows developing surface roughness fields and using them for a deeper analysis of the vibratory polishing technology. The research demonstrates that using optimal proportions of the vertical and horizontal oscillation frequencies allows creating a uniform surface structure of the blade airfoil profile. A first study of the surface areas of a turbine first-stage blade adjacent to orifices along the leading edge has shown that the roughness increases by a factor of 1.5-1.7 around the orifices.
Renormalization Group Flow in CDT
We perform a first investigation of the coupling constant flow of the nonperturbative lattice model of four-dimensional quantum gravity given in terms of Causal Dynamical Triangulations (CDT). After explaining how standard concepts of lattice field theory can be adapted to the case of this background-independent theory, we define a notion of "lines of constant physics" in coupling constant space in terms of certain semiclassical properties of the dynamically generated quantum universe. Determining flow lines with the help of Monte Carlo simulations, we find that the second-order phase transition line present in this theory can be interpreted as a UV phase transition line if we allow for an anisotropic scaling of space and time.
Introduction
The formalism of Causal Dynamical Triangulations (CDT) provides a regularization of the putative theory of quantum gravity [1,2]. Its underlying assumption is that the fundamental theory of quantum gravity can be understood purely in terms of quantum field-theoretical concepts. CDT quantum gravity shares this assumption with the asymptotic safety program, originally put forward by Weinberg [3], which was subsequently studied in a (2 + ε)-dimensional expansion [4] and more recently with the help of the functional renormalization group equation [5]. Similarly, a key idea behind Hořava-Lifshitz gravity (HLG) [6] is to use ordinary quantum field theory to construct quantum gravity, but to circumvent the usual problem of non-renormalizability by explicitly breaking the four-dimensional diffeomorphism invariance of the continuum theory with the introduction of a preferred time foliation. In this setting one can naturally introduce terms with higher spatial derivatives in the action to render the theory renormalizable while keeping the theory unitary.
Their common field-theoretic basis, as well as coinciding results on the spectral dimension of spacetime on Planckian scales [7] and a similar phase structure of CDT and HLG [8,9] make it natural to try to relate the three approaches -causal dynamical triangulations, asymptotic safety and Hořava-Lifshitz gravity -more directly. 1 Interesting examples of this include the formulation of a functional renormalization group equation for foliated spacetimes [14] and its application to projectable HLG at low energies [15], and an extension of CDT quantum gravity by the explicit addition of higher spatial derivative terms (albeit at this stage only in three spacetime dimensions [16]). Note that HLG does not appeal to an asymptotic safety scenario for the theory to make sense at high energies.
Although the distinguished notion of proper time of CDT looks superficially similar to the time foliation in HLG, its status is different because CDT does not possess any residual diffeomorphism invariance, which therefore cannot be broken either. The role of time in CDT was recently clarified further in a study in three dimensions, where it was verified explicitly that key results of CDT quantum gravity continue to hold in a version of the theory which does not possess preferred simplicial hypermanifolds that can be identified with surfaces of constant time [17]. This provides strong evidence that the notion of proper time that is naturally available in standard CDT is simply a convenient parameter to (partially) describe the spacetime geometry, and that its presence does not skew the results of the theory in an unwanted way. Of course, also this "non-foliated" version of CDT incorporates microscopic causality conditions, implying an asymmetry between time and spatial directions that persists after Wick-rotating, just like in standard CDT. It is therefore conceivable that in part of the coupling constant space [8] the nonperturbative effective quantum action of CDT can be related to an anisotropic action of HLG-type, even though in the former no higher-order spatial derivative terms are added explicitly to the bare classical Einstein-Hilbert action. Let us also point out that the built-in unitarity of the CDT formalism -resulting from a well-defined transfer matrix [18] -is likely to affect the functional form of the dynamically generated quantum action, in a way we currently do not control explicitly.
In this article, we present a first attempt at establishing a concrete renormalization group flow in four-dimensional CDT quantum gravity (in the standard version and without higher-derivative terms in the bare action), assuming a straightforward identification of lattice proper distances with continuum proper distances. More specifically, with the help of computer simulations we determine trajectories of constant physics, interpreted in a specific way in terms of semiclassical observables we have at our disposal, in the coupling constant space spanned by the bare coupling constants of the lattice theory. Moving along these lines in the direction of smaller lattice spacing, we do not find evidence that they run into the second-order phase transition line, with the possible exception of the triple point of the phase diagram, where three transition lines meet. A slightly more general ansatz that allows for a relative scaling of time and space as the second-order transition is approached leads to a more interesting result, which can be interpreted as a proper UV limit. In terms of procedure and first results, our investigation provides a reference frame and opens the door to a further systematic study of renormalization group flows in CDT and perhaps other nonperturbative lattice formulations of quantum gravity. This will involve more sophisticated arguments for an appropriate relative scaling of time and space near the phase transition, and hopefully a wider range of observables to provide alternative definitions of what it means to "keep physics constant".
Causal Dynamical Triangulations
CDT is a theory of fluctuating geometries, which at the regularized level are represented by triangulated, piecewise flat spacetimes. It can be viewed as a lattice theory in the sense that the length assignments to the one-dimensional edges (links) of a given triangulation completely determine the piecewise flat geometry.² As already mentioned, a well-behaved causal structure is implemented on each Lorentzian triangulation with the help of a global time foliation that is distinguished in terms of the simplicial structure. One sums over these geometries in the path integral, where the action is given by the Einstein-Hilbert action in Regge form, suitable for piecewise linear geometries (see the review [1] or the original articles [18] for further details). All triangulations can be obtained by suitably gluing together two types of building blocks, the so-called (4,1) and (3,2) four-simplices, leading (after Wick rotation) to a very simple form for the Euclidean Regge action S_E, namely

S_E = -(κ_0 + 6Δ) N_0 + κ_4 (N_4^{(4,1)} + N_4^{(3,2)}) + Δ N_4^{(4,1)},   (1)

where N_4^{(4,1)} and N_4^{(3,2)} are the numbers of four-simplices of type (4,1) and (3,2) respectively, and N_0 is the number of vertices in the triangulation. The parameter κ_0 is proportional to a²/G_0, where G_0 is the bare gravitational coupling constant and a denotes the length of (spatial) links. Similarly, κ_4 is proportional to the bare cosmological constant but will play no role here, since we will keep the number of four-simplices (almost) constant during the Monte Carlo simulations of the CDT lattice system.
The parameter Δ appearing in the action (1) requires a more detailed discussion. There are two types of edges that occur in the Lorentzian-signature triangulations before everything is Wick-rotated: spacelike links with squared length a² and timelike links with negative squared length a_t² = -αa², where the parameter α > 0 quantifies the relative magnitude of the two. We then perform a rotation to Euclidean signature by analytically continuing α in the lower-half complex plane from α to -α̃, so that the squared timelike link length becomes positive, a_t² = α̃a². The original, Lorentzian Einstein-Hilbert action in Regge form depends on α and satisfies iS_L(α) = -S_E(-α) when rotating from Lorentzian to Euclidean signature. The Euclidean action S_E is now a function of α̃ (see [1] for details). It can be parametrized in the form (1), where Δ is now a function of α̃, normalized such that the case of uniform edge lengths, α̃ = 1, corresponds to Δ = 0. At this stage Δ is not a coupling constant, but only a parameter in the action. Even for Δ different from zero the action continues to be the Euclidean Regge-Einstein-Hilbert action, merely reflecting the fact that some links are assigned a different length. However, in the effective quantum action Δ will appear as a coupling constant. The reason why this can happen is that the choice of coupling constants for which interesting fluctuating geometries are observed is far from the semiclassical region. In this nonperturbative region the measure used in the path integral becomes as important as the classical action, and Δ will effectively play the role of a coupling constant. We refer again to [1] for a detailed discussion, and examples of nongravitational lattice models where one encounters a similar situation. In view of this, the coupling constant space of CDT quantum gravity is spanned by κ_0 and Δ. For reference, we are showing in Fig. 1 the corresponding phase diagram, already reported in [19,9]. It has three phases, denoted by A, B and C. Previous studies have shown that only phase C is interesting from the point of view of quantum gravity, in the sense that only there one seems to find quantum fluctuating geometries which are macroscopically four-dimensional. The properties of the quantum geometry in this phase have been studied in great detail [20,19,21,22].
In the present work, we will follow standard lattice procedure by trying to trace the flow of the bare coupling constants inside phase C when we take the lattice spacing a to zero, while keeping physics constant. We know from [9] that the phase transition line separating phases B and C is of second order, while phases A and C are separated by a first-order transition. Our expectation is therefore that the flow lines will approach this second-order transition line when a goes to zero and continuum physics is kept constant.
Identifying paths of constant physics
For the purpose of illustration, consider a φ⁴ lattice scalar field theory with bare (dimensionless) mass m_0 and bare dimensionless φ⁴-coupling constant λ_0. Correspondingly, the effective action has a renormalized mass m_R and a renormalized coupling constant λ_R. Let us assume that λ_R is defined according to some specific prescription in terms of the four-point function. Similarly, assume that m_R is defined by some prescription related to the two-point function, for example the exponential fall-off of the connected two-point function. (As usual in a lattice set-up, there is the question of lattice artifacts when defining m_R and λ_R, due to the finiteness of the lattice spacing and the accompanying discretization effects. In the discussion below we ignore such technical issues, because our focus is on the essence of the renormalization group flow of the bare lattice coupling constants.) One can thus write m_R a = 1/ξ, where ξ is the correlation length measured in lattice units a. This relation specifies how one should scale the lattice spacing a to zero as a function of the correlation length ξ in order for m_R to stay constant. Once the actual value of m_R has been supplied from the outside, say by comparison with experiment, the value of a(ξ) is fixed in physical units by measuring ξ.
In order to define a continuum limit where a(ξ) → 0 while m_R is kept fixed one needs a divergent correlation length ξ, in other words, a phase transition point or phase transition line of second order in the (m_0, λ_0)-coupling constant space. The lattice φ⁴-theory has such a second-order phase transition line. Choosing specific initial values m_0(0) and λ_0(0) for the bare coupling constants, performing the functional lattice integral will determine the renormalized coupling λ_R = λ_R(m_0(0), λ_0(0)) corresponding to these values. The requirement that λ_R(m_0, λ_0) stay constant when changing m_0 and λ_0 then defines a curve (m_0(s), λ_0(s)) in the plane spanned by the bare coupling constants, where s is an arbitrary curve parameter.
Along this curve the correlation length ξ will change. Assuming for simplicity that ξ is a monotonic function of s, one can parametrize the curve by ξ instead. Moving along the curve in the direction of increasing ξ will in general lead to the second-order phase transition line where ξ becomes infinite. At the same time, because of a(ξ) = 1/(m_R ξ), the UV cut-off a will decrease. If the curve reaches the transition line at a point λ_0*, this point will be a UV fixed point for the φ⁴-theory, corresponding to a renormalized mass m_R and a renormalized coupling constant λ_R, since approaching it one has a(ξ) → 0. However, it can happen that a curve of constant λ_R does not reach the second-order phase transition line. If one cannot find a single curve of constant λ_R, for any starting point (m_0, λ_0), which reaches such a critical point, one would conclude that the theory does not have a UV completion with a finite value of the renormalized coupling constant λ_R. For the four-dimensional scalar φ⁴-theory this turns out to be the case.
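To make the procedure concrete, the following Python sketch traces a line of constant λ_R in a toy two-parameter model. The closed forms of λ_R and ξ below are invented placeholders (in an actual lattice study both would be measured by Monte Carlo at each bare coupling), so the sketch only illustrates the bookkeeping of following a level set while monitoring the correlation length and the lattice spacing a = 1/(m_R ξ).

import numpy as np

# Toy stand-ins for quantities that in reality come from Monte Carlo
# measurements; the functional forms are invented for illustration only.
def lambda_R(m0, l0):
    return l0 / (1.0 + m0**2 + l0)

def xi(m0, l0):
    # correlation length diverging on a fictitious "critical line" m0 = l0/2
    return 1.0 / np.sqrt(abs(m0 - 0.5 * l0) + 1e-9)

def grad(f, m0, l0, h=1e-5):
    return np.array([(f(m0 + h, l0) - f(m0 - h, l0)) / (2 * h),
                     (f(m0, l0 + h) - f(m0, l0 - h)) / (2 * h)])

p = np.array([0.4, 0.3])                   # starting bare couplings (m0, l0)
for step in range(201):
    g = grad(lambda_R, *p)
    tang = np.array([-g[1], g[0]])         # tangent to the level set of lambda_R
    tang /= np.linalg.norm(tang)
    if xi(*(p + 1e-3 * tang)) < xi(*p):    # orient the step towards larger xi
        tang = -tang
    p = p + 1e-3 * tang
    if step % 50 == 0:
        # lattice spacing in physical units, a = 1/(m_R * xi), setting m_R = 1
        print(f"m0={p[0]:.4f}  l0={p[1]:.4f}  xi={xi(*p):7.2f}  a={1/xi(*p):.4f}")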
Assume for the sake of the argument that there is a UV fixed point λ_0* somewhere on the second-order phase transition line. (In formulas (3) and (5) below it is assumed that λ_0* ≠ 0; if λ_0* = 0 the fixed point is Gaussian and the formulas have to be modified appropriately.) The β-function then has a zero there, β(λ_0*) = 0, since at fixed m_R and λ_R the coupling λ_0(ξ) stops running for ξ → ∞. Approaching the fixed point along such a trajectory, the behaviour of the bare coupling is governed by the standard scaling form

λ_0(ξ) ≈ λ_0* + c/ξ^(1/ν),   (3)

where ν is the critical exponent associated with the fixed point and c is a constant.
In the CDT quantum gravity theory it will be convenient to analyze the flow of bare coupling constants for fixed continuum physics under the additional assumption that the physical volume of spacetime is fixed and finite. With this in mind, one can reformulate the above coupling constant flow in ordinary lattice field theory in terms of so-called finite-size scaling. Consider the case of d dimensions and introduce a dimensionful physical d-volume V_d by

V_d = N_d a^d,   (2)

where N_d is the total number of d-dimensional elementary building blocks (hypercubes on a cubic lattice, simplices on a triangular lattice). We want to make sure that V_d can be viewed as constant along a trajectory of the kind described above, with m_R and λ_R kept fixed, in the continuum limit as a(ξ) → 0. This can be achieved by keeping the ratio between the linear size L = N_d^(1/d) of the lattice "universe" and the correlation length ξ fixed. In terms of the renormalized mass m_R and the lattice spacing a(ξ) the ratio can also be written as

N_d^(1/d)/ξ = N_d^(1/d) m_R a(ξ) = m_R V_d^(1/d).   (4)

Accordingly, moving along a trajectory of constant m_R and λ_R in the bare (m_0, λ_0)-coupling constant plane and changing N_d such that (4) stays fixed ensures that the quantum field theory in question has a finite continuum spacetime volume V_d. Furthermore, the equality (4) implies that the dependence on the correlation length ξ in (3) can be substituted by a dependence on the linear size N_d^(1/d) in lattice units of the spacetime, leading to

λ_0(N_d) ≈ λ_0* + c̃/N_d^(1/(νd)).   (5)

We noted above that the absence of a UV fixed point is signaled by the fact that no curve of constant λ_R reaches the phase transition line. In this case the correlation length ξ along curves will not go to infinity and the lattice spacing will not go to zero. Restated in terms of the discrete lattice volume, it means that N_d will not go to infinity. We have outlined in this section in some detail how to define and follow lines of constant physics in the φ⁴ lattice scalar field theory, because we want to apply the same technique to understand the UV behaviour of the lattice quantum gravity theory. Of course, it should be emphasized that the two theories differ in important ways. First, because φ⁴-theory is renormalizable in four dimensions, we know a priori that it suffices to study the flow in the bare couplings m_0 and λ_0: if no UV fixed point is found along lines of constant λ_R in the (m_0, λ_0)-plane, it does not exist. On the other hand, gravity is not renormalizable, and restricting the search for a UV fixed point to the two-dimensional coupling constant space spanned by (κ_0, Δ), although suggestive because of the observed second-order transition line, may ultimately not be sufficient.
Second, while the meaning of lines of constant physics is relatively straightforward in φ 4 -theory, the same cannot be said about this concept in nonperturbative and background-independent quantum gravity, because any measure of length one uses is defined in terms of geometry, which is subject to the dynamics of the theory. As will become clear in the remainder of this paper, defining lines of constant physics in terms of suitable geometric observables needs considerable care and is at this stage much more tentative than in the case of scalar field theory.
Application to nonperturbative gravity
In the present application to quantum gravity, we will use the coupling constant flow in the form (5), staying at a constant spacetime volume V_4 = N_4 a⁴ for the universe, where N_4 is the number of four-simplices. (Strictly speaking, we are keeping the number N_4^(4,1) of four-simplices of type (4,1) constant, see [21] for a discussion. The distinction is not important for our present analysis.) How can we make sure that it is consistent to view V_4 as constant when we increase the lattice volume N_4? In the case of ordinary field theory we achieved this by using the physical correlation length as a fixed yardstick and requiring m_R^d V_d to remain constant. Since in the CDT pure gravity model we do not have a similarly simple correlation length at our disposal, we need to find another indicator of constant physics.
In phase C, at least somewhat away from the B-C phase boundary, the three-volume profile of the universe is to excellent approximation given by [21]

⟨N_3(i)⟩ = (3/4) (N_4^(3/4)/ω) cos³( i/(ω N_4^(1/4)) ),   (6)

and the variance of the spatial volume fluctuations δN_3(i) by

⟨(δN_3(i))²⟩ = γ² N_4 F( i/(ω N_4^(1/4)) ),   (7)

for a specific function F, whose details are not important for the discussion at hand. Both profiles are functions of the lattice time i. The number of spacelike three-simplices at fixed integer time i is denoted by N_3(i), and the parameters ω and γ depend on the geometric properties of the triangular building blocks and the bare coupling constants κ_0 and Δ. The profiles (6) and (7) represent finite-size scaling relations, and show in the first place that the time extension of the universe scales like N_4^(1/4) and its spatial volume at a given time like N_4^(3/4), as one would expect from a four-dimensional spacetime. This might seem like a triviality since we started out with four-dimensional building blocks, but in a set-up where no background geometry is put in by hand it is not: all our results are extrapolated to an infinite limit (N_4 → ∞), and in this limit nonperturbative contributions from the summed-over path integral histories play an important role in bringing about the final outcome. To illustrate the point, no four-dimensional macroscopic scaling behaviour is found in phases A and B of the present model, although they are of course based on exactly the same (microscopically four-dimensional) building blocks. Similarly, one may in principle find deviations from such a scaling inside phase C when getting close to the second-order transition between phases B and C.
The data (6) and (7) extracted from the Monte Carlo simulations in phase C at fixed lattice volume N_4 allow us to interpret the ground state of geometry as a macroscopically four-dimensional quantum universe with a definite average volume profile and a definite behaviour of the average quantum fluctuations of the spatial volume around it. Moreover, after making a specific identification of continuum proper time with lattice proper time (by fixing a relative constant for given values of the bare couplings), these properties are characteristic of a de Sitter universe [21].
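As a concrete illustration of how ω and γ can be extracted from such data, the following Python sketch fits the cos³ profile to mock measurements. The profile shapes follow eqs. (6) and (7) as reconstructed above; the normalization F(0) = 1 used for γ, and all of the "measured" numbers, are our own illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
N4 = 40000
omega_true, gamma_true = 0.6, 1.2
i = np.arange(-40, 41).astype(float)

def mean_profile(i, omega):
    # eq. (6): <N_3(i)> = (3/4) N4^(3/4)/omega * cos^3(i/(omega N4^(1/4)))
    arg = i / (omega * N4**0.25)
    c = np.where(np.abs(arg) < np.pi / 2, np.cos(arg), 0.0)
    return 0.75 * N4**0.75 / omega * c**3

# Mock "measured" profile and central variance, with a little noise
n3 = mean_profile(i, omega_true) * (1 + 0.01 * rng.standard_normal(i.size))
var0 = gamma_true**2 * N4 * (1 + 0.02 * rng.standard_normal())

omega_fit, _ = curve_fit(mean_profile, i, n3, p0=[0.5])
gamma_fit = np.sqrt(var0 / N4)      # from eq. (7), assuming F(0) = 1

print(f"omega = {omega_fit[0]:.3f}, gamma = {gamma_fit:.3f}")
print(f"product omega*gamma = {omega_fit[0] * gamma_fit:.3f}")   # the UV indicator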
Sufficiently far away from the phase boundaries of phase C, the data summarized in relations (6) and (7) is compatible with the discretized action which was reconstructed from measuring the correlation function of spatial three-volumes [21] and has the form

S_discr = k_1 Σ_i [ (N_3(i+1) − N_3(i))² / F(N_3(i+1), N_3(i)) + k̃ N_3^(1/3)(i) ],   (8)

where it is understood that the function F for identical arguments coincides with the function F on the right-hand side of eq. (7). For sufficiently large N_4 and to first approximation, the measured parameters k_1 and k̃ in the reconstructed action (8) were shown to be independent of N_4, and the coefficient γ in (7) was shown to be related to k_1 by

γ ∝ 1/√k_1.   (9)

Phrased differently, for an appropriate choice of the coupling k̃ the classical solution to the discretized action (8), solved under the constraint of fixed N_4, is well approximated by the observed distribution ⟨N_3(i)⟩_(N_4) of (6). In addition, the observed behaviour of the volume fluctuations, eqs. (7) and (9), is well described by expanding the action (8) to quadratic order around the average profile (6), thus leading to

⟨(δN_3(i))²⟩ ∝ (N_4/k_1) F( i/(ω N_4^(1/4)) ).   (10)

Note finally that the coupling constant k̃ is a function of ω if the distribution (6) is to represent the local minimum of S_discr for large, but fixed N_4, namely

k̃ ∝ ω^(−8/3).   (11)

A natural starting point for trying to relate the above results to continuum physics is to compare the effective action (8) for the spatial three-volume (constructed from numerical "observations") with a minisuperspace action for the scale factor of a homogeneous, isotropic universe with spatial slices of the same S³-topology. We can then ask which continuum minisuperspace actions can be matched to an emergent background like (6). The line element of (Euclidean) minisuperspace is

ds² = N(t)² dt² + a(t)² dΩ_3²,   (12)

where a(t) is the scale factor, N(t) the lapse function and dΩ_3² the line element on the unit three-sphere, such that the spatial volume at time t is V_3(t) = 2π² a³(t).
As we have already argued in the introduction, Hořava-Lifshitz gravity provides a natural and potentially useful reference frame for nonperturbative properties of CDT quantum gravity. Also in our present analysis of the renormalization group flow we will use an extended class of reference metrics of type (12), including minisuperspace models of Hořava-Lifshitz type.
Recall that the quadratic part of the action of projectable HLG in four dimensions, in terms of the three-metric g_ij(x,t) and the extrinsic curvature K_ij(x,t), reads

S = κ̄ ∫ dt d³x N √g ( K_ij K^ij − λ K² + δ̃ ⁽³⁾R ),   (13)

where N(t) is the lapse function and ⁽³⁾R is the intrinsic scalar curvature of the spatial three-geometry. For the parameter values λ = 1 and δ̃ = −1 one obtains the standard form of the Euclidean Einstein action, in which case one can identify κ̄ = 1/(16πG), where G is the gravitational coupling. The three terms in parentheses on the right-hand side of (13) are separately invariant under foliation-preserving diffeomorphisms, the invariance group of HLG. Using the metric ansatz (12), with a(t) re-expressed in terms of V_3(t), the continuum HLG action (13) becomes

S = ∫ dt N ( κ V̇_3²/(N²V_3) + δ̃ κ̄ c V_3^(1/3) ),  κ := (1/3 − λ) κ̄,   (14)

with c a numerical constant. Firstly, the equation of motion derived for V_3(t) from the action (14), under the constraint that the total four-volume is V_4, is solved by

V_3(t) = (3/4) (V_4/(χR)) cos³(t/R),  R ∝ (V_4/χ)^(1/4),   (15)

which we have written in a form that facilitates comparison with the lattice expression (6). It is of course precisely the match of the lattice results with a classical cos³-profile (15) that allows us to identify lattice time with a continuum time t, which is a constant multiple of continuum proper time τ,

τ = χ t.   (16)

The parameter χ in relation (15) is defined as

χ = ( (1 − 3λ)/(2δ̃) )^(1/2).   (17)

Computing the scale factor a(t) corresponding to the volume profile (15) and substituting it into the line element (12) one obtains

ds² = χ² dt² + R² cos²(t/R) dΩ_3².   (18)

Unless χ equals its general relativistic value χ = 1, this describes a deformed four-sphere with time extension πχR and spatial extension πR, R being the (maximal) radius of the spatial three-sphere.
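As a cross-check of the reconstructed minisuperspace equations, the following sympy sketch verifies that a cos³ profile solves the constrained equation of motion of an action of the form (14). The normalization κ = 1, the abbreviation q := δ̃κ̄c for the potential coefficient, and the Lagrange-multiplier implementation of the fixed-four-volume constraint are our own simplifications for the purpose of the check.

import sympy as sp
from sympy.calculus.euler import euler_equations

t, q, mu = sp.symbols('t q mu')
V = sp.Function('V')

# Minisuperspace Lagrangian of eq. (14) in the gauge N = 1, with kappa = 1,
# potential coefficient q, and Lagrange multiplier mu enforcing fixed V_4.
L = sp.diff(V(t), t)**2 / V(t) + q * V(t)**sp.Rational(1, 3) + mu * V(t)

eom = euler_equations(L, [V(t)], [t])[0].lhs

# Insert the cos^3 ansatz of eq. (15), with unit amplitude and unit width.
expr = sp.simplify(eom.subs(V(t), sp.cos(t)**3).doit())

# Solve for q and mu at two sample times, then check the equation holds
# identically; by hand one finds q = 9 and mu = -9.
sol = sp.solve([expr.subs(t, sp.Rational(1, 5)),
                expr.subs(t, sp.Rational(1, 3))], [q, mu], dict=True)[0]
print(sol)                              # expect {q: 9, mu: -9}
print(sp.simplify(expr.subs(sol)))      # expect 0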
Next, comparing the continuum expressions (14)-(17) with the corresponding lattice expressions (6) and (8), and assuming V_4 ∝ N_4 a⁴, one is led to the identifications

τ_i ∝ (χ^(3/4)/ω) a i,  V_3(τ_i) ∝ (ω/χ^(3/4)) N_3(i) a³,  κ a² ∝ (ω/χ^(3/4))² k_1.   (19)

We note that in the transition from lattice to continuum data only the ratio of ω and χ^(3/4) appears. The first relation in (19) reiterates our earlier assertion that the continuum proper time can be viewed as proportional to the integer lattice time multiplied by the lattice spacing, where the said ratio is now seen to enter. Following the logic outlined at the beginning of this section, we would now like to define a path of constant continuum physics in the coupling constant space spanned by (κ_0, Δ). In doing this, we want to keep the total four-volume V_4 ∝ N_4 a⁴ fixed. This will enable us to take the lattice spacing a → 0 by changing N_4, a parameter we can control explicitly. Our definition of what constitutes "constant physics" will rely on the assumptions that (i) throughout phase C the behaviour of the three-volume is described adequately by the (semi-)classical continuum formulas derived above, and (ii) we can associate space- and timelike lattice units with continuum proper distances and proper times in a way that inside phase C is independent of κ_0 and Δ. More precisely, regarding this latter point it is sufficient to make the weaker assumption that the ratio of unit proper distance and unit proper time is a fixed number times the speed of light c throughout coupling constant space. This is equivalent to keeping fixed the ratio ω/χ^(3/4) in relations (19).
Under these assumptions, keeping ω constant in the simulations implies a constant χ and thus a constant volume profile, giving us one criterion for constant, macroscopic physics. However, keeping ω(κ_0, Δ) fixed is not sufficient to ensure that the emergent continuum universe is unchanged in the limit N_4 → ∞. Denoting the typical size of volume fluctuations by |δN_3(i)| := ⟨(N_3(i) − ⟨N_3(i)⟩)²⟩^(1/2), and analogously for |δV_3(τ)|, one has

|δN_3(i)|/⟨N_3(i)⟩ ∝ ω γ/N_4^(1/4)  ( ∝ ω²/(χ^(3/4) √κ V_4^(1/4)) ),   (20)

where the result in parentheses follows from relations (10) and (19), and the scaling should be understood for fixed times τ. In view of the proportionality τ_i ∝ i/N_4^(1/4) from (19) above, the discrete time label i used in N_3(i) and δN_3(i) should change proportionally to τN_4^(1/4) when changing N_4. According to our assumptions, the three-volume profile V_3(τ) and the fluctuation size |δV_3(τ)| are physical quantities, and the ratio |δN_3(i)|/⟨N_3(i)⟩ (with the interpretation of i just given) must therefore remain constant along any path of constant physics in the space of bare coupling constants.
First, note that staying at a given point (κ_0, Δ) while taking N_4 → ∞ does not correspond to constant continuum physics. Rather, according to (20) it describes a situation where V_3(τ) (and V_4) go to infinity, and the fluctuations around this macroscopic universe become ever smaller relative to V_3(τ). Since we have already established that ω(κ_0, Δ) must be kept fixed along a trajectory of constant physics, eq. (20) implies that as N_4 → ∞ we must follow a path (κ_0(N_4), Δ(N_4)) satisfying

ω(κ_0(N_4), Δ(N_4)) = const,  γ(κ_0(N_4), Δ(N_4)) ∝ N_4^(1/4).   (21)

This pair of conditions can be regarded as the CDT equivalent of keeping V_4 constant in scalar field theory by insisting that the correlation length satisfies ξ ∝ N_d^(1/d), as discussed above. Furthermore, we read off from relation (20) that the conditions (21) are consistent with a physical situation where also the gravitational coupling constant κ is kept fixed. In the next section we will investigate whether it is possible to satisfy (21) in the limit N_4 → ∞.
Measuring indicators of constant physics
In phase C of the CDT phase diagram we have performed a systematic study measuring the distributions N_3(i) for a fixed number N_4 of building blocks. By fitting, following the procedure outlined in [21], we can determine ω(κ_0, Δ) and γ(κ_0, Δ) for given N_4. Our analysis assumes that the values of ω(κ_0, Δ) and γ(κ_0, Δ) change only little with increasing N_4. This assumption is well tested inside phase C for the fixed four-volume we have been using, namely N_4^(4,1) = 40,000. Any significant changes in ω and γ must therefore be due to changes in the bare couplings κ_0 and Δ. A dense grid of measuring points in coupling constant space was used to collect the relevant data. Details of this computing-intensive process will be published elsewhere. The resulting contour plots for ω(κ_0, Δ) and γ(κ_0, Δ) in the (κ_0, Δ)-plane (Fig. 2) can be interpreted directly in terms of constant physics: moving along any given line of constant ω on the left contour plot, we can read off from the right contour plot how γ changes along this line, and in particular whether it increases as desired for a UV limit. Approaching the B-C phase boundary, which lies along the bottom of the two plots of Fig. 2, we observe that ω decreases significantly, while γ increases somewhat, as one would expect when approaching a second-order phase transition line. However, this increase does not appear to be large enough to result in an increase of the product ω(κ_0, Δ)·γ(κ_0, Δ). According to the logic outlined above, this product should go to infinity in a UV limit where V_4 and κ stay constant while we take a → 0. At least in the region where we can measure reliably, somewhat away from the transition line, the product ω·γ changes little, as can be seen in Fig. 3. Close to the B-C phase transition line our results are not reliable: autocorrelation times grow enormously, and the decrease in the parameter ω means that the universe becomes very short in the time direction, rendering the use of the effective action (8) questionable.
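The bookkeeping step of reading the two contour plots against each other is easy to automate. The Python sketch below follows an (approximate) constant-ω contour through toy grids of ω(κ_0, Δ) and γ(κ_0, Δ) and records the product ω·γ along it; the grid values here are invented stand-ins, since the measured tables behind Fig. 2 are not reproduced in the text.

import numpy as np

kappa0 = np.linspace(0.0, 5.0, 51)
Delta = np.linspace(0.0, 0.6, 31)
K, D = np.meshgrid(kappa0, Delta, indexing='ij')
omega = 0.3 + 0.5 * D + 0.01 * K     # toy: omega grows away from the B-C line
gamma = 1.0 + 0.2 * K - 0.3 * D      # toy: mild growth towards the triple point

# For each kappa0, pick the Delta closest to the target contour omega = 0.45
# and record the UV indicator omega*gamma there.
target = 0.45
for row in range(0, 51, 10):
    j = np.argmin(np.abs(omega[row, :] - target))
    print(f"kappa0={kappa0[row]:.2f}  Delta={Delta[j]:.3f}  "
          f"omega*gamma={omega[row, j] * gamma[row, j]:.3f}")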
UV fixed point scenario
In the minisuperspace action (14) we have introduced a generalized inverse gravitational coupling constant κ, which has mass dimension 2 and incorporates a dependence on the HLG-parameter λ. We can introduce a corresponding dimensionless coupling κ̂ via κ(a) = κ̂(a)/a². Comparing with relations (19), we see that κ̂(a) ∝ (ω/χ^(3/4))² k_1(a). Of course, this identification is only meaningful as long as physics is well described by the effective actions (14) and (8). At least well inside phase C this is known to be the case.
For long-distance physics we expect κ to be a constant, implying that k_1 should behave like k_1(a) ∝ κ·a² ∝ κ (V_4/N_4)^(1/2). This implies the scaling behaviour γ ∝ N_4^(1/4), which we have already discussed earlier as a requirement of constant physics. However, this is not the behaviour one would in general expect to encounter at a UV fixed point. By definition, a nonperturbative UV fixed point is one where the dimensionless coupling goes to a finite fixed value, κ̂(a) → κ̂*. Consequently, the analogue of the expansion (5) for the inverse gravitational coupling constant is given by

κ̂(N_4) ≈ κ̂* + ĉ/N_4^(1/(4ν̂)),   (22)

provided we are in the vicinity of the fixed point κ̂* and move on a trajectory where V_4 is kept constant. According to relations (19) and (22) this implies a k_1-behaviour of the form

k_1(N_4) ∝ (χ^(3/4)/ω)² ( κ̂* + ĉ/N_4^(1/(4ν̂)) ).   (23)

Still assuming that our minisuperspace analysis provides a reliable frame of reference, this leads to

|δN_3(i)|/⟨N_3(i)⟩ ∝ ω γ/N_4^(1/4) ∝ ω²/( χ^(3/4) √κ̂(N_4) N_4^(1/4) ) → 0 as N_4 → ∞.   (24)

We conclude that this quotient cannot be kept constant in the neighbourhood of the UV fixed point and for constant χ, unless for some reason κ̂* = 0. One way to make a vanishing fixed-point value for k_1 appear natural is by explicitly invoking the HLG-parameter λ and discussing the UV fixed point in terms of the coupling constant κ̄, which appears in the continuum action (13). In terms of its dimensionless counterpart κ̄̂(a) := a²κ̄(a) one would make an ansatz analogous to (22). However, because of κ = (1/3 − λ)κ̄, in place of relation (23) one then obtains

k_1(N_4) ∝ (χ^(3/4)/ω)² (1/3 − λ) ( κ̄̂* + c̄/N_4^(1/(4ν̄)) ).   (25)

This now leaves open the possibility of a vanishing k_1 at the ultraviolet fixed point, k_1(N_4) → 0, provided one chooses to scale λ → 1/3 at the same time.
Note that by doing so one gives up staying on a curve of constant physics, in the sense of keeping V_4, V_3, |δV_3| and the shape of the emergent semiclassical minisuperspace geometry fixed. The reason is that, according to eq. (17) (cf. eq. (14)), a change in λ implies a change in χ, the parameter describing the shape of the universe, unless we choose to scale δ̃ precisely as δ̃ ∝ (1 − 3λ). According to our assumptions, ω then also changes. If δ̃ stays constant or goes to zero more slowly than (1 − 3λ), both χ → 0 and ω → 0 at the UV fixed point. Since we observe in our computer simulations that ω goes towards zero when we approach the B-C second-order phase transition line, this line appears as a candidate for UV fixed points in this particular scenario. Approaching it along some path where χ (and therefore ω) decreases but V_4 is kept fixed implies that V_3 ∝ (V_4/χ)^(3/4) is no longer constant. Also the constancy criterion (20) for |δV_3|/V_3 can no longer be applied in a straightforward manner.
If on the other hand we choose to scale δ̃ like (1 − 3λ), we can maintain the concept of constant shape and three-volume V_3 for fixed V_4. We are then back to the situation analyzed previously: γ has to grow proportionally to N_4^(1/4) along paths of constant ω, with the only difference that this now allows for a UV interpretation in terms of κ̄ rather than κ. However, as discussed in the previous section, there is little support for this growth from the data, at least in the region where we can measure reliably.
Discussion and conclusion
In this paper, we have presented the results of a first nonperturbative analysis of renormalization group flows in four-dimensional CDT quantum gravity. Since a second-order phase transition line has been found in this formulation of quantum gravity [9] (thus far a very rare occurrence in dynamical models of higher-dimensional geometry), understanding how this line may be reached along suitably defined RG trajectories in phase space will give us important information about the theory's ultraviolet regime. It will also allow us to make a closer comparison with continuum investigations of gravity in terms of functional renormalization group techniques, and may provide an independent check on ultraviolet fixed point scenarios derived in this approach.
As explained in Secs. 3 and 4, we use conventional lattice methods to investigate the behaviour near the phase transition, adapted to the case of dynamical geometry, where we do not have a fixed background geometry to refer to and any physical yardstick for measuring distances has to be generated dynamically.
Taking a UV limit is achieved formally by sending the lattice spacing a to zero, but to make this into a physically meaningful prescription a has to be related to some physical length unit. As illustrated by the scalar field example, this is usually done by referring to the correlation length. Alternatively, since in the case of gravity we currently do not have a suitable correlation length available, one may also refer to the total volume of the system, and re-express scaling relations near a fixed point in terms of this volume, as illustrated by eq. (5). This is the strategy we follow for gravity to make sure that we have a true, physical implementation of the ultraviolet limit. The difference with the scalar field case is that the macroscopic reference volume used is generated dynamically, and any possible dependence on the bare couplings should be considered carefully, because it can have an influence on how one defines 'lines of constant physics' in coupling constant space.
For the latter we have made the most direct ansatz available in CDT gravity, namely to define constant physics in terms of the physical quantities characterizing the macroscopic universe that emerges as the ground state of the quantum dynamics. These are its total four-volume, its three-volume as a function of proper time (the so-called volume profile), and the quantum fluctuations of the three-volume around its mean. We have interpreted all of them physically in terms of a class of homogeneous and isotropic cosmological solutions of Hořava-Lifshitz type, and have assumed that this interpretation is valid throughout phase C, where we observe extended geometry. At the same time we have assumed that we can make an identification of lattice units in terms of continuum proper times and distances that likewise remains unchanged inside phase C. Conceptually, these are the most straightforward assumptions one can make, and it is important to understand what conclusions they lead to.
Concretely, we then defined lines of constant physics by keeping the shape parameter ω(κ_0, Δ) constant, as well as the relative size of three-volume fluctuations, leading to the scaling requirement γ(κ_0, Δ) ∝ N_4^(1/4) for the "fluctuation parameter" γ in the UV limit N_4 → ∞. Analyzing the computer simulation data presented above, we saw no concrete indication that the second-order B-C phase transition line is reached when flowing along any of the lines of constant physics. Instead, the lines of constant ω(κ_0, Δ) run parallel to the B-C phase transition line if one starts in the vicinity of this line. Increasing γ(κ_0, Δ) along such a line brings one close to the triple point of the phase diagram. For the finite value of N_4 used here, curves of constant ω eventually turn away from the triple point and run parallel to the A-C transition line. However, this may well be a finite-volume effect, leaving open the possibility for flow lines to end up in the triple point. On the other hand, on the basis of the measurements made up to now, the increase in γ when moving along a line of constant ω seems to be too slow to satisfy the criteria of constant physics for N_4 → ∞.
However, as we described in Sec. 7, it is possible to view the B-C line as a second-order UV phase transition line for the HLG action (13), if we allow for a suitable scaling of "little lambda", λ → 1/3. In this interpretation an anisotropy between space and time develops as one moves along flow lines, corresponding to χ → 0 in (18).
The next step in our investigation of renormalization group flows will be a more extended analysis of different UV scaling scenarios, in which in particular our current assumption of "frozen" proper distance units throughout coupling constant space is relaxed; this will have consequences for how "constant physics" is defined. It would also allow us to consider a scenario where the shape of the emergent universes is interpreted in terms of round four-spheres, at least somewhat away from the phase transition, as we have done in previous work [19,21], in contrast to the family of deformed spheres we have used here. Such a change can alter the running of the renormalization group flows significantly, and improve on the results found in the present work, where we have adopted rather conservative assumptions about scaling and constant physics.
Using different notions of constant physics close to the phase transition is certainly well motivated by nonclassical features of quantum geometry already found on Planckian scales, like the anomalous behaviour of the spectral dimension [7], and by taking seriously anisotropic scaling scenarios à la Hořava in the UV, which we have already argued constitute a natural frame of reference for our investigation. There will be technical issues to deal with when investigating different scalings near the B-C transition line, including the fact that the time extension of the universe shrinks to only a few lattice spacings there, making any construction of an effective action imprecise. One obvious solution would be to increase the lattice size N_4, but one also has to take into account the critical slowing-down near the B-C transition (as one would expect), which makes simulations there painfully slow. We are currently trying to circumvent this issue by using the so-called transfer matrix formalism [25], where a large time extension is not needed. Progress on this will be reported elsewhere.
This work was partly supported by the Foundation for Fundamental Research on Matter (FOM), financially supported by the Netherlands Organisation for Scientific Research (NWO). The work was also sponsored by NWO Exacte Wetenschappen (Physical Sciences) for the use of supercomputer facilities, with financial support from NWO. JA and RL were supported in part by the Perimeter Institute of Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development & Innovation. The authors thank Daniel Coumbe for discussion, for reading the paper and for useful comments.
Effects of Voltage-Gated K+ Channel on Cell Proliferation in Multiple Myeloma
Objective. To study the effects and underlying mechanisms of voltage-gated K+ channels on the proliferation of multiple myeloma cells. Methods. The RPMI-8226 MM cell line was used for the experiments. Voltage-gated K+ currents and the resting potential were recorded by the whole-cell patch-clamp technique. RT-PCR detected Kv channel mRNA expression. Cell viability was analyzed with the MTT assay. A cell counting system was employed to monitor cell proliferation. DNA content and cell volume were analyzed by flow cytometry. Results. Currents recorded in RPMI-8226 cells were confirmed to be voltage-gated K+ currents. A high level of Kv1.3 mRNA was detected, but no Kv3.1 mRNA was detected, in RPMI-8226 cells. The voltage-gated K+ channel blocker 4-aminopyridine (4-AP) (2 mM) depolarized the resting potential from −42 ± 1.7 mV to −31.8 ± 2.8 mV (P < 0.01). The results of the MTT assay showed no significant cytotoxicity to RPMI-8226 cells when the 4-AP concentration was lower than 4 mM. 4-AP arrested the cell cycle in G0/G1 phase. Cells were synchronized at the G1/S boundary by treatment with aphidicolin and released from the blockage by replacing the medium with normal culture medium or with culture medium containing 2 mM 4-AP; under these conditions, 4-AP produced no significant inhibitory effect on cell cycle progression compared with control cells (P > 0.05). Conclusions. In RPMI-8226 cells, voltage-gated K+ channels are involved in proliferation and cell cycle progression; their influence on the resting potential and cell volume may be responsible for this process, and the inhibitory effect of the voltage-gated K+ channel blocker on RPMI-8226 cell proliferation is a phase-specific event.
Introduction
Multiple myeloma (MM) is the malignant proliferation of plasma cells involving more than 10 percent of the bone marrow. The multiple myeloma cell produces monoclonal immunoglobulins that may be identified on serum or urine protein electrophoresis. MM comprises about 1% of all cancers but more than 10% of all hematooncological diseases. Cytogenetic analysis of MM cells shows frequent mutations and chromosomal aberrations. There are reciprocal chromosomal translocations involving the IgH locus, chromosome 13 monosomy, loss of short arm of chromosome 17 and gains of the long arm of chromosome 1, and others. Chemotherapy with melphalan-prednisone is the standard regimen for multiple myeloma. Other treatment modalities include polychemotherapy and bone marrow transplantation. But only 50 to 60 percent of patients respond to these therapies. The aggregate median survival for all stages of multiple myeloma is three years.
Membrane ion channels are essential for maintaining cellular homeostasis and signaling. Thus, they contribute to the control of essential parameters such as cell volume, intracellular pH, and intracellular Ca2+ concentration. K+-selective ion channels form the largest ion channel protein family, which may be subdivided into voltage-gated and Ca2+-dependent K+ channels, among others. Evidence is growing that K+ channels play a central role in the development and growth of human cancers, such as those of the prostate, colon, lung, and breast. In nonexcitable cells, voltage-gated K+ channels (Kv) play a critical role in cell development, volume regulation, membrane potential maintenance, and cell proliferation [1,2]. Many studies have suggested that Kv channels are correlated with the initial events of T and B lymphocyte activation [3,4]. In the present study, the existence of Kv channels on MM cells was directly detected; the relation between the channel and cell proliferation, cell cycle, and cell volume was further investigated, and the underlying mechanism was also explored.
1 mL of bone marrow sample was collected from a young adult donor using a needle and immediately transferred into a sterile tube containing 1% heparin solution. To isolate bone marrow mononuclear cells, the samples were centrifuged in a 1.077 g/mL Ficoll density gradient for 30 min. The cells in the white middle layer were isolated and resuspended in culture medium with DMEM (GIBCO) containing 10% (v/v) fetal bovine serum (GIBCO) and 1% (w/v) penicillin/streptomycin. Then 1 × 10^6 cells were seeded in a 100 mm dish precoated with 0.1% gelatin and incubated at 37 °C with 5% CO2 at 100% humidity. After 3 days, the medium containing floating cells was removed and new medium was added to the remaining adherent cells. These adherent cells were considered to be BMSCs. The medium was changed every 3 days. The third passage was used for the following experiments [5,6].
MTT Assay.
The MTT solution was added to each well (1.2 mg/mL) and incubated for 4 h. The absorbance value (A) was measured at 570 nm using a multiwell spectrophotometer (Mapada, China). The percentage of cell viability was calculated using the following formula: cell viability (%) = A of experiment well / A of control well × 100%.
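A minimal sketch of this viability calculation in Python, assuming absorbance readings are stored as plain arrays (the variable names, example numbers, and the averaging of replicate control wells are our own choices):

import numpy as np

def viability_percent(a_experiment, a_control):
    # cell viability (%) = A(experiment) / A(control) * 100, per the MTT formula
    return np.asarray(a_experiment) / np.mean(a_control) * 100.0

# Example: triplicate wells for a 2 mM 4-AP treatment vs. untreated controls
a_ctrl = [0.82, 0.79, 0.85]   # hypothetical absorbance values at 570 nm
a_4ap = [0.74, 0.70, 0.73]
print(viability_percent(a_4ap, a_ctrl))   # per-well viability in %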
Patch-Clamp
Recording. For patch-clamp experiments, the cells were plated on cover glasses pretreated with 1 mg/mL poly-L-lysine. The cells were then transferred to a recording chamber attached to the stage of an inverted phase-contrast microscope. The microscope was coupled to a video camera system with magnification up to 1500x in order to monitor cell size during the experiments. Cells were bathed at room temperature (20-25 °C) and superfused by gravity at a rate of about 2-4 mL/min (bath volume 2 mL) with normal Tyrode's solution. The patch pipettes were made from Kimax capillary tubes (Vineland, NJ) using a vertical two-step electrode puller (Narishige PB-7, Japan), and the tips were fire-polished with a microforge (Narishige MF-83, Japan). The resistance of the patch pipettes was 3-5 MΩ when immersed in normal Tyrode's solution. Voltage-clamp potentials of step or ramp depolarization were generated by a programmable stimulator (Biologic SMP-311, France). Ionic currents were recorded in the whole-cell clamp configuration with a patch-clamp amplifier (Biologic RK-400, France) and low-pass filtered at 1-3 kHz. All potentials were corrected for the liquid junction potential that develops at the tip of the pipette when the composition of the pipette solution differs from that of the bath. Tested drugs were applied by perfusion into the bath to obtain the final concentrations indicated.
Whole-cell voltage-clamp recordings were performed to record the voltage-dependent potassium currents in RPMI-8226 cells, using a computer-controlled Axopatch 200B patch-clamp amplifier. Recording pipettes were pulled from borosilicate capillaries; pipette resistance was 3-5 MΩ when filled with the pipette solution. All recordings were done at room temperature (21 °C). The internal solution contained (in mM): K-aspartate 135, MgCl2 2, EGTA 1.1, CaCl2 0.1, and HEPES-KOH buffer 10, adjusted to pH 7.2 with 1 M KOH (280-300 mOsm). The external solution contained (in mM): NaCl 136.5, KCl 5.4, CaCl2 1.8, MgCl2 0.53, glucose 5.5, and HEPES-NaCl buffer 5, adjusted to pH 7.2 with 1 M NaOH (280-300 mOsm). For the voltage-dependent potassium current recordings, the membrane voltage was stepped to −90 mV for 1 s, followed by a ramp to +50 mV. All records were stored on hard disk for off-line analysis.
RT-PCR Assay.
RNA was extracted from RPMI-8226 cells using Trizol reagent (Invitrogen) according to the manufacturer's instructions. Two micrograms of RNA was reverse-transcribed and the products were amplified with cDNA-specific primers (Roche). The primer sequences (Jinsite Biotechnology) for RT-PCR were as follows: Kv1.3 forward: 5′-TCGCCATCGTGTCCGT-3′ and reverse: 5′-CCATTGCCCTGTCGTT-3′; Kv3.1 forward: 5′-GAGGACGAGCTGGAGATGAC-3′ and reverse: 5′-GGCAGAAGATGACACGCATG-3′; β-actin forward: 5′-AGCGGGAAATCGTGCGTG-3′ and reverse: 5′-CAGGGTACATGGTGGTGCC-3′. The PCR was performed with 40 cycles of denaturation at 95 °C for 30 seconds, annealing at 60 °C for 30 seconds, and extension at 72 °C for 60 seconds, with an initial denaturation at 95 °C for 10 minutes and a final extension at 72 °C for 7 minutes. β-actin served as control. The experiments were repeated twice. The PCR product was loaded onto a 2% agarose gel for electrophoresis and visualized using a Gel Doc XR imaging system (Bio-Rad).
Cell Cycle and Cell Volume Analysis. Samples were analyzed on a flow cytometer. Ten thousand events were collected and the data were analyzed with CellQuest software. Forward side scatter (FSC), which reflects cell size, was used to measure cell volume [7].
2.6. Statistical Analysis. The current inhibition rate was calculated using the following formula: inhibition rate = (|I_control| − |I_measured|)/|I_control|. Data values are presented as means ± SDs. Student's paired t-test was used to analyze the difference between the control and 4-AP-treated cells. P values <0.05 were considered significant.
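A minimal sketch of this analysis in Python, using scipy's paired t-test; the array names and example current amplitudes are ours:

import numpy as np
from scipy import stats

def inhibition_rate(i_control, i_measured):
    # (|I_control| - |I_measured|) / |I_control|, per paired recording
    i_control, i_measured = np.abs(i_control), np.abs(i_measured)
    return (i_control - i_measured) / i_control

# Hypothetical paired peak-current amplitudes (pA) before and after 2 mM 4-AP
before = np.array([410.0, 385.0, 440.0, 402.0, 395.0])
after = np.array([230.0, 215.0, 260.0, 240.0, 225.0])

print("mean inhibition:", inhibition_rate(before, after).mean())
t, p = stats.ttest_rel(before, after)   # Student's paired t-test
print(f"t = {t:.2f}, P = {p:.4g}")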
Detection of Kv on MM Cells.
To study the effect of voltage-gated potassium channels on multiple myeloma cell proliferation, we first verified that the recorded currents were voltage-gated potassium currents. The mean resting potential and cell capacitance were −42 ± 2 mV and 37.5 ± 2 pF, respectively (n = 40). Membrane currents were evoked at 0.1 Hz by various step pulses with a duration of 1 s, before and after the addition of 4-AP. Under control conditions, when the cell was held at −80 mV, depolarizing pulses above −30 mV elicited outward currents, and the amplitudes of these currents increased with greater depolarization. When the cells were held at −80 mV, the measured reversal potentials were −54 ± 1 mV, −49 ± 3 mV, −30 ± 1 mV, and −10 ± 2 mV (n = 11) for extracellular K+ concentrations of 5.4 mM, 10 mM, 40 mM, and 80 mM, respectively. These results indicate that the membrane currents depend on the extracellular K+ concentration. Furthermore, the elicited current was voltage-gated and was progressively reduced by repeated depolarization.
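For orientation, the Nernst equation predicts how a purely K+-selective membrane potential shifts with extracellular K+; the sketch below computes E_K for the four bath concentrations used here, taking [K+]i = 135 mM from the pipette solution. The predictions approach the measured values at high [K+]o, while deviations at low [K+]o are expected because the membrane is not perfectly K+-selective. This comparison is our illustration, not an analysis from the original study.

import numpy as np

R, T, F = 8.314, 294.15, 96485.0   # gas constant, 21 degC in kelvin, Faraday
k_in = 135.0                       # [K+]i in mM, from the pipette solution
k_out = np.array([5.4, 10.0, 40.0, 80.0])

e_k = (R * T / F) * np.log(k_out / k_in) * 1000.0   # Nernst potential in mV
for c, e in zip(k_out, e_k):
    print(f"[K+]o = {c:5.1f} mM  ->  E_K = {e:6.1f} mV")
# -> about -82, -66, -31 and -13 mV, vs. measured -54, -49, -30 and -10 mV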
Kv Channels Subtype Expression in RPMI-8226 Cells.
There are two kinds of Kv channels expressed in lymphocytes, n-type and l-type, encoded by the Kv1.3 and Kv3.1 genes, respectively [8]. As multiple myeloma cells originate from pre-B lymphocytes, we assayed the mRNA expression of the two channels in RPMI-8226 cells by RT-PCR. A high level of Kv1.3 mRNA was detected and no Kv3.1 mRNA was detected (Figure 1), which indicates that n-type Kv channels exist in RPMI-8226 cells.
Effect of 4-AP on RPMI-8226 Cell Proliferation and Cell Cycle
Cytotoxicity of 4-AP to RPMI-8226 cells was studied by applying different concentrations of 4-AP in the normal culture medium for up to 4 days. Viability of control and 4-AP-treated cells was determined by MTT assay. When the 4-AP concentration was less than 4 mM, no significant cytotoxicity to RPMI-8226 cells was observed (Figure 3(a)). The effect of 4-AP on RPMI-8226 cell proliferation was evaluated by counting cell numbers. Introduction of 2 mM 4-AP, a dose similar to that used by other groups [9], for 48 h caused a significant decrease in RPMI-8226 cell number (Figure 3(b)). A dose-response curve for 4-AP inhibition of cell proliferation was obtained (Figure 3(c)). In bone marrow stromal cells and RPMI-8226 cells, the calculated half-maximal inhibitory concentrations of 4-AP were 6.1 ± 0.2 mM and 4.4 ± 0.3 mM, respectively (P < 0.05), demonstrating the higher sensitivity of MM cells to 4-AP.
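A common way to obtain such half-maximal inhibitory concentrations is to fit a Hill-type dose-response curve to the normalized proliferation data; the sketch below does this with scipy on invented example data (the original cell counts are not reproduced in the text).

import numpy as np
from scipy.optimize import curve_fit

def hill(c, ic50, n):
    # fraction of proliferation remaining at blocker concentration c
    return 1.0 / (1.0 + (c / ic50)**n)

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # 4-AP concentration, mM
resp = np.array([0.95, 0.85, 0.68, 0.52, 0.33])   # hypothetical normalized counts

(ic50, n), _ = curve_fit(hill, conc, resp, p0=[4.0, 1.0])
print(f"IC50 = {ic50:.2f} mM, Hill coefficient = {n:.2f}")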
The effect of 4-AP on the cell cycle was also evaluated. RPMI-8226 cells were cultured in the normal medium, serum-deprived medium, or medium containing 2 mM 4-AP for 24 h and then collected for cell cycle analysis using flow cytometry. The cells in G0/G1 phase increased from 32.5 ± 4% in the control group (cultured in the normal medium) to 61.5 ± 4.7% and 78.9 ± 4.0% in the serum-deprived group and the 4-AP-treated (2 mM) group, respectively (Figure 4). The S phase population significantly decreased from 60.6 ± 6.1% in the control group to 35.6 ± 4.4% in the 4-AP-treated group (Figure 4).
To determine the phase specificity of the 4-AP blockage in the cell cycle, RPMI-8226 cells were synchronized at the G1/S boundary by application of aphidicolin, a DNA polymerase inhibitor. After incubation for 16 h with 5 mg/mL aphidicolin, RPMI-8226 cells had passed the G1 checkpoint at the end of the G1 phase and approached the beginning of the S phase. The cells were then released from blockage by replacing the medium with normal culture medium (control group) or medium containing 2 mM 4-AP. However, 4-AP did not inhibit cell cycle progression past the G1 checkpoint, indicating that 4-AP specifically inhibits the G1 phase, which requires the involvement of K+ channels for cell proliferation (Table 1). Cell volume changes have been linked to proliferation-related signaling pathways [26]. We therefore examined the effect of inhibition of K+ channel activity by 4-AP on cell volume (see Figure 5 and Table 2). Cell volume is in direct proportion to the intensity of forward side scatter (FSC), which is used to measure cell volume [7]. RPMI-8226 cells were cultured in the presence of 2 mM 4-AP for 24 h. The suppression of K+ channel activity resulted in 12 ± 3.2%, 19.3 ± 1.6%, and 28 ± 4.3% (n = 3) increases in cell volume compared with control cells, respectively (Table 2). Our data are in agreement with the model proposed by Dubois and Rouzaire-Dubois [10], in which the K+ channel plays an important role in controlling cell volume.
Discussion
It has been reported that voltage-dependent potassium channels (Kv) are found in T lymphocytes, B lymphocytes, and other cells [8]. In this study, we confirmed the presence of Kv channels, a type of K+ channel encoded by the Kv1.3 gene, in RPMI-8226 multiple myeloma cells. The currents exhibited the expected kinetic, voltage-dependent, and pharmacological characteristics. As found in other preparations, the voltage-activated K+ currents were sensitive to 4-AP. K+ channel activity has been implicated in cellular proliferation in a variety of cell types [11]. Although 4-AP has previously been reported to inhibit the proliferation of tumor cells [12,13], this is the first study to show such an effect in RPMI-8226 multiple myeloma cells.
In previous studies, K+ channel blockers were found to inhibit mitogenesis in various cell types, which suggested that K+ channels are required for cell proliferation and for signal transduction in mitogenesis [3,14]. Suppression of K+ channel activity could weaken the immune system by inhibiting T lymphocyte proliferation. Accordingly, human Kv1.3 has been recognized as an excellent therapeutic target for modulating the immune system [14]. However, it has been unclear whether voltage-gated potassium channels in the MM cell membrane are involved in cell proliferation.
In human T lymphocytes, voltage-gated K+ channels are crucial for the transmembrane potential and for cell proliferation in response to mitogenic stimulation [15,16]. In our study, 4-AP induced a significant, dose-dependent depolarization of the membrane potential in MM cells. Membrane potential variations under the control of K+ channels would be involved not only in regulating Ca2+ influx, which is well established as a crucial factor for cell proliferation, but also in maintaining the driving force for Na+-dependent nutrient transport and in influencing the intracellular pH [17]. On the other hand, a large number of reports show that a transient hyperpolarization is required for the progression of the early G1 phase of the cell cycle, and K+ channels activated by mitogenic signals mainly control this hyperpolarization [18]. Our data indicate that suppression of the K+ channels by 4-AP inhibits RPMI-8226 cell proliferation and results in accumulation of G1 phase cells. Arrest of RPMI-8226 cells at the G1/S boundary with aphidicolin revealed that cells that have progressed through the G1 checkpoint are capable of entering S phase and synthesizing DNA, independent of the presence of the channel blocker. Our results support the hypothesis that the inhibition of RPMI-8226 cell proliferation by 4-AP acting on K+ channels is a phase-specific event. The suggested mechanism of proliferation inhibition is as follows: the K+ channel blocker inhibits K+ flux, which leads to membrane potential depolarization and suppression of the transient hyperpolarization, and therefore to changes in the intracellular environment, such as the levels of Ca2+ and Na+, the pH, and the activation of related enzymes.
In addition to controlling the membrane potential, plasma membrane K+ channels are also critical components in cell volume regulation [2]. Heterologous expression of Kv1.3 channels in mouse CTLL-2 cells, which were unable to control cell volume before gene transfection, reconstitutes their ability to regulate cell volume [2]. In the present study, we observed that RPMI-8226 cell volume increases by up to 28 ± 4.3% after 24 h of suppression of K+ channels by 4-AP. The volume increase may play a regulatory role in cell growth, either by altering intracellular ion concentrations or by altering the activity of mitogen-activated kinases [19]. In fact, glioma cells show their highest proliferation rate within a relatively narrow range of cell volumes, with decreased proliferation both over and under that optimal range [20]; this "cell size checkpoint" exists in many types of mammalian cells. The hypothesis brought forward by Dubois and Rouzaire-Dubois [10] postulates that the K+ channels responsible for cell volume regulation play a permissive role in cell proliferation by controlling volume in such a way that crucial solutes, including Na+, which is very important for DNA synthesis, and the second messenger Ca2+, can be maintained at appropriate concentrations to support proliferation or to activate mitogen-activated protein kinases (MAPKs). Thus the mechanism of cell volume regulation in proliferation needs to be further explored.
Conclusion
In conclusion, we have demonstrated that suppression of K+ channel activity can markedly inhibit RPMI-8226 cell proliferation and arrest cells in the G1 phase of the cell cycle. Our results additionally support the notion that voltage-gated K+ channels contribute to the control of cell growth through the G1/S transition of the cell cycle. Further studies are required to investigate the mechanism by which K+ channels mediate signals involved in cell growth.
Influence of sputtered AlN buffer on GaN epilayer grown by MOCVD
The ex situ sputtered AlN buffer and the GaN epilayer grown on top of it by metal-organic chemical vapor deposition were studied comprehensively by a variety of techniques, including atomic force microscopy, high-resolution x-ray diffraction, and Raman and x-ray photoelectron spectroscopy. The results show that an AlN buffer deposited by sputtering can be oxidized upon exposure to the atmosphere. This oxidation significantly influences the characteristics of the GaN epilayer, for example leading to poor surface morphology, high dislocation density, and large compressive stress. This study demonstrates the effect of oxygen impurities on GaN growth and provides important guidance for the growth of high-quality III-nitride related materials.
Introduction
III-nitrides are wide band-gap semiconductors suitable for light-emitting diodes (LEDs) [1], power electronics and high-frequency devices [2] due to their excellent physical properties, such as a direct bandgap, high breakdown voltage, high electron saturation velocity and high electron mobility [3]. Progress on III-nitrides has been remarkable since the crystalline quality of GaN epilayers was greatly improved with the two-step growth technology by metal-organic chemical vapor deposition (MOCVD) [4]. However, with the maturing of III-nitride technology in recent years, some limitations of the conventional approach, albeit very successful, have emerged.
Therefore, further reduction of dislocation density and residual strain is necessary for improvement of device efficiency and reliability [5]. Recently, the sputtering technique, a mature deposition method widely employed in the semiconductor industry, has been considered a viable alternative, or aid, to MOCVD growth of III-nitrides [6]. It has been demonstrated that a thin ex situ sputtered AlN buffer layer greatly improves the crystalline quality of the upper MOCVD-grown GaN layer; in particular, its validity has been confirmed in LEDs, electronic devices and photodetectors [7][8][9][10][11]. In practice, this technique has been widely employed in the LED industry [12], as it greatly reduces growth time and cost. However, to the best of our knowledge, the characteristics of such a composite template, and especially the influence of the sputtered AlN buffer layer on the GaN grown on top of it, have not been systematically studied.
In this paper, we found that the crystal quality and surface morphology of the GaN epilayer deteriorate significantly with increasing exposure time of the sputtered AlN/sapphire template to the atmosphere. This can be mainly attributed to oxidation of the sputtered AlN, confirmed by detailed x-ray photoelectron spectroscopy (XPS) characterization. Importantly, we also confirmed that oxygen impurities markedly affect GaN nucleation and screw dislocation generation. This result is of great significance for the use of ex situ sputtered AlN as a nucleation layer for epitaxial growth of GaN, especially for the epitaxial preparation of high-quality material and other related applications.
Experiment
In this study, AlN buffer layers with a thickness of 25 nm were sputtered on 2 inch c-plane sapphire substrates using a physical vapor deposition system. The whole preparation process was carried out in a high-vacuum chamber. The tray with the substrate was transferred into the vacuum chamber by a robot arm; the tray was then heated to 650 °C, and a mixture of argon and nitrogen in a ratio of 1:8 was introduced, with flows of 30 sccm and 240 sccm, respectively. The pressure during the sputtering process was kept at 3.72 × 10^−3 Torr, and the effective power was fixed at 3000 W. In addition, at the beginning of the sputtering process, a trace amount of oxygen was introduced to control the lattice mismatch between the AlN buffer and the sapphire substrate. An electric field was applied between the target and the tray to ionize the Ar atoms; under the action of the electric field, the Ar ions bombard the surface of the target at high speed, and the sputtered Al species react with nitrogen and deposit on the surface of the sapphire substrate [11,13]. The as-deposited fresh AlN buffer sample is denoted as sample AlN-A. In order to investigate the evolution of the sputtered AlN buffer layer, as-deposited samples were intentionally exposed to the atmosphere for 5 days, 10 days and 20 days; these samples are denoted AlN-B, AlN-C and AlN-D, respectively. After exposure, GaN epilayers were grown on these sputtered AlN buffer samples by MOCVD under identical growth conditions. During this growth process, trimethylgallium (TMGa) and ammonia (NH3) were used as the precursors for the gallium (Ga) and nitrogen (N) sources, respectively. Hydrogen was used as the carrier gas with a total flow of 80 standard liters per minute (SLM). First, the AlN film was baked at high temperature (1050 °C) for 300 s to remove surface contamination. Second, a 20 nm u-GaN layer was grown at a lower temperature (1020 °C) with a low V/III ratio (800) and high reactor pressure (450 mbar). Third, the temperature was increased to 1050 °C to grow the GaN layer with a high V/III ratio (1600) and low reactor pressure (150 mbar). The total thickness of the GaN epilayer is 2 µm. Accordingly, the GaN epilayers grown on samples AlN-A, AlN-B, AlN-C and AlN-D are denoted GaN-A, GaN-B, GaN-C and GaN-D, respectively.
The crystal quality of the samples was measured by high-resolution x-ray diffraction (HR-XRD). The surface morphology of the samples was characterized by optical microscopy (OM) and atomic force microscopy (AFM). The in-plane residual stress of the GaN epilayers was characterized by Raman spectroscopy. Finally, the surface state of the AlN buffer layer was investigated by x-ray photoelectron spectroscopy (XPS), recorded with an ESCALAB 250Xi XPS system (ThermoFisher Scientific) equipped with focused monochromatized Al-Kα radiation (1486.6 eV).
Results and discussion
The AFM images of the GaN epilayers in figure 2 show clear atomic steps on the surface, indicating that GaN grows in the step-flow mode on all of the sputtered AlN buffers. However, as the AlN buffer exposure time increases, the roughness of the grown GaN becomes larger, i.e. 0.176 nm, 0.197 nm, 0.249 nm and 0.258 nm, respectively, and the width and height of the steps gradually increase. This is probably associated with the modification of the surface state of the AlN buffer after exposure to the atmosphere, which may affect the surface diffusion length of the source molecules at the initial stage of GaN growth. Figure 3 shows the x-ray rocking curves and Raman scattering spectra of GaN-A, GaN-B, GaN-C and GaN-D. The measured full width at half maximum (FWHM) of the GaN (002) rocking curve is 69.4, 96.9, 106.2 and 169.1 arcsec for GaN-A, GaN-B, GaN-C and GaN-D, respectively, as shown in figure 3(a). According to the mosaic model [17], the corresponding screw dislocation densities are calculated as 1.05 × 10⁷ cm⁻², 2.04 × 10⁷ cm⁻², 2.45 × 10⁷ cm⁻² and 5.93 × 10⁷ cm⁻²; the screw dislocation density of GaN-D is thus more than five times that of GaN-A. From the (102) rocking curves in figure 3(b), the edge dislocation densities are calculated as 2.67 × 10⁸ cm⁻², 2.58 × 10⁸ cm⁻², 3.07 × 10⁸ cm⁻² and 3.09 × 10⁸ cm⁻², respectively. As demonstrated more clearly in figure 3(c), the crystal quality associated with the GaN (002) plane deteriorates significantly when GaN is grown on an AlN buffer with longer exposure time, whereas the quality associated with the (102) plane does not vary appreciably. To distinguish between the different types of dislocations, we also analyzed the as-prepared samples using the tails of the rocking curves, a nondestructive method for threading-dislocation analysis [18, 19]; it shows the same trend in the crystal quality of the GaN epilayers. The XRD data therefore indicate that the AlN buffer has a much larger impact on the formation of screw dislocations during GaN epitaxial growth than on the formation of edge dislocations, suggesting oxygen segregation to the screw dislocations in the GaN epitaxial layer. This may be due to the substitution of oxygen for nitrogen, which can extend over many monolayers for open-core dislocations. It has been reported that the major origin of screw dislocations in GaN is related to the presence of oxygen impurities [20]. Therefore, the significantly increased screw dislocation density in the GaN epitaxial layer is strongly associated with the oxidation of the AlN buffer as the exposure time increases.
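The mosaic-model conversion from rocking-curve FWHM to threading-dislocation density is simple enough to check directly. The sketch below assumes the common form D = β²/(4.35 b²), with β in radians and b the Burgers vector (b = c ≈ 0.5185 nm for the screw dislocations probed by the (002) scan). It reproduces the screw densities quoted above to within roughly 10%; the residual difference presumably comes from the exact prefactor and lattice constants used in [17].

```python
import math

def dislocation_density_cm2(fwhm_arcsec: float, burgers_nm: float) -> float:
    """Mosaic-model estimate D = beta^2 / (4.35 * b^2), in cm^-2."""
    beta = math.radians(fwhm_arcsec / 3600.0)  # arcsec -> radians
    b_cm = burgers_nm * 1e-7                   # nm -> cm
    return beta**2 / (4.35 * b_cm**2)

# Screw dislocations: (002) rocking curve, b = c(GaN) = 0.5185 nm (assumed).
for fwhm in (69.4, 96.9, 106.2, 169.1):
    d = dislocation_density_cm2(fwhm, 0.5185)
    print(f"(002) FWHM {fwhm:6.1f} arcsec -> D_screw ~ {d:.2e} cm^-2")
```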
Raman scattering measurements were performed in back-reflection geometry using a Raman microscope with non-resonant 532 nm (2.3 eV) excitation. As shown in figure 3(d), the GaN E₂(high) phonon peak shifts from 570.8 cm⁻¹ (GaN-A) to 571.4 cm⁻¹ (GaN-D), i.e. toward higher wavenumber. The E₂(high) phonon peak is sensitive to the amount of strain and has been widely used to evaluate the strain/stress present in GaN films; the phonon frequency of unstrained GaN E₂(high) is 567.6 cm⁻¹. The stress can be quantified by the following relation [21, 22]:

Δω = K δ,

where δ is the residual biaxial stress, Δω is the E₂(high) peak shift relative to the unstrained value, and K ≈ 4.2 cm⁻¹ GPa⁻¹ is the stress coefficient. The stress in the GaN/AlN/sapphire structure thus increases from approximately 0.761 GPa to 0.904 GPa, indicating that the top GaN layers are under compressive stress owing to the higher thermal expansion coefficient of sapphire compared with GaN. More accurate stress values in the GaN epilayers, determined with a reliable approach [23], range from 0.574 GPa to 0.957 GPa for the four samples. This biaxial compressive stress of the GaN epilayer becomes larger with longer AlN buffer exposure time, which is also confirmed by XRD 2θ/ω scans. These results suggest that the tensile stress arising during GaN epitaxy is released through the formation of dislocations. Generally speaking, such tensile stress compensates the compressive stress imposed on the GaN epilayer by the sapphire during cooling, so its release ultimately leads to a larger compressive stress in the GaN epilayer after cooling. Furthermore, to study the evolution of the sputter-deposited AlN buffer with exposure time, the as-prepared AlN buffer samples were also studied comprehensively by a variety of techniques, including AFM, HR-XRD and XPS. Firstly, AFM surface images of samples AlN-A, AlN-B, AlN-C and AlN-D were recorded in tapping mode over a 2 µm × 2 µm scanning area, as shown in figure 4. The AFM images show no essential change in morphology: all surfaces remain basically columnar, the classic morphology of low-temperature AlN prepared by sputtering. The roughness is 0.339 nm, 0.554 nm, 0.362 nm and 0.407 nm for AlN-A, AlN-B, AlN-C and AlN-D, respectively. Figure 5 shows the (002) x-ray rocking curves of the AlN buffers on sapphire. The AlN buffer has a wurtzite crystalline structure, and the (002) FWHM of samples AlN-A, AlN-B, AlN-C and AlN-D is 265.7, 274.6, 297.1 and 281.1 arcsec, respectively, while the layer is too thin for a (102) signal to be obtained. The relatively stable (002) FWHM of all four samples shows that the overall crystalline quality of the AlN buffer did not change significantly after exposure to the atmosphere. The mild fluctuation in the (002) FWHM can probably be attributed to the oxidation-related modification of the surface state of the sputtered AlN, which is associated with the polycrystalline nature of a buffer deposited by sputtering at relatively low temperature: the high density of grain boundaries in polycrystalline AlN serves as a fast diffusion path for reactive oxygen atoms, resulting in severe oxidation of the AlN buffer as the exposure time increases.
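As a numerical check of the Raman stress relation above, the short script below converts the E₂(high) peak positions into biaxial stress using Δω = Kδ with K = 4.2 cm⁻¹ GPa⁻¹; this is the coefficient implied by the paper's own numbers, and literature values cluster around 4.2-4.3 cm⁻¹ GPa⁻¹, so treat the exact constant as an assumption.

```python
# Biaxial stress from the E2(high) Raman shift: delta_omega = K * sigma.
K_CM1_PER_GPA = 4.2   # assumed stress coefficient for GaN E2(high)
OMEGA_0 = 567.6       # cm^-1, unstrained GaN E2(high) reference

def biaxial_stress_gpa(peak_cm1: float) -> float:
    """Return compressive biaxial stress (GPa) from the measured peak."""
    return (peak_cm1 - OMEGA_0) / K_CM1_PER_GPA

for name, peak in [("GaN-A", 570.8), ("GaN-D", 571.4)]:
    shift = peak - OMEGA_0
    print(f"{name}: shift {shift:.1f} cm^-1 -> {biaxial_stress_gpa(peak):.3f} GPa")
    # Prints ~0.762 GPa and ~0.905 GPa, matching the values quoted above.
```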
Such a conclusion is strongly supported by the detailed XPS characterization that follows. Figures 6(a) and (b) show the XPS survey scans of the AlN-A, AlN-B, AlN-C and AlN-D buffers and a magnified view of the O 1s region. The spectra confirm that these AlN buffers contain aluminum, nitrogen, oxygen and carbon [24, 25]. Because of the high chemical activity of Al, the residual oxygen is likely to interact with the Al atoms. The most noteworthy feature of the spectra is that the oxygen peaks are more prominent than those of aluminum. In addition, a considerable amount of carbon is observed, mainly from adventitious contamination in the atmosphere. In order to analyze the surface chemical state, high-resolution spectra of Al 2p and N 1s were acquired on the AlN buffer layers. Figure 7(a) compares the Al 2p peaks of all four AlN buffer samples [26, 27]; the binding-energy scale was calibrated against the adventitious C 1s peak at 284.8 eV, and a Shirley-type background subtraction was applied. With increasing exposure time, the center of the Al 2p peak shifts from 73.47 to 73.70 eV, i.e. the binding energy drifts toward higher values; because the spin-orbit components of Al 2p are closely spaced, only a single peak is resolved. Similarly, the center of the N 1s peak shifts from 396.60 to 396.87 eV, as shown in figure 7(b), and its intensity decreases with increasing exposure time. Normally, the intrinsic peaks of a material move toward higher binding energy after oxidation because oxygen is strongly electronegative [28]. These results therefore strongly suggest that the AlN buffer samples are gradually oxidized as the exposure time increases. Moreover, as shown in figure 7(a), the high-resolution Al 2p curve is not symmetrical, owing to Al-O bonding on the high-binding-energy side. To analyze this oxidation effect more clearly, the Al 2p photoelectron spectrum can be deconvoluted into two components assigned to Al-N and Al-O bonding; for example, the spectrum of sample AlN-D can be deconvoluted into two dashed curves, an Al-N component (73.90 eV) and an Al-O component. In all, the XPS results indicate that the Al 2p binding energy of an AlN buffer sputtered at low temperature is unstable. When the surface of the AlN buffer is oxidized in the atmosphere, the bonding between aluminum and oxygen atoms is stronger than that between aluminum and nitrogen atoms, and the oxidized areas more readily capture gallium adatoms for preferential nucleation. Therefore, the enhanced oxidation with increasing exposure time generates a higher density of nucleation points for GaN growth, which promotes the formation of screw dislocations in the GaN epilayer and finally results in the larger (002) FWHM of the GaN epilayer, as shown in figure 3(a). This oxidation of the AlN buffer with increasing exposure time also induces the significant morphology changes of the GaN epilayers presented in figure 1.
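The Al 2p deconvolution described above can be sketched as a simple two-component least-squares fit. The snippet below uses Gaussian line shapes and synthetic data purely for illustration: the Al-O position used to generate the data (74.8 eV) is an assumption, since the paper's fitted Al-O binding energy is not recoverable from the extracted text, and a real analysis would fit the measured, Shirley-subtracted spectrum, often with Voigt profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(E, a1, mu1, s1, a2, mu2, s2):
    """Sum of an Al-N and an Al-O component (Gaussians for brevity)."""
    g = lambda a, mu, s: a * np.exp(-0.5 * ((E - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2)

# 'energy'/'counts' stand in for a background-subtracted Al 2p spectrum;
# synthetic data here, with an assumed (hypothetical) Al-O peak at 74.8 eV.
energy = np.linspace(70.0, 78.0, 200)
counts = two_gaussians(energy, 1.0, 73.9, 0.5, 0.4, 74.8, 0.6)
counts += 0.01 * np.random.default_rng(0).normal(size=energy.size)

p0 = [1.0, 73.9, 0.5, 0.3, 75.0, 0.6]  # initial guesses; Al-N near 73.9 eV
popt, _ = curve_fit(two_gaussians, energy, counts, p0=p0)
print(f"Al-N component: {popt[1]:.2f} eV, Al-O component: {popt[4]:.2f} eV")
```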
Conclusion
In this study, we found that the surface morphology, crystalline quality and strain state of GaN epilayers grown on sputtered AlN buffers are significantly influenced by the oxidation of the AlN buffer after exposure to the atmosphere. The results indicate that oxygen atoms segregate to screw dislocations during GaN epitaxial growth, and similar factors may be at play in the epitaxy of other nitrides. According to our experiments, a sputtered AlN/sapphire template should be used for epitaxial growth promptly in order to achieve high crystal quality in the subsequent material and high reliability in device applications.
Moreover, AlN/sapphire templates should be stored in a nitrogen cabinet to delay oxidation, although the oxidation process cannot be avoided entirely, since a trace of oxygen is still present in the cabinet during actual operation. These conclusions and suggestions should also apply to other materials that are grown at low temperature and are prone to oxidation.
This work was partially supported by National Key
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.
Criminal law policy about KPK authorities in the perspective of criminal action in corruption in Indonesia

Criminal law policy of the authority of the Corruption Eradication Commission: the Corruption Eradication Commission (KPK) is a constitutional state agency, even though it is not spelled out in the state constitution, the 1945 Constitution. The KPK was formed in view of the nature of corruption itself as an extraordinary crime, which requires an independent institution to fight corruption in Indonesia. The background of the Commission's formation is not a rigidly interpreted constitutional design, but rather incidental problems in the country and the common will of the people of Indonesia to combat corruption. The position of the Commission as a state agency that is independent and free from the influence of any power is meant to ensure that the Commission does not face intervention from any party in combating corruption. The establishment of the Commission was also a response to the ineffectiveness of law enforcement agencies in combating corruption so far, whose handling of cases has been protracted and at times even indicated elements of corruption. The prosecutorial authority granted to the Commission by the Act is legitimate; the authority of the Commission is constitutional, as reinforced by a number of decisions of the Constitutional Court.
Introduction
Law enforcement in the history of eradicating corruption in Indonesia has been going on since the 1960s; the governing law has been changed four times, most recently by Law Number 20 of 2001 concerning changes to Law Number 31 of 1999 concerning Eradication of Corruption Crimes. Even though the Act has been amended that many times, the philosophy, purpose and mission of eradicating corruption remain the same. Philosophically, the legislation to eradicate corruption affirms that the welfare of the Indonesian people is a national goal, realizing the development aspirations and, at the same time, the founding ideals of the independence of the Republic of Indonesia formulated in the Preamble of the 1945 Constitution and adopted into the fifth principle of Pancasila. Therefore, every threat and obstacle to achieving the nation's welfare is a violation of the nation's ideals. However, as a state based on the rule of law, steps to prevent and eradicate criminal acts of corruption must be based on the principle of legal certainty and on the ideal of justice, a legal ideal dating back to ancient Greece. The juridical foundation is the 1945 Constitution as the "grundnorm" (basic norm), which should be realized in legislation that reflects the legal ideals and objectives described above. It is therefore necessary to examine the extent to which the Corruption Eradication Law (UUPK) reflects the intended legal principles and legal ideals, which will be described in this paper.
The sociological foundation of law enforcement in eradicating criminal acts of corruption, according to Giddens, is that the poverty afflicting approximately 35-50 million Indonesians today is caused by corruption that has become systemic, extends to all layers of the bureaucracy, and cannot be separated from the reciprocal influence of the bureaucracy and the private sector. Syed Hussein Alatas, in his book The Sociology of Corruption, explains that "the sociologist studying the phenomenon of corruption has to be fully conversant with the history, the culture"; the sociology of corruption is the study of the phenomenon of corruption in relation to the history and culture in which corruption occurs. Therefore, eradicating corruption is not just the aspiration of the wider community but an urgent need of the Indonesian people, and law enforcement in eradicating corruption is expected to reduce and ultimately eliminate poverty.
Starting from the three legal policies for eradicating corruption in Indonesia above, it is clear that law enforcement measures to eradicate corruption are a common obligation, not only for law enforcement but for all components of the nation, under the guidance and leadership of the nation's leaders, from the president and vice president down to the bureaucratic leadership in the regions, the legislature and the judiciary. Meanwhile, the government's political will to eradicate corruption has become a real priority program. This political will was demonstrated by the ratification of Law No. 28 of 1999 concerning the Implementation of a Clean State, Free of Corruption, Collusion and Nepotism, and Law No. 31/1999 (amended by Law No. 20 of 2001) on Eradicating Corruption. In addition, Law No. 30/2002 concerning the Corruption Eradication Commission (KPK) has a very strategic role. Corruption, in the political dictionary, is a symptom or practice in which officials of state agencies misuse their positions to enable bribery, forgery and other irregularities for personal gain.
The intended corruption eradication policy is, first, to uphold and maintain the ideals of social justice and the welfare of the nation within the Republic of Indonesia as a state based on law, as the philosophical foundation; second, to maintain and protect the right of every person to recognition, guarantee, protection, fair legal certainty and equal treatment before the law (Article 28D paragraph (1) of the 1945 Constitution), as the basis for law enforcement; and third, to maintain the function of criminal law, specifically the 1999 and 2001 Corruption Eradication Acts and now Law No. 30 of 2002, as the operational foundation that balances the function of maintaining order and security, on the one hand, with the deterrence and punishment function, on the other, in line with the principles of criminal law.
The purpose of the legal eradication of corruption cases is to produce a deterrent effect. The deterrent effect is important for controlling corruption so that it does not develop into a systemic crime. If corruption reaches a systemic level, the impact of this crime becomes more serious, because it not only causes huge state losses but also creates poverty, degrades public services, and damages the economic foundations of the country. Law enforcement without a deterrent effect creates a situation conducive to perpetrators continuing their corruption, and the costs of combating corruption then exceed the results achieved. Based on this background, the problems in this study are formulated as follows.
What is the criminal law policy regarding the authority of the Corruption Eradication Commission under Law Number 30 of 2002 in eradicating corruption in Indonesia, and what should the criminal law policy regarding the Commission's authority in eradicating corruption in Indonesia be in the future?
Method
The method used in this study is descriptive, setting out the steps taken in conducting the research. A brief justification of the method is given to provide the reader with an idea of its appropriateness and of the reliability and validity of the results.
Criminal Law Policy Regarding KPK's Authority in Eradicating Corruption in Indonesia for Future Masses
The government's efforts to eradicate corruption in the reform era were marked by the issuance of various legislative products whose purpose was to renew both substantive and institutional aspects. Among this legislation is Law Number 30 of 2002, which establishes the Corruption Eradication Commission as a state institution that, in carrying out its duties and authority, is independent and free from the influence of any power. Marwan Effendi has said that the purpose of establishing the Corruption Eradication Commission was to increase the effectiveness of efforts to eradicate corruption.
Urgency of Investigators in Carrying Out the Tasks of the Corruption Eradication Commission
As is known, the investigators currently possessed by the Corruption Eradication Commission are not investigators appointed by the Commission itself, but investigators who are owned by, and still hold the status of, the Police and the Prosecutor's Office. The result has been a lack of effectiveness in the Commission's performance in combating corruption, especially since some cases handled by the Commission involve members of the Police and the Attorney General's Office. A sectoral ego then arises when Commission investigators must investigate members of the Police and the Prosecutor's Office, institutions senior to the Commission in combating corruption crimes, out of reluctance to diminish their authority.
The existence of Independent Investigators in an Effort to Increase the Eradication of Corruption in Indonesia.
The momentum, however, is that the law should be enforced on behalf of the general public. There are many benefits for the Corruption Eradication Commission in having independent investigators recruited internally. One is reduced public concern about the independence of the Commission in investigating corruption cases. The recruitment of independent investigators would also add to the number of investigators in the Commission, which currently stands at only around 100 people, even though the Commission's workload is very complex and its burden very heavy. Another needed improvement concerns the regulations for recruiting Commission investigators: regulations that hinder the recruitment of independent investigators must be set aside as soon as possible. It now remains for the government, as a stakeholder, and the House of Representatives, as the Commission's legislator, to respond if the opportunity is later opened for independent investigators to serve in the Corruption Eradication Commission.
Implementation and Strategy of the Corruption Eradication Commission (KPK) in Combating Corruption in Indonesia
In its task of eradicating corruption, the Corruption Eradication Commission proceeds in two ways, namely enforcement (repressive measures) and prevention (preventive measures). Both are carried out simultaneously and at balanced speeds; otherwise, the effort to deal with the crime of corruption will be in vain.
This approach was adopted after examining the conditions and situation of corruption eradication in Indonesia. History has proven that repressive efforts to eradicate corruption, without accompanying prevention, are not effective. A number of teams, commissions and agencies have been tasked with eradicating corruption since the 1950s, such as OPSTIB (Operasi Tertib, the "order operation") in 1977, which focused only on prosecution without touching prevention efforts. The results were bright at the beginning, then slowly dimmed without a trace; the teams and bodies themselves were not even sterile from the virus of corruption.
Learning from that history, the KPK has placed prevention efforts in the same position as prosecution. One of the preventive measures is to improve the bureaucratic system so that it is effective, transparent and accountable, because corruption occurs not only because of bad people (corrupt state organizers) but also because of bad systems (poor government systems).
Law Number 30 of 2002 gave the Corruption Eradication Commission a mandate to take part in creating these conditions: among the duties and authorities of the Commission are to study the bureaucratic system, advise on improvements, and supervise bureaucratic institutions and law enforcement officers. The ultimate goal is to create clean and effective bureaucrats and law enforcement officers.
Another strategy in executing the duties and authorities of the Corruption Eradication Commission under Law Number 30 of 2002 is to carry out coordination and supervision. These two tasks aim to further empower law enforcement and other agencies. Coordination takes the form of coordination meetings with the prosecutor's office and the police to discuss the handling of corruption cases. Supervision takes the form of research and review, as well as case conferences on investigations or prosecutions of corruption cases being carried out by the prosecutor's office and the police, based on the notifications of the start of investigation (SPDP) reported to the Corruption Eradication Commission.
Coordination and supervision were carried out by the Corruption Eradication Commission with, among others, the High Prosecutor's Office and the police. One prominent issue in these supervision and coordination efforts was the emergence of obstacles for both the police and the prosecutor's office arising from delays in the President's permission to examine state officials. In connection with these obstacles, the Corruption Eradication Commission helped monitor the permit-request process.
The various efforts that have been made in the fight against corruption will not immediately eliminate corruption in Indonesia. However, through continuous effort and with the support of various parties, the vision of an Indonesia free from corruption is surely not impossible.
One concept of law enforcement for eradicating corruption is that law enforcement efforts and strategies against corruption require a national political commitment to eradicate corruption with responsive law enforcement. Indeed, all stages of national development regulated by various laws have included political commitments, concretely proven in discussions, analyses, opinions and suggestions from various elements of the community stating that KKN practices (corruption, collusion and nepotism) should be abolished immediately.
Within the authority of the Corruption Eradication Commission, preventive strategic measures must be formulated and directed at the matters that cause corrupt practices; each identified cause of corruption must be addressed preventively so as to minimize it. In addition, efforts are needed to minimize the opportunities for corruption. Detective strategies must also be formulated and implemented so that, if an act of corruption has already occurred, it will be detected quickly and accurately and can be followed up appropriately.
A repressive strategy must be formulated and implemented, directed in particular at providing appropriate and swift legal sanctions to parties involved in corrupt practices. Thus, the process of inquiry, investigation and prosecution up to the judiciary needs to be reviewed and perfected in all aspects so that cases are handled quickly and precisely.
Along with the socio-economic development of society, new and increasingly sophisticated corruption cases will always emerge. Efforts to standardize the understanding of corruption may be all the more useful for law enforcement purposes, so that the public and law enforcement officials have clearer guideposts. For this reason, efforts against criminal acts of corruption should not stop at preventive and repressive actions, but must also be innovative, encompassing pre-emptive, preventive, repressive, curative and rehabilitative measures.
Various studies of corruption cases in Indonesia show that corruption weighs negatively on national development: it leaks state finances, hampers economic growth, makes economic resources inefficient, impedes investment, produces a high-cost economy, widens the gap between rich and poor, damages society, and destroys the life of the state.
Therefore, the effort to eradicate corruption is urgent and must become a national agenda. The efforts carried out so far are felt to be less than optimal and must be further enhanced by involving more parties, both inside and outside government. In addition, these efforts need to be complemented by scientific studies underlying each corruption eradication activity, in line with our consistency in law enforcement in Indonesia.
Conclusion
The overlap between the authority possessed by the Corruption Eradication Commission in eradicating corruption and the authority possessed by the Police and the Attorney General's Office has resulted in a norm vacuum concerning the Commission's authority to appoint its own investigators. The Commission nevertheless has a strategic position of constitutional importance, equal to that of other state institutions mentioned explicitly in the 1945 Constitution.
Within the authority of the Corruption Eradication Commission, preventive strategic measures must be formulated and directed at the causes of corrupt practices so as to minimize them, alongside efforts to minimize the opportunities for corruption. Detective strategies must likewise be formulated and implemented so that any act of corruption that has occurred is detected quickly and accurately and can be followed up properly.
Expression and purification of turkey coronavirus nucleocapsid protein in Escherichia coli
Purification of turkey coronavirus (TCoV) nucleocapsid (N) protein, expressed in a prokaryotic expression system as a histidine-tagged fusion protein, is demonstrated in the present study. Turkey coronavirus was partially purified from infected intestine of turkey embryo by sucrose gradient ultracentrifugation and RNA was extracted. The N protein gene was amplified from the extracted RNA by reverse transcription-polymerase chain reaction and cloned. The recombinant expression construct (pTri-N) was verified by polymerase chain reaction and sequencing analysis. Expression of the histidine-tagged fusion N protein, with a molecular mass of 57 kd, was determined by Western blotting analysis. By chromatography on a nickel-agarose column, the expressed N protein was purified to near homogeneity, as judged by sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis; the protein recovery was 2.5 mg from 100 ml of bacterial culture. The purified N protein was recognized by antibody to TCoV in Western blotting assay. The capability of the recombinant N protein to differentiate positive serum from turkeys infected with TCoV from normal turkey serum was evident in enzyme-linked immunosorbent assays (ELISA). These results indicate that the expressed N protein is a superior source of TCoV antigen for development of an antibody-capture ELISA for detection of antibodies to TCoV.
Introduction
Turkey coronavirus (TCoV) causes an acute and highly infectious enteric disease. Turkey coronaviral enteritis was the most costly disease of turkeys encountered in Minnesota between 1951 and 1971, and coronavirus-associated outbreaks of poult enteritis remain a major concern in the turkey industry. The clinical signs usually appear at 7-28 days of age and include inappetence, wet droppings, ruffled feathers, decreased weight gain, growth depression, and uneven flock growth. There is currently no specific treatment or vaccination available to control or prevent this disease. Rapid diagnosis and monitoring of the immune status of a flock are critical for controlling outbreaks.
The immunofluorescent antibody (IFA) test is currently the most important serologic test for diagnosis of TCoV infection. The IFA procedure requires antigen prepared from infected tissues, highly trained personnel, and expensive equipment. When the test is applied to evaluate large numbers of clinical samples, it is labor-intensive and time-consuming. Development of an antibody-capture enzyme-linked immunosorbent assay (ELISA) for rapid diagnosis and effective control of turkey coronaviral enteritis is therefore essential. However, the large amount of highly purified viral antigen needed for coating ELISA plates requires propagation of TCoV in cell culture, which is not available at the present time. Alternatively, molecular cloning and expression of the major structural proteins of TCoV was carried out to prepare large quantities of highly purified viral proteins.
Coronaviruses are enveloped, positive-stranded RNA viruses that possess three major structural proteins: a predominant phosphorylated nucleocapsid (N) protein; the peplomeric glycoprotein (spike protein, S), which makes up the large surface projections of the virion; and the membrane protein (M) (Dea and Tijssen, 1988; Saif, 1993). The N protein is abundantly produced in coronavirus-infected cells and is highly immunogenic. The N protein binds the viral genomic RNA and forms the structural core of the helical nucleocapsid. The complete sequence of the TCoV N gene was recently obtained in this laboratory (Akin et al., 2001). The nucleotide and deduced amino acid sequences of the TCoV N gene share high (>90%) similarity with those of the infectious bronchitis coronavirus (IBV) N gene (Boursnell et al., 1987).
The N protein is a preferred choice for developing a group-specific serologic assay on account of its highly conserved sequence and antigenicity. The nucleocapsid proteins of various RNA viruses, such as mumps, rabies, vesicular stomatitis, measles, Newcastle disease, and IBV viruses, have been used as coating antigens in diagnostic ELISA (Linde et al., 1987; Reid-Sanden et al., 1990; Hummel et al., 1992; Ahmad et al., 1993; Errington et al., 1995; Ndifuna et al., 1998). The N protein gene of TCoV was recently expressed in a baculovirus system (Breslin et al., 2001), and a complicated competitive ELISA was demonstrated with this baculovirus-expressed N protein (Guy et al., 2002). However, the expression level of the cell culture-based baculovirus system is usually lower than that of a prokaryotic system, and the purity of that recombinant N protein was not clear. It is cheaper and more convenient to prepare large amounts of pure recombinant protein in a prokaryotic system. In addition, the antigenic integrity of N protein expressed in a prokaryotic system is expected to be maintained because the protein is not glycosylated. The complete sequences of the TCoV S and M genes have not been reported. The purpose of the present study was to express the TCoV N gene in a prokaryotic expression system to prepare large quantities of highly purified viral protein, which can be used as coating antigen for development of an antibody-capture ELISA for serologic diagnosis of TCoV infection.
Virus propagation and purification
The TCoV isolate was obtained originally from a field outbreak in Southern Indiana. The agent was maintained in the laboratory by blind passages in turkey embryos as described previously (Loa et al., 2000).
Construction of N gene in the expression vector pTriEx
Total RNA was extracted from the partially purified TCoV by a modified method using guanidinium thiocyanate and acid-phenol (Chomczynski and Sacchi, 1987; Akin et al., 1999). Primers NF (TCTTTTGCCATGGCAAGC) and NR (TTGGGTACCTAAAAGTTCATTCTC), containing restriction sites Nco I and Kpn I, respectively, were designed according to the reported nucleotide sequence of the TCoV N gene (Akin et al., 2001). The TCoV N protein gene was amplified by reverse transcription-polymerase chain reaction (RT-PCR) with primers NF and NR. The amplified product containing the entire open reading frame (1,230 bp) was digested with Nco I and Kpn I and analyzed by agarose gel electrophoresis. The digested TCoV N gene fragment was purified and cloned into the Nco I and Kpn I sites of plasmid pTriEx-1 (Novagen, Madison, WI). The pTriEx expression system allows expression of the recombinant N protein with a six-histidine tag on the C-terminal end. The construct was transformed into competent Escherichia coli strain Origami (DE3)pLacI (Novagen). Transformants were grown in LB medium containing 100 µg/ml ampicillin, 34 µg/ml chloramphenicol, and 1% glucose. Plasmids were purified with a QIAquik mini-prep kit (Qiagen, Chatsworth, CA) and sequenced by DAVIS sequencing (Davis, CA) to confirm that the inserted TCoV N gene was in frame. The correct construct was referred to as pTri-N.
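A quick way to sanity-check the primer design above is to confirm programmatically that each primer carries its intended restriction site (Nco I = CCATGG, Kpn I = GGTACC); a minimal sketch:

```python
# Verify that the cloning primers contain the intended recognition sites.
PRIMERS = {
    "NF": "TCTTTTGCCATGGCAAGC",          # forward primer, should carry Nco I
    "NR": "TTGGGTACCTAAAAGTTCATTCTC",    # reverse primer, should carry Kpn I
}
SITES = {"Nco I": "CCATGG", "Kpn I": "GGTACC"}

for name, seq in PRIMERS.items():
    hits = [enzyme for enzyme, site in SITES.items() if site in seq]
    print(f"{name}: {seq} -> sites found: {', '.join(hits) or 'none'}")
```

Running this confirms that NF contains the Nco I site and NR the Kpn I site, consistent with the digestion and cloning steps described above.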
Expression of recombinant N protein in E. coli
For expression of the recombinant protein, Origami bacteria transformed with pTri-N plasmid DNA were inoculated into a tube containing 3 ml of LB broth supplemented with 100 µg/ml ampicillin, 34 µg/ml chloramphenicol, and 1% glucose and cultured overnight at 37 °C in a shaking incubator (225 rpm). The 3 ml culture was transferred to a 500 ml flask containing 100 ml of LB broth supplemented with 100 µg/ml ampicillin and 34 µg/ml chloramphenicol. The flask was shaken at 37 °C until the culture reached an O.D.600 of 0.5. Protein expression was induced by addition of 1 mM isopropyl β-D-thiogalactopyranoside (IPTG). Before the addition of IPTG and at 30 min, 1, 2, or 4 h after the addition of IPTG, 1 ml of the culture was collected and centrifuged. The bacterial pellet was resuspended in Laemmli sample buffer (Laemmli, 1970) and boiled for 5 min before analysis by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and Western blotting.
Extraction of recombinant N protein from bacteria cell lysate
The bacteria were harvested by centrifugation at 10,000 × g for 10 min. The supernatant was discarded and the cell pellet was resuspended in BugBuster reagent (Novagen) at a volume of 1 ml for every gram of pellet (wet weight). After complete resuspension of the pellet, a nuclease solution, Benzonase (Novagen), was added at 1 µl for every 1 ml of BugBuster reagent to remove the viscous nucleic acids. The mixture was gently rotated at room temperature for 20 min. The lysate was then centrifuged at 16,000 × g for 20 min at 4 °C. The supernatant and inclusion body pellet were analyzed by SDS-PAGE and Western blotting for the presence of recombinant N protein.
Purification of recombinant N protein by chromatography with nickel-agarose column
The inclusion bodies containing the recombinant N protein were dissolved in Binding buffer containing 5 mM imidazole, 0.5 M NaCl, 20 mM Tris-HCl, and 6 M urea at pH 7.9. The dissolved inclusion bodies were filtered through a 0.45 µm syringe filter (Millipore, Bedford, MA) and loaded on a nickel-chelating agarose column (10 mg protein/ml of gel) equilibrated in Binding buffer. The column was washed sequentially with 10 bed volumes of Binding buffer and Washing buffer (20 mM imidazole, 0.5 M NaCl, 20 mM Tris-HCl, and 6 M urea at pH 7.9). The recombinant N protein was eluted from the column with Eluting buffer containing 1 M imidazole, 0.5 M NaCl, 20 mM Tris-HCl, and 6 M urea at pH 7.9. Fractions eluted from the column were analyzed by SDS-PAGE on a 10% polyacrylamide/bisacrylamide gel (Laemmli, 1970). The identity of the recombinant N protein was confirmed by SDS-PAGE of the eluted fractions and by Western blotting analysis of the electrotransferred protein on nitrocellulose membrane (Millipore) with a reagent specific to the histidine tag, horseradish peroxidase-conjugated nickel-NTA (Qiagen).
SDS-polyacrylamide gel electrophoresis and Western immunoblotting
The samples were solubilized in sample buffer containing 62.5 mM Tris-HCl, pH 6.8, 1% SDS, 10% glycerol, 0.001% bromophenol blue, and 1% 2-mercaptoethanol and boiled for 5 min. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis was carried out using the discontinuous buffer system (Laemmli, 1970). Polypeptide bands were revealed by staining the gel with Coomassie brilliant blue G-250. For immunoblotting, polypeptides separated by SDS-PAGE were electrotransferred onto nitrocellulose membrane (Millipore) in transfer buffer containing 50 mM Tris, 384 mM glycine, and 20% (v/v) methanol, pH 8.3. Electrotransfer was carried out at 65 V for 1 h. The nitrocellulose membrane was incubated for 1 h in PBS buffer containing 0.05% Tween 20 (PBS-T). After three washes in PBS-T, the membrane was incubated for 2 h at room temperature with turkey anti-TCoV antiserum or chicken anti-IBV antiserum (SPAFAS, Storrs, CT) at a 1:500 dilution in PBS-T. Three further washes were followed by addition of horseradish peroxidase-conjugated goat anti-turkey or anti-chicken IgG (Kirkegaard & Perry Laboratories, Gaithersburg, MD). After incubation for 2 h at room temperature, the membrane was washed three times and covered with the peroxidase substrate, 3,3′-diaminobenzidine (DAB). The blot was allowed to develop and the reaction was stopped by washing the membrane in distilled water.
Enzyme-linked immunosorbent assay
The purified recombinant N proteins were diluted in PBS buffer, coated on 96-well microtiter plates, and evaluated for the capability to differentiate turkey anti-TCoV antiserum from normal turkey serum by ELISA. The coating concentrations of N protein were 2-fold serial dilutions from 1.25 to 40 µg/ml. The positive control (PC) serum for TCoV was the hyperimmune serum prepared from a turkey experimentally infected with TCoV. The negative control (NC) serum was collected from a 4-month-old normal healthy turkey raised in an isolation room in the laboratory. Serum samples were serially diluted 2-fold from 1:200 to 1:1,600 in dilution buffer containing 150 mM phosphate buffer, 0.85% NaCl, 1% BSA, and 0.02% Tween-20. One hundred microliters of diluted serum sample was added to each well in duplicate and plates were incubated at 37 °C for 1 h. After incubation, wells were emptied and washed three times with PBS-T. Horseradish peroxidase-conjugated goat anti-turkey IgG (Kirkegaard & Perry Laboratories) diluted 1:20,000 in dilution buffer was added to each well. Plates were incubated and washed as in the previous step, followed by addition of 100 µl of enzyme substrate, tetramethylbenzidine (TMB) solution, to each well. After incubation at room temperature for 30 min, 2 N HCl was added at 100 µl/well. The absorbance of each well was measured at 450 nm using a spectrophotometer (Vmax kinetic microplate reader, Molecular Devices Corporation, Menlo Park, CA). The absorbance values and PC/NC ratios of the serum samples were calculated.
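The PC/NC ratio used to summarize the assay is simply the ratio of the mean duplicate absorbances; a minimal sketch, with hypothetical A450 readings rather than the actual plate data:

```python
# PC/NC discrimination ratio from duplicate-well A450 readings.
def pc_nc_ratio(pc_a450: list[float], nc_a450: list[float]) -> float:
    """Ratio of mean positive-control to mean negative-control absorbance."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(pc_a450) / mean(nc_a450)

# Hypothetical duplicates at one coating concentration / serum dilution:
print(f"PC/NC = {pc_nc_ratio([1.30, 1.28], [0.021, 0.019]):.1f}")
```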
Construction and expression of N gene in the expression vector pTriEx
The entire open reading frame corresponding to the TCoV N gene, ligated into the Nco I and Kpn I sites of plasmid pTriEx, was confirmed by sequencing both strands. The reading frame of the N gene was in frame with the downstream six-histidine tag sequence in the vector. Expression of the construct, pTri-N, in the host cell Origami (DE3)pLacI was induced with IPTG. A time course of induction of the recombinant fusion protein indicated that expression of the N protein increased from 30 min to 4 h, according to SDS-PAGE and Western blotting with a reagent specific to the histidine tag (Fig. 1). Induction with IPTG for 4 h was selected in order to produce more N protein.
Extraction and purification of recombinant TCoV N protein
Soluble and pellet (inclusion body) fractions obtained by centrifugation during the extraction were examined by SDS-PAGE and Western blotting analysis (Fig. 1). The results indicated that the recombinant N protein was not readily soluble in the buffer; most of the protein was found in the inclusion bodies. The inclusion bodies were dissolved in the 6 M urea-containing buffer and further purified by chromatography on a nickel-agarose column. About 85% of the proteins loaded on the column passed through during the loading and washing steps (Table 1). Pure N protein was eluted with the 1 M imidazole-containing buffer. As shown in Fig. 2, SDS-PAGE analysis indicated the presence of a single protein band with a molecular mass of about 57 kd, similar to the expected histidine-tagged fusion N protein. The pure N protein band was recognized by the reagent specific to the histidine tag in the Western blotting analysis (Fig. 2). Determination of protein recovery indicated that 2.5 mg of pure N protein could be purified from 100 ml of bacterial culture by chromatography on the nickel-agarose column (Table 1).

Table 1. Purification of expressed nucleocapsid (N) protein from a representative 100 ml of E. coli culture by chromatography on nickel-agarose column.
Antigenic cross-reactivity of recombinant TCoV N protein with antibodies to different avian coronaviruses
As shown in Fig. 3, the purified N protein reacted with antibodies to TCoV or IBV in Western blotting. Normal turkey serum and normal chicken serum did not react with the N protein in Western blotting (data not shown).
ELISA
Differentiation of PC from NC serum samples in the ELISA was observed at a coating concentration of N protein as low as 5 µg/ml when the serum dilution was 1:200 (Fig. 4). The capability of the recombinant N protein to differentiate PC from NC was markedly enhanced at higher coating concentrations, from 5 to 40 µg/ml, with apparently higher PC/NC ratios. The highest PC/NC ratio, 65, was observed at a coating concentration of 20 µg/ml and a serum dilution of 1:200.
Discussion
Cloning and expression of the TCoV N protein as a histidine-tagged fusion protein in E. coli, and its purification by chromatography on a nickel-chelating agarose column, are demonstrated in the present study. Studies on the diagnosis, prevention, and control of TCoV infection have been hampered by the failure to propagate TCoV in cell culture. Without cell culture of the virus, molecular cloning and expression is the most important method for preparing large quantities of highly purified viral antigens. The expression and purification procedures described here provide a simple and efficient way to obtain pure TCoV N protein in large quantity; the yield from 100 ml of bacterial culture could be 2.5 mg of pure N protein after extraction and column chromatography.
The observed molecular mass of 57 kd for the expressed fusion N protein is within the expected range. There are 30 additional amino acids for the histidine tag at the C-terminus of the expressed fusion N protein, which increase the molecular mass of the target protein by approximately 3.3 kd. The molecular mass of the IBV N protein has been reported to be 51 to 54 kd (Saif, 1993). The N protein genes of TCoV and IBV are the same size, 1,230 nucleotides, so the molecular masses of the TCoV and IBV N proteins are expected to be similar or the same on the basis of sequence information. The predicted molecular mass of the expressed fusion N protein of TCoV was therefore 54.3 to 57.3 kd. It was reported that two proteins with molecular masses of 52 and 43 kd were produced when the TCoV N gene was expressed in a baculovirus system (Breslin et al., 2001). The difference in molecular mass between that 52 kd protein and the fusion protein expressed from the prokaryotic system in the present study is mainly caused by the histidine tag. In contrast, there is only one single polypeptide band in the purified N protein in the present study.
It has been reported that TCoV and IBV are antigenically related in studies using IFA (Guy et al., 1997; Loa et al., 2000) or ELISA (Loa et al., 2000; Ismail et al., 2001). Sequence analysis of a conserved region of the RNA polymerase gene (Stephensen et al., 1999), of a segment spanning from the 3′ end of the M gene to the 5′ end of the N gene (Breslin et al., 1999a; Ismail et al., 2001), or of the N gene itself (Breslin et al., 1999b; Akin et al., 2001) indicated that TCoV and IBV are genetically related. The observation that the recombinant N protein of TCoV reacted with antibodies specific to TCoV or IBV in the present study extends these previous findings of a close antigenic and genetic relationship between TCoV and IBV.
Based on the close antigenic relationship between TCoV and IBV, an antibody-capture ELISA for detection of antibodies to TCoV was previously established using commercially available ELISA plates coated with IBV antigen (Loa et al., 2000). However, an antibody-capture ELISA using TCoV antigens, instead of IBV antigens, should still be pursued in order to improve the sensitivity and specificity of the assay. Development of such an ELISA system depends on readily available preparations of pure antigens. The recombinant N protein of TCoV prepared in the present study was reactive with antibody to TCoV, suggesting intact antigenic integrity, and could be prepared inexpensively in large quantity. A preliminary ELISA using the recombinant N protein as coating antigen could well differentiate the positive control serum from normal turkey serum. It is therefore feasible to use the recombinant TCoV N protein for development of an antibody-capture ELISA for serologic diagnosis of TCoV infection.
Prognostic value of high-sensitivity cardiac troponin T in patients with endomyocardial-biopsy proven cardiac amyloidosis
Objective To investigate prognostic predictors of long-term survival of patients with cardiac amyloidosis (CA), and to determine predictive value of high-sensitivity cardiac troponin T (hs-cTnT) in CA patients. Methods We recruited 102 consecutive CA cases and followed these patients for 5 years. We described their clinical characteristics at presentation and used a new, high-sensitivity assay to determine the concentration of cTnT in plasma samples from these patients. Results The patients with poor prognosis showed older age (56 ± 12 years vs. 50 ± 15 years, P = 0.022), higher incidences of heart failure (36.92% vs. 16.22%, P = 0.041), pericardial effusion (60.00% vs. 35.14%, P = 0.023), greater thickness of interventricular septum (IVS) (15 ± 4 mm vs. 13 ± 4 mm, P = 0.034), higher level of hs-cTnT (0.186 ± 0.249 ng/mL vs. 0.044 ± 0.055 ng/mL, P = 0.001) and higher NT-proBNP (N-terminal pro-B-type natriuretic peptide) levels (11,742 ± 10,464 pg/mL vs. 6,031 ± 7,458 pg/mL, P = 0.006). At multivariate Cox regression analysis, heart failure (HR: 1.78, 95%CI: 1.09–2.92, P = 0.021), greater wall thickness of IVS (HR: 1.44, 95%CI: 1.04–3.01, P = 0.0375) and higher hs-cTnT level (HR: 6.16, 95%CI: 2.20–17.24, P = 0.001) at enrollment emerged as independent predictors of all-cause mortality. Conclusions We showed that hs-cTnT is associated with a very ominous prognosis, and it is also the strongest predictor of all-cause mortality in multivariate analysis. Examination of hs-cTnT concentrations provides valuable prognostic information concerning long-term outcomes.
Introduction
Amyloidosis is a rare disease in which alterations in protein conformation result in systemic deposition of abnormal fibrils. Cardiac involvement is reported in 50% of amyloidosis cases and is associated with the worst prognosis, with a reported median survival of 6 to 24 months, falling to 4 months in the setting of advanced heart failure (HF). [1-3] The outcome of patients with cardiac amyloidosis (CA) is heterogeneous, so it is crucial to find non-invasive evaluation tools that allow early recognition of disease progression. A new generation of sensitive assays for cardiac troponins has been introduced recently and is significantly associated with the incidence of cardiovascular death and HF. [4,5] Few studies have assessed the use of such assays for evaluating the prognosis of CA patients. The aim of this study was to identify independent predictors of all-cause mortality in CA patients.
Diagnosis of CA
We enrolled 102 patients with CA at our institute from January 2006 to January 2008. Endomyocardial biopsy (EMB) is the gold standard for the diagnosis of CA, and most patients were confirmed by invasive EMB: routine paraffin-processed sections were stained with hematoxylin-eosin for cell morphology, and apple-green birefringence of Congo red-stained sections under polarized light was taken as indicative of the presence of amyloid.
Baseline clinical features
Baseline clinical data included the clinical manifestations, laboratory profile and treatments during the hospital stay. The clinical manifestations focused on symptoms and signs of HF and the New York Heart Association (NYHA) functional class. HF was defined as the presence of symptoms (breathlessness at rest or on exercise, tiredness, and ankle swelling) and/or classical physical signs (tachycardia, pulmonary rales, jugular venous distention, peripheral edema, and hepatomegaly). Renal failure was defined as an estimated glomerular filtration rate (eGFR) < 30 mL/min. The laboratory profile included the results of electrocardiography (ECG) and transthoracic echocardiography (TTE) for every patient. Low voltage on ECG was defined as a peak-to-peak QRS amplitude < 5 mm in the limb leads and < 10 mm in the precordial leads. We measured left ventricular (LV) end-diastolic diameter, end-systolic diameter, left atrial diameter, thickness of the interventricular septum (IVS), pericardial effusion and ejection fraction (EF) by TTE. A restrictive left ventricular filling pattern was defined as an early-diastolic deceleration time < 130 ms and a ratio of peak early velocity to peak atrial velocity (E/A) ≥ 2. We used a new, high-sensitivity assay to determine the concentration of cardiac troponin T (cTnT) in plasma samples from CA patients; the lower detection limit of the high-sensitivity cTnT (hs-cTnT) assay was 0.001 ng/mL. Electrochemiluminescence (ECL) technology was used to measure troponin T (Elecsys®, fifth-generation Roche Troponin T) for risk assessment in patients with CA; the 99th percentile value for a healthy population is 0.014 ng/mL. The plasma N-terminal pro-B-type natriuretic peptide (NT-proBNP) level (pg/mL) was measured by ECL technology (BNP test kit, Bayer).
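The dichotomous ECG and echocardiographic definitions above translate directly into code; a minimal sketch, with field names of our own choosing rather than from any clinical data standard:

```python
def is_low_voltage(limb_qrs_mm: float, precordial_qrs_mm: float) -> bool:
    """Low voltage: peak-to-peak QRS < 5 mm in limb leads AND < 10 mm
    in precordial leads, per the study definition."""
    return limb_qrs_mm < 5 and precordial_qrs_mm < 10

def is_restrictive_filling(decel_time_ms: float, e_a_ratio: float) -> bool:
    """Restrictive LV filling: early-diastolic deceleration time < 130 ms
    AND E/A ratio >= 2, per the study definition."""
    return decel_time_ms < 130 and e_a_ratio >= 2

# Example (hypothetical measurements):
print(is_low_voltage(4.2, 8.5))           # True
print(is_restrictive_filling(118, 2.4))   # True
```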
Follow-up
This observational study was approved by the local institutional review board, and patients signed informed consent as part of the 5-year follow-up. Data were obtained through regular doctor visits or telephone calls to patients or their relatives, with clinical interviews performed once a month. To avoid misclassification of the cause of death, all-cause mortality was selected as the endpoint. Patients with HF were mainly treated with β-blockers and angiotensin-converting enzyme inhibitors (ACEI) or angiotensin receptor blockers (ARB) based on guidelines, unless there were contraindications to these drugs. Through this 5-year follow-up, we sought to explore predictors of long-term survival of patients with CA.
Statistical analysis
Clinical characteristics and laboratory features of the study population were analyzed. Data are expressed as mean ± SD for continuous variables and as percentages for discrete variables. Continuous variables were compared between groups using the Student t test (for normal distributions) or the Mann-Whitney rank sum test (for non-normal distributions). Comparisons of continuous variables among groups were performed by analysis of variance (ANOVA) with the LSD statistic, while proportions were compared by the Chi-square test or Fisher's exact test. All analyses were performed with the SPSS 19.0 statistical package. Predictors of long-term survival were assessed with regression models and Kaplan-Meier survival curves, with significance set at P < 0.05.
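The multivariate Cox survival analysis reported below can be illustrated with the lifelines package in Python; a minimal sketch, assuming a per-patient data frame with illustrative values and column names of our own (not the actual study data):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient records: follow-up time (months), event flag,
# and the three covariates retained in the study's multivariate model.
df = pd.DataFrame({
    "months":        [14, 60, 9, 31, 22, 60, 48, 12],
    "died":          [1, 0, 1, 1, 1, 0, 1, 1],    # all-cause mortality
    "heart_failure": [1, 0, 1, 1, 0, 0, 0, 1],
    "ivs_mm":        [16, 12, 17, 15, 13, 11, 14, 18],
    "hs_ctnt":       [0.21, 0.03, 0.30, 0.12, 0.06, 0.02, 0.05, 0.45],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()  # hazard ratios with 95% CIs, analogous to Table 3
```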
Study population
Positive biopsies for amyloidosis were obtained by EMB in 102 patients, as presented in Figure 1. The patients were predominantly men (63.72%), and the mean age at presentation was 53 years. The majority (60/102, 58.82%) presented with HF, and twelve patients (11.76%) had renal failure. Seventy patients (68.63%) were classified as primary amyloidosis, 22 (21.57%) as chronic inflammatory disease-associated amyloidosis, and the remaining 10 (9.80%) as multiple myeloma-associated amyloidosis. ECG showed low QRS voltages in 48 of 102 patients (47.06%) and atrial fibrillation in 74 of 102 patients (72.55%). On TTE, the mean thickness of the IVS was 14.6 mm, and restrictive left ventricular filling patterns were present in 41% of cases. Medical treatments generally included ACEI (35%), beta-blockers (21%), calcium channel blockers (5%), digitalis (14%) and melphalan (8%). No patient had undergone bone marrow transplantation or heart transplantation.
Natural history
Ninety-one of the 102 patients died during the 60-month follow-up, while 11 patients survived beyond the 5-year period. Seventy-nine of the 91 deaths (87%) were from cardiac causes (cardiac arrhythmia, cardiac failure or heart block), and the other 12 (13%) were from non-cardiac causes. Necropsy confirmed CA in 23 of these patients. Survival free from all-cause death was 58%, 35% and 11% at 12, 24, and 60 months, respectively. The patients were divided into two groups based on survival of less than or more than two years, and the baseline clinical/laboratory features and therapies of the two subgroups are described in Table 1. Patients with a poor prognosis were characterized by older age (56.29 ± 12.15 years vs. 49.92 ± 15.20 years, P = 0.022) and higher NYHA functional class, together with the other between-group differences summarized in Table 1.
Identification of CA prognostic predictors
By multivariate Cox survival analysis, HF (hazard ratio (HR) = 1.78, P = 0.021), greater thickness of the IVS (HR = 1.44, P = 0.0375) and higher hs-cTnT levels (HR = 6.16, P = 0.001) at diagnosis were independent predictors of all-cause mortality in our five-year follow-up observations. The HRs associated with the relevant variables in the multivariate analyses are listed in Table 3. Among baseline variables, a strong association between mortality and the level of hs-cTnT was observed, even after adjustment for other factors. Even mild elevations of troponin level could affect the prognoses of CA patients.
Discussion
Despite the advances in treatment, the prognosis of CA remains poor, particularly in the presence of HF. The median survival time from diagnosis among all patients is approximately one year, especially in untreated patients.[6] The main causes of death in our study were congestive HF, ventricular tachyarrhythmia, bradyarrhythmia and severe hypotension.
It is necessary to estimate the prognosis of CA patients through non-invasive clinical examinations. Our observational study showed that older age, higher level of BNP, pleural effusion and HF were independently associated with poor prognosis, which is consistent with the reported literature.[7,8] Other reported prognostic factors include male gender, recent syncope, restrictive left ventricular filling pattern, LA enlargement, and left ventricular ejection time.[7–11] Previous observations showed that traditional troponin is a powerful tool in the clinical and prognostic assessment of patients with CA. An increased troponin level was associated with poor survival in patients with amyloid light-chain (AL) amyloidosis and was a significant predictor of all-cause mortality.[7,12] The predictive value of hs-cTnT levels in this study was even superior to the traditional troponin assay in the evaluation of all-cause mortality (HR = 6.16).
Our results emphasize the role of hs-cTnT in accurate risk stratification when evaluating patients who are potential candidates for cardiac death. We improved on the previous discrimination model by using hs-cTnT as the biomarker instead of conventional serum cTnT. In our study, most patients were hs-cTnT positive (88.23%, 90/102). Because hs-cTnT is more sensitive than the traditional troponin test, we could discriminate among patients with different outcomes through the measurement of hs-cTnT levels.
The mechanisms responsible for the release of very low levels of cTnT in these patients could include: (1) cardiomyocyte damage due to extracellular deposition of insoluble amyloid fibril proteins in the heart; (2) inflammatory processes; and (3) cardiomyocyte apoptosis and increased myocardial strain due to pressure or volume overload.[13] The level of hs-cTnT depends on the velocity of amyloid deposition in the myocardium and is strongly positively correlated with the progression of the disease.[14] In our study, the thickness of the IVS and HF were both associated with the hs-cTnT level. Even a minimally increased cTnT level may represent subclinical cardiac injury and has important clinical implications.[15–17] Examination of the hs-cTnT level therefore plays an important role in the prognostic classification of patients affected by amyloidosis.
Study limitation
A major limitation of the study was the small sample size from a single center; however, CA is a rare cardiac disease and large studies are lacking in the literature. The benefit of the long follow-up period was offset by the relatively small sample size, which limited the number of clinical variables that could be tested in the Cox model and reduced statistical power. We were not able to establish the cause of death for all patients, so the endpoint was all-cause rather than cardiovascular mortality. We did not compare hs-cTnT levels with traditional troponin T levels and did not analyze the differences between the two examinations.
Conclusion
We showed that a high level of hs-cTnT was associated with a very ominous prognosis and was a strong predictor of all-cause mortality in multivariate analysis. Examination of the hs-cTnT level provides valuable prognostic information concerning long-term outcome.
Teacher Development Potential (Creativity and Innovation) Education Management in Engineering Training, Coaching and Writing Works through Scientific Knowledge Intensive Knowledge Based on Web Research in the Industrial Revolution and Society
This research concerns empowering the potential of vocational school teachers through scientific writing, which functions as a vehicle for communicating and disseminating works and ideas for teachers and others. The problems are: (1) many vocational teachers do not understand how to produce scientific papers of any type and are unable to write good scientific works, which are still prepared traditionally, whereas writing web-based scientific papers/research is urgently needed in the current revolutionary era and society and in the future; (2) until now, SMK (Vocational High Schools) have not strengthened their ability to respond to the needs of the world of work, business and industry with innovation and a digital-based interdisciplinary curriculum, even though education in SMK should serve as a reference for innovation and be the most responsive to developments in knowledge and technology according to the needs of industry and civil society. The intended outcome is the ability to change the mindset toward action in writing scientific papers, designed through implementation.
Introduction
Analysis of the implementation of education in Vocational Schools, in measuring the accountability of productive teacher performance, shows that it is still static in several aspects. A mental revolution needs to be supported through training to address the era of globalization, which has the character of the industrial revolution 4.0; in this era a new literacy is needed that uses thorough analysis of data and draws conclusions connected to communication skills, collaboration, critical thinking, creativity and innovation (Jill, Danil, Jasman, Martin, & Powell, 1999). The role of the professional teacher becomes very strategic in preparing quality human resources (HR), and the development of sustainable teacher professionalism can be pursued through the following. (1) Personal development, which includes functional education and training, such as courses, training, upgrading and other forms of education and training.
(2) Attending workshops, workgroup meetings, teacher work meetings or in-house training for teacher professional development activities. Teacher professionalism is marked by improving the quality of the self through writing scientific papers. One of the institutions/organizations whose teachers need guidance, assistance and training (creativity, motivation) to develop their professional competence is the Vocational High School (SMK) in the City of Bandung. Secondly, the lack of understanding and ability of teachers in producing scientific work involves: (1) lack of knowledge about the concept, substance and systematics of scientific work; (2) a culture of writing not yet developed in schools; (3) seminar and workshop activities that teachers often attend address the development of innovative learning and PTK (Classroom Action Research) but are not directed toward developing published scientific papers; (4) lack of writing practice and confusion in thinking, a factor that often makes writing seem chaotic with an unclear flow of logic; (5) lack of teacher awareness of government regulation Number 16 of 2009 governing the Teacher's Functional Position and Credit Score; (6) lack of cooperation between PLPG (Teacher Professional Education and Training) organizers and school agencies to assist post-certification teachers in PKB (Sustainable Professional Development), especially in producing scientific works; (7) constraints from weak understanding and knowledge of research; (8) in the ability to write scientific papers, certified teachers do not yet fully understand the concept of scientific work; and (9) the development of teacher professionalism continues to encounter obstacles, including problems of time, funding, age, school infrastructure, motivation, policy leadership, and internet access (Rowe, 1986), (Buzan, 1986), (Yunianto, 2007), (Claxton, 2006).
Thirdly, out of thousands of teachers only dozens have shown this ability, willingness, and writing habit. Most teachers still find it hard to write; technical guidance programs for writing scientific papers should therefore be a vehicle for teachers to renew their frames of reference and frames of experimentation with research concepts. In line with the needs of the revolutionary era and society 5.0 in educational services characterized by vocational schools, professional teacher figures are required: teachers who can work autonomously (free, but according to up-to-date independent expertise, rich in science and technology, writing current scientific papers) and devote themselves to service users (state and society) with responsibility for their professional abilities as research-based professionals (Sumardjoko, 2017), (Kemendiknas, 2010).
Formulation of the Problem
Problems in the Science and Technology Implementation Program namely the Revitalization of the Industrial Revolution 4.0 and the growth of society 5.0 have changed the face of the world where information technology has become the basis in human life. Everything becomes borderless with unlimited use of computing power and data because it is influenced by the development of the internet and massive digital technology as the backbone of human and machine movement and connectivity. The quality of vocational high school (SMK) graduates as a service organization is currently undergoing several changes. Not only due to the rapid development of science, technology, art, but also because of changes in community expectations of the role of graduates from vocational schools that are slightly absorbed in the industry in pioneering the future of the nation and state (Collins & Quillian, 1969). The demands of the Vocational School are not only limited to the ability to produce graduates who are measured academically, but also the entire program must be able to prove the quality of education is supported by existing accountability, quality control, quality improvement and quality satisfaction are key issues for the education sector in the future (Collins & Quillian, 1969), (Craft, 2005), (Ellis & Barrs, 2008).
The role of universities in collaboration with vocational schools is very important, especially in the development of knowledge and technology. The demand for quality learning processes grows ever higher along with development and changing times, raising the question of how to create a more contextual and scientific learning process that shapes the character of students with the spirit of scientists and meets the demand for quality graduates. The urgency of this research lies in the challenges faced by the government and universities, namely how to prepare and map the workforce in the teaching profession among education graduates in the face of the industrial revolution 4.0 and society 5.0. Situation analysis of teachers' writing skills is very important because writing is a demand of the profession. For career development and continued learning, teachers must meet the requirements to write scientific papers. This requirement is often a barrier to promotion given the low ability and interest in writing among teachers. In addition to being a requirement for career development, writing is also a means of self-development; the teacher has much potential that can be developed optimally by writing (Yunianto, 2007). This is supported by the many conditions that strengthen teachers' opportunities for developing writing skills, namely: (1) teachers always interact with science that can be material for writing; (2) teachers always interact with students during classroom learning activities, which can be used as a source of writing; (3) teachers often interact with the dynamic world of education and policy, which demands critical thinking and innovative ideas; (4) there are many writing contest opportunities organized by the Education office; (5) the mass media provide many educational rubrics that allow teachers to express their innovative ideas; and (6) many writing opportunities exist before teachers' eyes, but these opportunities have not been widely exploited by them. Complaints about writing among teachers are of course not without cause (Ferrari, Cachia, & Punie, 2009); (Hattie, 2009). In general, several obstacles can be found that keep the level of writing participation among teachers low: low interest in reading and writing (writing activities cannot be separated from reading activities); teachers being preoccupied with classroom teaching so that reading obligations for their own development go unfulfilled; the limited availability of reading material that can serve as writing material; lack of self-confidence and writing experience; and low motivation to write. The availability of teachers/instructors and teacher competence are also in doubt. Many productive teachers are not up to date with the technology used in their expertise programs, which affects the teaching-learning process and in turn the competency of graduates to be absorbed into industry today. The complaints above also occur among vocational teachers in the City/Regency of Bandung.
Based on these conditions, it is necessary to carry out teacher coaching in the form of Teacher Potential Empowerment (Creativity and Innovation) in Training Techniques, Fostering the Writing of Scientific Papers through Understanding of Intensive Web-Based Knowledge/Research in the Revolutionary Era of Society 5.0, bearing in mind the kind of support teachers need to make use of these opportunities. At a minimum, there should be regular training for teachers/instructors in vocational education, drawing on the business world and the world of industry, to cultivate entrepreneurship in vocational schools. Certificates held by productive teachers cannot by themselves be applied to maximum effect in the learning system; they should guarantee that teacher competence meets the applicable educational management standards among professionals (Nesbit & Adesope, 2006).
Research Purposes
In general, the objectives of this study are: 1) to provide support, strengthening and mentoring for the implementation of priority programs, including cooperation in preparing the Program/Draft/Strategic Plan registered in the Local Curriculum for training, continuous learning and comprehensive expertise of productive teachers, which is useful for keeping teachers up to date with developments in business and industry, academic development, career development and personal development in accordance with their program area of expertise (Leven & Long, 1981); 2) to provide solutions to priority problems, including analyzing how a millennial community of teachers can stay up to date with developments in science and technology in the era of Disruption 4.0, so that dissatisfaction driven by the spread of digital technology and the dynamics of information sharing characterized by social media can be resolved (Kaufman & Beghetto, 2009); 3) to improve thinking, reading and writing skills and other needed skills (soft skills and hard skills), that is, analyzing programs that change skills and change the mindset needed by a labor market that has entered the era of an open (digital) knowledge-based society (economy) relying on free competition, arousing teacher performance in utilizing documented work results and improving professionalism, especially of vocational teachers, in professional activities of teacher competence with an understanding of web-based knowledge/research (Kampylis, Panagiotis; Berki, 2014).
In detail these objectives include the following: 1) analyzing the effect of training, coaching, guidance and mentoring through workshops on learning scientific writing techniques for vocational teachers, which are then applied to producing textbooks, books, modules, CAR reports, final projects, theses/dissertations, journals, articles, papers and other scientific writing, so as to improve understanding of the function of educational values and of the importance of workshops in innovation and scientific writing technique, since a teacher's work carries a function of values integrated in a holistic (kaffah) manner; 2) analyzing the extent of the response, motivation, creation and follow-up to the workshop on scientific writing techniques among vocational teachers after it is applied, so as to foster an interest grounded in the values, philosophies, theories and practice of writing scientific papers, so that vocational teachers are able to produce scientific papers properly and correctly and can develop their potential in their duties as professional teachers; 3) analyzing the extent to which vocational teachers are able to identify, select and formulate topics and titles, compile an outline, gather material, write and edit scientifically, and improve the ability to search for references in various media (Michalko, 2001); 4) evaluating the effect of implementation in coaching, training, producing and applying the results of scientific-writing workshops on personality development, toward fostering teacher intellectuality in educational services, enhancing creativity and PBM innovation, mastering methodology and service design in other communities, and mastering practical ways and tips for successfully writing good and correct scientific papers, planned in several stages including, for all participants, the provision of training material on techniques for producing scientific papers in the form of articles/journals provided by the project implementation team and delivered by scientific-writing experts competent in their fields. Related questions are: (1) how are data, information and a knowledge culture associated with cultivating techniques for writing scientific papers managed in an organization; (2) what are the types of knowledge in organizational culture management; (3) what is the history of development in the management of knowledge culture related to information and knowledge in writing scientific papers; and (4) how important is the framework of knowledge in the culture management system for building a web-based scientific writing system for vocational teachers in the world of education.
Research Methods
The approach is intensive and integrated training: learning methods and techniques are implemented in the classroom, where the approach is axiomatic, the methods are procedural and the techniques are operational, carried out in stages and strengthened with research methodology as basic knowledge in the framework of writing scientific works.
Field observations preceded the implementation of teacher empowerment activities through training in writing scientific works, starting with teachers in vocational schools so that the needs and problems of participants (vocational teachers) could be identified.
Recruitment of participants was held after the field observations, together with the Principals of the Vocational Schools, to ensure that participants were those who needed the training; this took several days until the expected quota of 50 participants was fulfilled. The training was implemented in the form of workshops.
The implementation of the training activities including the provision of material by experts, held a pretest to find out the level of understanding of participants, only after the pre-test, training in writing scientific papers and in April the implementation of teacher empowerment activities through training in writing scientific papers, starting with Post-test and proceed with the Implementation material (direct practice) to make scientific papers articles/journals / PTK (Classroom Action Research) starting with the writing systematology and methodology.
Training on writing scientific papers also trains participants to create research instruments to collect raw data on a Likert scale, as well as various other instruments to support the collection of raw data, such as structured interviews, in-depth observations, and commentary.
Training in the method of writing scientific papers, training participants to be able to process raw data using the SPSS and Excel programs, so that what has been felt difficult by the participants in preparing scientific papers, in addition to knowledge, especially those choosing with quantitative methods, can become valuable solution for participants, but when used for action research it can be useful for processing field data.
Empowerment of vocational school teachers is provided through intensive scientific writing training to the educational scientific journals of the participants to be able to draft writing. Empowerment of training provided is implemented directly, both at the beginning of the training through pre-test and post-test at the end of the training so that it can be as a formative and summative evaluation to find out the extent of participants' understanding of the training of writing scientific papers.
A phenomenological approach was adopted for this study, as researchers wanted to gain personal insights about participants' life experiences. The purpose of phenomenology is to understand human experience (Dowling, 2007). This research has explored the experiences of students who have limitations undergoing general vocational training.
The solution to the problems faced is: 1) implementation of training (workshops) in one of the vocational schools in the city of Bandung under the auspices of the local education office, with an agenda matching the theme of the activity; 2) conducting training on scientific article writing and supporting skills; the planned activities include holding workshops, training scenarios using an in-class on-the-job training system (practice in local PGRI groups) and presenting results on the job at the end of the workshop, and providing motivation and creativity through a web-based scientific/research writing competition using science and technology media built with the vocational teachers who are the focus of training and coaching; 3) the targets of this activity are state and private vocational school teachers in the City/District of Bandung, with a participatory approach through: (1) lectures and question-and-answer sessions; (2) demonstration; (3) exercise/practice or intensive tutorials following the specified training schedule; and (4) service-form workshops consisting of (a) formulation of research-based scientific work guidelines, (b) training on writing scientific work using Web media, (c) consultation services for writing scientific papers, and (d) inviting keynote speakers/experts on scientific writing from LLDikti/Kemenristek/other relevant institutions; 4) accordingly, the training material includes the following four things: (1) producing scientific papers and their systematics; (2) the rules of writing scientific articles in accordance with good and correct Indonesian; (3) strategies for finding reference sources in accordance with the rules of scientific writing; and (4) practice in writing scientific papers. This activity is carried out in the form of informative explanations related to methodology and technical editorial reporting delivered through discussion, and the activity also becomes a means of discussion.
Results and Discussion
The depiction of vocational teachers viewed from the aspects of potential, service, educational background, and training, rank, academic position and other existence during their duties, the development of learning and the development of educators' competencies and phenomena that illustrate the relationship between the factual conditions that develop in society with the results of evaluations research in coaching, training, manufacturing and assisting in writing scientific papers can compensate according to the needs of the era of digitalization, the industrial revolution and the future 5.0 according to the vision, mission of each school.
Profile of Vocational Teachers in Strategic Vision and Mission of their profession, has shown readiness to make changes, improvements, fostering and quality of education, teaching and PBM in their school environment internally and externally, as well as accuracy and accuracy of decision making towards management and service functions education in schools in improving their competence and profession, especially related to academic development in the field of writing scientific papers.
Empowering Teacher Potential (creativity and innovation) in Training Techniques, Guiding the Writing of Scientific Papers through Understanding of Intensive Web-Based Knowledge/Research in the Revolutionary Era of Society 5.0, in improving the competency and profession of public and private vocational school teachers, has been applied in harmony with personal, academic and career development, so that it can affect the change in mindset and create a need, in their tasks and lives, for development and education services (Michalko, 2001), (Sternberg, 2012).
The results of the implementation of the workshop of the teachers have been able to apply to the making of various kinds of scientific work to improve understanding of the function of educational values, to the importance of these innovative activities and the technique of writing Scientific work as a teacher has an integrated value function.
Analysis of Internal, External factors; in compiling a planning strategy, a systematic framework, can create activities in writing scientific papers to develop knowledge, increase motivation, creation, response giving birth to a more up to date theoretical framework, relevant to the development of competence science and technology, developing the ability of vocational teachers especially in compiling the work scientific writing, in the field of pedagogic and professional can already be improved (Education, 2012).
Teachers can discover new aspects/concepts of techniques, methods, creativity and innovation in writing scientific papers, improving the development of science, technology, arts and skills for SMK teachers and the effectiveness and quality of their work to be published in national/international accredited and reputable journals. The solution for teachers who have difficulty starting a scientific work is to take training in the form of workshops, beginning with the delivery of material by scientific writing technique experts, followed by the practice of directly looking for research problems to serve as research titles; with formative evaluation, participants can then start writing a scientific work, especially articles/journal papers/CAR. The evaluation of the scientific writing training activities is as follows: (1) the implementation of training activities, in addition to increasing the ability and willingness of teachers to write good and correct scientific work, is also a need that has been eagerly awaited by SMK teachers; (2) scientific writing training activities have received positive responses from teachers, so, as an evaluation, their implementation needs to be extended in the future to groups of teachers in schools that need this training; (3) the contribution of this research activity as a scientific work is expected to produce several propositions that can be used as a reference for researchers who specifically study Teacher Empowerment Potential (Creativity and Innovation) in Training Techniques, Guiding the Writing of Scientific Papers through the Understanding of Intensive Web-Based Knowledge/Research in the Revolutionary Era of Society 5.0 (Nilsson, 2010), (Ferrari et al., 2009).
Conclusion
The results of research on training and coaching in writing research-based/web-based scientific papers serve as benchmarks for determining the "levels" of teacher competence, which in turn help employers of graduates to find out whether graduates match field needs (the real conditions of the industrial revolution and society 5.0). For researchers, the results can be used as a reference: the discovery of theories, concepts and other aspects that can develop education and teaching, policy, and evaluation of implementation in the field of study and learning within the professional duties of lecturers in higher education, especially in Education/Education Management, which is highly relevant to the development of school resource management, organization, personnel, leadership, HR policy, strategic management, and the development of seminar proposals for theses, as well as Decentralization, Paradigm and Management of Education, Assessment of Research Methods or Community Service, and educational statistics in the National and Regional Education Development Plan; the results can also serve as inspiration to be developed further by other researchers.
It is hoped that the assessment of teachers' skills in producing scientific works helps to reveal the extent of vocational school teachers' understanding in Bandung of methods for writing scientific papers in line with industry and community needs. Teachers in principle need guidance and training so that they are skilled and confident in meeting the demands of the times. In essence human memory is limited, so encouragement and stimulation from outside are needed as reminders. The implementation of this training should be used as a means of self-development to improve the professionalism of educators in facing the challenges of future world development. Organizers are expected to further improve the materials and methods provided to the teachers who are the target of the research, namely toward: (1) the realization of SMK human resources (teachers) in writing web-based scientific papers/research and digitalization in multicultural education management and national identity; (2) the realization of research-based and digital learning by strengthening education and national identity in the current era of the Industrial Revolution and Society 5.0 (Srivastava, de Boer, & Pijl, 2015).
Research on empowering teacher potential is practically expected to be an input for those involved in the world of education, especially in vocational schools or other educational environments. For teacher and lecturer educators, the results serve as one of the enrichment guidelines for implementing more innovative educational quality development; as references for producing scientific papers under various strategies and conditions; for competency development and the development of science and technology for social, community and other educational stakeholders; for a strong and responsible willingness and skill to produce scientific papers; and for increasing knowledge and income as a consequence of promotion. The science and technology overview of this study is: 1) improving HR ability comprehensively to optimize performance through developing rewards, strengthening partnership networks with regional governments, related institutions, business and industry, and increasing cooperation with other universities; 2) encouraging and facilitating the development of innovations in various scientific disciplines and their application in the field/community to improve the community's economy; 3) encouraging the ability to compete for the local community's economy at local, national and international levels; 4) developing soft or hard information systems available to or facilitated for the teacher; 5) empowering and developing networking with various institutions and universities abroad; 6) publishing the results of program activities and strengthening the synergy of Tridharma PT, which includes education, research, and community service; 7) prioritizing the Development of the Strategic Plan (Renstra) of Education, implemented on the principles of overall quality development through management modernization, productivity improvement, efficiency, effectiveness, and accountability to the community and stakeholders in need; 8) creating a school climate that provides an environment supporting innovation, which can be very powerful when the teacher seeks original thoughts and ideas rather than merely valid ones; having a learning rather than performance orientation helps to create an environment where creativity is encouraged; 9) making connections: mind mapping is a flexible and powerful tool for representing information and nurturing creative and critical thinking. Originally popularized and developed by Buzan (1986), mind maps are designed to 'utilize various cortical skills' by using keywords, colors, images, numbers, logic, rhythm and spatial awareness. A mind map is a diagram that organizes information visually. It usually consists of a central concept, expressed in keywords or short phrases. Related ideas branch off from this, spreading across the page, which is usually in a landscape format to provide optimal space for ideas to be written. Each main branch that emerges from the central theme can then develop further into related sub-sections.
Semantic network model theories (Collins & Quillian, 1969) help explain why mind maps are effective. Each student has a unique understanding of any subject at any given time, based on their own connections and associations. The act of constructing a mind map creatively requires students to think hard about what they are learning and to build new connections. Students will find it easier to remember information by building representations of their understanding. It is not possible to create mind maps without active involvement and thinking through the mapped constructions. Placing a large amount of information on one page also encourages creativity: learners can make connections between topics that they might not see when learning from solid blocks of text. Mind maps can be used in several ways, including note taking. The act of creating a mind map requires condensing a lot of information and the concepts that connect it, which can help develop understanding and aid memorization; it makes the recording process active rather than passive. At the end of a unit, a teacher may ask students, individually or collaboratively, to make a mind map of what they understand about a topic that has been discussed. By using keywords, students can put large amounts of information onto one page, enabling them to get an overview of a topic and plan information strategically; they can clarify, analyze and respond to problems or questions, which helps learners uncover new perspectives, build higher-level thoughts and develop understanding, analysis, synthesis, and evaluation. Making connections in this way supports the development of holistic and disciplinary understanding by connecting ideas from different topics or different subjects.
Ovariectomy Results in Variable Changes in Nociception, Mood and Depression in Adult Female Rats
Decline in the ovarian hormones with menopause may influence somatosensory, cognitive, and affective processing. The present study investigated whether hormonal depletion alters nociceptive, depressive-like and learning behaviors in experimental rats after ovariectomy (OVX), a common method to deplete animals of their gonadal hormones. OVX rats developed thermal hyperalgesia in the proximal and distal tail that was established 2 weeks after OVX and lasted for the 7 weeks of the experiment. A robust mechanical allodynia also developed at 5 weeks after OVX. In the 5th week after OVX, dilute formalin (5%)-induced nociceptive responses (such as elevating and licking or biting) during the second phase were significantly increased as compared to intact and sham-OVX females. However, mechanical allodynia induced by chronic constriction injury (CCI) of the sciatic nerve did not differ with hormonal status (OVX vs. ovarian-intact). Using formalin-induced conditioned place avoidance (F-CPA), which is believed to reflect the pain-related negative emotion, we further found that OVX significantly attenuated F-CPA scores but did not alter electric foot-shock-induced CPA (S-CPA). In the open field and forced swimming tests, there was an increase in depressive-like behaviors in OVX rats. There was no detectable impairment of spatial performance in the Morris water maze task in OVX rats up to 5 weeks after surgery. Estrogen replacement reversed the OVX-induced nociceptive hypersensitivity and depressive-like behaviors. This is the first study to investigate the impact of ovarian removal on nociceptive perception, negative emotion, depressive-like behaviors and spatial learning in adult female rats in a uniform and standardized way.
Introduction
Circulating ovarian hormones not only play a pivotal role in reproductive behavior and sexual differentiation, they also contribute to emotion, memory, neuronal survival and the perception of somatosensory stimuli [1][2][3]. Ovarian hormones have been shown to alter nociceptive behaviors using a variety of models [4][5][6][7][8]. However, the findings of these investigations have been variable. Depending on the type of noxious stimulation, behavioral test employed, species and strain of the animal and periods from ovariectomy (OVX), gonadectomy increased [4,7,9], decreased [10,11] or had no effect [12] on nociceptive responses.
Ovarian hormones have also been suggested to regulate affective disorders and learning and memory beyond their role in pain modulation [3]. Human studies have demonstrated that the lifetime prevalence of mood disorders is approximately two times higher in women than in men [13,14]. The increased risk of affective disorders in women is related to hormonal changes in women who are premenstrual, postpartum or hypoestrogenic due to medical surgery or menopause [15]. Data from animal experiments have also shown that OVX increases depressive-like behavior in several tasks [16,17]. In addition, both very low and very high estradiol levels have been associated with impaired spatial ability [18,19].
In humans, menopause causes depletion of estrogens, whereas in experimental animals OVX is a common method to deplete animals of their gonadal hormones. In females the absence of the ovaries induces a drastic decrease of circulating estrogens [20]. Our previous studies demonstrated that exogenous 17β-estradiol acutely enhanced excitatory synaptic transmission in spinal dorsal horn and anterior cingulate cortex (ACC) neurons and facilitated nociceptive responses and pain-related aversion in intact rats of either sex [21,22]. In this study, we further compared intact female rats to OVX ones to determine whether hormonal depletion significantly modifies nociceptive, depressive-like and learning behaviors under controlled experimental conditions. The tail-flick test was used to determine the thermal pain threshold; the Von Frey test was used to measure the mechanical response threshold; the formalin test was used to examine tonic pain. Chronic constriction injury (CCI) of the sciatic nerve was used to induce neuropathic pain. Formalin-induced conditioned place aversion (F-CPA) and foot-shock-induced CPA (S-CPA) were used to reflect pain-related and fear-related aversion, respectively. The forced swimming test and open-field test (OFT) were used to evaluate depressive-like behaviors [23]. The Morris water maze was used to test spatial learning ability.
Experimental Animals
Adult female Sprague Dawley rats (weighing 200-220 g on arrival, from the Experimental Animal Center of the Chinese Academy of Sciences) were housed in groups in a temperature- and humidity-controlled room with a 12:12 light-dark cycle (lights on 06:00) and with food and water available ad libitum. All animal experiments were approved by the Shanghai Animal Care and Use Committee and followed the policies issued by the International Association for the Study of Pain on the use of laboratory animals [24]. To control for possible effects of time of day, rats were trained and tested at approximately the same time of day (3-6 h after lights on). Gonadally intact females were randomly cycling. All the behavioral testing described herein was performed by the same experimenter, blinded to the group assignment, to minimize between-experimenter variability.
Surgery
Ovariectomies (OVX) were performed under isoflurane anesthesia. Anesthesia was confirmed by a reduced respiratory rate and lack of response to gentle pinching of the foot pad. A mid-ventral incision was made, and the bilateral ovaries and ovarian fat were removed. The ovaries were isolated by ligation of the most proximal portion of the oviduct before removal. The same procedure was carried out for the sham groups except for the removal of the ovaries. The surgical incision was sutured and postsurgical recuperation was monitored daily.
Chronic constriction injury (CCI) of the sciatic nerve was performed according to previous protocols [25]. The right sciatic nerve was exposed at the mid-thigh level, and four chromic gut (4-0) ligatures were tied loosely around the nerve approximately 1 mm apart, proximal to its trifurcation. For sham group rats, the sciatic nerve was isolated without ligation. The surgical incision was sutured and postsurgical recuperation was monitored daily.
Oestrous cycle evaluation
Stages of the oestrous cycle (diestrus, proestrus, oestrus and metestrus) in female rats [26] were determined using vaginal smears collected about 2 h after lights on every day. Cycles were followed for at least 2 weeks before animal treatment. Only those who maintained at least two consecutive 4-or 5-day oestrous cycles were considered to be regulatory cycling and were used in the present study.
Plasma estrogen concentration and weight of body and uterus
Rats were anesthetized with isoflurane and blood samples were collected by femoral artery puncture and put into tubes pretreated with heparin. The samples were let stand for one hour before centrifuging at 14,000 rpm for 10 minutes. The blood serum was then collected and stored at −80°C until use. Serum estradiol levels were determined by double-antibody radioimmunoassay kits according to protocols provided by the manufacturer (National Atomic Energy Research Institute, Beijing, China).
The body weight of each animal was recorded every week throughout the experiment and the uterus weight was measured postmortem at the end of the 5 th week as indices of the efficiency of the OVX.
Estrogen replacement
Estrogen replacement was carried out in the OVX rats 4 weeks after surgery by subcutaneous (s.c.) injection of 17β-estradiol (E2, 30 μg/day for 7 days) in the dorsal neck region. The same volume of vehicle (sesame oil) was injected into OVX rats daily as a control. Behavioral tests were carried out in the 1st week after E2/vehicle treatment.
Von Frey test
The hind paw withdrawal threshold (PWT) was determined using a calibrated series of Von Frey hairs (Stoelting, IL, USA) ranging from 2 to 26 g. Rats were placed individually into wire mesh-bottom cages. A series of calibrated Von Frey hairs was applied to the plantar surface of the hindpaw in ascending order (2, 4, 6, 8, 10, 15, and 26 g) with sufficient force to bend the hair for 2 s or until paw withdrawal. A withdrawal response was considered valid only if the hindpaw was completely removed from the customized platform. Each hair was applied 5 times, and the minimal value that caused at least 3 responses was recorded as the PWT.
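The decision rule above (smallest hair force evoking at least 3 of 5 withdrawals) can be expressed as a short sketch. The snippet below is an illustrative assumption of how such scoring might be automated; in particular, assigning the 26 g cutoff when no hair reaches criterion is our assumption, not a procedure stated by the authors.

```python
# Sketch of the PWT rule described above: hairs are applied in ascending order, five
# times each, and the smallest force evoking >= 3 withdrawals is taken as the PWT.
# Data layout, names and the cutoff assignment are illustrative assumptions.

VON_FREY_FORCES_G = [2, 4, 6, 8, 10, 15, 26]  # calibrated hair forces in grams
CUTOFF_G = 26                                  # assumed value if no hair reaches criterion

def paw_withdrawal_threshold(responses: dict[int, list[bool]]) -> int:
    """responses maps hair force (g) -> list of five True/False withdrawal outcomes."""
    for force in VON_FREY_FORCES_G:
        if sum(responses.get(force, [])) >= 3:
            return force
    return CUTOFF_G

# Example: this animal reaches criterion (3/5 withdrawals) at the 10 g hair.
example = {
    2:  [False, False, False, False, False],
    4:  [False, False, True,  False, False],
    6:  [False, True,  False, False, True],
    8:  [True,  False, True,  False, False],
    10: [True,  True,  False, True,  False],
}
print(paw_withdrawal_threshold(example))  # -> 10
```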
Tail Flick Tests
Rats were gently handled on the platform of the tail-flick testing apparatus (BME-410C, Institute of Biomedical Engineering, CAMS, Tianjin, China) with their tails exposed to the thermal stimulus. Radiant heat was applied to the proximal (3.0 cm from the root) or distal (3.0 cm from the tip) part of the rat's tail to evoke the tail-flick (TF) reflex. When a "flick" reaction occurred, an on-board sensor turned off the bulb, the seconds counter stopped, and the tail-flick latency was determined to the nearest 0.01 s. The intensity of the radiant heat was adjusted to elicit a baseline TF latency (TFL) of approximately 4-5 s. The cutoff time was 10 s to avoid tissue damage.
Von Frey and Tail Flick tests were performed in the same groups of OVX/sham/naïve rats, started with Von Frey test followed by TFL test with an interval of 2 hours in home cages.
Formalin Test
Formalin (5%, 50 μL) was injected intraplantarly (i.pl.) into the unilateral hindpaw. The time spent lifting and licking the affected paw during each 5-min interval for 45 min after injection was recorded. A mirror was positioned below the chamber at a 45° angle for unobstructed observation of the rat's paws. The responses to formalin injection were manually monitored by measuring the time the animal spent lifting, licking, and shaking the affected paw. The behaviors of each animal were simultaneously monitored by a video camera. A weighted pain score for each animal was calculated using the following formula [27]: formalin pain score = [time spent elevating the injected paw + 2 × (time spent licking or biting the injected paw)]/300. Different groups of OVX/sham/naïve rats were used for this test.
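A worked example of the weighted score defined above, for one 5-min (300 s) scoring bin, is sketched below; the variable and function names are illustrative.

```python
# Worked example of the weighted formalin pain score defined above:
# score = (time elevating + 2 * time licking/biting) / 300, per 5-min bin.

BIN_LENGTH_S = 300  # each scoring bin is 5 min = 300 s

def formalin_pain_score(elevating_s: float, licking_biting_s: float) -> float:
    return (elevating_s + 2 * licking_biting_s) / BIN_LENGTH_S

# e.g. 60 s spent elevating and 90 s spent licking/biting within one bin
print(formalin_pain_score(60, 90))  # -> (60 + 2*90)/300 = 0.8
```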
Conditioned place aversion
Conditioned place aversion (CPA) was conducted as described previously [28]. The place conditioning apparatus consists of three opaque acrylic compartments. Two large ones are conditioning compartments (30×30×30 cm) and a smaller one is a neutral choice compartment (15×20×30 cm, length × width × height). The conditioning compartments are placed in parallel and separated by a wall with a square door (10×10 cm). The neutral compartment is laid in front of the two conditioning compartments with two doors (10×10 cm) leading to them. A movable transparent ceiling covers each compartment. The two conditioning compartments are both painted black, but one is decorated with a transverse white band and contains an odor produced by 1.0% acetic acid; the other is decorated with a white vertical band and has an odor of cinnamon. The floors of the conditioning compartments are also different: one is made from Plexiglas, and the other from a polyester board with a metal band on it, which can deliver an electric shock. Thus, the two conditioning compartments have distinct visual, olfactory and tactile cues. The neutral compartment is white, has no distinctive odor and has a solid acrylic floor with a slope. Under each of the floors of the conditioning compartments there was a spring balance. When the animal stepped on the floor, it induced movements of the balance, and the mechanical energy was transformed into electrical signals. The signal triggered a timer, which automatically recorded the time the animal spent in that compartment.
The experimental process consists of three distinct sessions: a preconditioning session, a conditioning session and a test (postconditioning) session. The CPA task takes 4 days. Day 1 is the preconditioning day. At the beginning, a rat was placed in the neutral compartment. After habituating for 2 min, the entrance to each conditioning compartment was opened. When the rat entered either conditioning compartment, the doors connecting the neutral and conditioning compartments were closed. The rat was allowed to explore the two conditioning compartments freely for 15 min. A timer automatically recorded the time spent in each of the compartments in a blind manner. Rats that spent >80% (720 s) of the time on one side on that day were eliminated from the subsequent experiments [27,28]. Days 2 and 3 are conditioning days. For the formalin-induced CPA (F-CPA) experiment, the rat received a unilateral hindpaw intraplantar injection of normal saline (NS, 50 μl) on day 2 and was randomly confined to one of the conditioning compartments for 45 min. On day 3, the rat was given a unilateral hindpaw intraplantar injection of 5% formalin (50 μl) or NS (control) and then restrained in the other conditioning compartment for 45 min. For the electric foot-shock-induced CPA (S-CPA) experiment, the rat received no treatment on day 2 and was randomly confined to one of the conditioning compartments for 45 min. On day 3, the rat received an electric shock (0.5 mA for 2 s) every 8-10 min in the other conditioning compartment during the 45-min training session. The compartments were counterbalanced. Day 4 is the postconditioning day, and the procedure is the same as on day 1. The time animals spent in each compartment was measured. Different groups of OVX/sham/naïve rats were used for the F-CPA and S-CPA tests.
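The exclusion criterion stated above (>80% of the 15-min preconditioning session, i.e. >720 s, on one side) and the CPA score (defined later in the Results as the time in the treatment-paired compartment before minus after conditioning) can be captured in a short sketch; the function names below are illustrative assumptions.

```python
# Sketch of the CPA bookkeeping described above. Names are illustrative assumptions.

PRECONDITIONING_S = 900   # 15-min free exploration on day 1
EXCLUSION_LIMIT_S = 720   # >80% of 900 s on one side -> animal excluded

def passes_preconditioning(time_side_a_s: float, time_side_b_s: float) -> bool:
    """False if the animal showed a strong initial side bias (>720 s on one side)."""
    return max(time_side_a_s, time_side_b_s) <= EXCLUSION_LIMIT_S

def cpa_score(pre_paired_s: float, post_paired_s: float) -> float:
    """Preconditioning minus postconditioning time in the treatment-paired compartment."""
    return pre_paired_s - post_paired_s

print(passes_preconditioning(500, 400))  # True: no disqualifying bias on day 1
print(cpa_score(430, 250))               # 180 s of avoidance of the paired side
```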
Open-field test
The open field apparatus consisted of a grey Plexiglas quadratic box (100×100×40 cm), which was evenly illuminated to 15 lux. The bottom of the box was divided into 16 squares. Each rat was placed in the center of the apparatus and allowed to explore the field for 3 min. During the test time, the number of crossings (defined as at least three paws in a quadrant) and rearing (defined as the animal standing upright on its hind legs) were measured. The behaviors of each animal were simultaneously monitored by a video camera. The test apparatus was cleaned with a 10% ethanol solution and water to remove any olfactory cues between animals.
Forced swimming test
The design of the forced swimming test was adapted from previous description [29]. Briefly, rats were forced to swim individually in a cylindrical glass container (40 cm height, 18 cm diameter), which contained tap water (24±2°C) to a depth adjusted for the weight of the individual animal, so that its hind paws could just touch the bottom of the container. After 15 min in the water, rats were removed and allowed to dry for 15 min in a heated container before being returned to their home cages. Rats were replaced in the cylinders 24 h later and total duration of immobility (immobility time) and activity (activity time) was measured during a 5-min test. The behaviors of each animal were simultaneously monitored by a video camera. Immobility was defined as the rat not making any active movements other than those necessary to keep the head and nose above the water (e.g., when rats were floating in a vertical position). Activity included climbing (presenting active movements with the forepaws in and out of the water, usually directed against the walls) and swimming (showing active swimming motions, more than those necessary to keep the head above water, i.e. moving around in the cylinder or diving).
Open-field and forced swimming tests were performed during two consecutive days in the same groups of OVX/sham/naïve rats. Day 1 started with open-field test (OFT) followed by 15-min forced swimming training with a rest for 2 hours in home cages. On Day 2, forced swimming test was performed.
Morris water maze task
The water maze consists of a black round tank, which has a diameter of 150 cm and height of 54 cm and is filled with water (24±2°C) to a depth of 38 cm. The water is made opaque by black food coloring so that the submerged platform (9.0 cm in diameter, 2.0 cm below the water surface) is invisible. Fixed, extra-maze visual cues were present at various locations around the maze (i.e., computer, video camera, posters) in the room with constant brightness (25 lux). The training and testing protocols were essentially as described [28]. The training procedure consists of two sessions with a 30 min interval in between, each session consisting of six consecutive trials. The submerged platform is located at the central position of the southeast quadrant of the tank. The starting position is randomly selected, but counterbalanced among the four positions. A rat was allowed to search for the submerged platform for 60 s. If successful in locating the platform within 60 s, the rat was allowed to stay on the platform for 30 s; if not, it was directed to the platform and allowed to stay there for 30 s. Thereafter, the rat was returned to a holding cage. The next trial began after an intertrial interval of 30 s.
A three-trial retention test was conducted 24 h after the training. For each rat at each trial, the submerged platform was fixed at the target quadrant and the starting point was at the position opposite to it. Each rat was given 60 s to locate the submerged platform. If successful in finding the platform within 60 s, the rat was immediately returned to a holding cage for 60 s before the next trial began. If unsuccessful in locating the platform within 60 s, the rat was directed to the platform and allowed to stay there for 30 s, and then was returned to a holding cage for 30 s before the next trial.
Immediately after the retention test was completed, the rat was tested in a visible platform version of the Morris water maze. The platform was raised to above the water surface and covered with white gauze to make it highly visible. Each animal was placed on the visible platform for 30 s before testing. The starting position for any given rat from the groups was selected randomly, but once selected it was fixed for that rat, whereas the visible platform was randomly placed among the four quadrants. The rat was allowed to locate the visible platform for 60 s in each trial. If successful in finding the platform, the rat was returned immediately to a holding cage; if not, the rat was removed from the water and returned to a holding cage. The next trial began after an intertrial interval of 60 s. A total of three trials were conducted for each rat. Navigation of each animal in the water maze was monitored using a video camera, a tracking system and tracking software (Poly-Track Video Tracking System, San Diego Instruments). Using the tracking software, escape latency, swimming traces and speed were recorded for subsequent analysis.
Statistical analysis
Data are presented as mean ± SEM. Student's t-test, paired t-tests, one-way ANOVA, two-way ANOVA or repeated measures ANOVA (RM ANOVA) followed by post hoc Student-Newman-Keuls tests were used to identify significant differences. In all cases, P < 0.05 was considered statistically significant.
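Purely as an illustration of the kinds of group comparisons listed above (the authors do not name their software here, and SciPy, the toy data and variable names are our assumptions), a minimal Python sketch follows.

```python
# Illustrative sketch of the group comparisons described above using SciPy.
# Toy data and names are assumptions; post hoc Student-Newman-Keuls testing would
# require a dedicated multiple-comparison routine and is not shown.
import numpy as np
from scipy import stats

intact = np.array([4.8, 5.1, 4.6, 5.0, 4.9])   # e.g. tail-flick latencies (s)
sham   = np.array([4.7, 5.0, 4.8, 4.9, 5.1])
ovx    = np.array([3.9, 4.1, 3.7, 4.0, 3.8])

# Two-group comparison: Student t test (normal data) or Mann-Whitney (non-normal)
print(stats.ttest_ind(intact, ovx))
print(stats.mannwhitneyu(intact, ovx))

# Three-group comparison: one-way ANOVA
print(stats.f_oneway(intact, sham, ovx))
```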
Results
Mechanical response thresholds and thermal pain thresholds were not affected by estrous cycle
Intact female rats were primarily classified as in proestrus, estrus, metestrus or diestrus on the day of testing according to the cellular characteristics of their vaginal smears. All rats had 4-5 day cycles. Baseline measures of paw withdrawal thresholds (PWTs) to von Frey hairs on both hindpaws did not differ across estrous phases (One-way ANOVA, Left: F(3,39) = 0.0916, p = 0.964; Right: F(3,39) = 0.0732, p = 0.974) (Fig. 1A). Radiant heat-evoked TF reflexes were measured in the proximal and distal tail with a time interval of 15 min between tests. Similar to PWTs, neither proximal nor distal tail-flick latencies (TFLs) were altered by estrous cycling (One-way ANOVA, distal: F(3,39) = 0.323, p = 0.809; proximal: F(3,39) = 0.108, p = 0.955) (Fig. 1B).
Surgical ovariectomies were confirmed by measurement of plasma estrogen concentration, uterus weight and vaginal smears. After OVX, the regular 4-day estrus cycle disappeared within one week and the plasma estrogen concentration decreased significantly at all the time-points we examined (One-way RM ANOVA, F(8,63) = 10.676, p < 0.001) (Fig. 2G). In addition, consistent with previous reports [30], OVX caused a significant decrease in uterus size and increase in body weight (Fig. 2H and 2I).
Ovariectomy enhanced formalin-induced tonic pain but did not affect nerve injury-induced neuropathic pain

Intraplantar (i.pl.) injection of 5% formalin (50 µL) into a unilateral hindpaw produced a typical two-phase nociceptive behavioral response including licking, shaking, elevating and clutching, and favoring the affected paw. An early response (phase 1) lasted about 5 min, followed by a 5-10 min period of decreased activity, and then a late response (phase 2) lasted about 40 min. As shown in Fig. 3A, OVX rats showed higher pain scores than intact and sham rats. Two-way RM ANOVA revealed a significant effect of the OVX factor (F(2,24) = 6.252, p = 0.001). In E2-treated OVX rats, formalin pain scores were significantly decreased as compared to vehicle controls (Two-way RM ANOVA, F(1,14) = 8.402, p < 0.01) (Fig. 3B).
Surgery for chronic constriction injury (CCI) of the sciatic nerve was performed at 4 weeks after OVX or sham-OVX. Strong mechanical allodynia developed in the ipsilateral hindpaw by day 5 in both OVX and sham-OVX rats (Fig. 3C). No statistical difference in PWTs was found between the OVX and sham-OVX groups.
Ovariectomy attenuated pain-related aversion but did not affect fear-related aversion

Intradermal injection of diluted formalin is painful in humans [31]. The fact that formalin injection produces both conditioned place aversion (CPA) and other nociceptive behaviors in animals indicates that formalin is aversive to animals in a manner resembling the response to noxious stimuli in humans. Thus, formalin-induced CPA (F-CPA) is believed to reflect the emotional component of pain [32]. When a unilateral i.pl. injection of formalin (5%, 50 µL) was paired with a particular compartment in the place-conditioning apparatus, all the intact, sham-OVX and OVX rats spent less time in this compartment on the post-conditioning day compared with the pre-conditioning day (Paired t-test, intact: t(0.01,14) = 5.517, P < 0.001; sham-OVX: t(0.01,14) = 5.488, P < 0.001; OVX: t(0.01,14) = 3.701, p = 0.002). Control animals with an i.pl. injection of NS did not exhibit CPA (Fig. 4A). However, the CPA score [the time spent in the treatment-paired compartment on the pre-conditioning day minus the time spent in the same conditioning compartment on the post-conditioning day] in the OVX group was significantly lower than that in the intact and sham-OVX groups (Fig. 4B) (One-way ANOVA, F(2,42) = 3.884, P = 0.028), implying that ovarian hormones in adult female rats may be involved in pain-related negative emotion.
Electric foot-shock at a low current intensity (0.4-0.8 mA) is routinely used as a fear-conditioning stimulus. When an electric foot-shock (0.5 mA) was paired with a particular compartment in the place-conditioning apparatus, the rats spent significantly less time in this compartment on the post-conditioning test day as compared with the pre-conditioning test day (Fig. 4C). The CPA scores were not significantly different between the OVX and control (intact and sham-OVX) groups (Fig. 4D) (One-way ANOVA, F(2,21) = 0.0776, P = 0.926), suggesting that ovarian removal did not affect fear-related aversion.

Ovariectomy increased depressive-like behaviors but did not impair spatial performance in the Morris water maze

The forced swimming test was used to evaluate behavioral despair. Depressive-like behavior (behavioral despair) was defined as an increase in the time (in seconds) spent immobile. At the 5th week after OVX, rats showed a significant increase in immobility time and decrease in activity time as compared to sham-OVX and intact rats (One-way ANOVA, immobility: F(2,49) = 4.771, P = 0.013; activity: F(2,49) = 9.954, P < 0.001). Following E2 treatment (s.c., 30 µg/day for 7 days from the 4th week after OVX), rats showed decreased immobility and increased activity time at the 5th week as compared to vehicle-treated OVX rats (Student's t-test, immobility: t(0.05,18) = 2.728; activity: t(0.05,18) = 2.730) (Fig. 5A and 5B).
The open-field test (OFT) was used to assess locomotion and exploratory behavior. At the 5th week after OVX, rats showed a significant decrease in the number of crossings and rearings as compared to sham-OVX and intact rats (One-way ANOVA, crossings: F(2,49) = 3.264, P = 0.047; rearings: F(2,49) = 4.828, P = 0.012). A similar change was also seen in vertical activity (One-way ANOVA). Following E2 replacement, the number of crossings was significantly increased as compared to vehicle-treated OVX rats (Student's t-test, t(0.05,18) = 2.327). E2 treatment also increased the number of rearings of OVX rats, although this increase did not reach statistical significance (Fig. 5C and 5D).
At the 5th week after OVX or sham-OVX surgery, rats were run in the Morris water maze. As shown in Figure 5E, all three groups of rats (intact, OVX and sham-OVX) were able to learn to find the submerged platform in the Morris water maze; the escape latencies became shorter with increasing numbers of training trials in all groups (Two-way RM ANOVA, groups: F(2,21) = 0.697, p = 0.514; group x trials: F(22,231) = 0.828, p = 0.688). Moreover, when memory retention was tested 24 h after training, the OVX rats displayed escape latencies similar to those of the intact and sham-OVX rats (Fig. 5E). To evaluate the visual ability of the rats, a visible platform version of the test was performed. The three groups showed no differences in performance in the visible platform test, nor in swimming speed in the training and retention trials (Fig. 5F and 5G).
Discussion
Numerous studies on both animals and human subjects have demonstrated the potential effects of ovarian hormones on pain transmission, emotion, learning and memory, but the literature is not unanimous [1,3,14]. In the present study, we observed the impacts of ovarian removal on nociception, negative emotion and spatial learning and memory simultaneously under the same experimental conditions, providing a series of normalized behavioral data.

Figure 3. OVX enhanced formalin-induced nociceptive responses during phase 2 but did not affect CCI-induced mechanical allodynia. (A) OVX rats showed higher formalin pain scores during phase 2. * p < 0.05, ** p < 0.01 versus sham. (B) Subcutaneous injection of E2 (30 µg/day for 7 days) significantly suppressed formalin pain scores during phase 2. * p < 0.05, ** p < 0.01 versus vehicle control. E2 and vehicle were injected from the 4th week after OVX, and the formalin test was performed at the 5th week. (C) Both sham-OVX and OVX groups developed mechanical allodynia on day 7 after CCI. There was no significant difference in PWTs between sham and OVX rats. CCI surgery was performed at 4 weeks after sham-OVX or OVX. ## p < 0.01.
Ovariectomy and pain
One of the main observations of our study is that ovariectomy (OVX) of adult female rats induced a robust nociceptive hypersensitivity characterized by mechanical allodynia in the hindpaws and thermal hyperalgesia in the proximal and distal tail, as well as enhanced formalin pain scores. We were unable to detect significant differences in nerve injury-induced mechanical allodynia between OVX and sham-OVX rats using the CCI model. Following unilateral CCI, all intact, sham-OVX and OVX rats developed mechanical allodynia within 5 days without differences in the paw withdrawal thresholds (PWTs) of the three groups, which is consistent with a previous study [8]. We also observed no estrous cycle differences in PWTs and tail-flick latencies (TFLs) in normal animals, confirming Sanoja and Cervero's study in adult female mice [7].
Although there are studies reporting no changes or decreases in sensitivity to nociceptive stimuli following OVX [10,12], there are many more studies demonstrating somatic and visceral hyperalgesia in OVX animals. For example, Sanoja and Cervero [7,33] described that OVX induced increased visceral sensitivity and robust mechanical allodynia and hyperalgesia in the abdomen, hindlimbs and tail in mice. Bradshaw and Berkley [34] observed variable amounts of vaginal hyperalgesia measured by escape responses to vaginal distension, and Ceccarelli et al. [4] reported that OVX rats showed longer pain responses in the formalin test. In the present study, we further showed that estrogen replacement for 1 week significantly reversed OVX-induced nociceptive hypersensitivity, suggesting that estrogen supplementation produces antinociception in OVX animals. Consistently, previous studies have demonstrated that estradiol significantly decreased formalin-induced nociceptive responses in OVX rats [35][36][37]. Also, an antinociceptive effect of estradiol on adjuvant-induced hyperalgesia has been reported [38]. Very recently, a study by Yan and colleagues indicated that estrogen can trigger vagus-mediated antinociception in a rat model of visceral pain [39]. Taken together, a decrease in plasma estrogen level may parallel an increase in nociceptive sensitivity, while an elevated estrogen level may antagonize the pain facilitation induced by ovarian hormone depletion.
Intriguingly, we observed a significant thermal hyperalgesia in the tail at the 2nd week, lasting for 7 weeks after OVX, paralleling the time course of the decrease in plasma estrogen levels. However, mechanical allodynia in the hindpaws took 5 weeks to develop, implying differences in the effects of endogenous ovarian hormones on mechanical and thermal nociceptive processing. In contrast, Ma et al. observed that a robust decrease in PWTs in the von Frey test appeared at the second week and persisted for at least 6 weeks after OVX without associated thermal hyperalgesia [9], while a study by Chen et al. showed that both mechanical allodynia and thermal hyperalgesia were detected from the first to the 8th week after OVX [40]. It is worth noting that heat thresholds may be influenced by skin temperature. It has been reported that OVX significantly elevates tail skin temperature in rats [41]. In the present study, it could not be ruled out that elevated tail skin temperature might have contributed to the thermal hyperalgesia of OVX rats in the TF reflex test.
Pain is a complex experience that incorporates both sensory and affective dimensions. As mentioned above, intradermal injection of diluted formalin produces both conditioned place aversion (CPA) and a two-phase nociceptive behavioral response in rats. In the present study, we observed that OVX produced a significant facilitation of formalin nociceptive behaviors during phase 2 and a decreased F-CPA score, implying that ovarian hormones in adult female rats may be involved not only in pain sensation but also in pain-related negative emotion. It was reported that subcutaneous injection of 17β-estradiol (E2) induced conditioned taste aversion in rats of both sexes [42]. Our previous study further demonstrated that microinjection of E2 into the bilateral rostral anterior cingulate cortex (rACC) induced CPA by upregulating NMDA receptor function, and that blockade of estrogen receptors by ICI 182,780 prevented F-CPA [21]. The present study further demonstrated that depletion of ovarian hormones markedly suppressed the aversive emotion evoked by formalin noxious stimulation. In contrast, OVX did not influence foot-shock-induced CPA (S-CPA), a fear-conditioning task. Thus, our present results that OVX suppressed F-CPA but did not affect S-CPA suggest a particular role of ovarian hormones in pain-related aversive learning but not in fear learning processes.
Ovariectomy and depression
The role of ovarian hormones in the regulation of affective disorders has been established, particularly in vulnerable women [43]. In fact, a higher incidence and, in some cases, severity of depression is associated with the presence or absence of ovarian hormones. A three- to five-fold higher incidence of major depression is reported in perimenopause than in other periods of reproductive life [44]. Depressive-like behaviors in rodents, as a consequence of ovarian hormone withdrawal, have also been reported. For instance, in the forced swim test and tail suspension test, an increase in immobility time and a decrease in active behaviors, which are thought to be indicative of depressive-like behaviors, have been observed 2-4 weeks after OVX in both rats and mice, and substitution treatment with E2 partially attenuated these parameters [16,17,45,46]. The present results showed decreased exploratory and active behaviors in the open-field test (OFT) and increased immobile behavior in the FST at 5 weeks after OVX, and E2 replacement significantly reversed these behavioral disorders. These data confirm the role of ovarian hormones, especially estrogen, in the regulation of depressive-like affective disorders.
With regard to the development of depressive-like behaviors, there have been different reports depending on animal species/strains, behavioral test paradigms and test periods. Estrada-Camarena et al. reported that immobile behavior was observed only at 1 week but not at 3 and 12 weeks after OVX in female Wistar rats [30]. Another study in female Wistar rats showed that immobile behavior in the FST 15 months after OVX did not differ from sham-OVX [47]. In adult female Long-Evans rats, anxiety- and depressive-like behaviors were observed 6 months after OVX [15]. In the current study, OVX-induced depressive-like behaviors were observed at the 5th week after surgery in adult female Sprague Dawley rats, enriching previously published studies.
Ovariectomy and learning and memory

E2 has been reported to promote the formation of new dendritic spines and excitatory synapses in the hippocampus [48][49][50]. E2 also increases hippocampal synaptic strength [51]. Our recent study observed similar changes in the cingulate cortex [21]. These results suggest that hippocampus- or cortex-dependent learning and memory are associated with ovarian hormones. Indeed, in humans [52,53] and animals [54,55], long-term ovarian hormone loss following surgical menopause has been reported to impair cognition and learning and memory. The Morris water maze is an experimental method commonly used to evaluate spatial learning and memory in animal models [56]. Although some studies showed impaired spatial performance in OVX rats and improved efficiency of spatial learning with E2 therapy [57,58], we did not observe a spatial ability deficit in OVX rats in the Morris water maze test in the present study. Consistently, Herlitz et al. showed that there were no considerable differences in cognitive performance between premenopausal and postmenopausal women [59]. A possible explanation for this discrepancy is that the cognitive impairment after OVX may be delayed for a longer period of time. A previous study by Markowska and Savonenko [55] indicated that after OVX the cognitive impairment was gradual (taking several months to be detected), initially occurring in tasks that placed more demands on working memory, and then detected progressively in the easier tasks. A deficit first occurred 4 months after OVX in working memory, while even up to 9 months no differences in spatial reference memory were observed [55]. In our current study, OVX and intact or sham-OVX rats had similar spatial ability in the Morris water maze task at the 5th week after surgery. In support of this, a decreased synaptic strength at hippocampal CA3-CA1 synapses was found only in rats with long-term ovarian hormone loss (5 months after OVX) but not in short-term OVX (7-10 days), suggesting that OVX-induced hippocampus-dependent learning and memory deficits might be delayed [60].
Investigation of aerosol-cloud-rainfall association over Indian Summer Monsoon region
Monsoonal rainfall is the primary source of surface water in India. Using 12 years of in-situ and satellite observations, we examined the association of aerosol loading with cloud fraction, cloud top pressure, cloud top temperature, and daily surface rainfall over the Indian summer monsoon region (ISMR). Our results showed positive correlations between aerosol loading and cloud properties as well as rainfall. A decrease in outgoing longwave radiation and an increase in reflected shortwave radiation at the top of the atmosphere with an increase in aerosol loading further indicate a possible seminal role of aerosols in the deepening of cloud systems. Significant perturbation in liquid- and ice-phase microphysics was also evident over ISMR. For the polluted cases, delay in the onset of collision-coalescence processes and enhancement in the condensation efficiency allow more condensate mass to be lifted up to the mixed-colder phases. This results in a higher mass concentration of bigger-sized ice-phase hydrometeors and, therefore, implies that the delayed rain processes eventually lead to more surface rainfall. Numerical simulation of a typical rainfall event case over ISMR using a spectral bin microphysics scheme coupled with the Weather Research and Forecasting model (WRF-SBM) was also performed. The simulated microphysics also illustrated that the initial suppression of warm rain, coupled with an increase in updraft velocity under high aerosol loading, leads to enhanced supercooled liquid droplets above the freezing level and more ice-phase hydrometeors, resulting in increased accumulated surface rainfall. Thus, both observational and numerical analyses suggest that high aerosol loading may induce cloud invigoration and thereby increase surface rainfall over the ISMR. While meteorological variability influences the strength of the observed positive association, our results suggest that the persistent aerosol-associated deepening of cloud systems and intensification of surface rain amounts are applicable to all the meteorological sub-regimes over the ISMR. Hence, we believe that these results provide a step forward in our ability to address aerosol-cloud-rainfall associations based on satellite observations over ISMR.
Introduction
Aerosol-cloud-rainfall interactions and their feedbacks pose one of the largest uncertainties in understanding and estimating the anthropogenic contribution of aerosols to climate forcing [Forster et al., 2007; Lohmann and Feichter, 2005]. A fraction of aerosol particles gets activated as cloud condensation nuclei (CCN) to form the fundamental requisite for cloud droplet formation. Thus, perturbations in regional aerosol loading not only influence the radiation balance directly but also indirectly via perturbing cloud properties and thereby the hydrological cycle [Ramanathan et al., 2001].
Increase in aerosol loading near cloud base decreases the cloud droplet size and increases the cloud droplet number concentration [Fitzgerald and Spyers-Duran, 1973; Squires, 1958; Squires and Twomey, 2013; Twomey, 1974; 1977; Warner and Twomey, 1967]. These microphysical changes initiate many feedbacks. The narrowing of the droplet size distribution was suggested to delay the onset of droplet collision-coalescence processes, thereby enhancing the cloud lifetime [Albrecht, 1989] and delaying raindrop formation [Khain, 2009; Rosenfeld, 1999; 2000]. However, recent studies show that aerosol-induced initial-stage suppression of raindrop formation provides the feedback mechanism for a change in microphysical-dynamical coupling within convective clouds, and results in the formation of deeper and wider invigorating clouds [Andreae et al., 2004; Koren et al., 2005]. For convective clouds with a warm base, the activation and water supply all start in the warm part near the cloud base. The enhancement in droplet condensation releases more latent heat and, therefore, enhances updraft [Dagan et al., 2015; Pinsky et al., 2013; Seiki and Nakajima, 2014]. At the same time, smaller droplets will have smaller effective terminal velocity (i.e. better mobility) and, therefore, will be lifted higher in the atmosphere by the enhanced updrafts [Heiblum et al., 2016; Ilan et al., 2015]. Stronger updrafts and smaller effective terminal velocity result in more liquid mass being pushed up to the mixed and cold phases. Smaller-sized droplets will freeze higher in the atmosphere [Rosenfeld and Woodley, 2000], releasing the freezing latent heat in a relatively colder environment, boosting the updrafts and further invigorating the cloud system [Andreae et al., 2004; Khain et al., 2008; Koren et al., 2005]. Hence, aerosol abundance can eventually cause intensification of precipitation rate due to the cloud invigorating effect under convective conditions [Koren et al., 2012; Li et al., 2011]. In contrast, under low cloud fraction conditions, the presence of a high concentration of absorbing aerosols induces an aerosol semi-direct effect causing cloud inhibition [Ackerman et al., 2000; Koren et al., 2004; Rosenfeld, 1999] and thereby reduction in surface rainfall. Thus, the aerosol-cloud associations observed over any given region are the net outcome of these competing aerosol effects on clouds [Koren et al., 2008; Rosenfeld et al., 2008]. Our present understanding of the sign as well as the magnitude of change in accumulated surface rainfall due to aerosols is inadequate. Besides, aerosol-cloud-rainfall associations are highly sensitive to variation in thermodynamical and environmental conditions, cloud properties, and aerosol types [Khain et al., 2008; Lee, 2011; Tao et al., 2012], further complicating these interactions. Moreover, clouds and precipitation can also interact with aerosols through the wet scavenging process [Grandey et al., 2013; Grandey et al., 2014; Yang et al., 2016]. Global model simulations illustrated that wet scavenging can cause a strong negative cloud fraction-AOD correlation over the tropics [Grandey et al., 2013]. The wet scavenging effect can also generate a similar negative rain rate-AOD association in the tropical and mid-latitude oceans.
The Indian summer monsoon is the lifeline for regional ecosystems and water resources, and plays a crucial role in India's agriculture and economy [Webster et al., 1998]. The Indian summer monsoon from June through September (JJAS) provides about 75% of the annual rainfall over central-north India. Variation in daily rainfall during the summer monsoon is directly linked to India's Kharif food grain production [Preethi and Revadekar, 2013]. A rapid increase in population and industrialization over the last two decades has also resulted in high anthropogenic aerosol loading over Northern India, particularly in the Gangetic basin [Dey and Di Girolamo, 2011].
Consequently, the net impact of such large continental aerosol loading on cloud properties and daily surface rainfall in India is an important question that requires utmost attention. Recent studies based on the aerosol direct effect have shown different plausible pathways of aerosol impact on rainfall. Lau and Kim [2006] have shown that aerosol-induced atmospheric heating over the Himalayan slopes and Tibetan plateau during the monsoon onset period intensifies the northward shift of the Indian summer monsoon, causing a reduction in rainfall over ISMR. On the other hand, high aerosol loading also induces a solar dimming (absorbing) effect at the surface [Ramanathan and Carmichael, 2008; Ramanathan et al., 2001], which can alter the land-ocean thermal gradient and weaken the meridional circulation, resulting in a drying trend in seasonal rainfall during the Indian summer monsoon [Bollasina et al., 2011; Ganguly et al., 2012].
The presence of higher concentrations of absorbing aerosols over North India is shown to induce a stronger north-south temperature difference, which fosters enhancement in moisture convergence from the ocean and a transition from a break spell of the ISM to an active spell of the ISM [Manoj et al., 2011]. Further, this aerosol radiative effect causes an increase in the moist static energy, invigoration of convection and eventually more rainfall over India during the following active phase [Hazra et al., 2013; Manoj et al., 2011]. These studies provide valuable insight on different pathways of the aerosol's radiative impact on the monsoon dynamics and seasonal rainfall over India. However, the microphysical aspect of the aerosol's impact on the sign and the magnitude of the monsoonal rainfall over the Indian summer monsoon region (ISMR) is largely unknown [Rosenfeld et al., 2014]. Nevertheless, a few recent studies have indicated the existence of a strong aerosol microphysical effect on cloud systems over ISMR [Konwar et al., 2012; Manoj et al., 2012; Prabha et al., 2012; Sarangi et al., 2015; Sengupta et al., 2013]. Conversely, the summer monsoon plays an important role in determining the variation in aerosol loading over India by bringing clean marine air and wet scavenging, which are as important as emissions in determining aerosol concentration [Li et al., 2016]. It has also been shown that aerosols over the Indian Ocean interplay with seasonal changes over ISMR [Corrigan et al., 2006].
Here, we have used 12 years (JJAS) of gridded datasets of surface rainfall, aerosol and cloud properties to examine aerosol-related changes in cloud macro-, micro- and radiative properties, and thereby in daily surface rainfall over ISMR. Aerosol-associated changes in the onset of warm rain, microphysical profiles and cloud radiative forcing are analysed using observations and idealized simulations to investigate the significance of the aerosol microphysical effect over ISMR.
The role of meteorology, and of the aerosol humidification effect due to cloud contamination in retrieved aerosol optical depth (AOD), is also estimated to ensure the causality of the observed associations. This comprehensive effort to understand aerosol-cloud-rainfall interactions over India will likely illustrate the significance of the aerosol's impact on monsoonal rainfall via the microphysical pathway under continental conditions.
A new high-resolution (0.25° × 0.25° gridded) daily rainfall (RF) dataset prepared by the India Meteorological Department (IMD) [Pai et al., 2013] was used to represent accumulated surface rainfall. Quality-assured measurements of RF from in-situ rain gauge stations (~6955) across the country were interpolated using an inverse-distance-weighted interpolation scheme [Shepard, 1968] to create this gridded product. The daily surface rainfall from 08:30 am (local time) of the previous day till 08:30 am (local time) of the present day has been recorded as daily rainfall at all rain gauge stations maintained by IMD for 110 years. This product has been extensively validated against previous IMD rainfall products as well as the Asian Precipitation - Highly-Resolved Observational Data Integration Towards Evaluation (APHRODITE) rainfall dataset [Pai et al., 2013]. IMD daily rainfall gridded datasets have been widely used by several investigators to study the rainfall climatology and its inter-seasonal and intra-seasonal variability over the Indian summer monsoon region [Goswami et al., 2006; Krishnamurthy and Shukla, 2007; Pai et al., 2014; Rajeevan et al., 2008]. The precipitation rate (PR) at 12 PM local time was also obtained from the Tropical Rainfall Measuring Mission (TRMM) [Huffman et al., 2010]. The RF as well as PR datasets were linearly re-gridded to the 1° × 1° grid for consistency in our correlation analysis.
For the correlation analysis between any two variables, only those spatio-temporal grids were considered where collocated measurements of both variables were available. The collocated variables RF, PR, CF, CTP and CTT were then sorted as a function of AOD and averaged to create a total of 50 scatter points. AODs > 1.0 (~5%) were omitted to reduce the possibility of including cloud-contaminated data in our analysis. Shallow clouds with CTP > 850 hPa (about 7%) were also not considered in this analysis. Previous studies have also reported aerosol microphysical effects using such correlation analysis based on satellite datasets [Chakraborty et al., 2016; Feingold et al., 2001; Kaufman et al., 2002; Koren et al., 2010a; Koren et al., 2014; Koren et al., 2004; Koren et al., 2012; Myhre et al., 2007]. Importantly, the availability of the ground-based in-situ daily rainfall dataset enables us to further investigate the aerosol-cloud-rainfall association over ISMR spanning from 17° N to 27° N in latitude and 75° E to 88° E in longitude (bounded by the black box in Figure 1). Here, we have excluded regions with mountainous terrain (Himalayan terrains to the north) and desert/barren land-use regions (Thar Desert and nearby arid regions). This was done to avoid inclusion of extreme orographic precipitation as well as retrieval error in the satellite products (e.g. lower sensitivity over brighter land surfaces for MODIS aerosol products). ISMR has previously been extensively studied by several investigators [Bollasina et al., 2011; Goswami et al., 2006; Sengupta et al., 2013], as the rainfall variability over this region is highly correlated with that of the entire India rainfall during June to September [Gadgil, 2003]. Generally, aerosol loading over ISMR is very high (climatological mean AOD of 0.56, Figure 1A), particularly over the densely populated Gangetic basin. At the same time, ISMR has a high cloud cover (CF of 0.72, Figure 1B) and receives widespread rainfall (RF of 9.4 mm, Figure 1C) during monsoon. This implies rapid buildup of aerosol concentration over this region after every rainfall event, mainly due to high emission rates and geography-induced accumulation of anthropogenic aerosols. Thus, the collocation of heavy pollution and abundant moisture over ISMR makes it an ideal region to investigate aerosol-cloud-rainfall associations [Shrestha and Barros, 2010].
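To make the binning procedure concrete, below is a minimal Python sketch (with synthetic data; all variable names are illustrative) of sorting collocated samples by AOD and averaging them into 50 equal-percentile bins, as described above.

```python
# A minimal sketch of equal-percentile AOD binning: each of the 50 bins is the
# average of an (almost) equal number of collocated samples (2 percentiles each).
import numpy as np

def percentile_bin_means(aod, y, n_bins=50):
    order = np.argsort(aod)
    aod_sorted, y_sorted = aod[order], y[order]
    # np.array_split keeps the bin sizes equal to within one sample
    aod_bins = np.array([b.mean() for b in np.array_split(aod_sorted, n_bins)])
    y_bins = np.array([b.mean() for b in np.array_split(y_sorted, n_bins)])
    return aod_bins, y_bins

rng = np.random.default_rng(1)
aod = rng.uniform(0.05, 1.0, 5000)                       # synthetic AOD samples
rainfall = 5.0 + 2.5 * aod + rng.normal(0, 2.0, aod.size)  # synthetic positive association
x, y = percentile_bin_means(aod, rainfall)               # 50 scatter points for plotting
```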
Analysis of aerosol impact on cloud radiative forcing
Clouds increase Earth's albedo and cool the atmosphere by reflecting solar radiation to space, as well as warm the atmosphere by absorbing Earth's outgoing longwave radiation [Trenberth et al., 2009]. Thus, the aerosol microphysical effect in convective clouds will manifest itself in the association between cloud radiative forcing and aerosol [Feingold et al., 2016; Koren et al., 2010b]. Here, the Clouds and the Earth's Radiant Energy System (CERES) [Wielicki et al., 1996] retrieved outgoing shortwave (SW) and longwave (LW) radiation at the top of the atmosphere (TOA) was also used to illustrate the aerosol-induced changes to cloud radiative forcing. The CERES fluxes were sorted and averaged as a function of AOD (similar to the correlation analysis detailed in Section 2.1) for two different scenarios, i.e. all sky and clear sky. While the aerosol radiative forcing in the clear-sky scenario includes only the aerosol direct effect, the radiative forcing due to the aerosol indirect effect can be estimated from the net difference between the all-sky and clear-sky scenarios.
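As a purely numerical illustration of this decomposition, the sketch below fits the slope of net TOA flux against binned AOD for hypothetical clear-sky and all-sky composites; the slopes used are placeholders, not CERES values.

```python
# A minimal sketch of separating direct and indirect forcing contributions from
# binned flux composites. All values below are synthetic placeholders.
import numpy as np

aod_bins = np.linspace(0.1, 0.9, 50)
net_clear_sky = 240.0 - 13.0 * aod_bins   # hypothetical net flux, direct effect only
net_all_sky = 180.0 - 30.0 * aod_bins     # hypothetical net flux, direct + indirect

slope_clear = np.polyfit(aod_bins, net_clear_sky, 1)[0]  # W/m^2 per unit AOD
slope_all = np.polyfit(aod_bins, net_all_sky, 1)[0]
indirect_forcing = slope_all - slope_clear               # residual attributed to clouds
print(slope_clear, slope_all, indirect_forcing)
```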
2.3. Analysis of aerosol impact on liquid- and ice-phase cloud microphysics
MODIS observations of cloud top liquid effective radius (Re) as a function of cloud top pressure for convective cloud fields can be treated as a composite Re-altitude profile, as would be obtained from tracking the space-time evolution of individual clouds [Lensky and Rosenfeld, 2006]. Insensitivity of Re to spatial variations at any particular altitude was also reported during the CAIPEEX campaign over ISMR [Prabha et al., 2011]. CTP and Re were segregated into groups of low (AOD < 33rd percentile) and high (AOD > 67th percentile) aerosol loading regimes using collocated AOD values. Re as a function of CTP was compared between the low and high aerosol regimes. The aerosol-associated differences in the growth of cloud droplets with height from these CTP-Re profiles were used to infer aerosol-induced differences in warm cloud microphysical processes and the initiation of rain over ISMR [Rosenfeld et al., 2014 and references therein].
CloudSat-retrieved profiles of liquid-phase and ice-phase water content as well as ice-phase effective radius (Re,ICE), available at 75 m vertical resolution within ISMR [Austin et al., 2009; Stephens et al., 2002], were also segregated into low (AOD < 33rd percentile) and high (AOD > 67th percentile) aerosol loading conditions. The mean microphysical variables along with their variability (profiles indicating the 25th and 75th percentiles) for the low and high aerosol bins were plotted against altitude to visualize the net increase or decrease in liquid-phase water content, ice-phase water content and size of ice-phase hydrometeors at different altitudes with increase in aerosol loading. The (two-sample) Student's t-test was used for statistical hypothesis testing about the means of the groups in each subplot.
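The tercile segregation and level-by-level significance testing described above can be sketched in Python as follows; the profile data here are synthetic placeholders, not CloudSat retrievals.

```python
# A minimal sketch of splitting profiles by collocated AOD terciles and
# comparing the groups level-by-level with a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
aod = rng.uniform(0.1, 1.0, 400)
# Synthetic ice water content profiles, shape (profile, vertical level),
# built with a weak positive dependence on AOD for illustration
iwc = rng.normal(100.0, 20.0, (400, 30)) + 40.0 * aod[:, None]

lo, hi = np.percentile(aod, [33, 67])
low_grp, high_grp = iwc[aod < lo], iwc[aod > hi]
t, p = stats.ttest_ind(high_grp, low_grp, axis=0)  # one test per vertical level
significant_levels = np.flatnonzero(p < 0.05)
print(significant_levels)
```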
Modeling aerosol-cloud-rainfall associations: A case study of a heavy rainfall event over ISMR
The WRF model is a regional numerical weather prediction system principally developed by the National Center for Atmospheric Research (NCAR) in collaboration with several research institutions in the U.S. The Advanced Research WRF (ARW) version 3.6, along with a newly coupled fast version of spectral bin microphysics (WRF-SBM), is used to perform three idealized supercell simulations of a typical heavy rainfall event over ISMR. The spectral bin microphysics scheme is specially designed to study aerosol effects on cloud microphysics, dynamics, and precipitation, based on solving a system of kinetic equations for size distribution functions described using 33 mass-doubling bins [Khain and Lynn, 2009; Khain et al., 2004; Lynn and Khain, 2007].
In fast SBM, four size distributions are solved, one each for CCN, water drops, low-density ice particles and high-density ice particles. All ice crystals (sizes < 150 µm) and snow (sizes > 150 µm) are calculated in the low-density ice particle size distribution. Graupel and hail are grouped into the high-density ice, represented with one size distribution without separation. The empirical dependence N = N0·S^k is used to calculate the initial (at time t = 0) CCN size distribution [see Khain et al., 2000 for details], where N0 and k are parameters which vary with aerosol number concentration and chemical composition, respectively, and N is the concentration of nucleated droplets at supersaturation S (in %) with respect to water. At each time step, the critical aerosol activation diameter of cloud droplets is calculated from the value of S (using Köhler theory). The scheme explicitly calculates nucleation of droplets and ice crystals, droplet freezing, condensation, coalescence growth, deposition growth, evaporation, sublimation, riming, melting and breakup of the categorized hydrometeor particles. Details about the parameterizations used for these processes can be found in previous studies [Khain and Lynn, 2009; Khain et al., 2004; Lynn and Khain, 2007].
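A minimal sketch of the empirical activation spectrum N = N0·S^k is given below; the N0 values match the three loading scenarios described further on, while k = 0.5 is an assumed placeholder rather than a value from this study.

```python
# A minimal sketch of the CCN activation spectrum N = N0 * S^k.
# k = 0.5 is an assumed placeholder; N0 values echo the Ex1-Ex3 scenarios.
def activated_ccn(n0, s_percent, k=0.5):
    """Concentration of nucleated droplets (per cm^3) at supersaturation S (%)."""
    return n0 * s_percent ** k

for name, n0 in (("Ex1", 4500), ("Ex2", 9000), ("Ex3", 15500)):
    print(name, activated_ccn(n0, s_percent=0.4))  # e.g., at S = 0.4 %
```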
Figure 2. Skew-T log-P diagram illustrating the initial conditions of dew point temperature (red dashed line) and atmospheric temperature (red solid line) used in all three WRF-SBM idealized simulations. Blue, yellow, green, black and purple lines indicate lines of constant temperature (isotherms), potential temperature, equivalent potential temperature, pressure (isobars), and saturation mixing ratio, respectively.
Using GDAS data, we found that the mean relative humidity in the lower troposphere remains high over ISMR during moderate and heavy rainfall events (RF > 6 mm); the initial sounding used here (Figure 2) is typical of a moderate to heavy rainfall event over ISMR. This particular period was selected because measurements of the CCN spectrum near cloud base were also available from the CAIPEEX campaign over the region [Prabha et al., 2012]. We performed three simulations with the same initial thermodynamic conditions but different initial N0 to represent low (N0 = 4500 particles/cm³), medium (N0 = 9000 particles/cm³) and heavy (N0 = 15,500 particles/cm³) aerosol loading conditions, hereafter referred to as Ex1, Ex2 and Ex3, respectively. The simulations were performed for 160 minutes at a resolution of 1 km over a domain of 300 km × 300 km. The number of vertical sigma levels was 41 and the top height was about 20 km. Rayleigh damping was used to damp the fluctuations reaching the upper troposphere in the idealized simulation [Khain et al., 2005]. An exponentially decreasing (both horizontally and vertically) temperature pulse of 3°C was used to trigger the storm [Khain and Lynn, 2009; Khain et al., 2004; Lynn and Khain, 2007]. A comparison of droplet size distributions, microphysical profiles, vertical velocity, column-accumulated water content of various cloud species and surface rainfall from these simulations illustrates the process-level linkage between aerosol increase and surface rainfall. The simulation output of mass size distributions of water droplets, low-density ice particles and high-density ice particles was recorded every 15 minutes of model time. Assuming that all the hydrometeors were spherical, we calculated the number-size distribution from the mass-size distribution by using the bulk radius-density functions specified in SBM for each hydrometeor (shown in Figure 1 of [Iguchi et al., 2012]). The effective radius was then computed as

Re = Σi Ni ri³ / Σi Ni ri²   (1)

where ri is half of the maximum diameter and Ni is the particle number concentration of the i-th bin. For calculating the Re of cloud droplets, the bins with diameter < 50 µm were considered. We used the 1st-17th bins and the 17th-33rd bins of the low-density ice hydrometeor size distribution, separately, to calculate Re,ice and Re,snow, respectively. Re,graupel was calculated using the size distribution of high-density ice particles.
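For concreteness, the bin-wise effective radius calculation of Eq. (1) can be sketched as follows; the 33-bin radius grid and the droplet spectrum below are illustrative stand-ins, not the actual SBM grid.

```python
# A minimal sketch of Eq. (1): Re = sum(N_i r_i^3) / sum(N_i r_i^2),
# where r_i is half the maximum diameter of bin i and N_i its number concentration.
import numpy as np

def effective_radius(r, n):
    return np.sum(n * r**3) / np.sum(n * r**2)

# Illustrative mass-doubling grid: mass doubles per bin, so radius doubles every 3 bins
r = 2.0 * 2.0 ** (np.arange(33) / 3.0)     # bin radii (um), starting at 2 um
n = np.exp(-((r - 10.0) / 8.0) ** 2)        # arbitrary unimodal number spectrum
droplet_bins = r < 25.0                     # restrict to droplets (diameter < 50 um)
print(effective_radius(r[droplet_bins], n[droplet_bins]))
```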
Analysis of possible caveats in correlation analysis
It is well documented that aerosol-cloud correlation analysis using satellite data can be affected by one or more of the following factors: (1) positive correlation of variability in aerosol and cloud-rainfall fields with meteorological variations, which are the true modifiers of cloud and rainfall properties [Chakraborty et al., 2016; Kourtidis et al., 2015; Ten Hoeve et al., 2011]; (2) cloud contamination of retrieved AOD values due to the aerosol humidification effect [Boucher and Quaas, 2013; Gryspeerdt et al., 2014]; and (3) inaccurate representation of the wet scavenging effect in satellite-retrieved AOD datasets [Grandey et al., 2013; Grandey et al., 2014; Yang et al., 2016]. Therefore, we have critically investigated the plausible role of these factors in our analyses, as presented below.
2.5.1. Influence of meteorological variability
Here, we obtained various meteorological fields from the NOAA-NCEP Global Data Assimilation System (GDAS) dataset [Parrish and Derber, 1992] as an approximation for the meteorological conditions at the same time and location as the satellite observations. Since the NOAA-NCEP GDAS assimilated product does not contain direct information on aerosol microphysical effects, it is a suitable tool to investigate whether the meteorological variations favoured aerosol accumulation under wet/cloudy conditions [Koren et al., 2010a]. GDAS variables at 1° spatial resolution and 21 vertical model levels (1000 hPa - 100 hPa) over ISMR from the 12:00 LT run were used. First, the correlation of different GDAS meteorological variables with cloud fraction, daily rainfall and AOD, separately, was computed using all grid points within ISMR at each model vertical level. Based on this correlation analysis, the likely meteorological variables (with correlation coefficient > 0.25) which can affect cloud and rainfall properties in ISMR were identified. Next, we formed narrow regimes of these key meteorological variables to constrain the variability in these meteorological factors and repeated the correlation analysis of AOD-cloud-rainfall gradients. This approach can be thought of as simulating the effect of increasing aerosol loading on the cloud-rainfall system under similar meteorology.
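A minimal Python sketch of this two-step screening, with synthetic stand-ins for the GDAS fields, might look as follows.

```python
# A minimal sketch of confounder screening: (1) flag meteorological variables
# correlated with rainfall (|r| > 0.25); (2) recompute the AOD-rainfall
# correlation within a narrow regime of each flagged variable.
# All arrays here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
rh = rng.uniform(40, 95, n)                        # candidate meteorological field
aod = rng.uniform(0.1, 1.0, n)
rain = 4.0 + 2.0 * aod + 0.05 * rh + rng.normal(0, 1.5, n)

if abs(np.corrcoef(rh, rain)[0, 1]) > 0.25:        # step 1: RH flagged as confounder
    regime = (rh > 70) & (rh < 80)                 # step 2: narrow RH regime
    r_constrained = np.corrcoef(aod[regime], rain[regime])[0, 1]
    print(f"AOD-rain correlation within narrow RH regime: {r_constrained:.2f}")
```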
2.5.2. Cloud contamination of aerosol retrievals
Aerosol-cloud-rainfall studies based on satellite data are, in part, biased by the aerosol humidification effect due to uncertainties in retrieved AOD from near-cloud pixels. For instance, an increase in the surface area of aerosol due to water uptake may cause elevated AOD levels measured in the vicinity of clouds [Boucher and Quaas, 2013]. The humidification effect on the AOD depends on the variability range of ambient RH [Altaratz et al., 2013]. Here, we used radiosonde measurements (JJAS, 2002 to 2013) from World Meteorological Organization stations [Durre et al., 2006] within ISMR (Table 2) to identify profiles that had potential for cloud formation. Specifically, the selected profiles had an unstable layer below the lifting condensation level (LCL). However, profiles suggesting low-level clouds (mean RH below LCL > 98%) were removed. A major portion of the aerosols contributing to columnar AOD is usually present below 3 km altitude over ISMR during monsoon/cloudy conditions. Thus, we focused this analysis on RH below 3 km altitude. Also, the changes in mean RH values associated with the change in cloud vertical extent were calculated based on Altaratz et al. (2013). The height above the level of free convection where the theoretical temperature of a buoyantly rising moist parcel (following the wet adiabatic lapse rate) becomes equal to the temperature of the environment is referred to as the equilibrium level. The depth of the atmospheric layer between the LCL and the equilibrium level is referred to as the cloudy layer height (CLH). Also, in the case of the presence of an inversion layer, the top of the CLH is determined as the base of the lowest inversion layer located above the LCL. Based on the median CLH, the selected profiles at each station were divided into two subsets with equal numbers of samples, representing shallower and deeper clouds. The bias in mean RH between shallower and deeper clouds for each station was calculated to illustrate the influence of cloud height on the RH variability. Bar-Or et al. [2012] have parameterized RH in the cloudy atmosphere as a function of the distance from the nearest cloud edge. Given the hygroscopic parameter, k, this parameterization can be used to simulate hygroscopic properties and model the humidified aerosol optical depth. Bhattu and Tripathi [2014] have reported that the k of ambient aerosol over Kanpur (in the Gangetic basin) during monsoon is 0.14±0.06. Accordingly, we have considered the minimum (maximum) k over ISMR as 0.1 (0.2), and have used the parameterization to estimate the change in AOD due to the observed variation in the RH field. First, the range of RH variation was scaled as distance from the nearest cloud (using Figure 3 of Bar-Or et al. 2012) and then the change in AOD was estimated (using Figure 6 of Bar-Or et al. 2012) for each subset.
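As one concrete step of this screening, the LCL can be estimated from surface temperature and dewpoint; the sketch below uses Espy's classical approximation (~125 m per degree C of dewpoint depression), which is a standard shortcut rather than the exact method of the cited studies.

```python
# A minimal sketch of an LCL height estimate via Espy's approximation.
# This is an assumed simplification for illustration only.
def lcl_height_m(temp_c, dewpoint_c):
    """Approximate LCL height (m) above the surface from the dewpoint depression."""
    return 125.0 * (temp_c - dewpoint_c)

print(lcl_height_m(30.0, 24.0))  # e.g., ~750 m for a moist monsoon sounding
```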
2.5.3. Effect of under-representation of the wet scavenging effect on retrieved AOD values
Aerosols present below cloudy pixels are not visible to the satellite. To circumvent this limitation in investigating aerosol-cloud-rainfall associations, it could be reasonable to assume that the mean aerosol distribution below the non-raining cloudy pixels is similar in magnitude to the aerosol distribution of the non-cloudy pixels within a 1° x 1° grid box. Nevertheless, aerosols below cloudy pixels, where rainfall occurs, are subject to depletion due to the wet scavenging effect.
Thus, the wet scavenging effect might not be accurately represented in the MODIS-retrieved AOD dataset used in our study. Modelling studies suggest that this artifact in the satellite-retrieved AOD values can significantly affect the magnitude as well as the sign of aerosol-cloud-rainfall associations [Grandey et al., 2013; Grandey et al., 2014; Yang et al., 2016]. At the same time, Gryspeerdt et al. (2015) have recently illustrated, using a high-resolution regional model, that the aerosol in neighboring cloud-free regions may be more representative for aerosol-cloud interaction studies than the below-cloud aerosol, justifying the methodology used in our study. The main limitation in investigating the impact of probable inaccuracy in representing the wet scavenging effect on our analysis is the lack of collocated measurements of aerosol, cloud and rainfall at the temporal resolution of rainfall events from spaceborne measurements. Hence, we used collocated hourly measurements of aerosol and rainfall over the Indian Institute of Technology, Kanpur (IITK) as a representative case-study dataset to investigate the possible effect of wet scavenging on aerosol-rainfall associations within ISMR.
AErosol RObotic NETwork (AERONET) is a global network of ground-based remote sensing stations that provides quality-controlled measurements of aerosol optical depth with high accuracy [Dubovik and King, 2000; Holben et al., 1998]. Hourly averages of AOD (550 nm) used in this analysis were obtained from the quality-ensured Level-2 product of the AERONET site deployed on the IITK campus. Rainfall events were identified from collocated rain gauge measurements near the AERONET station within the IITK campus between April and October; the months of April, May and October were included to increase the number of sample points. Rainfall amounts of all the rainfall events were sorted as a function of collocated AERONET-AOD values (mean of AERONET-AOD measurements within ±4 hours of the start/end of the rainfall) into 5 equal bins of 20 percentiles each. As AERONET-AOD measurements were available only between sunrise and sunset, we used AOD values from late-evening measurements as representative of aerosol loading during the first rainfall event (if any) at night-time. However, in the case of more than one rainfall event at night, only the first rainfall event is considered in this analysis. Nearly half of the AOD-rainfall samples used here included AOD measurements within 4 hours after the end of a rainfall event, and therefore include a wet scavenging effect of rainfall on the AOD measurements. To reproduce another specific scenario, only the rainfall-AOD samples with AOD measurements available before the start of rainfall events were collected and sorted as a function of AOD into 5 equal bins of 20 percentiles each. This restricted sampling does not include the wet scavenging effect, as only the AOD values before the start of rainfall in each rainfall event were used. The average rainfall amount for each bin was plotted against mean AOD values under both scenarios to illustrate the difference in aerosol-rainfall association due to exclusion of the wet scavenging effect within ISMR.

Figure 3A shows the relationship between AOD and IMD RF. RF increased from 5.9 mm to 7.1 mm as AOD increased from 0.25 to 0.75. A similar relationship was also observed in the case of TRMM PR in Figure 3B. Precipitation rate increased from 0.31 mm/hr to 0.38 mm/hr for the same increase in AOD (0.25 to 0.75). Concurrent analysis of aerosol and cloud properties showed aerosol-induced modifications in cloud macrophysics. Widening of clouds was observed as cloud fraction increased from 0.78 to 0.92 with increase in AOD from 0.25 to 0.75 (Figure 3C). A monotonic decrease in CTP and CTT (Figures 3D and 3E), by nearly 200 hPa and 22 K, respectively, for the same increment in AOD, further indicates vertical deepening of the clouds with increasing aerosol loading. Aerosol-cloud studies have reported reduction in cloudiness under high AOD for regions with high absorbing aerosol loading [Koren et al., 2004; Small et al., 2011]. Widespread cloud coverage over ISMR (CF of ~0.75 for AOD ~0.3 in Figure 3) induces a substantial reduction in the incoming solar radiation [Padma Kumari and Goswami, 2010], which may result in reduced interaction between absorbing aerosols and shortwave radiation. This explains why, despite the high emission rate of absorbing aerosols over ISMR [Bond et al., 2004], the aerosol-induced cloud inhibition effect seems to have been reduced to a second-order process during the Indian summer monsoon.
As a sanity check, we re-analyzed the cloud and rainfall associations with aerosol loading by dividing ISMR into two sub-regions (shown in Figure 1A). Similar aerosol-cloud-rainfall associations to those seen in Figure 3 were observed in both regions. In addition, the analysis was also repeated by segregating the dataset into low-level (850 hPa > CTP > 500 hPa) and high-level clouds (CTP < 500 hPa) (figure not shown). Despite the considerable differences in mean CTP and CTT found between low- and high-level clouds, the general associations were similar in both regimes (as in Figure 3).
3.1. Cloud, rainfall and radiation associations with aerosol loading
Analysis of individual months, viz. June, July, August and September, also illustrated similar positive associations to those seen in Figure 3, indicating negligible intra-seasonality in the observed associations. Similar positive associations of aerosol loading with cloud fraction [Kourtidis et al., 2015; Myhre et al., 2007; Ten Hoeve et al., 2011], CTP [Myhre et al., 2007; Yan et al., 2014] and rainfall [Gonçalves et al., 2015; Heiblum et al., 2012] have been reported previously. These studies suggested that aerosol-induced changes in cloud dynamics and microphysics are the potential causal mechanism for the aerosol-cloud-rainfall linear dependence. Over the Indian region, previous studies have compared MODIS-observed cloud microphysical properties between low and high aerosol loading to demonstrate the aerosol microphysical effect and its linkage to inter-annual variations in seasonal rainfall [Abish and Mohanakumar, 2011; Panicker et al., 2010; Ramachandran and Kedia, 2013]. Aerosol impacts on cloud microphysics over central India based on ground-based measurements are also evident [Harikishan et al., 2016; Tripathi et al., 2007]. Aircraft measurements during the Cloud Aerosol Interaction and Precipitation Enhancement EXperiment (CAIPEEX) campaign over ISMR have provided unprecedented evidence of the aerosol microphysical effect on cloud droplet distribution and warm rainfall suppression over ISMR [Konwar et al., 2012; Pandithurai et al., 2012; Prabha et al., 2011]. Recently, Sengupta et al. [2013] have also discussed the possible aerosol-induced deepening of clouds with the evolution of the Indian monsoon using MODIS-retrieved CTP.
Next, aerosol-related convective invigoration was investigated using CERES-retrieved outgoing radiative fluxes at the top of the atmosphere. Our analyses showed that for every unit increase in AOD, reflected SW radiation increased by ~68 W/m², whereas LW decreased by ~26 W/m² at the top of the atmosphere for the all-sky scenario (Figure 4A). Taller clouds exhibit colder cloud tops as they are in thermodynamic balance with the environment; therefore, the observed decrease in LW with increase in AOD further provides evidence of aerosol-induced cloud invigoration over ISMR [Koren et al., 2010b]. Increased cloudiness was also evidenced as the cloud albedo increased, thereby reflecting back more SW radiation at the top of the atmosphere. A large number of small ice crystals formed in the upper troposphere due to cloud invigoration eventually get aligned as larger and longer-lived anvils detrained from cloud tops [Fan et al., 2013]. Such an anvil expansion effect of aerosol [Rosenfeld et al., 2014] may also contribute to the aerosol-associated increase in SW radiative forcing. Quantitatively, the net cooling per unit increase in AOD (Figure 4B) under the clear-sky scenario was ~13 W/m², whereas the net cooling for the same change in AOD under cloudy conditions was more than twice that under the clear-sky scenario, i.e. ~30 W/m².

Figure 4. Associations of outgoing SW and LW radiation at the top of the atmosphere with AOD over ISMR during JJAS 2002-2013. The collocated data points for both SW and LW as a function of AOD were first sorted. The total number of collocated data points (50n) was then used to create 50 AOD bins of 'n' samples (2 percentiles) each. Each scatter point is the average of these equal 'n' numbers of data points mentioned in each respective panel.
3.2.1. Effect of aerosol-related changes in microphysical processes
Many studies have shown that the onset of warm rain and the collision-coalescence process are dependent on the CCN concentration [Freud et al., 2011 and references therein]. MODIS-retrieved droplet effective radius as a function of CTP, grouped under low and high aerosol loading cases, can be used to investigate the aerosol-induced differences in warm rain processes such as diffusional and coalescence growth [Rosenfeld et al., 2014]. In Figure 5, we present cloud microphysical changes for low and high aerosol loading using MODIS and CloudSat datasets. Figure 5A illustrates that the Re of liquid droplets near cloud base was smaller (6 µm) in clouds developed under higher AOD conditions, in agreement with the aerosol first indirect effect [Twomey, 1974]. In addition, the vertical growth of Re under polluted conditions increased at a gradual rate (~3 µm/100 hPa) for Re < 14 µm compared to the vertical gradient of increase in Re (~10 µm/100 hPa) in relatively clean clouds (low aerosol loading). Also note that the altitude difference between cloud base and the onset of warm rain was smaller in low AOD cases (~50 hPa) compared to that in high AOD cases (~250 hPa). Concurrently, the mean Re for high AOD cases was very small (~10 µm) near the freezing level compared to low AOD cases, indicating an increase in droplets of smaller size at higher levels with increase in aerosol loading (Figure 5A). Thus, a significant increase and sustenance of smaller supercooled liquid drops was found above the freezing level under polluted conditions. Aircraft measurements of clouds developed under polluted conditions during the CAIPEEX campaign over ISMR have also documented that Re remained below 14 µm up to the 500 hPa level and that the formation of rain drops mainly initiated as supercooled raindrops at ~400 hPa [Konwar et al., 2012; Prabha et al., 2011].
From the CloudSat analyses, the mean ice-phase effective radius (Re,ICE) for high aerosol loading was found to be 8-10% greater (significant at the >95% confidence interval) throughout the cloud layer compared to that for low aerosol loading at the same altitude (Figure 5B), indicative of the formation of bigger-sized ice-phase hydrometeors under high aerosol loading. Figure 5C shows the difference (high aerosol - low aerosol) in mean liquid-phase and ice-phase water content. Significant enhancement in ice-phase water content was clearly evident under high aerosol loading (Figure 5C). The increase in mass concentration of ice-phase hydrometeors was ~50 mg/m³ at altitudes of 8-13 km. A similar increase in the number concentration of ice hydrometeors was also observed from the CloudSat observations (figure not shown).
Figure 5. Observed differences in cloud microphysical properties for low and high aerosol loading cases. A) MODIS-observed mean profiles of liquid-phase effective radius (Re) and B) CloudSat-observed mean profiles of ice-phase effective radius (Re,ICE) under low (blue) and high (red) aerosol loading conditions. The dotted lines represent the 25th and 75th percentiles, respectively. C) Difference (high AOD - low AOD) in mean profiles of liquid-phase (black) and ice-phase (pink) water content as observed from CloudSat.
3.2.2. Modelling aerosol microphysical effect for a typical rainfall event during ISM
In order to further investigate the process-level basis of our observational findings, we conducted model simulations using WRF-SBM for a typical mesoscale convective system over ISMR. Three idealized supercell simulations (Ex1, Ex2 and Ex3, as explained above) were performed, with the prescribed CCN spectra being lowest for Ex1 and highest for Ex3.

Figure 6. A) Time evolution of column-integrated, domain-averaged cloud water content (CWC; black), rain water content (RWC; blue), summation of ice water content, graupel water content and snow water content (IWC+GWC+SWC; green), vertical velocity (red) and accumulated surface rainfall (pink) for simulation Ex1. B) Same as Panel A, but for simulated differences between Ex2 and Ex1. C) Same as Panel A, but for simulated differences between Ex3 and Ex1.

Figure 6A shows the time evolution of domain-averaged mean columnar cloud water content (CWC), rain water content (RWC), summation of ice-phase hydrometeors, i.e. snow, graupel and ice water content (SWC+GWC+IWC), vertical velocity (W), and accumulated rainfall for the low CCN (aerosol) condition. It can be seen that convection was strong after 50 minutes (consistent updrafts > 0.2 m/s), with corresponding enhancements in CWC, RWC and hydrometeors till the end of the simulation. The domain-averaged accumulated rainfall was found to be ~0.8 mm/grid at the end of the simulation. The simulated differences between high CCN and low CCN conditions (Figures 6B and 6C) clearly show significant intensification in the microphysical and dynamic variables with increase in CCN concentration. The magnitude of W, CWC, RWC and ice-phase water content increased in both simulations (Ex2 and Ex3), as compared to simulation Ex1. A simultaneous increase in accumulated rainfall was also evident with increase in CCN concentrations, mainly during the last half of the simulations.
The simulated increase in accumulated rainfall was 0.68 mm and 0.28 mm for an increase in AOD of 0.5 (Ex3-Ex1) and 0.3 (Ex2-Ex1), respectively, suggesting a nearly linear CCN-cloud-rainfall association, as observed in Figure 2. Nevertheless, a closer look at Figures 6B and 6C reveals a temporal delay in the initial formation of RWC, ice-phase hydrometeors and surface rainfall with increasing CCN concentration. This can be seen from the negative differences in RWC, total ice-phase water content and rainfall between 40 and 100 minutes of simulation. However, the increase in rainfall amount with increasing CCN concentration in the later stage of the simulation was manifold compared to the initial suppression of warm rainfall, eventually leading to an enhancement of accumulated rainfall throughout the storm domain (figure not shown).

Figure 7. A) Mean droplet R_e versus CTP for the low (Ex1; blue), medium (Ex2; black) and high (Ex3; red) CCN scenarios. B) Droplet size distribution spectra of the Ex1 (blue), Ex2 (black) and Ex3 (red) simulations at 700 hPa (dashed lines) and 300 hPa (solid lines). The corresponding effective radius values are given in square brackets in the legends. Fractional contribution is calculated by dividing the mass concentration of each bin by the total mass concentration.

Figure 7A illustrates the simulated time- and domain-averaged profiles of droplet effective radius for Ex1 and Ex3. Droplet R_e in the Ex3 simulation was lower than that of Ex1 throughout the cloud column, and the difference increased with altitude, indicative of slower growth of cloud droplets under the high CCN condition (Ex3) compared to low CCN (Ex1), in line with our MODIS analyses. For instance, the difference in droplet R_e at 700 hPa and 300 hPa was ~3 µm and ~8 µm, respectively (Figure 7B). The simulated droplet size distributions for Ex3 and Ex1 also showed a significant shift of the droplet spectra toward lower R_e with increasing CCN. Increasing the CCN concentration also narrowed the droplet spectra at a given altitude.
The aerosol-induced increase (Ex3-Ex1) in time- and domain-averaged CWC, RWC, IWC, SWC and GWC at different altitudes is shown in Figure 8A. The modelling results also show that the maximum increase in CWC (23 mg/m^3) was above the freezing level at an altitude of ~7 km, which suggests that the increase in CCN caused an increase in supercooled liquid droplets. A similar plot of mean W and temperature differences averaged over cloudy pixels (Figure 8B) shows a considerable increase in temperature and W at altitudes corresponding to the increase in CWC (i.e. below 8 km), mainly due to the enhanced release of latent heat of condensation.

Figure 8. A) Simulated difference (Ex3-Ex1) in mean profiles of cloud water content, rain water content, ice water content, graupel water content, and snow water content. B) CCN-induced difference (Ex3-Ex1) in simulated mean profiles of vertical velocity (black) and temperature (red) for cloudy pixels.
For ice-phase hydrometeors, the majority of the increase was observed in SWC, with a peak above ~12 km altitude. A maximum in the CCN-induced increase in vertical velocity and temperature was also found above ~12 km (Figure 8). These results indicate that the CCN-induced increase in latent heat of freezing occurred mainly above 12 km, in turn strengthening the updraft velocity of cloud parcels and hydrometeor formation. Further, snow R_e profiles for simulations Ex1 and Ex3 illustrate that the effective mean radius of snow significantly increased with increasing CCN concentration between 8 and 15 km altitude (Figure 9A).
The simulated particle size distribution of snow further explains this behavior, as the mass of particles in the larger size bins increased in simulation Ex3 compared to Ex1 (Figure 9B). Similar changes in graupel concentration and in the particle size distribution of high-density ice particles were also found (figure not shown).
It should be noted that the CCN-induced differences in cloud microphysics and rainfall from this idealized case study should not be directly compared with the decadal-scale observational analysis. Moreover, these results are subject to various assumptions and uncertainties within the physical parameterizations of the microphysics module used. However, the qualitative agreement between the observed aerosol-cloud-rainfall associations and this idealized case study provides confidence in our observational finding that aerosol loading can potentially alter the warm-phase and ice-phase microphysics over ISMR.
These perturbations are consistent with processes typically associated with aerosol-induced cloud invigoration [Altaratz et al., 2014; Tao et al., 2012].

Figure 9. A) Simulated mean profiles of snow (dashed) and graupel (open square symbols connected by solid lines) R_e for the low (Ex1, blue) and high (Ex3, red) CCN scenarios. B) Simulated size distribution spectra of low-density ice particles for Ex1 (blue) and Ex3 (red) at 550 hPa (solid lines) and 200 hPa (dashed lines). Fractional contribution is calculated by dividing the mass concentration of each bin by the total mass concentration.
The following chain of processes may explain our observational and/or numerical findings. The growth of cloud droplets near the cloud base is dominated by condensation.
However, the growth of droplets near the onset of warm rain (as R_e approaches ~14 µm) is dominated by coalescence [Rosenfeld et al., 2014]. The observed differences in the vertical gradient of droplet growth suggest a less efficient collision-coalescence process and a prolonged condensation process, leading to delayed raindrop formation [Rosenfeld, 1999; 2000; Squires, 1958; Warner and Twomey, 1967]. Such prolonged condensational growth of droplets implies increased condensed water loading, causing more latent heat release and thereby stronger updrafts under higher aerosol loading [Fan et al., 2009; Khain et al., 2005; Martins et al., 2011; Rosenfeld et al., 2008; van den Heever et al., 2011; Wang, 2005].
Concurrently, smaller droplet R_e under polluted conditions results in lower effective terminal velocity and higher cloud droplet mobility [Heiblum et al., 2016; Ilan et al., 2015]. Under polluted conditions, then, the aerosol-induced stronger updrafts and enhanced buoyancy push these smaller condensates above the freezing level [Andreae et al., 2004; Rosenfeld and Lensky, 1998], which, in turn, enhances the liquid droplet population above the freezing level. Moreover, smaller droplets are less efficient at freezing, delaying the ice-/mixed-phase processes and thereby sustaining supercooled liquid condensates above the freezing level [Rosenfeld and Woodley, 2000]. Ice-phase hydrometeors settling under gravity from comparatively higher altitudes therefore encounter a greater number of supercooled liquid droplets. The resulting increase in the ice-water accretion process [Ilotoviz et al., 2016] increases ice particle R_e under high aerosol loading.
The increase in the water mass flux of smaller droplets to higher altitudes, in principle, releases more latent heat of freezing and further invigorates the cloud system [Altaratz et al., 2014; Rosenfeld et al., 2008]. Such aerosol-induced invigoration also implies the formation of ice-phase hydrometeors at higher altitudes through the freezing of small droplets [Altaratz et al., 2014]. This invigoration ultimately results in wider and deeper clouds with a higher mass concentration of ice-phase hydrometeors, which eventually fall to the surface (Figures 3 and 5) [Andreae et al., 2004; Koren et al., 2005; Koren et al., 2012; Rosenfeld et al., 2008]. Thus, the observed increase in daily rainfall with increasing aerosol loading over ISMR (Figure 3) could stem from the observed differences in warm-phase dynamics and microphysics, which plausibly lead to cloud invigoration and thereby enhance the mass concentration of mixed-phase hydrometeors.
Decoupling the role of meteorology
Observational and modelled evidence of the microphysical impact of aerosols over ISMR suggests causality in the observed relationships between aerosol, cloud and rainfall properties (Figure 3). Here, we examine the plausible role of meteorology in our analyses. Figure 10 shows the correlation coefficients of RF, CF and AOD with GDAS meteorological variables. Meteorological conditions favorable for deeper clouds and heavy rainfall were found to be associated with a reduction in AOD (Figure 10). As expected, a positive correlation of CF and RF with relative humidity was observed. However, an increase in RH was negatively correlated with aerosol loading, suggesting that cloudy/wet conditions were associated with reduced aerosol loading. While CF and RF were negatively correlated with geopotential height (mainly below 500 hPa), AOD was positively correlated with it. This suggests that the formation of a low pressure zone and the presence of high RH in the lower atmosphere were favorable for cloud development and rain, but not for aerosol accumulation. These features are consistent with heavy rainfall periods of the ISM, where the presence of a low pressure zone over ISMR (commonly known as a monsoon depression) is associated with the advection of more moisture at lower altitudes, more cloud condensation and the occurrence of more rainfall. A recent modeling study has also shown that the propagation of a low pressure system from the Bay of Bengal toward the Indian landmass, which brings moisture and heavy rainfall to the region during the monsoon, is associated with a decrease in aerosol concentration over the region [Sarangi et al., 2015]. The decrease might be a combined effect of ingestion by clouds, wet scavenging and the dilution effect of relatively clean moist air masses from the ocean. A positive correlation of wind speed with CF and RF at altitudes above 400 hPa was also associated with reduced AOD (Figure 10). The high wind speed above 350 hPa (Figure 10) appears to exert a shearing effect on the cloud development process.

Based on the correlation analysis, horizontal wind shear (between 500 hPa and 200 hPa), relative humidity and geopotential height (below 500 hPa) were identified as three key meteorological variables (magnitude of correlation coefficient > 0.25) affecting cloud and rainfall properties in ISMR. Next, the datasets were segregated into low and high regimes of wind shear, calculated between 200 hPa and 400 hPa, as well as of geopotential height and relative humidity at the 800 hPa pressure level (Figure 11). The low versus high regimes illustrate that steeper positive gradients in the AOD-cloud-RF associations were observed for high relative humidity and low geopotential height conditions, whereas the magnitude of the positive gradient between RF (and PR) and AOD was reduced under high wind shear. Spreading of the cloud due to high wind shear results in hydrometeors falling through a relatively drier atmosphere, making smaller droplets (under polluted conditions) more susceptible to evaporation [Fan et al., 2009], thereby reducing PR and RF. Thus, an orthogonal meteorological impact [Koren et al., 2010a; Koren et al., 2014] was evident on the gradients of the AOD-cloud-rainfall associations over ISMR, where the y-intercept indicates the meteorology effect and the slope of the correlation represents the aerosol effect. We also considered the combined effect of all three key meteorological variables by dividing the datasets into 8 regimes (alternate combinations of the higher and lower bins of RH, WS and GPH).
Our analysis illustrated (figure not shown) similar results to those in Figure 11; a positive aerosol-cloud-rainfall association was evident in all 8 sub-regimes, along with a distinct orthogonal effect of the ambient meteorological conditions. Ground-based remote sensing, satellite observations, aircraft measurements and modelling studies have documented that aerosols are mainly located within the boundary layer during the monsoon period over ISMR [Mishra and Shibata, 2012; Misra et al., 2012; Sarangi et al., 2015]. However, some recent studies have reported that transport of near-surface aerosols to the free troposphere by mesoscale convection results in upper-level accumulation during the summer monsoon, termed the Asian tropopause aerosol layer [Chakraborty et al., 2015; Vernier et al., 2015]. Therefore, another possible pathway through which meteorological co-variability could influence our correlation analysis over ISMR is the positive association between the magnitude of the Asian tropopause aerosol layer and AOD. However, this pathway results in insignificant enhancements of AOD during JJAS, by ~0.01-0.02 over south Asia, compared to the observed climatological mean AOD (~0.6) [Vernier et al., 2015; Yu et al., 2015]. Thus, the contribution of the Asian tropopause aerosol layer to the observed positive gradients (Figure 3) can be assumed to be negligible.

Figure 11. Associations of accumulated daily rainfall, precipitation rate, cloud fraction, cloud top pressure and cloud top temperature with AOD. (A) Data sliced by wind shear into the lower regime (0-33%, top) and the higher regime (67-100%, bottom). (B) Same as (A), except data sliced by relative humidity. (C) Same as (A), except data sliced by geopotential height. The scatter points were created using the same methodology as for Figure 3. Each scatter point is the average of the equal 'n' number of data points mentioned in each respective panel.
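The construction of the binned scatter plots and the regime slicing used in Figures 3 and 11 amount to the following computation. This is a minimal sketch with hypothetical 1-D sample arrays (aod, rainfall, rh800); the names, bin counts and percentile cut-offs mirror the description above but are illustrative:

import numpy as np

def binned_means(aod, y, nbins=5):
    """Sort samples by AOD, split into equal-count bins, return per-bin means."""
    order = np.argsort(aod)
    chunks = np.array_split(order, nbins)
    return (np.array([aod[c].mean() for c in chunks]),
            np.array([y[c].mean() for c in chunks]))

def regime_masks(met, lo_q=33, hi_q=67):
    """Boolean masks for the lower and upper regimes of a meteorological variable."""
    lo, hi = np.percentile(met, [lo_q, hi_q])
    return met <= lo, met >= hi

# e.g. the rainfall-AOD association within the high-RH regime:
# lo_mask, hi_mask = regime_masks(rh800)
# x, ybar = binned_means(aod[hi_mask], rainfall[hi_mask])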
Examining the influence of the cloud contamination effect
Here, we used radiosonde observations from eight stations in ISMR (Table 2) to illustrate the humidification effect on satellite-retrieved AOD. The total number of cloudy profiles varied from 270 (Ranchi) to 1065 (Kolkata). The mean and standard deviation of RH for these selected profiles were calculated (for each station) in two layers, 1.5 km and 3 km deep, from the surface.
The bias in mean RH between shallower and deeper clouds for each station is also presented in Table 2. The range of variation in mean RH for each layer is presented in Table 3. We found that with increasing mean RH, the natural variance in RH decreased for both layers within ISMR. The mean and standard deviation of RH in the 1.5 km (3 km) layer were found to be 84.3±13.2% (84.7±13.5%) under cloudy conditions within ISMR. At the same time, the bias in mean RH (associated with vertical change in cloud layer height) in the 1.5 km and 3.0 km layers was found to be 2.7% and 2.5%, respectively. This bias is negligible compared to the natural variation in RH during cloudy conditions in ISMR. Using the parameterization developed in Bar-Or et al. (2012), the maximum change in AOD due to the humidification effect was estimated to be about 0.1 (Table 3). Thus, the uncertainties in our data analyses due to the aerosol humidification effect appear minimal. Note that the difference between clean and polluted conditions in this study (AOD of about 1.0) was nearly an order of magnitude larger than the estimated maximum change in AOD (~0.1) due to the humidification effect. Therefore, the observed positive associations between AOD and cloud/rainfall properties do not appear to be significantly affected by aerosol growth due to humidification during cloudy conditions. In fact, the observed negative relationship between AOD and increasing RH over ISMR (Figure 11) appears to dominate the otherwise expected hygroscopic growth of aerosols and supports the above argument.

Table 2. World Meteorological Organisation (WMO) index number of radiosonde stations (WMO#), station latitude (Lat.), longitude (Lon.), elevation above mean sea level (Elev.), number of radiosonde profiles (N), number of cloudy profiles (N_cloudy), mean RH and bias in RH for the 1.5 km layer (RH_1.5 and RH_1.5,bias, respectively) and the 3.0 km layer (RH_3.0 and RH_3.0,bias, respectively), and median cloud layer height (CLH) for each of the 8 radiosonde stations used in the humidification analysis. '±' indicates standard deviation.
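The layer statistics in Tables 2 and 3 reduce to the following computation. This is a minimal sketch; the per-sounding height and RH arrays (z in m above sea level, ascending; rh in %) and the data layout are assumptions:

import numpy as np

def layer_mean_rh(z, rh, depth=1500.0):
    """Mean RH (%) of one sounding within `depth` metres of the surface."""
    return rh[z - z[0] <= depth].mean()

# Station statistics over all cloudy soundings (profiles is a list of (z, rh) pairs):
# means = np.array([layer_mean_rh(z, rh) for z, rh in profiles])
# print(means.mean(), means.std())   # cf. 84.3 +/- 13.2 % for the 1.5 km layer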
Investigating the effect of wet scavenging on aerosol-rainfall associations
Contrary to the positive aerosol-cloud-rainfall associations shown by many satellite data studies across the globe, recent studies have illustrated a negative aerosol-rainfall association, mainly over tropical ocean regions, based on reanalysis datasets and global model simulations. This difference in the sign of the association in modeling studies is mainly attributed to the inclusion of the wet scavenging effect in models and the probable lack of the same in satellite samples [Grandey et al., 2013; Grandey et al., 2014; Yang et al., 2016]. However, global modeling studies have their own inherent limitations and uncertainties in addressing aerosol-cloud-rainfall associations. Due to computational constraints, global model simulations use grids with coarse spatial resolution (~200 km) and fall short of explicitly resolving fine-scale cloud processes. Moreover, the convection parameterizations used to simulate cloud formation generally do not represent the aerosol indirect effect on clouds and, thus, on rainfall. In contrast, the observed relations using satellite datasets are at fine scale and inclusive of the aerosol indirect effect. As a representative analysis, collocated AOD-rainfall measurements at hourly temporal resolution over IITK were used to illustrate the aerosol-rainfall association with and without the wet scavenging effect. A positive association was found between rainfall amount and the mean AOD values measured before the start of rain events over IITK (NWS_IITK; red line in Figure 12).
A similar association was also found when all available collocated AOD-rain amount samples over IITK were correlated (cyan line in Figure 12), but the gradient was reduced by almost 50% compared to that of NWS_IITK. Thus, a positive aerosol-rainfall association was evident even with the inclusion of the wet scavenging effect in the sampling. Grandey et al. [2013] have also shown a similar contribution of the wet scavenging effect to the positive aerosol-cloud association. Correlation of MODIS-AOD with RF (black line in Figure 12) and PR (blue line in Figure 12) values over the IITK grid also illustrated a positive association between aerosol and rainfall, similar to the observed associations in Figure 3. The high anthropogenic aerosol emission rate at the surface [Bond et al., 2004] and the rapid aerosol buildup within a few hours after individual rainfall events over ISMR [Jai Devi et al., 2011] might contribute to reducing the impact of the wet scavenging effect on the aerosol-cloud-rainfall analysis over ISMR. This argument is also supported by a pattern seen in model results: negative aerosol-cloud-rainfall associations were usually prominent over ocean regions, while positive aerosol-cloud-rainfall associations were found over continental conditions in global simulations [Grandey et al., 2013; Grandey et al., 2014; Gryspeerdt et al., 2015; Yang et al., 2016]. Unlike continental conditions, the lack of high emission rates at the ocean surface might also contribute to the dominant effect of wet scavenging on the aerosol-cloud-rainfall association. In addition, the cloudy pixels where rainfall actually occurs under continental conditions are usually a small fraction of the total area within a 1° x 1° box, and therefore the reduction in the mean AOD value of the box due to wet scavenging might not be a dominant phenomenon affecting the aerosol-cloud-rainfall gradients in Figure 3. The IITK-AERONET data analysis offers confidence in the observed positive aerosol-cloud-rainfall association, and confirms that it was not a misrepresentation due to possible uncertainties involving the wet scavenging effect in satellite-retrieved AOD values. It also showed that a more accurate representation of the wet scavenging effect is essential to reduce uncertainty about the magnitude of the positive aerosol-rainfall gradient observed over ISMR.

Figure 12. Associations of rainfall with collocated AERONET-AOD measurements (within ±4 hours of the start/end of a rainfall event) over IITK. The cyan line illustrates the scenario with inclusion of the wet scavenging effect (IITK) and the red line illustrates the scenario with no wet scavenging effect (NWS_IITK). The associations of daily rainfall and precipitation rate with MODIS-AOD over the IITK grid are also shown as black and blue lines, respectively. In each case, all rainfall-AOD samples were sorted as a function of the corresponding AOD values into 5 bins of 20 percentiles each. Each scatter point is the average of one bin containing n data points.
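The two sampling strategies compared in Figure 12 can be sketched as follows. This is a minimal illustration, not the actual analysis code; the arrays of AERONET AOD observation times and rain-event start times (in hours) are hypothetical:

import numpy as np

def pre_rain_aod(aod_times, aod, rain_starts, window=4.0):
    """Mean AOD within `window` hours *before* each rain-event start time
    (the no-wet-scavenging sampling, NWS); all times in hours since one epoch."""
    out = []
    for t0 in rain_starts:
        m = (aod_times >= t0 - window) & (aod_times < t0)
        out.append(aod[m].mean() if m.any() else np.nan)
    return np.array(out)

# The with-scavenging case instead pairs every collocated AOD sample (including
# those taken after rain onset, i.e. partially scavenged) with the event's rainfall.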
Summary
In this study, long-term satellite and in-situ observational datasets were systematically analysed to gain new insights into aerosol-cloud-rainfall associations over ISMR. An important finding is that MODIS-retrieved cloud properties (CF, CTP, CTT), IMD in-situ surface accumulated rainfall and TRMM-retrieved precipitation rate all showed a positive association with increasing aerosol loading. Additional selective analyses over smaller spatial regions within ISMR, and with the dataset separated into relatively shallower and deeper clouds, illustrated similar aerosol-cloud-rainfall associations, highlighting the robustness of these associations. A decrease in outgoing longwave radiation and an increase in outgoing shortwave radiation at the top of the atmosphere with increasing aerosol loading further suggested a deepening of cloud systems over ISMR.
Further, MODIS- and CloudSat-observed microphysical differences between low and high aerosol loading were investigated to gain a process-level understanding of the observed associations. Comparison of mean CTP-R_e profiles illustrated that an increase in aerosol loading is associated with slower growth of R_e with altitude, indicating a reduction in coalescence efficiency and a delay in the initiation of warm rain. CloudSat-retrieved profiles showed that the liquid water content increased under high aerosol loading, mainly as supercooled liquid droplets above the freezing level. Simultaneously, the observed mass concentration and effective radius of ice-phase hydrometeors increased manifold under high aerosol loading. We also performed three idealized supercell simulations of a typical heavy rainfall event over ISMR by varying the initial CCN concentrations. The modeling results were in line with our observational findings, showing that CCN-induced initial suppression of warm-phase processes, along with an increase in updraft velocity, led to the movement of more water mass across the freezing level, resulting in an enhancement of ice-phase hydrometeor concentration and, eventually, an intensification of surface rainfall under high CCN loading.
We acknowledge the limitation that the influence of meteorological conditions on the cloud-rainfall system is inherently difficult to separate from that of aerosols. However, we have systematically shown that the positive aerosol-cloud-rainfall associations persist even within narrow regimes of key cloud-forming meteorological variables such as RH, geopotential height and wind shear. Further, the ambiguity involved in the humidification effect on retrieved AOD can also affect the positive gradients between aerosol and cloud-rainfall properties. Besides, AOD suffers from substantial uncertainty as a proxy for CCN concentration near cloud base [Andreae, 2009] and from the inclusion of the wet scavenging effect in the AOD samples. These caveats may result in an overestimation of the observed positive gradients in the aerosol-cloud-rainfall associations. Our analysis therefore cannot quantify the magnitude of the gradients with confidence. However, this study certainly suggests a significant role of aerosols in rainfall properties via cloud invigoration over ISMR. As future work, more observational studies at cloud-formation and rain-event time scales are warranted to accurately quantify the magnitude of the aerosol-cloud-rainfall association over ISMR. Moreover, consideration of aerosol microphysical effects is essential for accurate prediction of monsoonal rainfall over this region of climatic importance.
Artificial Epigenetic Networks: Automatic Decomposition of Dynamical Control Tasks Using Topological Self-Modification
This paper describes the artificial epigenetic network, a recurrent connectionist architecture that is able to dynamically modify its topology in order to automatically decompose and solve dynamical problems. The approach is motivated by the behavior of gene regulatory networks, particularly the epigenetic process of chromatin remodeling that leads to topological change and which underlies the differentiation of cells within complex biological organisms. We expected this approach to be useful in situations where there is a need to switch between different dynamical behaviors, and do so in a sensitive and robust manner in the absence of a priori information about problem structure. This hypothesis was tested using a series of dynamical control tasks, each requiring solutions that could express different dynamical behaviors at different stages within the task. In each case, the addition of topological self-modification was shown to improve the performance and robustness of controllers. We believe this is due to the ability of topological changes to stabilize attractors, promoting stability within a dynamical regime while allowing rapid switching between different regimes. Post hoc analysis of the controllers also demonstrated how the partitioning of the networks could provide new insights into problem structure.
I. INTRODUCTION
Complex real-world tasks can often be reduced to multiple interacting subtasks. It has long been realized that there are advantages to capturing the structure of this subtask decomposition within the topology of a neural network architecture, especially when compared with monolithic networks [1]. Conventionally, this is done using various kinds of modular neural network [2], [3], which typically structure solutions as a decision tree whose leaves are subnetworks, each trained to solve a particular subtask. In this respect, modular neural networks resemble the macrostructure of the human brain, which is also known to be structured as a hierarchy of special-purpose neural circuits [4]. The brain is not the only naturally occurring connectionist architecture known to solve complex tasks. Another prominent biological network, which we consider in this paper, is a cell's gene regulatory network. There are many similarities between neural and genetic networks; indeed, artificial neural networks (ANNs) have been used as a modeling tool for capturing the structure and dynamics of genetic networks [5]. However, there are also some prominent differences between the two [6]. One of these is the widespread existence of self-modifying processes within genetic networks, whereby the cellular machinery expressed by the genetic network induces physical changes within the network's topology. In this paper, we focus upon a self-modifying process that is central to task specialization within biological cells: chromatin remodeling [7]. Chromatin remodeling is described in detail in Section II. However, in a nutshell, it is a mechanism that turns genetic subnetworks ON and OFF by regulating their exposure to the cell's gene expression machinery. Significantly, the biochemical components that control chromatin remodeling are expressed by the genetic network; so, in essence, the genetic network regulates changes to its own topology.
The premise of this paper is that topological self-modification can be used as a novel mechanism for achieving task decomposition within connectionist architectures. In the conventional modular approach to task decomposition, processing is divided into independent subnetworks, which are always turned ON, and whose outputs are integrated by some higher level decision node. In our approach, by comparison, the subnetworks used to solve different subtasks can be overlapping, are only turned ON when in use, and the output is determined by whichever subnetwork is currently expressed. We expect the resulting approach to be useful in situations where there is no a priori knowledge of how a task can be decomposed, where there is significant overlap between subtasks, and where highly dynamic solutions are beneficial. We demonstrate this by showing that a self-modifying connectionist architecture is able to solve three difficult control tasks. This paper builds upon an earlier model [8], in which task decomposition was prespecified, and upon initial results reported in [9]. This paper is structured as follows. Section II introduces genetic networks and the form of topological self-modification carried out by chromatin remodeling. Section III summarizes related work on computational models of genetic networks, task decomposition in ANNs, and self-modification. Section IV describes the self-modifying mechanism used in this paper. Section V outlines the dynamical control tasks. Section VI presents the results and analysis. Finally, the conclusion is drawn in Section VII.
II. GENETIC NETWORKS AND CHROMATIN REMODELING
A gene is a region of DNA that describes a protein. In order for this protein to be expressed within a cell, the gene must be transcribed (and later translated) by the cell's processing machinery. In higher organisms, such as humans, this involves the binding of a group of around 5-20 interacting proteins, known as a transcription complex. These proteins, in turn, are the products of other genes. Hence, genes are regulated by other genes, and this pattern of regulatory interactions, when extended to all genes, forms the cell's genetic network. Genetic networks have many similarities with recurrent neural networks (RNNs). After abstracting away the detailed biochemical mechanisms, both can be considered to be a set of interacting nodes with connection weights, and both can be viewed as dynamical systems operating on a network. In many cases, gene regulation, like neuron activation, can be modeled as a sigmoidal function [5].
However, there are also some fundamental differences between neural and genetic networks [6]. Perhaps most significantly, there is no analog of physical wiring (i.e., axons and synapses) in a genetic network. Rather, regulatory pathways emerge from stochastic spatial processes of dynamic molecular association and dissociation. In practice, this means that interactions between biochemical components are relatively unconstrained and, as a consequence, evolution is free to explore interactions between different cellular components. Chromatin modification is a good example of this. Chromatin is an assembly of structural proteins (histones) organized into spindles (nucleosomes) over which DNA is wound [10]. It was originally seen as a spatial compression mechanism that allows very long strands of DNA to fit into the cell's relatively compact nucleus. However, in recent years, it has become clear that the structure of chromatin is closely regulated by the genes it contains, whose products are able to locally wind or unwind the nucleosomes in order to permit or block access to the transcription machinery.
Hence, a different view of chromatin has emerged as a dynamic mechanism for modifying the complement of expressed genes and, hence, the topology of the cell's genetic network. Nowadays, chromatin remodeling is believed to play a significant role in determining cell fate. Exactly, how this is achieved in biological systems remains a topic of contemporary research; however, it has been hypothesized that it is due to the stabilization of the attractor states of the underlying genetic network [11], presumably by removing extraneous genetic pathways. Nevertheless, it is clear that chromatin remodeling plays a key part in this cellular analog of subtask specialization. Given the underlying similarities between genetic and neural networks, it is intriguing to consider whether an analogous mechanism could be used for task decomposition within ANNs.
A. Artificial Gene Regulatory Networks
Historically, the development of computational models of genetic networks has focused on their role in understanding biological systems, for instance inferring computational models from measurement data [12] or using computational models to understand systems-level properties of genetic networks [13]. Another, less well-known role, involves using these models to carry out computation, in a manner akin to the relationship between ANNs and biological neural networks. These artificial gene regulatory networks take on various forms (see [14] for a recent review). In some cases, representations are borrowed from the wider genetic network modeling community. For instance, Boolean networks, which model genetic networks as the networks of interacting logic functions, have been used to control robots [15]. In other cases, new models have been developed. This includes work on artificial genomes [16] and fractal gene regulatory networks [17].
Given the relative immaturity of the field, it is unclear which model is most suitable for doing a particular kind of computation. In practice, there are likely to be different tradeoffs between expressiveness, efficiency, compactness, and robustness. Since these models are often optimized using evolutionary algorithms, there is also a difficulty discriminating between the influence of expressiveness and evolvability. In this paper, we are interested in understanding the potential benefits of introducing topological modification to connectionist models. Hence, we make use of a relatively simple representation that closely resembles an ANN. Such models have previously been used to model genetic networks [5], and their optimization using evolutionary algorithms has also been well studied [18]. This approach also has the benefit that lessons learnt can be directly applied to the wider field of ANN research.
B. Task Decomposition Using Modular Neural Networks
We are interested in how topological self-modification can be used for achieving automatic task decomposition. As we have already remarked, the mechanism currently most used for achieving task decomposition in ANNs is the partitioning of a network into modules, each of which is used to solve a particular subtask. How this is done varies considerably [19], [20], although all approaches must have some mechanism for identifying modules and then determining which modules to use in a given situation. Modular ANNs are most often applied in domains, such as classification, where a priori knowledge of the task domain is available.
When a priori knowledge is available, it may be possible to identify the subtasks in advance and train modules accordingly. However, in the more general case, it is also necessary to determine the correct number of modules required to solve the task. This is arguably easier using neuroevolution approaches, where it is relatively easy to adapt the gross structure of the collective network. An interesting example of this is the use of coevolution [21], where a population of modules is evolved in parallel with a population of module combinators, allowing the algorithm to explore different combinations of modules in a relatively open-ended fashion. It should be noted, however, that coevolutionary algorithms are inherently complex, requiring significant expertise in order to avoid pathological conditions. Where a priori knowledge is available, choosing between modules may be as simple as matching input cases. Other approaches include asking modules to vote on their applicability for a particular subtask [1], or the construction of decision trees to determine transitions between modules [22].
Our approach differs from the existing modular ANNs in a number of ways. First, there is no need to explicitly identify the modules. In fact, there is no reason why modules should be completely segregated, since in many cases, it might be advantageous (in terms of size and training cost) for processing to be shared between subtasks. In this paper, this can be achieved using overlapping subnetworks. Second, transitions between subnetworks are handled by the subnetworks themselves, by turning ON or OFF other subnetworks. In effect, the system can transition between subnetworks at all points during execution, suggesting that this might lead to far more dynamic and context-sensitive solutions.
C. Topological Rewriting in Artificial Neural Networks
A number of authors have looked at applying topological rewriting processes during the learning phase of an ANN. This includes learning algorithms that add or remove nodes and links to or from the network. Examples include the early work by Ash [23] on dynamic node creation during backpropagation, and more recent work by Forti and Foresti [24] on dynamic self-organizing maps. Another prominent application of a self-modification process during learning is the work by Schmidhuber et al. [25], who looked at self-modification of the learning algorithm itself. There have also been a number of examples of self-modifying and self-organizing processes applied prior to the execution of an ANN, in the form of a developmental mapping. This includes the early work by Gruau [26] on rewriting grammars, and work by Astor and Adami [27] who made use of an artificial chemistry to determine the topology of an ANN. However, in all of these examples, the topology of the network remains fixed during the execution of the ANN.
There are very few examples of ANNs which use self-modifying processes during execution. GasNets [28] are perhaps the best known of these, an ANN model in which diffusive neurotransmitters are able to change the function of nodes within a network in a dynamic fashion. A more recent approach, termed artificial neural tissue [29], is also based around diffusive chemical gradients, but uses them to turn ON and OFF sparsely coded neural circuits in response to external cues. This is arguably the closest related work to our own, though prominent differences in our work include its relative simplicity (e.g., no developmental process), the use of comparatively small networks, and our emphasis on overlapping subnetworks and regulatory interactions between subnetworks.
D. Self-Modification in Artificial Biochemical Networks
The idea of self-modification has also previously been explored within computational models of biochemical networks. In [30], we considered an artificial biochemical network model in which a computational analog of a genetic network both expresses and modifies a computational analog of a metabolic network. This was effective at certain tasks, which benefited from decomposition; however, the complexity and appropriate parameterization of the system was a significant issue. Self-modification was also explored in [31] within the context of a model of mobile DNA applied to Boolean networks, in which the author found it to be beneficial in terms of access and stability of attractors. We have also explored a simpler model of chromatin remodeling in which subtasks are prespecified [8], and published initial results using the current approach [9] which have been significantly extended in this paper.
IV. THE ARTIFICIAL EPIGENETIC NETWORK

A. Architecture
In this section, we describe how an analog of epigenetic remodeling can be implemented within a connectionist architecture. We use a fairly conventional RNN as a baseline architecture, general enough to be considered an abstract model of both a biological neural network and a genetic network.
Formally, this RNN architecture can be defined by the tuple <N, L, In, Out>, where N is a set of nodes {n_0, ..., n_|N|}, with each node n_i = <a_i, I_i, W_i>, where a_i ∈ R is the activation level of the node, I_i ⊆ N is the set of inputs used by the node, and W_i is a set of weights, where 0 ≤ w_i ≤ 1 and |W_i| = |I_i|. L is the set of initial activation levels, where |L| = |N|. In ⊂ N is the set of nodes used as external inputs. Out ⊂ N is the set of nodes used as external outputs.
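As a reading aid, the tuple above can be transliterated directly into code. This is a minimal sketch in Python; the class and field names are ours rather than the paper's:

from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    activation: float        # a_i
    inputs: List[int]        # I_i, indices into the node list
    weights: List[float]     # W_i, with 0 <= w <= 1 and |W_i| = |I_i|

@dataclass
class RNN:
    nodes: List[Node]        # N
    initial: List[float]     # L, |L| = |N|
    in_ids: List[int]        # In, nodes used as external inputs
    out_ids: List[int]       # Out, nodes used as external outputs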
Chromatin modules can be considered to be context-dependent switches that add or remove network components based on the network's current activation state. This form of topological self-modification can be introduced to the RNN model by adding extra nodes that act as Boolean switches, each adding or removing a specified group of nodes from the network based on the activation levels of one or more nodes.
The resulting artificial epigenetic network (AEN) architecture can be defined by the tuple <N, S, L, In, Out>, where S is a set of switches {s_0, ..., s_|S|}, with each switch s_i = <a^s_i, I^s_i, W^s_i, C^s_i>, where a^s_i ∈ {0, 1} is the activation level of the switch, I^s_i ⊆ N is the set of inputs to the switch, W^s_i is the set of weights, where 0 ≤ w_i ≤ 1 and |W^s_i| = |I^s_i|, and C^s_i ⊆ N is the set of nodes controlled by the switch. The other variables are as defined for the RNN. Nodes N and switches S both use a sigmoid function. In the case of nodes, the activation level a_i is the output of the sigmoid function applied to the weighted sum of its input activations. For switches, a threshold of 0.5 is applied: if the output of the sigmoid function is less than this value, the activation level a^s_i of the switch is 0; otherwise, it is 1. If a^s_i = 1, then the switch has no effect upon the network. If a^s_i = 0, then the activation levels of its controlled nodes are set to 0, i.e., for all n_j ∈ C^s_i, a_j = 0. In effect, these nodes are removed from the network.
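The switch semantics just described can be sketched as a synchronous update step. This is a minimal illustration extending the Node/RNN sketch above; the Switch class and function names are ours, and we assume all activations are computed from the previous time step's values:

import math
from dataclasses import dataclass
from typing import List

@dataclass
class Switch:
    inputs: List[int]        # I^s_i, indices into the node list
    weights: List[float]     # W^s_i
    controlled: List[int]    # C^s_i, nodes silenced when the switch is OFF

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def step(net: "RNN", switches: List[Switch]) -> None:
    """One synchronous AEN update: evaluate switches, then nodes."""
    prev = [n.activation for n in net.nodes]
    off = set()
    for s in switches:
        z = sigmoid(sum(w * prev[i] for i, w in zip(s.inputs, s.weights)))
        if z < 0.5:                      # switch state 0: remove its subnetwork
            off.update(s.controlled)
    for i, n in enumerate(net.nodes):
        if i in off:
            n.activation = 0.0           # node is effectively absent this step
        else:
            n.activation = sigmoid(sum(w * prev[j]
                                       for j, w in zip(n.inputs, n.weights)))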
Note that the network uses a sparse encoding, i.e., zero-weighted edges are not included in the model. Compared with a fully connected network, this is a more appropriate model of the pattern of connectivity seen within most genetic networks.
B. Training
We train both the RNN and AEN models using an evolutionary algorithm. Traditional neural network training methods, such as backpropagation, do not readily generalize to nonstandard architectures. Evolutionary algorithms, by comparison, are relatively flexible in this respect. They are also less sensitive to local optima, and are able to optimize both the parameters of individual nodes and the topology of the network. Since biological evolution is the mechanism responsible for designing biological genetic networks, it is particularly fitting to use an evolutionary algorithm to train a connectionist architecture that is motivated by genetic networks.
The nondominated sorting genetic algorithm, version II (NSGA-II) [32] is used for the experiments reported in this paper. This is a multiobjective evolutionary algorithm, allowing solutions to be evaluated with respect to more than one objective. Algorithm 1 gives an outline of NSGA-II and describes how it is used to train AENs. The breeding loop at the heart of each generation is:

Algorithm 1 (fragment): breed child population
  repeat
    p1, p2 ← SELECT(P)            (rank-based selection)
    child ← RECOMBINE(p1, p2)
    child ← MUTATE(child)
  until |P'| = popsize
  P ← P'                          (replace with child population)
C. Encoding
Given the close relationship between biological evolution and genetic networks, there is value in considering how genetic networks are encoded in biological systems, since this is known to have a significant bearing on their evolvability. A notable aspect of this paper is that we use a low-level network encoding in which connections between networks nodes are defined indirectly. Hence, during evolution, the connections, I i , I s i , and C s i , are not represented by absolute node identifiers, but by locations within an indirect reference space. Several properties of the biological encoding of genetic networks motivate this approach.
First, gene-gene interactions are positionally independent, meaning that a gene retains its function irrespective of its position within a chromosome. This means that the positional changes due to biological recombination and mutation events preserve the existing structure of the genetic network. By comparison, when positionally sensitive encodings are used in evolutionary algorithms, ordering changes are generally disruptive, leading to child solutions with poor fitness [33], [34].
Second, and related to this, biological components recognize one another based upon their physicochemical properties. In effect, this physicochemical space is used as an indirect reference system in which genes, and other biochemical components, address one another. This observation has motivated a number of positionally independent encodings based upon the use of indirect reference spaces, including our own previous work on implicit context representation [34], and the template matching approach used in some computational models of genetic networks [35], [36].
Third, biological genetic networks display epistatic clustering [37], such that the genes that encode interacting gene products are often found located together within the genome [38]. This means that genetic pathways tend to be encoded in contiguous regions of DNA. From an evolutionary perspective, this leads to compartmentalization, which in turn promotes evolvability [39]. It also means that the winding and unwinding of chromatin modules tends to affect distinct subnetworks, arguably providing a less disruptive means of regulating biological function.
For simplicity, we use a 1-D reference space in which each node and switch has a location in the range [0, 1]. The inputs to a node or switch are defined as a continuous interval within this range. Furthermore, this interval overlaps with the location of the node or switch. Hence, nodes and switches located proximally within the reference space are encouraged to interact, generating an effect similar to epistatic clustering. In particular, this means that the epigenetic switches will tend to operate upon complete subnetworks. It should be noted that this results in biases in the network landscape, since some patterns of connectivity are more likely to occur than others. Nevertheless, our initial experiments showed that a lack of epistatic clustering results in poor performance, and that this outweighs any issues associated with network shape bias.
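One way to realize this interval-based connectivity is sketched below. This is a minimal illustration, with function and parameter names of our own choosing, assuming each node or switch is described by a centre position and an input-interval radius in [0, 1]:

def decode_inputs(positions, centre, radius):
    """Return indices of nodes whose reference-space position falls inside
    the interval [centre - radius, centre + radius], clipped to [0, 1]."""
    lo, hi = max(0.0, centre - radius), min(1.0, centre + radius)
    return [i for i, p in enumerate(positions) if lo <= p <= hi]

# Because each element's input interval overlaps its own location, elements
# placed close together in [0, 1] tend to interact; a switch whose controlled
# set is decoded the same way therefore captures a contiguous subnetwork,
# giving the epistatic-clustering effect described above.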
Before being executed, an encoded network is mapped into the directly connected form defined in Section IV-A. Consequently, the indirect encoding does not lead to a performance penalty during execution. There is an overhead associated with this mapping, but it is small compared with the execution time.
V. TASK DEFINITIONS
We hypothesize that topological self-modification will be useful in situations where task decomposition is not apparent a priori, where tasks are overlapping, and/or where there is a need to switch often between different behaviors. The control of dynamical systems is a class of problems that exhibits all of these characteristics. In particular, we focus on three interesting problems in dynamical systems control: 1) state-space targeting in a numerical dynamical system; 2) balancing a system of coupled inverted pendulums; and 3) controlling transfer orbits in a gravitational system. These are all challenging to solve, and each has qualitatively different dynamics. Although these kind of systems are traditionally controlled using analytical feedback methods [40], [41], our approach reflects the methodology of previous computational intelligence applications, such as [42] and [43], which do not require a priori knowledge about the state space of the system under control.
A. State-Space Targeting in a Numerical Dynamical System
This task involves controlling a trajectory so that it moves back and forth between two boundary points in Chirikov's standard map. This is a numerical dynamical system that models the behavior of a large class of conservative dynamical systems that have coexisting chaotic and ordered dynamics. While the exact definition of the task is to some extent arbitrary, it demonstrates the general concept of trajectory targeting in a complex state space.
Chirikov's standard map [44] is defined within the unit square by the following system of difference equations:

  y_{n+1} = (y_n + (k / 2π) sin(2π x_n)) mod 1
  x_{n+1} = (x_n + y_{n+1}) mod 1

For low values of k, the dynamics of the system are ordered, with initial points converging to cyclic orbits which remain bounded to small intervals on the y-axis. As k increases, islands of chaotic dynamics begin to appear. As k increases further, these begin to dominate the upper and lower regions of the map, with a band of ordered dynamics remaining in the central region. This central band prevents trajectories from passing freely between the upper and lower regions. The controller receives the state variables described in Table I at each time step, and exerts control by modulating the parameter k within the interval [1.0, 1.1]. This results in a small perturbation to the trajectory. In previous work using this map [9], [30], we have observed that different control interventions are required when moving through regions with different dynamical characteristics (e.g., chaotic, ordered, and mixed). In general, it is not obvious when these transitions in behavior should occur. This makes the problem challenging from a control perspective.
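A minimal simulation loop for this task, using the map equations above, is sketched below; the `controller` callable is a hypothetical stand-in for an evolved network:

import math

def standard_map_step(x, y, k):
    """One iterate of Chirikov's standard map on the unit square."""
    y_next = (y + k / (2.0 * math.pi) * math.sin(2.0 * math.pi * x)) % 1.0
    x_next = (x + y_next) % 1.0
    return x_next, y_next

# Control loop: the controller picks k in [1.0, 1.1] at every step.
# x, y = 0.5, 0.05
# for _ in range(1000):
#     k = 1.0 + 0.1 * controller(x, y)   # hypothetical controller output in [0, 1]
#     x, y = standard_map_step(x, y, k)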
In order to generate a fair estimate of a controller's ability to guide trajectories, the task is repeated ten times with different starting positions, randomly chosen within the regions shown in Fig. 2. Two objective values are then calculated for each controller: 1) the mean trajectory length when moving from the bottom to the top of the map and 2) the mean trajectory length when moving from the top to the bottom of the map. Controllers that are unable to traverse the map in either direction within a limit of 1000 time steps are assigned an arbitrarily large value of 1000 for the corresponding objective. The same penalty is applied if the trajectory in a particular direction moves beyond the y-axis bound of [0, 1].
B. Balancing a System of Coupled Inverted Pendulums
Balancing an inverted pendulum is a classic problem in control theory, and a proxy for various real world control problems, such as bipedal locomotion and missile control [45]. In this task, we consider a harder formulation of this problem that involves using multiple pendulums, mounting them on movable carts, and then coupling the carts together (Fig. 3). The aim is to move the carts in such a way that all the pendulums become upright, and then remain upright for a predetermined amount of time. This can be interpreted as a state-space targeting task, in which a trajectory must be guided from a stable equilibrium state (all pendulums pointing downward) to an unstable equilibrium state (all pendulums pointing upward), followed by a stabilization task that involves maintaining the trajectory at the unstable point.
In our formulation, the system has between one and five carts, arranged in a line. When the number of carts is greater than 1, they are connected to their nearest neighbor(s) with inelastic tethers. Each cart is controlled using an actuator with a differential input, allowing it to move toward or away from its neighbors based on the difference of its two inputs. Table II shows the physical parameters of the model. Each cart is controlled independently using the same evolved controller. The controller has access to a number of state variables; these are described in Table III, and their application to the cart can be seen in Fig. 4. The fitness of the controller is defined as an aggregate function, over all the carts, of the amount of time each pendulum spends in the upright position, scaled to [0, 1]. Hence, if a system contains three pendulums, of which one remains upright throughout the simulation with the other two remaining hanging from the carts, a fitness of 0.33 is assigned. If all pendulums are upright throughout the simulation, a fitness of 1 is assigned, and if all pendulums remain hanging throughout the simulation, a fitness of 0 is assigned.
C. Controlling Transfer Orbits in a Gravitational System
As a more concrete example of control in a conservative dynamical system, we consider a formulation of the N-body problem in which the aim is to guide a trajectory through a system of planetary bodies. Gravitational systems with more than two bodies exhibit the kind of mixed chaotic and ordered dynamics seen in Chirikov's standard map. This presents a significant challenge when controlling spacecraft, since efficient orbital transfers require traversal of these complex dynamical regimes. In this example, there are four bodies: a spacecraft and three planets. The aim is to guide the trajectory of the spacecraft so that it moves repeatedly between two of the three planets. It is required to do this in the least amount of time, and by using the least amount of fuel. It can do this either by taking a direct path (see Fig. 5), or by slingshotting around the third, more massive, planet. Either way, the spacecraft is under the influence of gravity from all three planets. To keep simulation times tractable, the positions of the planets remain fixed. See Tables V and VI for model parameters.
Fig. 5. System of gravitational bodies used in the orbital control task. The aim is to guide an orbit so that it transitions repeatedly between planets A and B, while also under gravitational influence from planet C.

The force exerted on the spacecraft is calculated using (2), where m is the mass of a body and q is a 3-D position vector:

  F_j = Σ_{k ≠ j} G m_j m_k (q_k − q_j) / |q_k − q_j|^3        (2)

where j specifies a body and k ranges over all other bodies; i.e., the force on body j is the sum of the gravitational forces exerted by every body k ≠ j. From this, the acceleration of the spacecraft due to the gravitational forces of the other planets can be calculated using Newton's second law of motion. The equations are simulated using leapfrog integration, which is well suited to the problems of orbital mechanics due to its symplectic nature and time reversibility [46], [47].
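The integration scheme can be sketched as a kick-drift-kick leapfrog step. This is a minimal illustration with names of our own choosing, assuming the planets are fixed and the controller's thrust enters as an extra acceleration:

import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accel(q, masses, positions):
    """Gravitational acceleration on the spacecraft at position q (3-vector)
    due to fixed planets with the given masses and positions."""
    a = np.zeros(3)
    for m, p in zip(masses, positions):
        r = p - q
        a += G * m * r / np.linalg.norm(r) ** 3
    return a

def leapfrog_step(q, v, masses, positions, dt, thrust=None):
    """One kick-drift-kick leapfrog step; `thrust` is the controller's
    commanded acceleration (3-vector), limited elsewhere to +/-25 m/s^2."""
    if thrust is None:
        thrust = np.zeros(3)
    v_half = v + 0.5 * dt * (accel(q, masses, positions) + thrust)
    q_new = q + dt * v_half
    v_new = v_half + 0.5 * dt * (accel(q_new, masses, positions) + thrust)
    return q_new, v_new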
The controller has access to the following state variables: 1) distance to target; 2) position of target; 3) spacecraft acceleration; and 4) spacecraft position. These are mapped to nine inputs (see Table IV). The target is determined by the spacecraft's current position, i.e., planet A if it is in orbit around planet B, and vice versa. The controller exerts control by adding thrust in one or more of the three dimensions, subject to an acceleration limit of ±25 m s^-2.
Two objective values are calculated for each controller: 1) the cumulative time taken to move between planets A and B over the course of the simulation and 2) the cumulative thrust used to maneuver the spacecraft. The spacecraft is assumed to be in a valid orbit when it is between 1×10^5 and 2×10^5 m from a planet's center of mass. If the spacecraft moves within 2×10^4 m of the planet's center of mass, it is assumed to have collided, and the controller is assigned a fitness value of 0 for planetary hops (corresponding to the lowest possible performance) and positive infinity for fuel used (corresponding to the worst possible performance). The same penalties are applied if it takes more than 8000 s to transition between the two planets. In initial experiments, it was found that evolution disproportionately favored solutions that minimize the fuel usage objective by remaining relatively static. To discourage this behavior, we introduced a third objective, the product of the first two objectives. This especially penalizes solutions that do not achieve at least one orbital transition.
VI. RESULTS AND ANALYSIS
For each of these tasks, the aim is to evolve a closed-loop controller that can guide the dynamics of the system in the specified manner. At each time step, the state of the controlled system is fed back to the controlling AEN or RNN by setting the activation levels of nodes in the input set (In); see Algorithm 2 for details. There is one input node for each of the sensory inputs given in the task definition. Control is then exerted by copying the activation levels of nodes in the output set (Out) to the governing parameters of the controlled system, scaling as appropriate.

Algorithm 2: Evaluating an AEN on a control task
  initialize control task
  a ← L                                      (initialize AEN state)
  repeat
    cout ← state variables from controlled system
    In ← SCALE(cout)                         (scale inputs to range)
    for i ∈ {0, ..., |S|} do                 (update switches)
      a^s_i ← SIGMOID(I^s_i · W^s_i)
      if a^s_i < 0.5 then                    (modify topology)
        for each j ∈ C^s_i do
          a_j ← 0
        end for
      end if
    end for
    for i ∈ {0, ..., |N|} do                 (update nodes)
      a_i ← SIGMOID(I_i · W_i)
    end for
    cin ← SCALE(Out)                         (scale outputs to range)
    modify controlled system according to cin
  until control task finished or timed-out
  fitness ← progress on control task objectives
A. Standard Map
Both AENs and RNNs were evolved to control trajectories within Chirikov's standard map. A population size (popsize) of 200 was used, with the evolutionary process allowed to run for 100 generations (maxgen). The mutation and crossover rates were 0.05 and 0.5, respectively. In the initial generation, networks were created with between 10 and 20 nodes. In addition, initial AENs were seeded with 3-5 switches. Solution lengths were otherwise free to vary during evolution.
Algorithm 2: Evaluating an AEN on a Control Task
 1: initialize control task
 2: initialize AEN state a
 3: repeat
 4:   cout ← state variables from controlled system
 5:   In ← SCALE(cout)
 6:   for each switch s_i do                {modify topology}
 7:     if a_{s_i} < 0.5 then
 8:       for each j ∈ C_{s_i} do
 9:         a_j ← 0
10:       end for
11:     end if
12:   end for
13:   for i ∈ {0, ..., |N|} do              {update nodes}
14:     a_i ← SIGMOID(I_i · W_i)
15:   end for
16:   cin ← SCALE(Out)                      {scale outputs to range}
17:   modify controlled system according to cin
18: until control task finished or timed out
19: fitness ← progress on control task objectives

Since EAs are nondeterministic algorithms, 50 independent runs were carried out for each problem instance to give a fair portrayal of expected performance. Fig. 6 shows the distributions of fitness for the best solutions from these 50 runs, for both AENs and RNNs. This shows that the AEN model leads to better solutions both on average (p = 2.04 × 10⁻⁴) and overall. The best performing AEN controller traverses the map in each direction in ∼99 steps, on average, which is ∼20 steps faster than the best RNN. Fig. 7 shows an example of a controlled trajectory. Fig. 8 shows the change in mean and maximum controller fitness over time for the AEN and RNN runs. It is evident that the fitness for RNN-based controllers begins to converge considerably earlier than for the AEN-based controllers. It is also notable that the evolution of the best AEN controller is much smoother, in terms of fitness changes, than for the best RNN controller. This may be an indication of better evolvability for the AEN model, allowing controllers to evolve through gradual frequent changes rather than large infrequent changes.
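The paper does not name the statistical test behind this p-value; a one-sided rank-sum comparison of the 50 best-of-run fitness values is one standard nonparametric choice, sketched here with placeholder data.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(1)
    aen_best = rng.normal(0.90, 0.04, 50)  # placeholder best-of-run fitnesses
    rnn_best = rng.normal(0.82, 0.07, 50)

    stat, p = mannwhitneyu(aen_best, rnn_best, alternative="greater")
    print(f"U = {stat:.1f}, p = {p:.3e}")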
Analysis of the dynamical behavior of evolved controllers gives some insight into these differences in performance. First, it is notable that all but one of the evolved AEN networks used their switches to alter the topology of the network during execution. This suggests that there is strong selective pressure toward using topological modification. Second, significant differences can be seen between the phase spaces of AEN and RNN controllers. For example, Fig. 9 shows the representative phase spaces reconstructed using time-delay embedding [48] from the outputs of an AEN and RNN controller, respectively. It can be seen that the AEN phase space is well conserved, apparently following attractors with well-defined topological characteristics as it navigates the map. The dynamical behavior of the RNN controller, by comparison, is relatively poorly conserved, indicative of a less stable attractor structure. This suggests that the topological modification may play a role in stabilizing different attractors as the controller navigates through the different dynamical regimes exhibited by the map.

[Fig. 7. Example of an AEN controlling a trajectory within Chirikov's standard map, traversing from a region at the bottom to a region at the top in 94 steps.]
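Time-delay embedding itself is a short computation; a minimal sketch follows, in which the embedding dimension and delay are illustrative, since the paper cites [48] for the method but does not give its parameters.

    import numpy as np

    def delay_embed(x, dim=3, tau=1):
        # Rows are delay vectors (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau});
        # plotting them traces out a reconstructed attractor.
        x = np.asarray(x, dtype=float)
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    # e.g., embed one output channel of a controller:
    # pts = delay_embed(output_series, dim=3, tau=5)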
To understand how topological self-modification is used by evolved AENs, the smallest working example of an AEN controller was first analyzed. Fig. 10 shows the time series of expression values for nodes and switches within this network. In this case, it is apparent that the single switch present in this network is not used simply to transition between subtasks. Rather, during much of the control period, it generates an oscillatory pattern of expression which turns ON and OFF one of the network's other nodes, thereby affecting the network's dynamics. This points to roles for topological modification beyond task decomposition.
However, it was also noticed that the switch elements of many evolved AENs could be used to manually switch between dynamical behaviors. For instance, by forcing a switch to remain ON, it is often the case that the trajectory will then remain within a certain dynamical region of the map. This suggests that the switches are used to move between different attractors during the course of controlling trajectories. It also points to an emergent role of these switches as a means for inferring task decomposition and allowing external control of transitions between subtasks.
It is also notable that the AEN-based controllers are significantly smaller than the RNNs, using an average of seven nodes, compared with an average of ten nodes in the RNN-based controllers. In addition to offering a small benefit in terms of efficiency, this may indicate that smaller overlapping subnetworks are used at different stages of the control task, rather than the single monolithic network used by an RNN.
B. Coupled Inverted Pendulums
A similar approach was used to evolve AEN- and RNN-based controllers for the coupled inverted pendulums problem. Recognizing the greater difficulty of this task, the generation limit was raised to 200, and initial networks were generated with between 15 and 25 nodes.
Fitness distributions for one-, three-, and five-cart variants of the problem are shown in Fig. 11. Unsurprisingly, control of a single cart is significantly easier than multiple carts, and effective control could be achieved using both architectures. Nevertheless, AEN-based controllers were able to balance the pendulum more consistently and, on average, significantly faster than the RNN-based controllers. For the multicart problems, RNN-based controllers were able to solve the three-cart variant only once out of 40 runs, and were unable to solve the five-cart variant. AEN-based controllers were also challenging to evolve; however, substantially more of these solutions were found for the three-cart problem, and several AEN-based controllers were also found for the five-cart variant. Fig. 12 shows the change in mean and maximum controller fitness over time for the AEN and RNN runs when solving the three-cart problem (although this is also representative of the one- and five-cart versions). It can be seen that the AEN-based controllers evolve much faster and converge toward a higher fitness value. A smoother pattern of evolution is again seen for the best solution, with the RNN exhibiting large step changes. This supports the hypothesis that the AENs are more evolvable.
Analysis of evolved controllers again suggests that AENs solve the problem in a different way to RNNs. First of all, this can be seen in their use of sensory inputs (see Table III), with all AEN solutions using inputs 0 (pendulum angle sensor) or 9 (angular velocity), and the majority of RNNs using inputs 2 (pendulum angle), 3 (pendulum angle), 7 (cart velocity), and 8 (angular velocity) to solve the task. Neither favored inputs 4, 5, or 6. This suggests that the two different architectures are biased toward exploring different input-output mappings, with the AENs presumably more able to express mappings that result in higher fitness. Differences in behavior can also be seen in the reconstructed phase spaces of AEN and RNN controllers (see Figs. 13 and 14).
Figs. 15 and 16, respectively, show the network topology and time series of an AEN evolved to solve the three-cart problem. This solution used two switches. One of these is activated when a pendulum is in the upward part of its swing, causing a transition between swinging and stabilizing behaviors. The other functions as an oscillator. Hence, we see both of the behaviors observed in the standard map controllers. Again, the first switch can be used to manually transition between the controller's behaviors; for instance, turning this ON during the stabilization phase causes the pendulum to transition to the swinging phase and remain there.

[Fig. 15. Typical minimum working example AEN evolved for the three-pendulum task. Only the nodes and switches required to generate the optimal behavior are shown. S0 and S1 are switches which, when active, deactivate the genes within their bounds. V0 refers to the angular velocity in the top quadrants. A0 refers to the angular position in the top-left quadrant. N0 is a node with no external input, but is required to create the oscillatory behavior. M0 and M1 are the motor controls for the cart.]
C. Controlling Transfer Orbits in a Gravitational System
Controlling transfer orbits was the hardest of the three problems, requiring a population of 500 and a generation limit of 200. Other parameters were the same as for the coupled inverted pendulums problem. Fig. 17 shows the Pareto front of solutions generated by 40 independent evolutionary runs for both AENs and RNNs, showing the tradeoffs between orbital repetitions and fuel used by each controller. The Pareto front for AEN-based controllers is significantly further to the bottom-right, indicating solutions were found that could travel further and with less fuel than the RNNs.
It was observed that the majority of evolved controllers used a gravitational slingshot around planet C to conserve fuel while transitioning between planets A and B. Fig. 18 shows the behavior of one of the best controllers, guiding the trajectory over the course of nine planetary transitions.
Topological self-modification was seen to occur in 36 of the 40 evolved AEN controllers, again suggesting a strong selective pressure toward making use of this behavior. In the remaining four controllers, the switches remained permanently OFF. In all instances where topological modification was applied, its effect included changing the expression state of one of the output nodes, in effect forcing an abrupt direction change within the trajectory. Fig. 20 shows a time series view of node and switch activations in one of the best evolved AEN controllers. It can be seen that the switch changes state on only three occasions during the period of control. This suggests that the mechanism is being used in a sensitive manner, inducing only small infrequent changes within the dynamics of the network. In this case, the switch is regulated by node A2, which indicates the spacecraft's position in the z plane. It becomes active when the trajectory reaches its extremal position, guiding the spacecraft back into a close orbit.

[Fig. 19. Structure of an AEN which was able to perform nine orbital hops in succession. Only the nodes and switches required to generate the behavior are shown. A0 refers to the distance to the target, and A1 and A2 refer to the absolute position of the spacecraft. N0 is a processing node. R0-R2 are the outputs of the network, which specify the acceleration of the spacecraft.]

[Fig. 20. Expression of each node and switch of the AEN shown in Fig. 19 over 50,000 time steps (sampled at one in every ten steps), with nonfunctional nodes and switches removed. It can be seen that the switch modifies the topology of the network dynamically.]
VII. CONCLUSION
In this paper, we investigated the potential benefits of introducing topological self-modification to RNN architectures, in the form of an AEN. The AEN approach is motivated by the process of chromatin modification within genetic regulatory networks, particularly the manner in which genes are able to regulate transcriptional access to other genes through chromatin modification. This, in effect, is analogous to adding or removing nodes to or from the network.
The AEN approach was applied to three different dynamical control tasks, using evolutionary algorithms to design the topology and parameters of the networks. A clear pattern seen in these experiments is that the AEN model allows the evolution of better solutions than a conventional RNN model, both in terms of average performance and ability to express more general solutions.
We propose several hypotheses for why this is the case. First, it seems likely that the topological change stabilizes attractors, making it easier for a controller to maintain a stable behavior. This reflects biological understanding of the role of chromatin modification in achieving cell specialization. Second, the topological change can lead to rapid behavioral change, presumably faster than that which can be achieved by following the natural dynamics of a network with fixed topology. This is likely to be beneficial in controllers that require rapid responses to their environment. Third, we see that the evolved AENs express behaviors, which are not seen in evolved RNNs, presumably because the topological change makes these behaviors easier to discover. These behaviors, in turn, appear particularly useful for the kind of control problems we have considered in this paper.
By considering how controllers evolved over time, it also became evident during our experiments that the AEN controllers evolve more smoothly than the conventional RNN architectures. This, in turn, may suggest that the AENs are more evolvable than the RNNs and, therefore, more suitable for use with evolutionary algorithms. It is known that evolvability and robustness are closely related system properties, so this may also be related to an AEN's ability to maintain different stable behaviors and robustly transition between them.
A further benefit of the approach was discovered during post hoc analysis of the evolved controllers, where it was noted that manually changing the activation state of the topological switches led the controllers to transition between different phases of the control task. This gave insights into the natural decomposition of the tasks and could potentially be used as a general mechanism for exploring and understanding the internal structure of problems. This is something we plan to explore further in future work.
Although this paper has focused on solving computational problems, it is feasible that computational models such as this could also be used to develop better understanding of epigenetic mechanisms in biology. Epigenetics is a relatively new field of study, and experimental limitations make it difficult to infer general principles from biological data alone. Computational models could help to fill this gap by allowing the exploration of systems-level properties, in much the same way that Boolean network models have helped to understand genetic networks. As a start, we have begun to look at more detailed computational models of epigenetic processes that model the spatiotemporal behavior of chromatin modifying protein complexes.
The results presented in this paper show the potential for using self-modifying processes within connectionist architectures. However, an AEN is only one way of achieving this, and in future work, we also plan to investigate a broader range of self-modifying connectionist models. These need not be limited to switching ON and OFF different parts of an existing network. They could also, for instance, be used to dynamically create new nodes or subnetworks using processes analogous to development or growth.
Microarray analysis supports a role for CCAAT/enhancer-binding protein-beta in brain injury.
CCAAT/enhancer-binding protein-beta (C/EBPbeta) is a transcription factor that plays an important role in regulating cell growth and differentiation. This protein plays a central role in lymphocyte and adipocyte differentiation and hepatic regeneration and in the control of inflammation and immunity in the liver and in cells of the myelomonocytic lineage. Our previous studies suggested that this protein could also have important functions in the brain. Therefore, we were interested in the identification of downstream targets of this transcription factor in cells of neural origin. We performed cDNA microarray analysis and found that a total of 48 genes were up-regulated in C/EBPbeta-overexpressing neuronal cells. Of the genes that displayed significant changes in expression, several were involved in inflammatory processes and brain injury. Northern blot analysis confirmed the up-regulation of ornithine decarboxylase, 24p3/LCN2, GRO1/KC, spermidine/spermine N(1)-acetyltransferase, xanthine dehydrogenase, histidine decarboxylase, decorin, and TM4SF1/L6. Using promoter-luciferase reporter transfection assays, we showed the ornithine decarboxylase and 24p3 genes to be biological downstream targets of C/EBPbeta in neuroblastoma cells. Moreover, the levels of C/EBPbeta protein were significantly induced after neuronal injury, which was accompanied by increased levels of cyclooxygenase-2 enzyme. This strongly supports the concept that C/EBPbeta may play an important role in brain injury.
CCAAT/enhancer-binding proteins (C/EBPs)¹ are a family of transcription factors that belong to the basic leucine zipper (bZIP) class and are known to couple extracellular signal transduction pathways to numerous cellular processes. C/EBPs have been implicated in the regulation of various aspects of cell differentiation and function in multiple tissues. Six different members of the family (C/EBPα to C/EBPζ) that give rise to different isoforms corresponding to full-length and amino-terminally truncated proteins have been isolated (1) and found to play a role in growth arrest and cell differentiation. Like other bZIP proteins such as c-Jun, c-Fos, and cAMP-responsive element-binding protein/activating transcription factor, C/EBPs have the ability to form homo- or heterodimers with C/EBPs and other bZIP proteins. They bind to a C/EBP-binding site, also termed the interleukin (IL)-6-responsive element, which fits the consensus sequence T(T/G)NNGNAA(T/G) (2) present in various promoters, resulting in activation or repression of transcription.
C/EBPβ, also known as NF-IL6, IL-6DBP, LAP (liver-enriched activating protein), CRP2, and nuclear factor-M, is expressed in numerous tissues, including liver, adipose tissue, ovary, lung, kidney, mammary gland, and hematopoietic tissues (reviewed in Ref. 3). Transcription of the intronless C/EBPβ gene results in a single 1.4-kb mRNA that can produce at least three isoforms: full-length 38-kDa LAP1, 35-kDa LAP2, and 20-kDa LIP (liver-enriched inhibitory protein) (1,4). Both LAP isoforms contain both the activation and bZIP domains, whereas only the latter is present in LIP. LIP can therefore act as a dominant-negative inhibitor of C/EBPβ function by forming nonfunctional heterodimers with the other members. The C/EBPβ protein has been linked to hepatocyte-specific gene regulation because it shows high expression in liver cells (5) and binds to several control elements of liver-specific genes (4). Additionally, it was suggested that this protein contributes to the regulation of the acute-phase response of the liver (6,7). Meanwhile, it has become clear that C/EBPβ also plays a role in other tissues. Results obtained from experiments in cell culture and with knockout mice demonstrated that C/EBPβ is very important in the process of lymphocyte (8-10) and adipocyte (11,12) differentiation. However, much less is known about the expression and function of C/EBPβ in mammalian brain. In the invertebrate Aplysia, it has been shown that C/EBP plays an essential role in the consolidation of stable long-term synaptic plasticity (13). Several more recent studies in mammals show that C/EBPβ mRNA is widely expressed in adult mouse brain (14) and that this protein could be implicated in long-term synaptic plasticity and memory consolidation in rat hippocampus (15). Also, our previous study (16) showed that the overexpression of C/EBPβ in a mouse neuroblastoma cell line induces neuronal differentiation, and Menard et al. (17) have shown that activation of C/EBPβ promotes the generation of neurons versus astrocytes in progenitor cells isolated from embryonic mouse cortex. Altogether, these results suggest that the transcription factor C/EBPβ could have important functions in the brain.
Obviously, knowledge of the network of genes altered by C/EBPβ in the brain is required to fully understand the physiological functions of C/EBPβ in this organ. To identify C/EBPβ downstream targets in this study, we compared the gene expression of two stable C/EBPβ-transfected clones (C22 and CE) obtained from a neuroblastoma cell line (18) with vector alone-transfected cells (CB) using cDNA microarrays. In 10 replicate experiments, the expression levels of 48 genes were up-regulated in both clones C22 and CE. Our experiments also revealed that C/EBPβ induction is highly correlated with the expression of many genes involved in inflammatory processes in the brain and that the expression of this protein is induced after a neuronal injury. Thus, our data suggest the possibility that C/EBPβ plays a role in the regulation of the processes that follow brain injury through hitherto unknown mechanisms.
Cell Culture and Transfections
Mouse TR cells (18) were propagated and maintained in Dulbecco's modified Eagle's medium (Invitrogen) containing 10% fetal bovine serum, 40 μg/ml gentamicin, and 2 mM glutamine at 37°C and 5% CO2. For the selection of stably transfected cells, 0.5 × 10⁶ cells were seeded into a 6-cm diameter tissue culture plate, incubated overnight, and transfected with 3 μg of pZeoSV2-C/EBPβ using the calcium phosphate precipitation technique as described previously (16). Twelve hours after transfection, the medium was replaced with regular growth medium, and the cells were incubated for 24 h, after which they were subcultured at 1:10 dilution with the addition of Zeocin (350 μg/ml). The growth medium was renewed every 3 days, and fresh Zeocin was added. Individual colonies were transferred into 24-well plates, expanded, and screened for C/EBPβ expression.
For transient transfection experiments (24p3 and ornithine decarboxylase (ODC) promoter constructs), semiconfluent cells were transfected by the calcium phosphate precipitation technique in 12-well plates. After 8 h of exposure to the Ca²⁺/DNA mixture, cells were washed with buffer A (20 mM Tris-HCl (pH 7.4), 1 mM Na2HPO4, 140 mM NaCl, and 5 mM KCl), incubated for another 24 h in complete medium, and harvested for determination of luciferase and β-galactosidase (to determine transfection efficiency) activities as described previously (19). Each transient transfection experiment was repeated at least three times in triplicate.
The mouse 24p3 promoter was PCR-amplified from mouse genomic DNA using high fidelity Platinum® Pfx DNA polymerase (Invitrogen). The primers used were 5′-TAT ATG GAT CCA ATG AAA GCA GCC ACA TCT AAG G-3′ (forward sequence) and 5′-TTC TTA GCT CGA GAG GAG GAA GAG CAG AGA GTG AGG-3′ (reverse sequence). They were designed according to the published sequence of the mouse 24p3 promoter (GenBank™/EBI accession number X81627). PCRs were performed according to the manufacturer's recommendations, and the amplification product was cloned and sequenced in both directions. The entire promoter fragment (−719 to +18, p24p3) was subcloned in the promoterless luciferase reporter vector pXP1. The construct pODClux1m (containing 1.2 kb of the rat ODC promoter) was a gift of Dr. A. P. Butler (Anderson Cancer Center, University of Texas) (20).
Preparation of the "Mouse 15K cDNA Array"
Generation of PCR Products-We obtained a 15,247-clone mouse cDNA set from NIA, National Institutes of Health.² The bacteria were subcultured, and then the clone inserts (sizes of 0.5-3 kb) were amplified by PCR using the Expand high fidelity system (Roche Applied Science) together with universal plasmid primers. The 5′-primer was C12 amino-modified. Additionally, different control PCR fragments were prepared from organisms foreign to mouse (Table I). Purification of the PCR fragments was performed in a 96-well format on a Biomek FX robot (Beckman Coulter, Fullerton, CA) based on a silica gel matrix system (NucleoSpin Robot 96-B extract kit, Macherey Nagel, Dueren, Germany). Subsequently, aliquots of the eluates were run on 192-well agarose gels (1%) to check the size, quality, and concentration of the PCR fragments. Around 500 PCRs had to be improved in a second run. For spotting, aliquots of the purified PCR fragments were transferred from 96- to 384-well plates, and 12× SSC buffer (1.8 M NaCl and 0.18 M sodium citrate (pH 7.0)) was added to a final 2-fold concentration.
Spotting and Immobilization-PCR fragments were spotted onto amino-silanized glass slides. Spotting was performed on an OmniGrid Spotter (GeneMachines, San Carlos, CA) using 32 Telechem split needles (SMP3). Controls were spotted in various dilutions (undiluted, 1:2, 1:4, and 1:8) into each subgrid; control 2 was placed into every subgrid, whereas controls 1 and 3-5 were distributed into distinct subgrids. Spotting was performed at 21°C with 40% relative humidity. For attachment of the DNA to the surface, slides were subsequently incubated at 50°C in a humid chamber for 3.5 h and then at 100°C for 10 min. Until usage, they were stored in a dry dark place.
Isolation of RNA, Labeling of cDNA Probes, and Hybridization to cDNA Arrays
Total RNA was extracted from clones CB, C22, and CE by homogenization in guanidine thiocyanate as described previously (19). Probes for microarray hybridization were generated using 25 μg of total RNA, which was first annealed with 4 μg of oligo(dT)12-18 primer (Invitrogen) at 70°C for 10 min. Afterward, cDNA synthesis was carried out at 42°C for 2 h in buffer containing 600 units of Superscript II RNase H⁻ reverse transcriptase (Invitrogen); 100 μM Cy3-dUTP or Cy5-dUTP (Amersham Biosciences); 10 mM dithiothreitol; 250 μM dTTP; and 625 μM each dATP, dCTP, and dGTP (Amersham Biosciences). The reaction was stopped by the addition of NaOH and EDTA (50 and 1 mM final concentrations, respectively) at 65°C for 10 min. Labeled cDNA was purified using GFX columns (Amersham Biosciences) following the manufacturer's instructions. Dye reversal experiments were performed as a control.
Hybridizations were carried out in a 40-μl hybridization mixture that included final concentrations of 50% formamide, 6× SSC, 0.5% SDS, 5× Denhardt's solution, 0.1 μg/μl salmon sperm DNA, and 0.25 μg/μl poly(dA). The mixture was applied to the array, which was covered with a 24 × 60-mm glass coverslip and placed in a sealed chamber to prevent evaporation. The microarrays were then incubated at 42°C for 16 h. Hybridized slides were washed by shaking once for 10 min in 0.1× SSC and 0.1% SDS and twice for 5 min in 0.1× SSC at room temperature. Washed slides were centrifuged and analyzed as described below.
Data Analysis
Microarray slides were scanned using a GenePix 4000B simultaneous dual wavelength scanner (Axon Instruments, Inc., Union City, CA), and the data obtained were analyzed using ChipSkipper analysis software. For data integration, a segmentation-based spot/background detection was performed for each spot separately. Data normalization was carried out in two steps on the logarithmic (base 2) intensities (Supplemental Fig. S2).
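The paper does not detail the two normalization steps. A common two-step scheme for two-channel arrays, background subtraction with log2 ratios followed by global median centering, is sketched below as an assumption rather than the authors' pipeline.

    import numpy as np

    def normalize_two_channel(cy5, cy3, bg5, bg3):
        # Step 1: background-subtract each channel and take per-spot log2 ratios
        # (clamped at 1.0 to avoid taking the log of nonpositive intensities).
        r = np.log2(np.maximum(cy5 - bg5, 1.0)) - np.log2(np.maximum(cy3 - bg3, 1.0))
        # Step 2: global median centering so the bulk of unchanged genes sit at 0.
        return r - np.median(r)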
Northern Blot Analysis
Northern blot analysis was performed as described previously (19). Briefly, 20 μg of total RNA were electrophoresed on a 2.2 M formaldehyde and 1% agarose gel in 1× MOPS buffer at 100 V for 3-4 h and transferred to nylon membranes (Biodyne, Pall Corp.). Labeled probes were generated using random primers and hybridized with the membranes for 20 h at 42°C (50% formamide, 3× SSC, and 0.2% SDS). Methylene blue staining of the membranes was used as a loading control. The cDNAs used as probes for 24p3/LCN2, GRO1/KC, spermidine/spermine N¹-acetyltransferase, xanthine dehydrogenase, histidine decarboxylase, decorin, and TM4SF1/L6 antigen were obtained by reverse transcription-PCR analysis using the Superscript one-step reverse transcription-PCR kit (Invitrogen) and the primers described in

² Available at lgsun.grc.nia.nih.gov/index.html.
Immunoblot Analysis
TR cells were lysed in phosphate-buffered saline containing 1% Nonidet P-40, 0.5% sodium deoxycholate, 0.1% SDS, and protease inhibitors and spun at 12,000 × g for 20 min, and the supernatants were removed and stored at −70°C until analysis. Equal amounts of protein (20 μg) were electrophoresed on 10% SDS-polyacrylamide gels and blotted onto nitrocellulose membranes. After transfer, immunoblotting with a polyclonal antibody to rat C/EBPβ was carried out as described previously (16). The polyclonal anti-cyclooxygenase (COX)-2 and monoclonal anti-α-tubulin antibodies were obtained from Santa Cruz Biotechnologies (Santa Cruz, CA) and Sigma, respectively. Values in text are the average of the quantification of at least three independent experiments corresponding to three different samples.
Analysis of the Intracellular Localization of C/EBPβ by Confocal Microscopy
CB, C22, and CE cells were plated on glass coverslips in 24-well cell culture plates and grown in regular medium for 24 h. The cells were then washed, fixed for 10 min with methanol at −20°C, and permeabilized with 0.1% Triton X-100 for 30 min at 37°C. After a 1-h incubation with the primary antibody (16), cells were washed with phosphate-buffered saline and incubated with a fluorescein-labeled secondary antibody for 45 min at 37°C. Subcellular localization was determined using a TCS SP2 laser-scanning spectral confocal microscope (Leica Microsystems). The images were obtained using a series of 0.5-μm (depth) spaced cell fluorescent slices (z axis).
Scratch-Wound Model
Confluent monolayers of clones CB, C22, and CE were wounded by scratching with a plastic pipette (yellow) tip along lines at right angles to each other. Cultures were then rinsed with buffer A, maintained in regular medium for the indicated times, and processed for Western blot and immunocytochemical analyses as described above.
RESULTS
To investigate the cellular action of the C/EBPβ protein in neuronal cells, we stably introduced a pZeoSV2 vector encoding C/EBPβ into TR cells (see "Experimental Procedures"), and different Zeocin-resistant clones were tested for C/EBPβ expression using an antibody specific for C/EBPβ (16). In subsequent experiments, two independent clones (C22 and CE) were used, with empty pZeoSV2 vector-transfected TR cells (CB) used as control cells. The immunoblot in Fig. 1A shows that the content of the C/EBPβ protein was very low in parental TR cells and that its levels were markedly increased in clones C22 and CE. We next performed immunofluorescence analysis to study the intracellular localization of the C/EBPβ protein in the different clones. The confocal images in Fig. 1B show that, in control cells, there was almost no signal, whereas clones C22 and CE presented a strong immunofluorescence, which was localized mainly in the nucleus and distributed in multiple bright foci.
To analyze global gene expression in C/EBPβ-overexpressing neuronal cells, cDNA microarray hybridizations were performed using an NIA 15,247-clone mouse cDNA set (22). The cDNA probes derived from clones CB and C22 or clone CE were fluorescently labeled with Cy3-dUTP (green) or Cy5-dUTP (red), respectively. These probes were applied simultaneously onto the microarray, and the two fluorescent images were scanned with a fluorescence laser-scanning device (Supplemental Fig. S1). Red and green fluorescent signals indicated genes whose expression levels were relatively higher in cells overexpressing C/EBPβ and the backbone vector, respectively. The differential expression of each gene was calculated from the relative intensity of the Cy5 versus Cy3 fluorescent signal. Ten independent experiments were conducted comparing vector-transfected and C/EBPβ-overexpressing cells. Fig. 2 shows one representative plot of the differential expression of the 15,247 genes in one of the 10 experiments. Overall, the expression of most genes was not altered by C/EBPβ. Other plots are shown in Supplemental Fig. S2.
A total of 48 genes in clones C22 and CE had a differential expression value compared with vector-transfected cells in all the experiments and are considered up-regulated by C/EBPβ (Table III and Supplemental Table). The group of genes up-regulated by C/EBPβ was further analyzed on the basis of functional similarity using the DRAGON Database.³ As shown in Table III, these genes can be clustered into several groups, such as genes involved in the immune/inflammatory response, metabolism, signal transduction, extracellular and membrane proteins, intracellular transport and cytoskeleton, protein degradation, and others. Some of these cDNA sequences also correspond to expressed sequence tags for which the full-length sequence is not available in public domain databases and to unknown genes. It is of note that many of the up-regulated genes encode proteins involved in inflammatory and brain injury processes.
In view of the data commented on above and to verify the results of the microarray experiments, we performed Northern blot analysis on RNA isolated from CB, C22, and CE cells using several cDNA probes encoding genes involved in inflammation and brain injury that were shown to be up-regulated by C/EBPβ in Table III. As shown in Fig. 3, the mRNA levels of ODC, 24p3/LCN2, GRO1/KC, xanthine dehydrogenase, histidine decarboxylase, decorin, and TM4SF1/L6 antigen showed a significant induction in C22 and CE cells compared with control CB cells. The induced expression of spermidine/spermine N¹-acetyltransferase was observed only in C22 cells.
The strongest C/EBPβ-induced increase in expression was seen for the 24p3 gene, which belongs to the lipocalin family of proteins (23). Lipocalins are small secreted proteins that play a role in diverse biological processes through binding of small hydrophobic molecules, interaction with cell-surface receptors, and formation of macromolecular complexes (24). The 24p3 protein has been implicated in processes such as programmed cell death and inflammatory response (25,26). The ODC gene also shows a significant increase in expression (27). ODC is one of the key enzymes in the polyamine biosynthetic pathway, catalyzing the formation of putrescine from ornithine. Interestingly, there are numerous data showing an involvement of ODC in brain injury (28-30), and neuropathological and clinical studies have demonstrated that the ODC/polyamine system is heavily involved in various human brain diseases (31,32). Other genes also involved in inflammatory processes and brain injury were induced by C/EBPβ, including the chemokine Gro1 (33-35); regulatory enzymes such as histidine decarboxylase (36-39) and xanthine dehydrogenase (40-44); an enzyme implicated in polyamine metabolism, spermidine/spermine N¹-acetyltransferase (45); as well as two genes (decorin and Tm4sf1/L6 antigen) involved in structural integrity (46-49) (Table III).

³ Available at pevsnerlab.kennedykrieger.org/dragon.htm.
To test whether the C/EBPβ protein regulates transcription of the 24p3 and ODC genes, we next performed transient transfection experiments with the reporter plasmids pODClux1m (20) and p24p3luc, containing 1.2 and 0.8 kb, respectively, of the promoter regions of the ODC and 24p3 genes. As shown in Figs. 4A and 5A, the reporter activities of both constructs were significantly enhanced (3.4-fold for the 24p3 gene and 1.6-fold for the ODC gene) by cotransfection with an expression vector for C/EBPβ. We next tested 24p3 and ODC promoter activities in CB (control), C22, and CE cells (Figs. 4B and 5B). In these experiments, we also noted a very strong increase in luciferase activity in cells stably expressing the C/EBPβ protein: 25-fold (C22) and 13-fold (CE) increases with the 24p3 promoter construct and a 2-fold (C22 and CE) increase with the pODClux1m construct. These data are in agreement with the higher levels of 24p3 and ODC mRNAs observed in these clones.
We next studied the possible involvement of C/EBPβ in the process of neuronal injury. To this end, we used an in vitro model of neuronal reaction to injury. The model we utilized here was first introduced by Yu et al. (50) and has been widely used since to analyze neural reactions to lesions (51,52). Experimental lesions were made by scratching of clones CB, C22, and CE, and the expression of C/EBPβ was explored at different times after scratching by Western blot and immunofluorescence analyses. As shown in Fig. 6A, the levels of C/EBPβ protein in clones CB, C22, and CE started to increase between 1 and 3 h after scratching and remained elevated 6 h later in clones C22 and CE. The increase in the levels of C/EBPβ protein after injury was more pronounced in clones overexpressing C/EBPβ. At 3 h, the levels of C/EBPβ in clones CB, C22, and CE increased by 1.3-, 2.0-, and 2.4-fold, respectively, compared with basal (0 h) values. These results were further confirmed by immunocytochemical analysis. The confocal images in Fig. 6B show that, in clone C22, the C/EBPβ signal was significantly induced 1 and 3 h after lesion.
Finally, we analyzed whether the increased expression of C/EBPβ was associated with induction of the expression of COX-2, an enzyme that plays a prominent role in many forms of inflammation. Western blot analysis using a COX-2-specific antibody showed that, under basal conditions, the COX-2 protein could be detected in control CB cells (Fig. 7A), and its content was significantly higher in clones C22 and CE. Three hours after injury, a transient increase could be observed in CB cells (1.3-fold) (Fig. 7B), which was more evident in clones C22 (1.8-fold) and CE (1.5-fold) (3 and 6 h after injury, respectively) and which coincided with the peak in C/EBPβ expression observed in these clones after lesion.

DISCUSSION

Although the effects of C/EBPβ have been investigated in detail in cell types such as adipocytes, hepatocytes, and lymphocytes (53-57), very few target genes of this transcription factor have been identified in the central nervous system. In this study, we report the expression profiling of the transcriptional program controlled by C/EBPβ in neuroblastoma cells using cDNA microarrays. Our study identified several genes whose expression was significantly up-regulated by the overexpression of C/EBPβ in neuronal cells. A particularly interesting group among the genes up-regulated by C/EBPβ is one that encodes proteins involved in inflammation and brain injury. Northern analysis confirmed the changes in ornithine decarboxylase, 24p3/LCN2, GRO1/KC, spermidine/spermine N¹-acetyltransferase, xanthine dehydrogenase, histidine decarboxylase, decorin, and TM4SF1/L6 antigen. In addition, we have shown that the levels of C/EBPβ were increased after a neuronal lesion, together with the levels of COX-2 enzyme. These data suggest that C/EBPβ may contribute to the regulation of brain injury.
It is known that C/EBPβ plays an important role in regulating several aspects of inflammation and immunity in the liver and in cells of the myelomonocytic lineage (58). Indeed, C/EBPβ was originally identified because of its inducibility by IL-6 in human hepatoma cells. It has been subsequently determined by many independent studies that C/EBPβ and C/EBPδ are strongly up-regulated at the transcriptional level by inflammatory stimuli such as turpentine oil and bacterial lipopolysaccharide and by recombinant cytokines such as IL-6, IL-1, and tumor necrosis factor-α (reviewed in Ref. 59). Inflammation is an important part of the pathophysiology of traumatic brain injury and chronic neurodegenerative disorders. Evidence now suggests that syndromes such as multiple sclerosis, Alzheimer's and Parkinson's diseases, and traumatic brain injury have important inflammatory and immune components and may be amenable to treatment by anti-inflammatory and immunotherapeutic approaches (reviewed in Ref. 60). The key players in these processes are the numerous immune mediators released within minutes of the primary injury. Inflammatory cytokines such as tumor necrosis factor-α, IL-1, and IL-6 appear to be robustly activated and secreted as early as 1 h after ischemic and traumatic insults (61-63).
The results presented here show a very strong induction of 24p3, a member of the lipocalin family of small secreted polypeptides that is present in several tissues, including the brain (26), and is thought to be involved in inflammatory processes. Lipocalins have remarkably diverse functions, including retinal transport, olfaction, pheromone transport, and prostaglandin synthesis. Recently, several reports have implicated members of this family in inflammatory processes, and it has been shown that the lipocalin proteins bind various ligands important in homeostasis and inflammation. For example, plasma levels of 24p3 are elevated during the acute-phase response (26), which involves a massive expansion of neutrophils. Also, it has been shown that the human homolog of 24p3 (neutrophil gelatinase-associated lipocalin, NGAL) (24), which is present in neutrophilic granules, can play a role in the neutrophilic apoptosis that follows an inflammatory response.
Another gene shown in this study to be strongly up-regulated by C/EBPβ is ODC. The ODC and spermidine/spermine N¹-acetyltransferase genes are also involved in inflammatory processes and several disorders of the nervous system. ODC is one of the key enzymes in the polyamine biosynthetic pathway, catalyzing the formation of putrescine (the precursor of the natural polyamines spermidine and spermine) from ornithine (27). On the other hand, both polyamines are converted back to putrescine by the action of spermidine/spermine N¹-acetyltransferase. The ODC protein is localized in many different brain areas, including the hypothalamus, some cortical areas, hippocampus, cerebellar cortex, some cranial nuclei, and nucleus ruber (29), and central nervous system injury results in an increase in the activities of both ODC and spermidine/spermine N¹-acetyltransferase (64). Natural polyamines are of considerable importance for the developing and mature nervous systems, and it has also become clear that the polyamine system is involved in various brain pathological events such as traumatic brain injury, stroke, Alzheimer's disease, and others (reviewed in Ref. 65).
In this study, we have also shown that C/EBPβ could regulate the expression of the ODC and 24p3 gene promoters. We chose the ODC and 24p3 genes because their expression was strongly induced in C/EBPβ transfectants and because sequence analysis of their promoter regions revealed the presence of several potential C/EBP-binding sites. The demonstration that ODC and 24p3 promoter activities were increased by C/EBPβ suggests that both genes could be direct downstream targets of this transcription factor in neuroblastoma cells.
In addition to the ODC, 24p3, and spermidine/spermine N¹-acetyltransferase genes, other genes involved in inflammatory processes in the brain identified in this screen include histidine decarboxylase, xanthine dehydrogenase, Gro1, decorin, and
Tm4sf1/L6 antigen. Histidine decarboxylase is an enzyme that converts L-histidine to the neurotransmitter histamine, and its expression is increased in the hypothalamus after intracerebroventricular administration of lipopolysaccharide (38). Xanthine dehydrogenase plays a role in the inflammatory response of the human mammary epithelial cell (42), and it has also been involved in the regulation of reactive oxygen species production after ischemia (66). GRO1 is a protein that belongs to the CXC subfamily of chemokines, which are involved in the recruitment of leukocytes to sites of inflammation. Also, some reports suggest that GRO1 may play important roles in the brain. GRO1 induces tau phosphorylation in mouse primary cortical neurons, and an increase in its expression has been detected in post-mortem human brain tissues, suggesting a possible involvement of this gene in neurodegenerative disorders (35,67). Decorin is a small proteoglycan present in the extracellular matrix, and it is thought to behave as a growth inhibitory protein in different cell types and to regulate the NF-κB pathway. A role of decorin in inflammation of the nervous system and in Alzheimer's disease has also been reported (46,68), and its levels are significantly up-regulated after brain injury (47). Finally, TM4SF1/L6 antigen is a protein that is structurally related to the tetraspanin family of proteins, which are found associated with different integrins (69). Although the cellular role of TM4SF1 is not well established, there are several reports implicating some members of the tetraspanin protein family in neurite extension and outgrowth (70). Northern blot analysis of all these genes confirmed that they are up-regulated by C/EBPβ in neuroblastoma cells. This observation, and the fact that the promoters of histidine decarboxylase, xanthine dehydrogenase, GRO1, and decorin contain putative binding sites for C/EBPβ, strongly suggests that they are also downstream targets of C/EBPβ; therefore, this protein could represent an important new pathway in the regulation of their expression in neuronal cells. Altogether, given the important role of ODC, 24p3, GRO1, xanthine dehydrogenase, and decorin in brain inflammatory processes, the observed gene expression changes in TR neuroblastoma cells differentially expressing C/EBPβ are consistent with the hypothesis that C/EBPβ plays an important role in the transcriptional control of brain injury processes. Consistent with the notion that C/EBPβ could be involved in inflammatory processes in the brain, it has been reported recently that lipopolysaccharide, IL-1, and tumor necrosis factor-α induce the mRNA levels of C/EBPβ in mouse primary astrocytes (71).

[FIG. 4. Activation of the p24p3 construct by C/EBPβ. A, control CB cells were transiently transfected with a luciferase reporter plasmid containing the 24p3 promoter (fragment −719 to +18) and an expression vector for C/EBPβ, and luciferase activity was determined 24 h after transfection. Data are expressed relative to the basal value obtained with only the reporter construct and represent means ± S.D. of luciferase activity determined in triplicate in at least three independent experiments. *, p ≤ 0.05. B, CB, C22, and CE cells were transiently transfected with a luciferase reporter plasmid containing the 24p3 promoter (fragment −719 to +18), and luciferase activity was determined 24 h after transfection. Data are expressed relative to the value of clone CB and represent means ± S.D. of luciferase activity determined in triplicate in at least three independent experiments. **, p ≤ 0.01.]

[FIG. 5. Activation of the pODClux1m construct by C/EBPβ. A, control CB cells were transiently transfected with a luciferase reporter plasmid containing the rat ODC promoter (fragment −1156 to +13) and an expression vector for C/EBPβ, and luciferase activity was determined 24 h after transfection. Data are expressed relative to the basal value obtained with only the reporter construct and represent means ± S.D. of luciferase activity determined in triplicate in at least three independent experiments. **, p ≤ 0.01. B, CB, C22, and CE cells were transiently transfected with a luciferase reporter plasmid containing the rat ODC promoter (fragment −1156 to +13), and luciferase activity was determined 24 h after transfection. Data are expressed relative to the value of clone CB and represent means ± S.D. of luciferase activity determined in triplicate in at least three independent experiments. *, p ≤ 0.05.]
In this work, we have also shown an injury-induced activation of C/EBPβ expression in clones overexpressing this protein in an in vitro scratch-wound model. Scratching the monolayer of C22 cells resulted in a significant increase in C/EBPβ protein levels throughout the monolayer. Neuronal cells in confluent monolayers are coupled to each other, and junctional connections render quick intercellular communication possible throughout large culture areas. Therefore, injury to a confluent monolayer of cells was expected to affect not only the directly wounded cells but also larger populations in the dish. Signals spreading through the intercellular junctions or via released factors (72) could have resulted in the observed effect after scratching of C22 cells.
Finally, we also observed up-regulation of COX-2 in clones C22 and CE (compared with vector-transfected cells) as well as a transient induction coincident with the increase in C/EBPβ protein levels after wound injury. COX-2 is the rate-limiting enzyme for the conversion of arachidonic acid to prostaglandins and a key therapeutic target for the treatment of brain injury. Inflammatory processes associated with the increased expression of COX-2 and elevated levels of prostaglandin E2 have been implicated in the cascade of events leading to neurodegeneration in a variety of pathological settings (73-75). COX converts arachidonic acid to prostaglandin H2, the precursor of prostaglandin E2 and several other prostanoids, and exists in eukaryotic cells in two main isoforms: COX-1, which is constitutively expressed in many cell types, and COX-2, which is normally not present in most cells, but whose expression can readily be induced in inflamed tissues (76). In this regard, it is noteworthy that the expression of the COX-2 promoter is induced by C/EBPβ and that this protein plays an obligatory role in COX-2 expression in macrophages (77). Although both isoforms synthesize prostaglandin H2, COX-1 is primarily involved in the production of prostanoids relevant to physiological processes, whereas COX-2 is mainly responsible for the production of prostanoids linked to pathological events (76). Therefore, the data presented here showing an increase in COX-2 protein levels in C/EBPβ-overexpressing cells and after neuronal injury again implicate C/EBPβ in inflammatory processes in neuronal cells.

[FIG. 6. Expression of the C/EBPβ protein in CB, C22, and CE cells after wound-induced injury. A, Western blot analysis. Cells were lysed 1, 3, 6, 12, and 24 h after scratching, and cellular proteins were immunoblotted with anti-C/EBPβ or anti-α-tubulin antibody. B, confocal imaging of the localization of C/EBPβ in clone C22. Cells were grown on glass coverslips, fixed with cold methanol, and processed for immunofluorescence using the same anti-C/EBPβ antibody as in A.]

[FIG. 7. Expression of the COX-2 protein in CB, C22, and CE cells growing under normal conditions (A) or after wound-induced injury (B). Cells were lysed 1, 3, 6, 12, and 24 h after scratching, and cellular proteins were immunoblotted with anti-COX-2 or anti-α-tubulin antibody.]
Collectively, our data provide evidence for C/EBPβ up-regulation in brain injury and support a role for C/EBPβ in the transcriptional regulation of inflammatory processes in the brain. The results of this study suggest that regulation of C/EBPβ may be a valuable target for the development of new therapies for brain disorders involving inflammatory processes. Clearly, further studies are required to verify this hypothesis.
A systematic review of pharmacologic treatment efficacy for depression in older patients with cancer
Background Older adults ≥65 years of age represent the majority of new cancer diagnoses and are vulnerable to developing depression-like symptoms. Evaluation and management of depression in older cancer patients is underappreciated despite its high prevalence and impact on health-related quality of life. Although antidepressants are the primary pharmacologics used to treat depressive-like symptoms, the efficacy and overall benefit(s) are not well-characterized in older adult patients with cancer. The objective of this investigation was to review what is known about the efficacy of pharmacologic treatment for older adults with depression and cancer. Methods PubMed (Medline) and EMBASE (Elsevier) databases were analyzed for relevant literature in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results 1,919 unique studies were identified for title and abstract screening. Forty-eight publications were retrieved for full review. None of the identified studies evaluated the potential for benefit after pharmacological treatment among older adults with cancer and depression. Twenty-seven publications met all study criteria except for an analysis focused on older patients. Conclusion We discovered a universal absence of literature with a relevance to pharmacologic antidepressant treatment effects in older adult patients with cancer. This included a lack of evaluation in patients with brain tumors who have an unusually high predilection for developing depression. Our findings suggest that new research is critically needed for understanding optimal clinical management strategies in older adults with cancer and depression who are treated with antidepressants.
Introduction
An estimated 19.3 million new global cancer diagnoses were made in 2020, and roughly 10 million individuals succumbed to their malignancy in the same year (Sung et al., 2021). Age is a well-known risk factor that contributes to cancer incidence, with 60% of new diagnoses in the United States representing older adult patients ≥65 years of age (Cinar and Tas, 2015; Berger et al., 2006). The U.S. Census Bureau estimates that the elderly population will double to nearly 70 million by the year 2030 (Yancik, 2005), paralleling a rise in the number of older adults who are diagnosed with cancer (Smith et al., 2009). A cancer diagnosis is often associated with a tremendous impact on quality of life (Nayak et al., 2017; Disease, 2016), with 40% of adults ≥70 years of age experiencing a functional decline after a new diagnosis (Presley et al., 2019). In data released by the Substance Abuse and Mental Health Services Administration in 2020, an estimated 8.4% of US adults had at least one major depressive episode (Substance Abuse and Mental Health Services Administration, 2021). In patients with cancer, one meta-analysis estimated that 24% of cohort patients showed clinical signs of depression (Massie, 2004), though others suggest this figure may be higher, as depressive-like symptoms often go underrecognized (McDonald et al., 1999; Fisch, 2004; Otto-Meyer et al., 2019). The most common symptoms of older adult patients with advanced cancer include pain, anorexia, fatigue, insomnia, anxiety, delirium, and depression (Parpa et al., 2015). The burden of disease combined with the side effects of cancer treatments can be severely debilitating. Patients often face a decreased health-related quality of life, relationship challenges, difficulty sleeping, and other burdens (Alexopoulos, 2005). In elderly patients without cancer, depression is managed with pharmacologic and non-pharmacologic approaches (e.g., psychotherapy, transcranial magnetic stimulation (TMS), electroconvulsive therapy (ECT), and alternative treatments). After conservative non-medical management, selective serotonin reuptake inhibitors (SSRIs) are the first line of antidepressant treatment because of their relative tolerability and high safety profiles (Gautam et al., 2017).
Despite the high prevalence of depressive-like symptoms among cancer patients, the efficacy and characterization of antidepressant treatment in older adults with cancer and clinically diagnosed depression have not been comprehensively investigated (Findley et al., 2012; Rodin et al., 2007). To care for elderly cancer patients optimally, while also attempting to anticipate their medical needs and challenges, it is essential to better understand this age group. Here, we systematically reviewed the efficacy of antidepressant agents in elderly patients ≥65 years of age who are diagnosed with depression and cancer.
Literature search
In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Page et al., 2021), a systematic review of treatments for elderly patients with depression and cancer was performed. In June 2021, PubMed (MEDLINE) and EMBASE (Elsevier) were searched for relevant articles. The search was conducted using specific MeSH keywords, combining variants of 'antidepressant agents', 'depression', 'cancer', and 'aged'. The full search strategy is provided in the Appendix.
Inclusion criteria
To meet the inclusion criteria, an article's study objective must have included a determination of antidepressant efficacy among patients with cancer ≥65 years of age with an associated diagnosis of a clinical depression disorder, or at minimum, a subgroup analysis specifically analyzing this age group. Clinical depression was defined as Major Depressive Disorder, Minor Depressive Disorder, or another depressive disorder (Appendix). Antidepressants were recognized for study inclusion if the pharmacologic treatment was tested for its efficacy in modifying depressive-like symptoms in the specified population. Articles were excluded if: (i) subject age was not distinguished as a stratified variable during analysis, (ii) patients received non-pharmacologic therapy to treat depression, (iii) publications were not written in English, and/or (iv) studies were not peer-reviewed full-length controlled trials or cohort studies. Articles were initially screened for title and abstract, followed by final inclusion after a full-text review. Randomized and nonrandomized controlled trials and cohort studies were intended for inclusion. Manuscripts other than full-length peer-reviewed articles, including abstracts, posters, dissertations, and editorials, were excluded. Articles were not limited by publication date. Studies that met the inclusion and exclusion criteria were preferred if using standardized depression labels that included Diagnostic and Statistical Manual of Mental Disorders (DSM) classification or International Classification of Diseases (ICD) criteria. For depressive symptoms, studies that included validated scales were preferred, due to variability of depression diagnosis within populations. Duplicate records were removed. Two reviewers independently screened all of the articles. Disagreements were reconciled through discussion.
Data collection process
Data extraction was conducted on prespecified criteria. The primary outcome was a change in depression symptomology as measured by a validated scale at the time of final follow-up. This was measured through a mean change in depression scores on instruments including the Hamilton Rating Scale for Depression (HAM-D) (Hamilton, 1960), Hospital Anxiety and Depression Scale-Depression (HADS-D) (Zigmond and Snaith, 1983), Montgomery-Åsberg Depression Rating Scale (MADRS) (Montgomery and Asberg, 1979), the Beck Depression Inventory (BDI) (Beck et al., 1961), and the Patient Health Questionnaire (PHQ-9) (Kroenke et al., 2001), collected as reported within publications. Secondary outcomes included emotional distress and quality of life. For quantification of distress, validated measurement tools included the Hospital Anxiety and Depression Scale (HADS-A), Distress Thermometer (DT) (practice guidelines, 1999), and Mini-Mental Adjustment to Cancer (MINI-MAC) (Johansson et al., 2011) scales. Quality of life scales included the European Organization for Research and Treatment of Cancer (EORTC QLQ-C30) (Aaronson et al., 1993), the Functional Assessment of Cancer Therapy (FACT) (Weitzner et al., 1995), and the 36-item Short-Form Health Survey (SF-36) (McHorney et al., 1993). Response to treatment was defined as a decrease of at least 50% in depression scores by trial endpoint (Nierenberg and DeCecco, 2001).
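As a minimal sketch of how this response criterion can be operationalized (the column names and scores below are hypothetical, not drawn from any included study):

# Flag responders using the definition above: a >= 50% drop in
# depression score from baseline to final follow-up.
scores <- data.frame(
  patient  = 1:4,
  baseline = c(24, 18, 30, 21),   # hypothetical HAM-D scores
  final    = c(10, 15, 12, 20)
)
scores$pct_change <- (scores$baseline - scores$final) / scores$baseline
scores$responder  <- scores$pct_change >= 0.5
scores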
Quality assessment
Risk of bias was evaluated by EER and A.M. using the revised Cochrane risk-of-bias tool for randomized trials (RoB 2) (Sterne et al., 2019) and the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool (Sterne et al., 2016). Bias was evaluated with respect to confounding, selection, classification of interventions, randomization, deviations from intended interventions, missing outcome data, selection of the reported result, and overall risk of bias. Any disagreement between the authors was reconciled through discussion and consultation with an additional reviewer.
Statistical analysis
In the event that two or more Randomized Controlled Trials (RCTs) demonstrated high similarity in population, intervention, and outcome measures, a meta-analysis was to be performed using the Hartung-Knapp-Sidik-Jonkman method for random-effects models. All statistical analysis was performed using R version 4.0.2 (R Foundation for Statistical Computing, Vienna, Austria) with the meta and metafor packages. Odds ratios (ORs) calculated from event rates were to be used to pool dichotomous variables, and mean differences to pool continuous variables. I² values were used to assess study heterogeneity. The summary of representative population demographics and primary and secondary outcomes was recorded and is presented in Tables 1-4.
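Had eligible trials been found, the pre-specified analysis could have been run along these lines with the meta package. The study data below are hypothetical, and the hakn/method.tau arguments, which select the Hartung-Knapp-Sidik-Jonkman random-effects approach, vary in name across package versions.

# Sketch only; the per-study summary numbers are invented.
library(meta)

# Continuous outcome: pooled mean difference in depression score change
md <- metagen(TE = c(-3.1, -2.4), seTE = c(1.2, 0.9),
              studlab = c("Trial A", "Trial B"),
              sm = "MD", hakn = TRUE, method.tau = "SJ")

# Dichotomous outcome: pooled odds ratio of treatment response
or <- metabin(event.e = c(20, 15), n.e = c(40, 35),
              event.c = c(12, 10), n.c = c(38, 33),
              sm = "OR", hakn = TRUE, method.tau = "SJ")

summary(md)  # output includes the I^2 heterogeneity estimate
summary(or)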
Identification of potentially relevant studies
1,919 publications, identified by searching for specific MeSH keywords and combining variants of 'antidepressant agents', 'depression', 'cancer', and 'aged', underwent title and abstract screening. Forty-eight of those manuscripts were identified as relevant and were fully reviewed. No eligible studies met all inclusion and exclusion criteria, as summarized in Fig. 1. Of the 48 studies, 27 publications met all criteria except for directly analyzing the efficacy of pharmacologic treatment in cancer patients ≥65 years of age or providing a subgroup analysis of this age group. There is a complete lack of studies that have exclusively analyzed antidepressant treatment effects in elderly cancer patients with depression. The 27 publications comprised 10 Randomized Controlled Trials (RCTs), 15 Prospective Non-Randomized Trials (PNRTs), and 2 Parallel-Group Randomized Trials (PGRTs). All studies included a clinical diagnosis of depression or clinical 'depressive-like' symptoms. Mean age and follow-up times for those studies are included in Table 1.
The numbers of cancers analyzed across the 27 included studies are listed in Table 2. Pooled numbers of cancers by type, as well as the number of publications including each cancer type, are shown in Fig. 2. Breast cancer was involved in 81.5% of the manuscripts, gastrointestinal cancer in 63.0%, and lung cancer in 48.1%, representing 45.7%, 15.1%, and 9.9% of evaluable patients, respectively. Head and neck, gynecologic, hematologic, prostate, bone, dermatologic, brain, and renal cancers made up 13.6% of patients, and each of these cancer types was represented in 7.4%-40.7% of the 27 publications (Fig. 2, Table 2).
Eighteen of the 27 publications included analysis of SSRIs, 2 publications included serotonin and norepinephrine reuptake inhibitors (SNRIs), 5 publications included tricyclic antidepressants (TCAs), and 7 publications included other atypical antidepressant medications. Tables 1, 3 and 4 provide details regarding initial and follow-up measures of depression and anxiety in cancer patients treated with antidepressants across study groups, with measures of significance as reported in the publications. Follow-up times varied, ranging from 1 to 24 weeks. The numbers of treated patients ranged from 10 to 175. Antidepressant treatments included SSRIs, SNRIs, TCAs, and atypical antidepressants (Table 5). Strengths and weaknesses of depression scales are listed in the Appendix. Major findings from the articles are addressed below.
Selective serotonin reuptake inhibitors (SSRIs)
SSRIs exert their actions through inhibition of presynaptic serotonin reuptake, which increases the availability and activity of serotonin at the synapse (Edinoff et al., 2021). We identified several studies that evaluated SSRI treatment in oncologic patients with a positive depression screening. In one study, a 12-week trial of escitalopram showed a significant improvement in HAM-D and Distress Thermometer (DT) scores among breast cancer patients (n = 79) (Park et al., 2012). Another study evaluated escitalopram in malignant melanoma patients but did not find significant differences in HAM-D scores as compared to the placebo group (n = 24) (Musselman et al., 2013). Treatment with sertraline demonstrated an improvement on depression and anxiety scales such as HADS-D (n = 35) (Torta et al., 2008), HAM-A, and HAM-D, as well as on quality of life measures of the 36-Item Short Form Survey (SF-36) including general health, mental health, and role limitation in physical, emotional, and social function dimensions (n = 86) (Li et al., 2014). In a study that explored citalopram, severely depressed cancer patients showed a significant improvement in the Zung Self Rating Depression Scale (ZSRDS) score, as well as a significant improvement in boredom levels over eight weeks (n = 21) (Theobald et al., 2003). In a smaller trial, treatment with citalopram demonstrated an 8.4-point improvement in Patient Health Questionnaire-9 (PHQ-9) scores (n = 10), indicating a decrease in depressive symptoms with pharmacologic treatment in patients with cancer and MDD (Raddin et al., 2014). In women with advanced cancer and depressive symptoms, treatment with fluoxetine was associated with a significant improvement in HAM-D, HAM-A, and Clinical Global Impression (CGI) severity scores and in items of the SF-36 including Role Functioning (RF), Social Functioning (SF), Mental Health (MH), and Vitality (V) (Holland et al., 1998a). Similarly, patients treated with desipramine showed a reduction in HAM-D, HAM-A, and CGI scores and improvement on the SF-36 but, unlike fluoxetine, desipramine failed to show a reduction in mood disturbance and pain intensity, as well as improvements in RF, after adjusted analysis. Despite fluoxetine showing an improvement in overall global symptoms in patients with major depression or adjustment disorders, it did not demonstrate significant differences in HADS response rates (n = 30) (Razavi et al., 1996). After treatment with fluvoxamine in patients with depression or adjustment disorder, HADS scores were significantly reduced at 6 weeks post-treatment. In the depression group, vitality and emotional health were also improved, as evaluated by the SF-36 scale (n = 10) (Suzuki et al., 2011).
Serotonin and norepinephrine reuptake inhibitors (SNRIs)
SNRIs inhibit the reuptake of serotonin and, to variable extents, norepinephrine (Stahl et al., 2005). Patients with a diagnosis of major depression were found to have significant improvements over the course of a 12-week trial of duloxetine according to MADRS, HADS-D, and HADS-A measures (n = 27) (Torta et al., 2011). After 8 weeks of treatment with the norepinephrine reuptake inhibitor reboxetine, breast cancer patients with major depressive disorder showed a significant reduction in HAM-D, Brief Symptom Inventory (BSI), and Mini-MAC hopelessness and anxious preoccupation scores, as well as an improvement in the quality-of-life measure EORTC-QLQ-C30 (n = 22) (Grassi et al., 2004).
Tricyclic antidepressants (TCAs)
TCAs block the reuptake of serotonin and norepinephrine (NE) at the presynaptic cleft and also act as competitive antagonists at post-synaptic alpha-adrenergic, muscarinic, and histaminergic receptors (Moraczewski and Aedma, 2021). In breast cancer patients with depression, use of the TCA amitriptyline showed improvements in MADRS at the end of 8 weeks (n = 175) (Pezzella et al., 2001). In women with advanced cancer, the use of desipramine improved depression and anxiety symptoms at 6 weeks, along with improvements in quality of life measures (n = 38) (Holland et al., 1998b). In contrast, a 6-week trial aimed at treating major depression or adjustment disorder with paroxetine or desipramine showed no significant difference in HAM-D, HAM-A, or CGI scores within and between groups as compared to placebo-treated patients (n = 35) (Musselman et al., 2006).
Atypical antidepressants
Atypical antidepressants are newer agents that affect 5-hydroxytryptamine (5-HT) and NE through different mechanisms of action (Horst and Preskorn, 1998). In a 6-week treatment, cancer patients with depression were treated with mianserin, a tetracyclic antidepressant with properties similar to TCAs (TeCA). Treatment significantly improved depressive symptoms as compared to the placebo-treated group (n = 55) (van Heeringen and Zivkov, 1996). Similar results were found in a 4-week study that also demonstrated improvements in sleep disturbance and anxiety as recorded on the HDRS scale (n = 36) (Costa et al., 1985). In terminally ill patients with depression, treatment with mirtazapine improved the MADRS score from 32.25 to 26.73 by day 28 (n = 88) (Ng et al., 2014). In a cohort of 19 cancer patients diagnosed with depression, treatment with mirtazapine showed a significant HAM-D reduction, with all patients in the observational study achieving a 50% response (n = 19) (Ersoy et al., 2008). Another study examined the efficacy of mirtazapine for depression treatment in the adult oncology population and showed a 4.5-point improvement in PHQ-9 score between initial and final visits over a minimum of nine weeks (n = 79) (Raddin et al., 2014). These findings were further demonstrated in cancer patients diagnosed with depression, adjustment disorder, and/or anxiety disorders, where there was a significant reduction in HADS scores between the first and third visits for patients treated with mirtazapine (n = 53) (Cankurtaran et al., 2008).
Table 1. Articles removed in the full text review that reported efficacy of pharmacological treatment of depression or depressive disorders in patients with cancer.
In a retrospective analysis of older adults enrolled in clinical cancer trials, only 25% of participants were identified as being >65 years of age despite ≥60% of new malignancy diagnoses occurring in this age group (Berger et al., 2006; Lewis et al., 2003). Similar findings were obtained from the Southwest Oncology Group (SWOG) (Hutchins et al., 1999). Among the reviewed publications, age classifications were rare, and those that included stratifications presented few patients within the ≥65 years of age group. Despite the American Society of Clinical Oncology (ASCO) acknowledging the underrepresentation of older adults (Hurria et al., 2015; Sedrak et al., 2021), the integration of geriatric populations into clinical trials has yet to improve (Kimmick et al., 2005). Inclusion of elderly patients in clinical trials is limited by stringent eligibility criteria, the unwillingness of elderly patients to enroll, comorbidities, toxicities associated with the experimental treatment, and emotional and financial burdens. These considerations are exacerbated by older adult patients' dependency on family members, primary caregivers, or the facilities in which they reside (Sedrak et al., 2021). Enrollment in US Food and Drug Administration (FDA) approved drug trials from 1995 to 2002 showed that elderly patients were significantly underrepresented in all registered studies except hormonal therapy for breast cancer, with the lowest representation of patients in the ≥70 years of age group (Talarico et al., 2004). Geriatric patients receive nearly one third of all medications prescribed in the United States and have high rates of polypharmacy (Avorn, 1995), but are inadequately represented in clinical trials involving experimental drug treatment (Sedrak et al., 2021; Talarico et al., 2004; Parikh, 2000; Salzman et al., 1993). Age-related physiological changes influence how older patients react to medications and increase their risk for adverse side effects due to comorbidities and polypharmacy (Shenoy and Harugeri, 2015). Polypharmacy also adds to the therapeutic burden, which can be a major source of morbidity; 10-30% of geriatric hospitalizations are related to adverse medication events (Parameswaran Nair et al., 2016; Dubrall et al., 2020), including substance-induced depression (Alexopoulos, 2005). Between 2011 and 2014 in the general population, 19.1% of individuals ≥60 years of age reported being treated with an antidepressant in the past month (Pratt et al., 2017). Furthermore, it is estimated that 47.5% of patients within nursing homes were prescribed an antidepressant in 2006 (Giovannini et al., 2020). For these older adults, SSRIs are considered first-line treatments (Alexopoulos, 2005), followed by SNRIs and atypical antidepressants including bupropion and mirtazapine. TCAs and MAOIs are typically avoided in cancer patients and the general population due to their side effect profiles (Casey, 2017; Grassi et al., 2018). Despite the wide use of these medications (Fuentes et al., 2018), drug interactions such as CYP450 enzyme inhibition may be overlooked in elderly patients (Nemeroff et al., 1996; Crewe et al., 1992; van der Weide and Hinrichs, 2006). Metabolically, antidepressants have the potential to interact with chemotherapies and other cancer treatments (Caraci et al., 2011).
For example, paroxetine, fluoxetine, venlafaxine, and bupropion have been implicated in decreasing the active metabolite of tamoxifen, but the long-term impact of this interaction remains controversial and more research is needed (Grassi et al., 2018; Bradbury et al., 2021; Nevels et al., 2016; Del Re et al., 2016). Older adults are also more likely to have poorer hepatic and renal function, which can impair drug metabolism and excretion, thereby contributing to increased adverse effects and toxicity (Mangoni and Jackson, 2004). For these reasons, antidepressant treatment that minimizes CYP inhibition has been recommended (Miguel and Albuquerque, 2011; Binkhorst et al., 2016). Medication side effects may include GI disturbances, weight gain, headaches, insomnia, anxiety, and serotonin syndrome, with adverse effects varying within and between drug classes (Table 5). TCAs may also cause anticholinergic effects in older adults with cancer (Grassi et al., 2018). Bupropion, for its part, is typically avoided in neurologic disease because of an elevated seizure risk (Ramasubbu et al., 2012). Various guidelines exist for the prescription of antidepressants in cancer patients; these often suggest patient-centered consideration of antidepressant side effect profiles, drug interactions, response to previous treatments, comorbidities, the potential for benefit, and cancer prognosis (Ramasubbu et al., 2012; Andersen et al., 2014; Butow et al., 2015). Guidelines from the European Palliative Care Research Collaborative (EPCRC) proposed that clinicians should consider recommending antidepressants for the treatment of depression during palliative care (Rayner et al., 2011). In older adults without cancer, the treatment of depression involves pharmacologic and non-pharmacologic therapies. It has been suggested that antidepressants are similarly effective in older adults as in younger patients, but with more side effects (Anstey and Brodaty, 1995) and a longer time to response among individuals of advanced age (Parikh, 2000). Meta-analyses have found evidence of effective treatments across the antidepressant classes (Thorlund et al., 2015; Kok et al., 2012; Nelson et al., 2008). However, research regarding the efficacy of treatment for depression specifically in elderly adult patients is limited to more advanced age groups such as ≥75 years of age (Fisch, 2004; Taylor and Doraiswamy, 2004; Wilkinson et al., 2018). Despite the need for more research in the oldest age group of patients, there is some evidence of pharmacologic benefit for the treatment of elderly adults with depression, but it cannot be generalized to patients with cancer of the same age. These individuals are not characteristically similar and differ in polypharmacy.
Table 4. HAM-D and HAM-A initial and last follow-up scores in the treatment of depression or depressive symptomology in patients with cancer. p-values are reported for change from treatment baseline score and do not include significance across comparison treatment or placebo groups. F: Fluoxetine, D: Desipramine, P: Paroxetine.
As the global incidence of cancer has increased, the most frequently incident cancers have also increased; yet dedicated analyses of antidepressant efficacy in many of these cancer types (Walker et al., 2013), including breast cancer (Carvalho et al., 2014), have yet to be performed. Similarly, a Cochrane Review investigating pharmacologic treatment of patients with primary brain cancer reported that no studies met the inclusion criteria (Rooney and Grant, 2010). This was repeated by the same group in 2013 (Rooney and Grant, 2013) and 2020 (Beevers et al., 2020), again without evidence of high-quality studies despite updated queries. Within this review, brain cancer patients made up 0.4% of evaluated patients (Fig. 2), with only 3 studies involving brain tumor patients meeting the majority of our selection criteria (Table 2) (Ersoy et al., 2008; Schillani et al., 2011; Capozzo et al., 2009). This highlights the paucity of data regarding the use of antidepressants in patients with primary brain cancer, a population in which ~1 of every 3 patients shows signs of depression (Otto-Meyer et al., 2019). From 1990 to 2006, the global incidence of brain tumors rose from 4.6/100,000 to 17.3/100,000 (Brain and Other, 2019). Patients with intracranial neoplasms often present with generalized weakness, visual changes, and deficits in motor and communication function and, post-operatively, are at risk of developing new neurologic deficits secondary to tumor resection (Kushner and Amidei, 2015; De La Garza-Ramos et al., 2016). Meningioma patients who pre-operatively demonstrate depressive symptoms have a sevenfold increased hazard ratio at the 5-year survival threshold (Bunevicius et al., 2017). In patients with glioblastoma, a high PHQ-9 score is indicative of a more necrotic tumor and correlates with a worse prognosis (Fu et al., 2020); in this cancer patient population, older age is strongly associated with accelerated mortality (Kim et al., 2021; Ladomersky et al., 2020). Coincidentally, a diagnosis of depression in glioma patients is associated with worse survival outcomes (Shi et al., 2018). In patients with WHO grade 3 or 4 malignant astrocytoma, active depression at the time of surgery is associated with decreased survival regardless of tumor grade, treatment, or disability (Gathinji et al., 2009). Similarly, high-grade glioma patients with pre-operative depression who are treated with psychological interventions show improved survival (Wang et al., 2014). In a nationwide South Korean cohort study, 17.0% of brain tumor patients who underwent surgical tumor resection were also newly diagnosed with depression. Both elderly patients ≥60 years of age [OR 1.54 (CI 1.27-1.86)] and non-elderly patients <60 years of age [OR 1.68 (CI 1.39-2.04)] showed increased odds of mortality at two years post-diagnosis (Oh et al., 2021). After surgery, antidepressant use increased in patients with low-grade glioma, and depression was reported in up to 36% of the population under study (Ryden et al., 1186; Hartung et al., 2017). While these patients are often treated with antidepressants, side effects of antidepressants may include a lowered seizure threshold, impaired memory formation/recall, and fatigue (Table 5) (Rooney and Grant, 2013; Hill et al., 2015). Some in vitro data suggest that SSRIs may have anti-brain tumor effects (Liu et al., 2015; Jeon et al., 2011; Ma et al., 2016; Chen et al., 2018). However, retrospective analyses have yet to report any significance (Gramatzki et al., 2020; Otto-Meyer et al., 2020; Caudill et al., 2011).
The use of antidepressant therapy among brain tumor patients may improve mood and function within a population that undergoes treatment with radiation, chemotherapy, and progressive neurosurgical insults to the brain. SSRI and SNRI use have become common in the post-stroke and traumatic brain injury patient populations. Patients treated with these medications have demonstrated improvement of motor and cognitive function (Chollet et al., 2011; Plantier et al., 2016), and similar effects should be investigated in brain tumor patients to potentially improve health-related quality of life, patient lassitude, sleep, and participation in therapy services.
Table 5 (excerpt). Antidepressant classes, representative agents, and common side effects.
Selective Serotonin Reuptake Inhibitors (SSRI): sexual dysfunction, nausea, diarrhea, agitation, fatigue, insomnia, headache, weight gain (Ferguson, 2001; Santarsieri and Schwartz, 2015).
Serotonin and Norepinephrine Reuptake Inhibitors (SNRI): Desvenlafaxine (Pristiq), Duloxetine (Cymbalta), Venlafaxine (Effexor), Milnacipran (Savella), Levomilnacipran (Fetzima); nausea, insomnia, dry mouth, headache, increased blood pressure, sexual dysfunction, diarrhea, weight gain, serotonin syndrome (Gillman, 2007).
Tricyclic Antidepressants (TCA): Amitriptyline (Elavil), Desipramine (Norpramin), Doxepin (Sinequan), Imipramine (Tofranil), Nortriptyline (Pamelor), Amoxapine (Asendin), Clomipramine (Anafranil), Maprotiline (Ludiomil), Trimipramine (Surmontil), Protriptyline (Vivactil); weight gain, sedation, dry mouth, nausea, blurred vision, constipation, tachycardia, orthostatic hypotension, tremor, respiratory depression, hyperpyrexia, serotonin syndrome (Gillman, 2007; Pezzella et al., 2001; Santarsieri and Schwartz, 2015).
Study limitations
Each article measured the efficacy of pharmacologic therapy in patients with cancer and depression, but inclusion and exclusion criteria, medication type and dosage, follow-up, and the depression scales used were highly variable between studies. Included publications ranged from 1985 to 2017 and used widely varying definitions of clinical forms of depression, with notable differences across DSM editions. Although it is routine to define elderly patients as ≥65 years of age, not all studies define this age group similarly. Publications that did not distinctly analyze older adults, or older adults versus younger individuals, were not investigated. The numbers of treated patients with depression in the reviewed studies were highly variable, with a median of 30.
Though our search was conducted using PRISMA guidelines, it is possible that our eligibility criteria did not capture all relevant publications, despite including studies that represent a mixture of prospective and retrospective investigations. The definition of a depression diagnosis and the use of screening tools within and between scales varied broadly. While conversions between those scales exist, their high level of heterogeneity makes them difficult to compare directly (Furukawa et al., 2019; Leucht et al., 2018; Schneibel et al., 2012), and medical conditions that produce physical symptoms may further confound comparability between indices. Depression scales vary in their ability to identify depressive symptoms among individuals and in their emphasis on psychiatric versus somatic origin (Table 6, Appendix). Furthermore, a diagnosis of clinical depression cannot be made without clinical interviews, and consideration of scores out of context may add further variance in study outcomes. In practice, antidepressants are frequently used for simultaneous treatment of chronic pain and mood complaints in patients with cancer (Zis et al., 2017). The design of this review excludes studies examining the relationship between pain and mood, with the recommendation that future studies focus specifically on this complex clinical connection. Studies of specific medications were limited and too few for subgroup analysis. Medication dosing and follow-up times varied between studies. Explanations for loss to follow-up varied. Records of adverse reactions were not standardized.
Summary
Despite the need to study pharmacologic antidepressant treatment among older adult patients with cancer, with or without depression, no high-quality studies have been conducted to date. Research on antidepressant treatment in the general cancer population is also suboptimal: meta-analyses often show conflicting observations, and the types of cancers under study are highly limited. Clinicians have little to guide intervention-based practice. Older adults face unique, multifaceted barriers to enrolling in and adhering to treatment and are at increased risk of adverse reactions under a therapeutic burden. Systemic therapy adds to the burden of disease and can cause additional symptomatology in the older adult population. Owing to the side effects of chemotherapies, patients may exhibit a lack of energy, sleep disturbance, weight loss, or other side effects, which may render standard depression screens unreliable for this population (Saracino et al., 2017). Pharmacologic therapy continues to be prescribed within the United States, and while more studies are needed to explore treatment efficacy in the elderly cancer patient population with depression, health care workers should consider individualized discussions of pharmacologic therapeutic options and routine assessments of depression with their patients. Future studies are critical to explore this topic and provide strong guidelines for treatment.
Declaration
All authors have read and approved of the manuscript submitted for peer review.
Declaration of competing interest
The authors declare no conflicts of interest.
Clinical Depression Defined in Review
Clinical depression was defined as having the diagnosis of Major Depressive Disorder, Major Depressive Disorder with atypical features, Major Depressive Disorder with psychotic features, Minor Depressive Disorder, Persistent Depressive Disorder, Adjustment Disorder with Depressed Mood, or another definition of "clinical depression" or "clinical depressive symptoms" as specified within the reviewed articles.
Three mutations switch H7N9 influenza to human-type receptor specificity
The avian H7N9 influenza outbreak in 2013 resulted from an unprecedented incidence of influenza transmission to humans from infected poultry. The majority of human H7N9 isolates contained a hemagglutinin (HA) mutation (Q226L) that has previously been associated with a switch in receptor specificity from avian-type (NeuAcα2-3Gal) to human-type (NeuAcα2-6Gal), as documented for the avian progenitors of the 1957 (H2N2) and 1968 (H3N2) human influenza pandemic viruses. While this raised concern that the H7N9 virus was adapting to humans, the mutation was not sufficient to switch the receptor specificity of H7N9, and has not resulted in sustained transmission in humans. To determine if the H7 HA was capable of acquiring human-type receptor specificity, we conducted mutation analyses. Remarkably, three amino acid mutations conferred a switch in specificity for human-type receptors that resembled the specificity of the 2009 human H1 pandemic virus, and promoted binding to human trachea epithelial cells.
Introduction
The 2013 avian H7N9 virus outbreak in China was tied to human exposure to infected poultry in live bird markets [1]. Closure of the markets halted new human infections, but upon reopening, several additional outbreaks occurred; 779 human infections have been documented to date according to the WHO [2]. While there are reports of possible human-to-human transmission [3-5], H7N9 has not acquired the capability for sustained transmission in the human population.
Receptor specificity of influenza A viruses is widely considered to be a barrier for transmission of avian influenza viruses in humans [6]. Over the past 50 years, the strains circulating in the human population include the H3N2 strain that caused the 1968 pandemic, a seasonal H1N1 strain introduced in 1977, and an H1N1 pandemic strain that emerged in 2009 and replaced seasonal H1N1 viruses. All human pandemic strains to date have exhibited specificity for human-type receptors (α2-6 linked), in contrast to their avian virus progenitors that recognize avian-type receptors (α2-3 linked) [7,8]. In each case, the change in receptor specificity from avian-type to human-type involved two mutations in the HA receptor binding pocket: E190D and G225D for the H1N1 viruses, and Q226L and G228S for the H2N2 and H3N2 viruses [9,10].
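These pairs of substitutions can be summarized in a small lookup structure; the following R sketch simply restates the mutations named above (positions follow the H3 numbering used later in this paper):

# Specificity-switch mutations per pandemic lineage, as stated above.
switch_mutations <- list(
  H1N1      = c("E190D", "G225D"),
  H2N2_H3N2 = c("Q226L", "G228S")
)
switch_mutations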
These insights have framed current efforts to determine how avian influenza with other HA serotypes might acquire human-type receptor specificity. For the H5N1 HA, introduction of the two H1 specificity-switching mutations abolished receptor binding altogether, while the H3 mutations retained avian-type receptor binding, with minimal effect on receptor specificity [11-15]. However, introducing the Q226L mutation in combination with other mutations both increased binding to human receptors and conferred respiratory droplet transmission in ferrets [16-18]. While the H7N9 virus with the Q226L mutation maintained receptor specificity for avian-type receptors, some increase in avidity for human-type receptor analogs was noted [19-21]. We therefore reasoned that additional mutations might enable a full switch to human-type receptor specificity [22].
Mutational analyses
We undertook a systematic mutation analysis of conserved residues in the H7 receptor-binding pocket. In addition to assessing the residues that conferred a receptor switch in the H1 and H3 hemagglutinins (Fig 1A), we focused on three other residues that might impact binding of human-type receptors. 1) In the crystal structure of H7 HA, we noted that the positively charged side chain of K193 points directly into the binding pocket [20]. This could potentially inhibit binding of extended α2-6 sialosides that are known to project over the face of the 190-loop [23]. This position is invariably a threonine or serine in human H2 and H3 viruses, respectively, and recently has been implicated to be important in the evolution of the H3N2 pandemic virus [24]. We also used molecular modeling to show that K193 would likely physically interfere with the portion of the receptor glycan that projects from the sialic acid bound to the receptor-binding domain (Fig 1B). 2) In a study of tissue tropism of H5N1 (A/Indonesia/05/05), a V186K mutation was found to confer binding to human trachea tissue sections. The G186V mutation was also noted as a potential adaptation of avian H7 to human-type receptors [25,26] and, in the H2 HA, N186 has been documented to form a hydrogen bond network that enables human-type receptor binding [27]. 3) The N224K mutation was identified as a critical residue for aerosol transmission of an H5N1 virus [28].
Receptor binding properties of H7N9 mutants that confer avian-to-human-type receptor specificity
Varied combinations of mutations were introduced into the A/Shanghai/2/2013 (Sh2) gene and expressed as recombinant, soluble, trimeric HA proteins in HEK293S GnTI(-) cells [29]. Each recombinant HA was tested for relative avidity to α2-3 (avian-type) and α2-6 (human-type) sialoside polymers in a glycan microarray based ELISA-like assay (Fig 2A) [30,31], and for receptor specificity using a custom glycan microarray comprising 135 sialosides (S1 Table) [32]. The wild-type Sh2 HA that contains the Q226L mutation has a high preference for avian-type receptors, with minimal binding to human-type receptors, as noted previously [20]. Introduction of the G228S mutation that is found in human H2 and H3 viruses retained binding to α2-3 sialosides, and gained significant but weaker binding to α2-6 sialosides in the ELISA-like assay (Fig 2A). However, there was no binding to human-type receptors in the glycan array (Fig 2B), which exhibits higher stringency [12,16]. In contrast, mutations that confer human-type receptor specificity for H1N1 strains, E190D and G225D, alone or in combination showed no binding to sialosides in the glycan array (S3 Table).
We then introduced K193T in the G228S background. This Sh2 mutant bound almost equally well to avian-type and human-type receptors in both assays. Introduction of V186K in the K193T-G228S background resulted in binding to human-type receptors in the ELISA-like assay, with some residual avian-type receptor binding. On the glycan array, this V186K-K193T-G228S mutant only bound human-type receptors, and displayed strikingly high specificity for α2-6 linked sialic acid found on extended N-linked glycans with 3 to 5 LacNAc repeats. A similar binding profile was also observed for an otherwise identical mutant containing V186G. The binding profile of these triple mutants is practically identical to that of pandemic H1N1 Cal/04/09 (Fig 2B, bottom) [32], which is known to transmit efficiently between humans.
Since most human infections to date have resulted from exposure to infected chickens in live poultry markets, we next investigated the impact of the receptor switch on binding to human and chicken airway tissues. Sh2 bound exclusively to chicken and not human trachea (Fig 2C). The G228S mutant showed very strong binding to the chicken respiratory tract and very weak yet observable binding to human trachea at the base of the cilia. For the Sh2 K193T-G228S double mutant, we observed binding to goblet cells in chicken trachea and to the base of the cilia in human trachea, consistent with this mutant having dual receptor specificity. The V186K-K193T-G228S mutant also showed dual receptor binding on chicken and human trachea sections. A triple mutant changing only V186K to V186G retained human-type receptor specificity, but exhibited reduced avidity and increased specificity for binding to human trachea epithelium, properties similar to those of Cal/04/09 pdmH1N1.
H7 is able to acquire human-type receptor specificity
On analysis of the V186N and N224K mutations, we found that V186N in just the G228S background led to specific binding to human-type receptors in both the ELISA-like assay with sialoside polymers (Fig 3A) and the glycan array (Fig 3B). With the addition of N224K in the V186N-G228S background, we observed a significant increase in binding. Thus, there are multiple ways for H7N9 to obtain human-type receptor specificity. The N224K mutation does not confer specificity for human-type receptors in other Sh2 backgrounds (S3 Table), but does increase binding in human-specific Sh2 mutants (S1 Fig). We conclude that a lysine at position 224 does not significantly alter receptor specificity, but does enhance the strength of binding, likely through a positive avidity contribution [28].
Binding avidity of H7 HAs to N-linked glycans. To quantify the binding avidities of H7 Sh2 and mutant (V186K-K193T-G228S) HAs, and assess in detail the strength of the specificity switch to human-type receptor binding, we conducted a glycan ELISA using a series of biantennary N-linked glycans featuring either terminal NeuAcα2-3Gal or NeuAcα2-6Gal. These glycans are consistently observed as preferred receptors on the glycan array [32]. Sh2 was selective for avian-type receptors, with weaker binding to human-type receptors, and showed little preference for glycan length, consistent with the glycan array results (Fig 4, top). The Sh2 V186K-K193T-G228S mutant lost almost all binding to avian-type receptors, and binds with higher avidity to human-type receptors than the wild-type Sh2 (Fig 4, bottom). Interestingly, the increased avidity of the mutant was seen primarily for extended α2-6-linked N-glycans with 3 or 4 LacNAc repeats, with a 2- to 5-fold gain (apparent Kd values are given in S4 Table). These data are consistent with our hypothesis that HAs with human-type receptor specificity accommodate not only the altered chemistry of the terminal sialoside linkage, but also permit bidentate binding, resulting in an apparent preference for biantennary N-glycans with 3 or more LacNAc repeats [32].
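As a rough illustration of how apparent Kd values of this kind can be extracted from a dilution series (this is not the authors' exact fitting procedure; a one-site Langmuir model and hypothetical data are assumed):

# Sketch: fit signal = Bmax * [HA] / (Kd + [HA]); concentrations in ug/mL.
conc   <- c(40, 20, 10, 5, 2.5, 1.25, 0.6, 0.3)        # hypothetical dilution series
signal <- c(1.9, 1.8, 1.6, 1.3, 1.0, 0.65, 0.4, 0.22)  # hypothetical OD readings
fit <- nls(signal ~ Bmax * conc / (Kd + conc),
           start = list(Bmax = 2, Kd = 5))
coef(fit)["Kd"]   # apparent Kd, in the same units as 'conc'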
Thermostability of H7 HAs. The stability of HA in acidic environments is a determinant for airborne transmissibility of influenza viruses among humans [33]. In general, human virus HAs exhibit increased stability and fuse at lower pH than avian virus HAs. Thermal stability of an HA correlates with its stability to fuse at low pH, and can be used as an alternative measure of HA stability [33,34]. Using differential scanning calorimetry (DSC), we analyzed the thermostability of Sh2 and Sh2 V186K-K193T-G228S proteins relative to a human seasonal H1N1 control, A/KY/07. Sh2 exhibited a broad thermal denaturation profile with a melting temperature (Tm) of 55°C, while the human-type receptor specificity mutant (Sh2 V186K-K193T-G228S) had a slightly lower stability with a Tm of 53°C (Fig 5). While H7N9 has been demonstrated to transmit between ferrets at low efficiency [35,36], the lower stability of the mutant would suggest it is less likely to transmit than the wild type. In contrast to the H7 HAs, the H1 HA of the A/KY/07 human control had a Tm of 65°C, indicative of the higher stability expected of a viral HA that transmits in humans. Thus, while the Sh2 V186K-K193T-G228S mutant exhibits human-type receptor specificity, its thermal stability is, if anything, lower than that of wild-type Sh2 HA, which is not surprising since mutations that increase stability are generally in the stem region, not in the receptor binding domain.
K193T permits bidentate receptor binding to N-linked glycans
The influence of the K193T mutation on human-type receptor binding was of particular interest because K193S was shown to be an essential mutation for the H3 Hong Kong 1968 pandemic virus, and K193T for H10N8 to obtain binding to human-type receptors [24,37]. In an ideal cis conformation, the human-type receptor would bind and project from the sialic acid binding site towards the 190 helix and has the potential to interact with amino acids of the 190 helix that frame the top of the receptor binding site [23]. Moreover, we have recently shown that H3N2 viruses, as well as pandemic H1N1, exhibit preference for branched N-linked glycans that feature elongated LacNAc repeats extending over the 190 helix. These receptors project the second branch over the top of HA such that the second sialic acid can reach the receptor binding site of a second protomer in the same trimer [32]. Using molecular modeling, we investigated the possibility that the K193T mutation in Sh2 H7N9 HA would impact simultaneous binding of human-type receptors on complex N-glycans, as shown in Fig 6. Here the low-energy conformation of the extended glycan chain produces a steric clash with K193, forcing LacNAc moieties to adopt a conformation projecting out of the receptor-binding site, and away from the 190 helix. Such a clash likely disfavors the preferred binding mode, where the rest of the glycan arches over the top of the HA surface. As a result, bidentate binding involving the simultaneous coordination of another branch of the glycan to a second protomer in the HA trimer is not possible (Fig 6A). In contrast, simulations show that T193 interacts with LacNAc, enabling it to come closer to the HA and facilitating a bidentate interaction where the glycan is able to extend over the top of the trimer, thus effectively increasing avidity (Fig 6B).
We also determined the crystal structure of the Sh2 V186K K193T G228S mutant with and without avian- and human-type receptors (S2A and S2B Fig). The structures were virtually identical to the previously determined crystal structure of the Sh2 H7 HA protein (Protein Data Bank [PDB] code 4N5J [20]). Moreover, in co-crystals with monomeric human-type (LSTc) and avian-type (LSTa) receptor analogs, electron density is seen only for the sialic acid, consistent with low-affinity binding of the monovalent receptor to the receptor site (S3 Fig) and with the preference of the mutants for extended biantennary glycans that offer the potential for bidentate binding.
Discussion
We demonstrate here that several alternative three-amino-acid mutations (V186G/K-K193T-G228S or V186N-N224K-G228S) can switch the receptor specificity of the H7N9 HA from avian- to human-type, a property required for transmission in humans and ferrets [38,39]. Of these mutations, only isolated examples of 186G and 193N have to date been reported in H7 avian isolates. The mutants show profound loss of binding to avian-type (α2-3 linked) receptors, and increased binding to human-type (α2-6 linked) receptors in both glycan microarrays and glycan ELISA-type avidity assays. The mutants exhibit preferential binding to a subset of human-type receptors with extended branched N-linked glycans that terminate with NeuAcα2-6Gal, reported to be present in N-linked glycans of human and ferret airway tissues [23,40,41]. Notably, this specificity for a restricted subset of human-type receptors is shared with recent H3N2 viruses and the 2009 H1N1 pandemic virus. We have also recently observed that different sets of mutations switch the H6N1 and H10N8 HAs to human-type receptor specificity and, in each case, confer specificity for a similar subset of human-type receptors [37] (de Vries, Tzarum, Wilson & Paulson, manuscript in revision). Thus, recognition of human-type receptors with extended glycan chains appears to be a common characteristic of human influenza virus HAs and of avian virus HA mutants that bind to human-type receptors.
Ideally, it would be important to assess the impact of the switch in receptor specificity in the ferret model, which displays human-type receptors in the airway epithelium and is used to assess the propensity for air droplet transmission of human viruses. However, the introduction of the mutations that switch receptor specificity into an actual H7N9 virus background would represent gain-of-function (GoF) experiments that are currently prohibited [42]. During the course of this study, no viruses were created, and no experiments assessing the potential for air droplet transmission were performed. We suggest that understanding mutations that can confer human-type receptor binding will benefit risk assessment in worldwide surveillance of H7N9 in poultry and humans.
Ethics statement
IRB, IACUC, and IBC approval was obtained at the funded institution. The tissues used for this study were obtained from the tissue archive of the Veterinary Pathologic Diagnostic Center (Department of Pathobiology, Faculty of Veterinary Medicine, Utrecht University, The Netherlands). This archive is composed of paraffin blocks with tissues maintained for diagnostic purposes; no permission of the Committee on the Ethics of Animal Experiments is required. Anonymized human tissues were obtained under a Service Level Agreement from the University Medical Centre Utrecht, The Netherlands. Use of anonymous material for scientific purposes is part of the standard treatment contract with patients, and therefore an informed consent procedure was not required according to the institutional medical ethical review board.
Expression and purification of HA for binding studies
Codon-optimized H1 and H7 encoding cDNAs (Genscript, USA) of A/Shanghai/2/13, Cal/04/09 and A/KY/07 were cloned into the pCD5 expression vector as described previously [29]. The pCD5 expression vector is adapted so that the HA-encoding cDNAs are cloned in frame with DNA sequences coding for a signal sequence, a GCN4 trimerization motif (RMKQIEDKIEEIESKQKKIENEIARIKK), and Strep-tag II (WSHPQFEK; IBA, Germany).
Glycan microarray binding of HA
Purified, soluble trimeric HA was pre-complexed with horseradish peroxidase (HRP)-linked anti-Strep-tag mouse antibody (IBA) and with Alexa488-linked anti-mouse IgG (4:2:1 molar ratio) for 15 min on ice in 100 μl PBS-T, and incubated on the array surface in a humidified chamber for 90 minutes. Slides were subsequently washed by successive rinses with PBS-T, PBS, and deionized H2O. Washed arrays were dried by centrifugation and immediately scanned for FITC signal on a Perkin-Elmer ProScanArray Express confocal microarray scanner. Fluorescent signal intensity was measured using Imagene (Biodiscovery), and mean intensity minus mean background was calculated and graphed using MS Excel. For each glycan, the mean signal intensity was calculated from 6 replicate spots. The highest and lowest signals of the 6 replicates were removed and the remaining 4 replicates used to calculate the mean signal, standard deviation (SD), and standard error of measurement (SEM). Bar graphs represent the averaged mean signal minus background for each glycan sample, and error bars are the SEM value. A list of glycans on the microarray is included in S1 Table.
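This replicate-handling rule can be restated as a short R function; the spot intensities in the example call are hypothetical.

# For each glycan: drop the highest and lowest of the six spot signals,
# then take mean (minus background), SD and SEM of the remaining four.
summarize_spots <- function(spots, background = 0) {
  trimmed <- sort(spots)[2:5]          # discard min and max of 6 values
  m <- mean(trimmed) - background
  c(mean = m,
    sd   = sd(trimmed),
    sem  = sd(trimmed) / sqrt(length(trimmed)))
}
summarize_spots(c(5200, 5150, 4980, 6100, 3900, 5300), background = 150)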
Glycan ELISA
Purified HA trimers were precomplexed with anti-HIS mouse IgG (Invitrogen) and HRP-conjugated goat anti-mouse IgG (Pierce), then diluted in series to the required assay concentrations (40-0.05 μg/mL final). Preparation of streptavidin-coated plates with biotinylated glycans, and incubation and washing of pre-complexed HA dilutions, was exactly as described previously [23,32].
Tissue staining
Sections of formalin-fixed, paraffin-embedded human trachea and chicken trachea were obtained from the University Medical Center and the Department of Veterinary Pathobiology, Faculty of Veterinary Medicine, at Utrecht University, respectively. Tissue sections were rehydrated in a series of alcohol from 100%, 96% and 70%, and lastly in distilled water. Endogenous peroxidase activity was blocked with 1% hydrogen peroxide for 30 min at room temperature. Tissue slides were boiled in citrate buffer pH 6.0 for 10 minutes at 900 W in a microwave for antigen retrieval and washed in PBS-T three times. Tissue was subsequently incubated with 3% BSA in PBS-T overnight at 4°C. On the next day, the purified HAs were precomplexed with mouse anti-Strep-tag-HRP antibodies (IBA) and goat anti-mouse IgG HRP antibodies (Life Biosciences) at a 4:2:1 ratio in PBS-T with 3% BSA and incubated on ice for 20 minutes. After draining the slide, the precomplexed HA was applied onto the tissue and incubated for 90 minutes at RT. Sections were then washed in PBS-T, incubated with 3-amino-9-ethyl-carbazole (AEC; Sigma-Aldrich) for 15 minutes, counterstained with hematoxylin, and mounted with Aquatex (Merck). Images were taken using a charge-coupled device (CCD) camera and an Olympus BX41 microscope linked to CellB imaging software (Soft Imaging Solutions GmbH, Münster, Germany).
Cloning, baculovirus expression and purification of HA for crystallization
The ectodomain of the Sh2 H7 HA mutant (V186K-K193T-G228S) was expressed in a baculovirus system essentially as previously described [20]. Briefly, the cDNAs corresponding to residues 19-327 of HA1 and 1-174 of HA2 (H3 numbering) of HA from A/Shanghai/2/2013 (H7N9) (Global Initiative on Sharing All Influenza Data (GISAID) isolate ID: EPI_ISL_138738) were codon-optimized and synthesized for insect cell expression and inserted into a baculovirus transfer vector, pFastbacHT-A (Invitrogen), with an N-terminal gp67 signal peptide, C-terminal trimerization domain, His6 tag, and thrombin cleavage site incorporated to separate the HA ectodomain from the trimerization domain and His tag. The HA1 domain triple mutant (G228S, V186K, K193T) was made by site-directed mutagenesis. The purified recombinant HA bacmids were used to transfect Sf9 insect cells for overexpression. HA protein was produced by infecting suspension cultures of Hi5 cells with recombinant baculovirus at an MOI of 5-10 and incubating at 28°C with shaking at 110 RPM. After 72 hours, Hi5 cells were removed by centrifugation and supernatants containing secreted, soluble HA proteins were concentrated and buffer-exchanged into 1x PBS, pH 7.4. The HAs were recovered from the cell supernatants by metal affinity chromatography using Ni-NTA resin, and were digested with thrombin to remove the trimerization domain and His6 tag. The cleaved HAs were further purified by size exclusion chromatography on a HiLoad 16/90 Superdex 200 column (GE Healthcare, Pittsburgh, PA) in 20 mM Tris pH 8.0, 100 mM NaCl, and 0.02% (v/v) NaN3.
3D structure generation
Structure models were generated from PDB ID 4LN8 [44]. A trimeric "head region" was created from residues K46 to S260 of HA1 (receptor binding region) and residues Q61 to S93 of HA2 (from the top of the membrane fusion stem region). One structure was kept as WT, while each of the three binding sites in a second structure was altered by the point mutations G228S, V186K and K193T. The mutant model structures were generated in UCSF Chimera [45] by selecting rotamers from the Dunbrack library [46]. A sialylated biantennary TriLacNAc N-glycan was generated on Glycam-Web (www.glycam.org/cb) and modeled into both the WT and mutated structures via computational carbohydrate grafting [47], using the Neu5Ac in PDB ID 4LN8 as a template. The reported grafting algorithm [48,49] was adapted to rotate the glycosidic linkages within normal bounds [50], while monitoring the distance between the binding motif on the other arm of the glycan and the second HA binding site. The linkages were adjusted in series, beginning from the NeuAcα2-6Gal motif. A single optimal structure was selected based on the relative orientation and proximity of the NeuAcα2-6Gal motif on the other arm to the target HA binding site. The results were independent of whether the 3-arm or the 6-arm of the glycan was grafted onto the bound NeuAcα2-6Gal motif. The resulting structures were then subjected to energy minimization and molecular dynamics simulation as described previously, to assess whether the NeuAcα2-6Gal motif on the second arm of the glycan could locate into the second binding site.
Crystallization, data collection and structural determination
Crystallization experiments were set up using the sitting drop vapor diffusion method. Initial crystallization conditions for the H7 mutant HA (V186K-K193T-G228S) were obtained from robotic crystallization trials using the automated CrystalMation system (Rigaku) at The Scripps Research Institute. Following optimization, diffraction quality crystals of the triple mutant HA were grown at 22°C by mixing 0.5 μl of protein (7.4 mg/ml) in 20 mM Tris, pH 8.0, 100 mM NaCl with 0.5 μl of a reservoir solution containing 0.2 M tri-potassium citrate, 5% (v/v) ethylene glycol and 22% (w/v) PEG3350. The crystals were flash-cooled in liquid nitrogen after adding 20% (v/v) ethylene glycol to the mother liquor as cryoprotectant. The triple mutant HA-ligand complexes were obtained by soaking HA crystals in well solution containing the glycan ligands. Final concentrations of the ligands LSTa (NeuAcα2-3Galβ1-3GlcNAcβ1-3Galβ1-4Glc) and LSTc (NeuAcα2-6Galβ1-4GlcNAcβ1-3Galβ1-4Glc) were 5 mM, and soaking times were 10 min. Diffraction data were collected at the synchrotron radiation sources specified in the data statistics tables. HKL2000 (HKL Research, Inc.) was used to integrate and scale the diffraction data. Initial phases were determined by molecular replacement using Phaser [51] with the wild-type HA structure (PDB code 4N5J) as a model. One HA protomer is present per asymmetric unit. Refinement was carried out using the program Phenix [52]. Model rebuilding was performed manually using the graphics program Coot [53]. Final refinement statistics are summarized in S2 Table.
Differential scanning calorimetry (DSC)
Thermal denaturation was studied using a nano-DSC calorimeter (TA Instruments, Etten-Leur, The Netherlands). HA proteins were eluted from the streptavidin beads in PBS with 2.5 mM desthiobiotin, and 100 μg of protein was tested. After loading the sample into the cell, thermal denaturation was probed at a scan rate of 60°C/h. Buffer correction, normalization, and baseline subtraction procedures were applied before the data were analyzed using NanoAnalyze Software v.3.3.0 (TA Instruments). The data were fitted using a non-two-state model.
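The fitting itself was done in NanoAnalyze with a non-two-state model; as a minimal sketch of the underlying idea, the Tm can be read off a baseline-corrected thermogram as the temperature at the excess heat-capacity maximum (the trace below is synthetic, not measured data).

# Toy thermogram: a single transition centered at 55 degrees C.
temp <- seq(35, 75, by = 0.5)            # degrees C
cp   <- dnorm(temp, mean = 55, sd = 3)   # baseline-corrected excess Cp (synthetic)
tm   <- temp[which.max(cp)]
tm                                       # ~55 C for this synthetic trace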
Accession numbers
Atomic coordinates and structure factors have been deposited in the Protein Data Bank (PDB) under accession codes 5VJK, 5VJL and 5VJM for Sh2 mutant HA (V186K-K193T-G228S) in apo form and in complex with LSTc or LSTa.
Fig 1. Amino acid variation in the receptor binding pocket of influenza HAs and impact of the K193T mutation on receptor conformation. (A) Variation at HA positions that are known to mediate the switch in receptor binding specificity for human H1, H2 and H3 pandemic viruses and corresponding avian viruses of H1, H2, H3 and H5 subtypes, in comparison with human H7N9. Red indicates amino acids involved in either human- or avian-type receptor specificity; blue indicates amino-acid positions that are mutated to the amino acids found in human H3N2 and H2N2 viruses. (B) Projection of the receptor glycan from the binding pocket. The receptor analog 6'SLNLN (α2-6 linked sialylated di-LacNAc; NeuAcα2-6Galβ1-4GlcNAcβ1-3Galβ1-4GlcNAc) is modeled in the WT H7 with K193 (dark gray), and the mutant H7 with V186K K193T G228S (light gray). In the WT, K193 causes the receptor to project further away from the 190 helix. Symbols in the sugar rings follow the conventions of the Symbol Nomenclature For Glycans (SNFG), where sialic acid is the purple diamond, galactose is the yellow sphere and GlcNAc is the blue cube.
Fig 2. Specificity of wild type and mutant H7 HAs on glycan arrays and binding to chicken and human trachea epithelium. Glycan binding analyses of Sh2 H7N9 HA wild type and several mutants that confer human-type receptor binding: G228S, K193T G228S, V186K K193T G228S, V186G K193T G228S, with human Cal/04/09 2009 pandemic H1N1 HA as a control. (A) ELISA-like assay using sialoside polymers. The mean signal and standard error were calculated from six independent replicates; white open circles represent α2-3 linked sialylated di-LacNAc (3'SLNLN), black closed circles represent α2-6 linked sialylated di-LacNAc (6'SLNLN), and non-sialylated di-LacNAc (LNLN) is represented by asterisks. (B) The glycan array mean signal and standard error were calculated from six independent replicates; α2-3 linked sialosides are shown in white bars (glycans 11 to 79 on the x axis) and α2-6 linked sialosides in black (glycans 80 to 135). Glycans 1 to 10 are non-sialylated controls (see also S1 Table). (C) Tissue binding to either chicken or human tracheal sections is observed by HRP-staining. The sialoside array, ELISA-like assay, and tissue binding experiments are representative of three independent assays performed with different batches of HA proteins.
Fig 5. Melting curves of recombinant HA obtained by DSC to determine the thermostability of Sh2, Sh2 V186K K193T G228S, and A/KY/07. The raw data are depicted as a solid line, while the fitted curve, from which the Tm was derived, is depicted as a dotted line. (A) A summary of the peaks observed during the DSC experiments for each recombinant HA, (B) Sh2, (C) A/KY/07 and (D) Sh2 V186K K193T G228S.
The use of e-health and m-health tools in health promotion and primary prevention among older adults: a systematic literature review
Background The use of e-health and m-health technologies in health promotion and primary prevention among older people is largely unexplored. This study provides a systematic review of the evidence on the scope of the use of e-health and m-health tools in health promotion and primary prevention among older adults (age 50+). Methods A systematic literature review was conducted in October 2015. The search for relevant publications was done in the search engine PubMed. The key inclusion criteria were: e-health and m-health tools used, participants’ age 50+ years, focus on health promotion and primary prevention, published in the past 10 years, in English, and full-paper can be obtained. The text of the publications was analyzed based on two themes: the characteristics of e-health and m-health tools and the determinants of the use of these tools by older adults. The quality of the studies reviewed was also assessed. Results The initial search resulted in 656 publications. After we applied the inclusion and exclusion criteria, 45 publications were selected for the review. In the publications reviewed, various types of e-health/m-health tools were described, namely apps, websites, devices, video consults and webinars. Most of the publications (60 %) reported studies in the US. In 37 % of the publications, the study population was older adults in general, while the rest of the publications studied a specific group of older adults (e.g. women or those with overweight). The publications indicated various facilitators and barriers. The most commonly mentioned facilitator was the support for the use of the e-health/m-health tools that the older adults received. Conclusions E-health and m-health tools are used by older adults in diverse health promotion programs, but also outside formal programs to monitor and improve their health. The latter is hardly studied. The successful use of e-health/m-health tools in health promotion programs for older adults greatly depends on the older adults’ motivation and support that older adults receive when using e-health and m-health tools. Electronic supplementary material The online version of this article (doi:10.1186/s12913-016-1522-3) contains supplementary material, which is available to authorized users.
Background
In the healthcare sector, e-health and m-health tools are increasingly being used. E-health and m-health can be any kind of electronic device or monitoring system that is applied by physicians in healthcare practice or by individuals to monitor or improve their health status. E-health typically refers to on-line and off-line computer-based applications, while m-health refers to applications for mobile phones [1,2]. Such tools can be used to stimulate a positive health behavior change or assist persons to lead a healthier lifestyle, or to support diagnosis and treatment of diseases.
E-health and m-health technologies are mostly used by younger people. The potential of this technology for older adults is generally recognized although the application of e-health and m-health tools in health promotion and primary prevention for older persons has been largely unexplored [2].
Positive changes in health-related lifestyle among older adults offer the opportunity for health benefits. It is recognized that promoting health among this population group may contribute to more healthy life years and increased life expectancy [3]. Many diseases among older adults are partly or fully preventable if individuals engage in a healthy lifestyle [4]. For example, physical activity and a proper diet can help to prevent obesity, heart diseases, hypertension, diabetes and even premature mortality [5,6]. Although the importance of a healthy lifestyle is known, older adults (50+ years) are frequently physically inactive [3,7]. The use of e-health and m-health technologies could help older adults to improve or maintain their health. But to what extent is the use of such tools by older adults reported in the literature?
This study aims to provide insight into the scope of the use of e-health and m-health tools for health promotion and primary prevention among older adults using the method of a systematic review. Hitherto, no in-depth overview of this topic has been provided in the literature. Therefore, our review is an initial step that explores the scope of the use of e-health and m-health tools in health promotion and primary prevention among older adults. We include the use of such tools within health promotion programs as well as the use of such tools by older adults outside formal programs with the goal to monitor and improve their health. In this way, our review may provide a base for subsequent more specific reviews focused on a certain type of e-health and m-health tools and their use in health promotion programs, as well as reviews on the effectiveness of e-health and m-health promotion programs within a specific group of older adults and a specific setting.
Our review focuses on two dimensions: the characteristics of e-health and m-health tools used for monitoring and improving the health of older adults and the determinants of the use of e-health and m-health tools by older adults. The review identifies gaps in the research area that can be used for setting up new studies as mentioned above.
The review is relevant for policy and society because of the ageing of the population and the increase in multi-morbidity which consequently lead to greater demand for healthcare and higher healthcare expenditure. Therefore, it is important to know whether and how the emerging new technologies, specifically e-health and m-health tools, can be used by older adults to prevent diseases and help older adults to have not only longer but also a healthier life.
Methods
This study uses a systematic literature review to analyze the use of e-health and m-health tools for health promotion among older adults. The methodology of a systematic literature review outlined in Grant and Booth (2009) is applied [8]. The study starts with a systematic literature search based on predefined search terms and selection criteria. Then, the articles selected for the review are appraised and their relevant findings are synthesized narratively based on the objective of the review, with the support of descriptive tables. The applied review method allows for a comprehensive overview of the current knowledge in a specific research field. This distinguishes our systematic review from other review methods such as scoping review and meta-analysis.
Given the aim of this review, three components are used to build the search terms for the identification of studies on the use of e-health and m-health tools in health promotion and primary prevention among older adults: (1) elderly or old or senior; (2) health promotion or primary prevention; (3) telemedicine or e-health or m-health. Different forms of the above words as well as relevant synonyms are taken into account. This results in a chain of keywords, which is used to search for relevant literature in the search engine PubMed. In the search, MeSH terms (bibliographic thesaurus) were also included to ensure uniformity and consistency in the literature search. The search was done in October 2015.
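The exact keyword chain is not reproduced in this text; the sketch below only illustrates how three OR-blocks of synonyms of this kind would be joined with AND to form a PubMed-style query. The synonym lists are hypothetical placeholders, not the terms actually used in the review.

```python
# Illustrative sketch: hypothetical synonym lists for the three components.
component_elderly = ["elderly", "older adult*", "senior*", "aged"]
component_prevention = ["health promotion", "primary prevention"]
component_ehealth = ["telemedicine", "e-health", "ehealth", "m-health", "mhealth"]

def or_block(terms):
    """Join synonyms of one component with OR, quoting multi-word phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

# The three components are combined with AND, mirroring the structure described above.
query = " AND ".join(or_block(c) for c in
                     [component_elderly, component_prevention, component_ehealth])
print(query)
```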
Various inclusion and exclusion criteria are applied. To be included in the review, the publication should have been published in the last 10 years, should be in English, and the full paper can be obtained. There are no limitations with regard to the institution which provides the e-health or m-health tools, i.e. papers that present e-health and m-health tools provided by the state, insurers, employers and others are considered relevant. We include papers which present the application of e-health and m-health tools not only within health promotion programs but also the use of such tools by older adults outside formal programs with the goal to monitor and improve their health. The publications could present data collected among older adults or among healthcare providers who provide services to older persons. Publications that discuss the topic in general as well as opinion papers and editorials are excluded.
Also, publications are selected if the age of participants (study group) is 50 years or older and if the focus is on health promotion and primary prevention. Based on Kenkel (2000), health promotion and primary prevention are defined in this review as activities that aim to reduce the probability of illness by stimulating a healthy lifestyle and providing services that might decrease the future incidence of illnesses [9]. Hence, publications that deal with the use of telemedicine in home care to assist disabled persons are excluded, as well as publications that report on the use of electronic devices and computer-based systems in secondary and tertiary prevention (e.g. monitoring of chronic conditions in case of specific diseases).
The first screening of the publications that appear after searching in PubMed with the chain of keywords given above is based on the title and abstract of the publications. At this stage, publications are considered potentially relevant if their title and abstract have a link with the review topic. For the second screening, the publications are downloaded and the text of the publication is fully screened. Publications that fit the inclusion criteria outlined above are classified as relevant and are selected for the review.
After the screening, the method of directed (relational) content analysis of Hsieh and Shannon (2005) is used for the analysis of the publications [10]. This type of analysis requires the identification of categories (themes) relevant to the review objective, extraction of information related to categories and synthesis of the information classified in each category. The groups of themes that are used for the review, and which form the units of analysis, are: (1) the characteristics of the e-health and m-health tools used; and (2) the determinants of the use of these tools by older adults. Based on these groups of themes, the data extraction is done. The results are presented per group of themes in a narrative manner and are complemented by descriptive tables. The quality of the publications (research design and findings of the study reported) is assessed in a qualitative manner. We classify a study as reliable if the methods of data collection and analysis are well defined in the publication, and are potentially repeatable. Similarly, we classify a study as valid if the publication provides clear indications of consistency of the results with stated study hypotheses, expectations and/or results of other similar studies. The generalizability of the study is defined based on indications for possible extrapolation of the findings to the larger population. The quality of this review is also checked using the PRISMA 2009 checklist (see Additional file 1).
Results
The chain of keywords shown above yields 656 publications, which are included in the initial screening. The results of the screening are presented in Fig. 1. In the first screening, 454 publications are excluded after reading the abstract, based on the inclusion/exclusion criteria. In total, 202 publications are included in the second screening. For the second screening, the publications are downloaded. The full paper cannot be obtained for 35 publications, and hence, these articles are excluded. The text of the remaining 167 articles is reviewed. From these 167 articles, 122 publications are excluded after reading the full text. The reasons for exclusion are: (1) the publication is not about health promotion or primary prevention; (2) there is no e-health or m-health tool studied; (3) older adults are not a study group; (4) a combination of the above reasons. Thus, after the second screening, 45 publications are selected for this systematic review. A detailed description of the articles is presented in Additional file 2.
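The screening counts reported above are internally consistent, as the following bookkeeping sketch verifies.

```python
# Bookkeeping for the screening flow reported above (Fig. 1 of the review).
initial = 656
excluded_on_abstract = 454
no_full_text = 35
excluded_on_full_text = 122

second_screening = initial - excluded_on_abstract        # 202
full_text_reviewed = second_screening - no_full_text     # 167
included = full_text_reviewed - excluded_on_full_text    # 45

assert second_screening == 202 and full_text_reviewed == 167 and included == 45
print(f"{included} publications selected for the review")
```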
General description of the selected publications
The main characteristics of the included publications are presented in Table 1. The majority of the publications have been published in the last 4 years; in the last 2 years alone, 33.3 % of the publications were published. The majority of the studies have an explanatory aim (quantitative studies investigating relations and determinants) and only nine publications are explorative or descriptive (qualitative or mixed-methods studies providing more insight on the topic).
There are four different research approaches in the publications reviewed. The majority (32 publications) are quantitative studies with primary data collection. Five publications use secondary data. In 28 publications, randomization was reported. Two systematic reviews were conducted, both about telehealth. Seven methods of data collection are reported. Most publications use biomedical test results as input data. These test results are measured by a healthcare professional or are provided by the participant through a self-report. Five publications, including the two systematic reviews, use secondary data or patient records for data analysis. In one publication, observation techniques are used for the data collection. Focus group discussions are reported in two publications. Seven publications report unstructured or semi-structured interviews for data collection. The other publications use standardized questionnaires or online questionnaires.
In Table 1, our qualitative assessment of reliability, validity and generalizability is also presented. If the publications are clear about their methods of data collection and analysis, they are considered reliable. In total, 19 publications have a clear and reliable description of the methods. In 16 publications, some aspects of validity are mentioned. Generalization is clearly outlined in 11 publications.
Characteristics of the use of e-health and m-health tools among older adults
In Table 2, the characteristics of the e-health and m-health tools used for health promotion and primary prevention among older adults are outlined. In the first category, the type of tool is mentioned. In 21 publications, a website is reported for e-health services. One website, for instance, offers a help program to reduce weight, where participants can enter their data and plan their goals. Then, the website helps with feedback to achieve these goals. Other website-related interventions deliver information for primary prevention or health promotion. Two publications report on a smartphone app. In 15 publications, the use of various devices is reported. These devices are often used to gather health-related data, for example, a pedometer to count steps. There are 4 publications that report on the use of video consults so that patients do not need to go to a healthcare facility. Participants can use programs like Skype to have a video consultation with the nurse or general practitioner. In 13 publications, the use of telehealth is reported. Telehealth is used to deliver online webinars. Here, people can participate in a course or program. There are webinars to help older adults to get active or to work on their healthy behavior.
Virtually all e-health and m-health tools reported are related to a health promotion or primary prevention program for older adults. Only one publication describes a tool without a health promotion or prevention program; this tool is a phone-based diary (app). The majority of the publications report on computer-tailored lifestyle programs (computer-based e-health programs). Specifically, a computer-tailored lifestyle program has the aim to change unhealthy behaviors. Such a program helps older adults with personal goal setting and achieving these goals. In addition, ten programs that we identified are based on providing feedback. Feedback is provided with an interactive voice response, with the use of internet, or as face-to-face feedback. These programs do not have to be tailor-made but could provide the same feedback for an entire group. Telehealth offers different programs, which are studied in 12 publications. There are telehealth programs to increase physical activity among older adults. Other telehealth programs provide information, for example, on stroke prevention. Telehealth programs help older adults to assess their health. Four publications report on the use of online health information to improve knowledge about primary prevention and health promotion among older adults.
The third category in Table 2 portrays the study groups. All publications report on the use of e-health and m-health tools among older adults, older adults being defined as 50 years and older. The majority of the publications (17 publications) do not further limit the study group. Fourteen publications mention specific physical requirements; most of them are health programs aimed at people who are overweight. There are three studies focused on older women, and three that aim at older adults from a specific cultural group. One publication reports on older adults with limited computer knowledge.
The majority of the publications come from the United States (27 out of the 45 publications reviewed). Ten publications come from Europe. There are three publications from Asia; of these, two are from Japan and one is from Hong Kong. The other five publications come from Canada and Australia.
In Table 3, a cross-tabulation is given to show which types of e-health and m-health tools are reported in which programs. Apps are used for the provision of health-related feedback to older adults within a health promotion program, as well as outside a formal program. Websites are also used within health promotion programs to provide health-related information to older adults. Websites, devices and webinars are used in computer-tailored lifestyle programs and programs providing health-related support or feedback. Telehealth programs involve the use of devices, video consults and webinars.
Table 4 cross-tabulates the type of use of e-health and m-health tools with the year of publication. The range of uses reported in the publications we reviewed did not increase much during the past 10 years. There is an increased focus on programs based on the use of e-health tools for providing support and feedback for a healthy lifestyle. Of the 10 publications with such a focus, eight have been published in the last 4 years. These publications are from the USA and Europe. E-health tools in telehealth programs are reported before 2012. Table 5 presents a cross-tabulation of the study group and the type of use of e-health and m-health tools. Most publications with a specific study group are about physical conditions. There is only one publication that reports on telehealth aimed at helping older adults who live in a clinic. The only publication that is focused on older adults with limited computer knowledge is about a program on online health information. Some publications report on the use of e-health and m-health tools in computer-tailored lifestyle programs to help older adults get physically active. Other publications for older adults with specific physical conditions report on the use of e-health and m-health tools in programs focused on providing support and feedback, and one publication is focused on telehealth. Studies on telehealth most often include older adults in general. Three publications report on the use of e-health and m-health tools by older women in a computer-tailored lifestyle program and programs focused on providing health-related support and feedback.
Facilitating factors and barriers to the use of e-health and m-health tools in health promotion among older adults
For the use of e-health and m-health tools, barriers and facilitating factors are reported in the publications reviewed. These factors are described in Table 6. It should be underlined, however, that most of the factors listed in Table 6, such as motivation, self-regulation, information and rewards, are important determinants of behavior change in general and not necessarily direct determinants of the use of e-health and m-health tools per se. At the same time, other factors in Table 6, such as usability and accessibility, can be directly related to the use of e-health and m-health tools. Seven types of facilitating factors are reported in the publications. The most often mentioned facilitating factors are motivation, support and feedback. These are reported in 12 publications. Specifically, support received from other participants in the e-health or m-health program is a key factor in helping to change behavior. Motivation or feedback from other participants is also important to observe progress. This also contributes to adherence to the e-health or m-health programs. Four publications indicate that it is necessary to let older adults participate in accordance with their own planning to change. This could be accomplished by self-regulation and goal setting. Goal setting, and insight into how they perform, is a way to keep them motivated. Information on individual progress and the nature of the tool also helps to facilitate the use of the tool. According to three publications, it is helpful if there is a reward system. The reward system can be based on both the use of the e-health/m-health tool and concrete changes in health-related behavior. This could be a financial reward, or a reward in the sense that the participants can notice progress. Three publications mention user-friendliness of the tools as a facilitating factor. If the electronic device is simple and works easily, older adults are more willing to keep using it. Two publications indicate the accessibility of the tools or programs as a facilitating factor. It is better if the programs or tools are provided in multiple languages so that older adults can use them in their native language. For some older adults, it is better to have access to different forms of information, for example, a printed version beside the online information. Access to remote help at home is the second most often mentioned facilitating factor; this is the case in eight publications. The benefit of this remote help at home is often the lack of travel distance. In the publications, barriers to the use of e-health and m-health tools for health promotion and primary prevention among older adults are also mentioned. These barriers are presented in Table 6. There are seven categories of barriers. The first two categories are related to personal barriers. Six publications mention barriers to the use of e-health and m-health tools related to personal choice. This choice refers to the lack of time or other priorities. A solution that is indicated is to have a tool that can be paused, so that use can be resumed when the older adult has time. Some publications mention that the monetary costs of use are too high. As mentioned for the facilitating factors, the lack of motivation and support is also most often reported as a barrier to adherence to e-health or m-health health promotion programs. When there is an online support group, the group could be used to motivate each other.
If the online support group is not used or is only filled with negative comments, then the support has a negative influence and becomes a barrier. According to the publications reviewed, there is also a lack of motivation when participants cannot reach their goals. When devices are used or information is provided, it should be clear how the device works, and the users should be able to understand the information that they receive. The lack of information, or the lack of comprehensible information, is mentioned in three publications. Barriers related to the technology used and the device are reported in four publications.
There are examples of problems with the use of the internet or with the device. Problems with the electronic devices can also be caused by sociodemographic barriers. Sometimes older adults do not have the proper skills to work with e-health or m-health devices. In four publications, sociodemographic barriers are mentioned. The sociodemographic barriers are related to educational level and age. Three publications indicate barriers that are related to policy or to a lack of resources to implement the e-health program or tool.
Discussion
This systematic literature review presents evidence on the scope of the use of e-health and m-health tools for health promotion and primary prevention among older adults, as well as the factors that influence the use of these tools. There are different kinds of e-health and m-health tools used for health promotion and primary prevention among older adults. These include apps, websites, devices, video consults and webinars. Many of the health promotion and primary prevention programs for older adults that utilize such tools have websites with information on health-related aspects. This is for example the case with computer-tailored lifestyle programs and telehealth programs. The majority of the publications on e-health and m-health tools that we reviewed study the general older adult population. Only a few publications report studies focused on a specific older adult group. The most common specific study group consists of older adults with a certain physical limitation, most often the need for weight reduction. This is not surprising, as many diseases can be prevented through physical activity or maintenance of a healthy weight [11][12][13]. This could explain why there is a strong focus on older adults with weight problems as a study group. The publications with this study group most often report the use of e-health and m-health tools in a computer-tailored lifestyle program. This is also reported by the systematic reviews focused on e-health interventions for physical activity and dietary behavior change [14]. During the past 10 years, the number of publications reporting on the use of e-health and m-health tools in health promotion and primary prevention among older adults has been increasing. Also, the focus of the publications has changed through the years. At the beginning of the period covered by our review, there were more publications about telehealth. In the past 4 years, publications more often report on the use of e-health and m-health tools in computer-tailored lifestyle programs or programs that provide support and feedback for a healthy lifestyle. The results show that for different study groups, different e-health and m-health tools are used. The choice of an adequate tool depends on the specificities of the participants. This could explain why there are many different e-health tools and programs reported [15]. Another explanation for this diversity is the rapid change in the available e-health and m-health tools [2]. Thus, although 40 % of the publications we reviewed report the use of telehealth for older adults in general, this might change in the near future as new m-health tools (such as apps) are becoming available [16]. Most probably, some of these tools are already used by older adults, but are not yet studied and reported in the literature.
In this review, we also outline the evidence on the facilitating factors and barriers to the use of e-health and m-health tools for health promotion and primary prevention among older adults. The results show different facilitating factors and barriers. When the barriers are studied, over 25 % of the publications mention the lack of motivation, support and feedback as obstacles. At the same time, the results for the facilitating factors also show that strong motivation as well as adequate support and feedback are important for the continuity of a health program based on e-health and m-health tools. It is recognized, however, that these factors are important determinants of behavior change and not necessarily direct determinants of the use of e-health and m-health tools [13]. When health promotion and primary prevention programs offer support or feedback, older adults are more likely to keep using the e-health and m-health tools offered by the program. Motivation can be stimulated in different ways [17]. The most frequently mentioned motivator is feedback on the extent to which people have achieved their goals. Such feedback can come from a professional or a peer-support group. We find evidence, however, that the use of an online support group can have both positive and negative effects. If the feedback is formulated positively, it can be a motivation for further achievement. But if the group mostly focuses on the negative aspects of the use of e-health and m-health tools, and provides more negative comments, then it can turn into a barrier. One publication indicates rewards as a facilitating factor. This could be not only a financial reward, but also the achievement of the health goals [13,17]. To help with motivation, adequate goal setting is an important facilitating factor [18]. When older adults have a personal aim and reachable goals, they are more likely to pursue the targeted behavior change and, therefore, continue to use the e-health and m-health tools offered. If the goals are too difficult to reach, then this has negative effects, and the goal setting becomes a barrier [18,19].
Some of the publications that we reviewed point to sociodemographic barriers. For some older adults, for example, it is problematic to work with new technologies. This could be due to a low educational level or limited skills with electronic devices. When the e-health and m-health tools use technologies which older adults already know, the ease of use is a facilitating factor. The tool should also present the information in a clear and comprehensible way. Also, older adults' access to health promotion or primary prevention programs can be facilitated through telehealth [20]. With the use of this type of e-health, older adults do not have to travel to benefit from such programs [21].
Although our review was systematic and we took care to assure its quality (see Additional file 1), we still need to acknowledge some key limitations. A limitation of this review is that the search for relevant publications was done in one search engine by a single researcher. Although PubMed is the most relevant search engine with regard to our topic and it includes an enormous volume of publications, we may have missed publications on commercial e-health or m-health tools. Also, a certain bias in selecting relevant publications is present since only one researcher did the selection. Another limitation is that we assessed the study designs in a qualitative manner without applying a standardized protocol that could have helped us to quantify the strengths and weaknesses of the study designs. Therefore, our review should only be seen as a first attempt to bring together evidence on the use of e-health and m-health tools for health promotion and primary prevention among older adults.
With regard to the scope of our review, we only address the use of m-health and e-health tools for primary prevention, while the use of these tools is equally relevant in secondary and tertiary prevention and in treatment. Such applications of m-health and e-health tools are widely reported in the literature [22,23]. Also, our review is exclusively focused on older adults, while a valuable starting point in future reviews could be the inclusion of more population groups or a comparison with the general population [24][25][26]. In addition, as stated at the outset of this paper, our review should be seen as an initial step that explores the scope of the use of e-health and m-health tools for health promotion and primary prevention among older adults. We were unable to explore the effectiveness of e-health and m-health tools among older adults. Specifically, our review captures studies with very different objectives. For example, some of the studies focus on the effectiveness of e-health and m-health programs, while others do not. Even if we only examine studies that specifically focus on the effectiveness of e-health and m-health programs, the focus is not necessarily on the effectiveness of the e-health and m-health tools but rather on the effectiveness of the programs in general. Thus, the outcomes measured do not reflect the effects of e-health and m-health tools, but the more general objective of the study. Subsequent more specific reviews focused on the effectiveness of e-health and m-health tools and programs for older adults within specific settings need to be conducted to obtain a better understanding of how such programs should be designed and implemented.
Conclusions
The results of this systematic literature review show that the relevance of e-health and m-health tools in health promotion and primary prevention among older adults is recognized and that there is a variety of uses of such tools. Also, the research focused on this issue is increasing, since more publications have appeared in recent years. It seems, however, that the use of e-health and m-health tools in health promotion programs for older adults is mostly an isolated initiative, especially outside the US. European countries, for example, which experience fast population ageing, could specifically benefit from the use of e-health and m-health tools in health promotion and primary prevention programs among older adults. If these programs are designed with caution to avoid potential barriers (such as those outlined here), and if the cost-effectiveness of the programs can be demonstrated in future studies, governments might be willing to consider their expansion and funding. In this regard, more evidence on the effectiveness and cost-effectiveness of e-health and m-health health promotion programs for older adults is needed.
Acknowledgement
This publication arises from the project Pro-Health 65+ which has received funding from the European Union, in the framework of the Health Programme (2008-2013). The content of this publication represents the views of the authors and it is their sole responsibility; it can in no way be taken to reflect the views of the European Commission and/or the Executive Agency for Health and Consumers or any other body of the European Union. The European Commission and/or the Executive Agency do(es) not accept responsibility for any use that may be made of the information it contains.
Publication co-financed from funds for science in the years 2015-2017 allocated for implementation of an international co-financed project.
Declarations
This article has been published as part of BMC Health Services Research Volume 16 Supplement 5, 2016: Economic and institutional perspectives on health promotion activities for older persons. The full contents of the supplement are available online at http://bmchealthservres.biomedcentral.com/articles/supplements/volume-16-supplement-5.
Availability of data and materials
The PRISMA 2009 Checklist for systematic reviews and/or meta-analyses is filled in and included in Additional file 1. The list of papers reviewed and key content used in this review can be found in Additional file 2.
Authors' contributions
RK contributed to the development of the study design, carried out the literature search and analysis, drafted and improved the manuscript, approved the final version and agreed to be accountable for his contribution. MP contributed to the development of the study design, reviewed the literature search and analysis, reviewed and commented on the preliminary drafts and final version of the paper, approved the final version and agreed to be accountable for her contribution. MT contributed to the development of the study design, reviewed and commented on the preliminary and final paper drafts, approved the final version and agreed to be accountable for her contribution. WG assessed the study design, reviewed and commented on the literature search and analysis, reviewed and commented on preliminary paper drafts and the final version of the paper, approved the final version and agreed to be accountable for his contribution.
Automated streamliner portfolios for constraint satisfaction problems
Constraint Programming (CP) is a powerful technique for solving large-scale combinatorial problems. Solving a problem proceeds in two distinct phases: modelling and solving. Effective modelling has a huge impact on the performance of the solving process. Even with the advance of modern automated modelling tools, search spaces involved can be so vast that problems can still be difficult to solve. To further constrain the model, a more aggressive step that can be taken is the addition of streamliner constraints, which are not guaranteed to be sound but are designed to focus effort on a highly restricted but promising portion of the search space. Previously, producing effective streamlined models was a manual, difficult and time-consuming task. This paper presents a completely automated process for the generation, search and selection of streamliner portfolios to produce a substantial reduction in search effort across a diverse range of problems. The results demonstrate a marked improvement in performance for both Chuffed, a CP solver with clause learning, and lingeling, a modern SAT solver.
Introduction
Challenging combinatorial problems, from domains such as planning, scheduling, packing or configuration, often form problem classes: families of problem instances related by a shared high-level specification, with a common set of free parameters. Constraint Programming (CP) and Propositional Satisfiability solving (SAT) offer powerful, complementary means to solve these problem classes. For either formalism, a model must be formulated, which describes the problem class in a format suitable for input to the intended solver. Since the search spaces involved can be vast, however, sometimes the model initially formulated for a problem class may give instances where it is too difficult for the solver to find a solution in a timely manner. In response, a natural step is to constrain the model further in order to strengthen the inferences the solver can make, therefore detecting dead ends in the search earlier and reducing overall search effort. One approach is to add implied constraints, which can be inferred from the initial model and are therefore guaranteed to be sound. Manual [1,2] and automated [3][4][5] approaches to generating implied constraints have been successful. Other approaches include adding symmetry-breaking [6][7][8][9] and dominance-breaking constraints [10][11][12], both of which rule out members of equivalence classes of solutions while preserving at least one member of each such class.
Fig. 1. The Car Sequencing problem [14], shown to be NP-complete [15]. A number of cars (n_cars) are to be produced; they are not identical, because different classes (n_classes) are available (quantity) as variants on the basic model. The assembly line has different stations which install the various options (n_options) such as air conditioning and sun roof (each class of cars requires certain options, represented by usage). A maximum number of cars (maxcars) requiring a certain option can be sequenced within a consecutive subsequence block (blksize), otherwise the station will not be able to cope.
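To make the blocked capacity constraint in the caption concrete, the following is a minimal Python sketch of a checker for it, assuming maxcars and blksize are given per option and usage[c][o] encodes whether class c requires option o. This is an illustrative restatement of the constraint, not code from the paper.

```python
def sequence_ok(seq, usage, maxcars, blksize, n_options):
    """Check that, for every option o, every window of blksize[o] consecutive
    cars contains at most maxcars[o] cars whose class requires option o.
    seq   -- list of class indices, one per position on the line
    usage -- usage[c][o] is True if class c requires option o (assumed encoding)
    """
    for o in range(n_options):
        for start in range(len(seq) - blksize[o] + 1):
            window = seq[start:start + blksize[o]]
            if sum(usage[c][o] for c in window) > maxcars[o]:
                return False
    return True

# Tiny example: 2 classes, 1 option; only class 1 needs the option,
# and at most 1 car with the option may appear in any block of 2.
print(sequence_ok([0, 1, 0, 1], [[False], [True]],
                  maxcars=[1], blksize=[2], n_options=1))  # True
```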
If these techniques are inapplicable, or improve performance insufficiently, for satisfiable problems a more aggressive step is to add streamliner constraints [13], which are not guaranteed to be sound but are designed to focus effort on a highly restricted but promising portion of the search space. Streamliners trade the completeness (i.e. failing to find a solution when there is one) offered by implied, symmetry-breaking and dominance-breaking constraints for potentially much greater search reduction.
Previously, producing effective streamlined models was a difficult and time-consuming task. It involved manually inspecting the solutions of small instances of the problem class in question to identify patterns to use as the basis for streamliners [13,[16][17][18]. For example, Gomes and Sellmann [13] added a streamliner requiring a Latin Square structure when searching for diagonally ordered magic squares.
The principal contribution of this paper is to demonstrate how a powerful range of streamliners can be generated and applied automatically. Our approach is situated in the automated constraint modelling system Conjure [19][20][21]. This system takes as input a specification in the abstract constraint specification language Essence [22,23]. Fig. 1 presents an example specification, which asks us to sequence cars on a production line so as not to exceed the capacity of any station along the line, each of which installs an option such as a sun roof. Essence supports a powerful set of type constructors, such as set, multi set, function and relation, hence Essence specifications are concise and highly structured. Existing constraint solvers do not support these abstract decision variables directly. Therefore we use Conjure to refine abstract constraint specifications into concrete constraint models, using constrained collections of primitive variables (e.g. integer, boolean) to represent the abstract structure. The constraint modelling assistant tool Savile Row [24,25] is then used for producing solver-dependent input. Savile Row supports several solving paradigms, including CP, SAT and SMT (satisfiability modulo theories).
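As a rough sketch of how this toolchain is typically driven, the snippet below shells out to Conjure's solve mode, which in turn runs Savile Row and a backend solver. The file names are placeholders, and the exact invocation is an assumption that should be checked against the Conjure documentation.

```python
import subprocess

# Assumed invocation: `conjure solve` refines the Essence specification,
# runs Savile Row and a backend solver, and writes solutions to disk.
# The file names are hypothetical placeholders for this sketch.
result = subprocess.run(
    ["conjure", "solve", "car-sequencing.essence", "instance.param"],
    capture_output=True, text=True,
)
print(result.stdout)
```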
Our method exploits the structure in an Essence specification to produce streamlined models automatically, for example by imposing streamlining constraints on the function present in the specification in Fig. 1. The modified specification is refined automatically into a streamlined constraint model by Conjure. Identifying and adding the streamlining constraints at this level of abstraction is considerably easier than working directly with the constraint model, which would involve first recognising (for example) that a certain collection of primitive variables and constraints together represent a function - a potentially very costly process. Moreover, recovering high-level information from a low-level model expressed in a lower level constraint modelling language like OPL [26], MiniZinc [27] or Essence Prime [28] would be brittle with respect to the exact heuristics and modelling reformulations used inside Conjure and Savile Row. As with automated symmetry breaking during modelling [29,20], automated streamlining therefore motivates the adoption of a higher level language such as Essence and letting automated tools work out the best compilation - just as has happened in general programming languages.
Our streamlining system completely automates the original manual process defined by Gomes and Sellmann [13]. As per their method, it does require an initial investment in generating and testing streamliners for the problem class at hand, but this effort is repaid in two ways. First, we assume a context common across automated algorithm selection [30] and machine learning in general: we expect to solve a large number of instances of the problem class, and so the effort made to formulate the best model that we can is amortised over that substantial solving effort. Second, successful streamlining can result in a vast reduction in search effort, allowing us to solve much harder instances than would otherwise be practically feasible. Our work significantly expands previous work on automated streamliner generation from high-level problem specifications [31] and automatic selection of streamlining constraints [32]. We demonstrate the effectiveness of our approach on both the CP and SAT solving paradigms, choosing representative solvers for each. For CP, we use the learning solver chuffed [33], and for SAT, we use lingeling [34]. As presented in Section 8, our method can often produce a substantial reduction in search effort across a diverse range of problems. The automated streamlining system, Essence problem specifications and all the data used in this work for computational evaluation are available at https://www.github.com/stacs-cp/automated-streamliner-portfolios (see [35]).
Architecture overview
We begin with an overview of the architecture of our system, before explaining each of its components in detail in the subsequent sections. Given a problem class of interest, our streamlining approach proceeds in three main phases. Firstly, candidate streamliners are generated from an Essence specification (Section 2.1). Secondly, streamliners are combined into a portfolio of streamliner combinations with complementary strengths (Section 2.2). Finally, given an unseen instance of the problem class, one or more streamliner combinations are selected from the portfolio and scheduled for use in solving (Section 2.3).
Phase 1: candidate streamliner generation
Given an Essence specification of a problem class, several candidate streamliners are automatically derived via a set of prebuilt rules encoded inside Conjure. Those rules define patterns that match against the types of the decision variables in the Essence specification. As an example, a candidate streamliner for the Car Sequencing problem in Fig. 1 can enforce approximately half of the range of the car function to take odd values only, hence reducing the search space. A full description of the rules and the streamliner generation process is presented in Section 3. The performance of a streamlined model on a given problem instance can then be evaluated using the procedure described in Fig. 2.
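As a concrete reading of that example, here is a small Python predicate expressing what the streamliner asserts about a candidate assignment. The softness handling is a guessed simplification; the real rule is an Essence constraint generated by Conjure, not this code.

```python
def approx_half_odd(range_values, softness=1):
    """True if the number of odd values among the function's range values
    is within `softness` of half of them. This mimics the *intent* of the
    streamliner; the actual rule is an Essence constraint."""
    odd = sum(1 for v in range_values if v % 2 == 1)
    target = len(range_values) / 2
    return abs(odd - target) <= softness

print(approx_half_odd([1, 2, 3, 4]))   # 2 odd values out of 4 -> True
print(approx_half_odd([2, 4, 6, 8]))   # 0 odd values out of 4 -> False
```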
Phase 2: portfolio construction
The generated candidate streamliners can be combined to form stronger streamliners [36], i.e., multiple streamliners can be added to the original problem specification at the same time to form a streamlined model. This results in a potentially very large number of possible (combined) streamliners for a given problem class. Each of those streamliners may drastically reduce the search space and lead to a large speed up in solving time. However, in contrast to other space-pruning techniques, such as symmetry-breaking or implied constraints, we cannot expect streamliners to be universally applicable. As illustrated in Section 5.2.1, the effectiveness of a streamliner can vary widely among instances of the same problem class. For example, a streamliner can be very useful for solving some instances but impair performance on others or even render them unsatisfiable. Therefore, we need an effective mechanism to search the large space of streamliner combinations and identify the effective and sound ones.
Given a problem class, our system aims at constructing a portfolio of streamliners with complementary strengths, i.e., each streamliner is specialised towards a region of the instance space (i.e., the high-dimensional space defined by instance features [37]). Our portfolio construction method consists of three stages. Firstly, a set of training instances that is representative of the instance space of the given problem is automatically built (Section 5). Secondly, a Monte Carlo Tree Search method is used for searching in the space of the candidate streamliner combinations (Section 6.1 and Section 6.2). Finally, a portfolio builder approach is applied on top of the search process to enhance the complementary strength of the constructed portfolio (Section 6.3).
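For intuition, the sketch below shows the kind of node-selection step a Monte Carlo Tree Search over streamliner combinations might use. The UCB1 rule shown is the textbook choice and is a stand-in for the actual selection policy detailed in Section 6; the streamliner names and statistics are illustrative.

```python
import math

def ucb1_select(children, exploration=math.sqrt(2)):
    """Pick the child (a candidate streamliner to add to the current
    combination) maximising the UCB1 score: mean reward plus an
    exploration bonus. `children` maps a streamliner name to a pair
    (total_reward, visits); parent visits are derived from the children."""
    parent_visits = sum(v for _, v in children.values())
    def score(stats):
        total, visits = stats
        if visits == 0:
            return float("inf")  # always try unexplored streamliners first
        return total / visits + exploration * math.sqrt(math.log(parent_visits) / visits)
    return max(children, key=lambda name: score(children[name]))

# Toy statistics: reward could be, e.g., normalised search reduction
# achieved on the training instances.
stats = {"half-odd": (3.0, 5), "monotonic": (1.0, 2), "quasi-regular": (0.0, 0)}
print(ucb1_select(stats))  # -> "quasi-regular" (unvisited, so tried first)
```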
Phase 3: streamliner selection and application
Once a streamliner portfolio has been constructed, it can be used for solving any unseen instance of the same problem class. In Section 7, we discuss several methods to select and apply a streamliner portfolio to a new problem instance. We start with the simplest approaches, where streamliners are selected based on their average performance across the whole training instance set. We then investigate a learning-based approach [38] where a prediction model is used for deciding the best streamliner based on features of the given instance. Even though the training of such a learning-based approach is more computationally expensive, this approach offers significant improvement in performance compared to the other ones in several cases (as shown in Section 8).
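A minimal sketch of the learning-based selection idea, assuming per-instance feature vectors and best-streamliner labels are available from the training phase. The random forest and the toy data are illustrative choices, not necessarily the model or features used in the paper.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one feature vector per training instance and
# the label of the portfolio streamliner that performed best on it.
X_train = [[10, 0.3], [50, 0.7], [12, 0.2], [48, 0.8]]
y_train = ["half-odd", "monotonic", "half-odd", "monotonic"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# For an unseen instance, its features decide which streamliner to try first.
print(model.predict([[45, 0.75]])[0])  # -> "monotonic" on this toy data
```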
From streamlining constraints to streamlined specifications
This section presents the methods used to generate streamlined models automatically. The process is driven by the decision variables in an Essence specification, such as the function in Fig. 1. The highly structured description of a problem that an Essence specification provides is better suited to streamliner generation than a lower level representation, such as constraint modelling languages like OPL, MiniZinc and Essence Prime. This is because nested types like multiset of sets must be represented as a constrained collection of more primitive variables, obscuring the structure that is useful to drive streamliner generation. For each variable, the system generates streamlining constraints that capture possible regularities that impose additional restrictions on the values of that variable's domain. Since the domains of Essence decision variables have complex, nested types, these restrictions can have far-reaching consequences for constraint models refined from the modified specification. The intention is that the search space is reduced considerably, while retaining at least one solution.
Candidate streamliners are generated by applying a system of streamlining rules. A streamlining rule takes as input the domain of an existing Essence term (a reference to a decision variable, or parts of it) and produces a constraint posted on this term. Abstract domains in Essence can be arbitrarily nested, and streamlining rules take advantage of this nested structure. A rule defined to work on a domain D is lifted to work on a domain of the form set of D (and other abstract domain constructors mset, function, sequence, relation, partition, tuple, etc.) through higher-order rules.
We identify groups of streamlining rules and tag them appropriately such that multiple streamlining constraints from the same group are not combined. For example, a rule that enforces an integer variable to be even uses the same tag as another rule that enforces the same integer to be odd. This simple way of identifying conflicting streamlining constraints allows the streamliner selection search to prune some of these combinations without wasting any computational effort (Section 6.1).
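A small sketch of this tag-based pruning; the streamliner names and tags below are illustrative, not the actual rule groups.

```python
from itertools import combinations

# Each candidate streamliner carries the tag of its rule group.
streamliners = {"odd": "int-parity", "even": "int-parity",
                "monotonic-inc": "func-order", "quasi-regular": "part-regular"}

def compatible(combo):
    """A combination is kept only if no two streamliners share a tag."""
    tags = [streamliners[s] for s in combo]
    return len(tags) == len(set(tags))

pairs = [c for c in combinations(streamliners, 2) if compatible(c)]
print(pairs)  # ("odd", "even") is pruned; all other pairs survive
```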
Streamlining rules
Domain attributes can be added to Essence domains to restrict the set of acceptable values. For example, a function variable domain may be restricted to injective functions, or a decision variable whose domain is a partition may be restricted to regular partitions. Hence, the simplest source of streamliners is the systematic annotation of the decision variables in an input specification. This sometimes retains solutions to the original problem while improving solver performance.
The existing Essence domain attributes are, however, of limited value. They are very strong restrictions and so often remove all solutions to the original problem when added to a specification. In order to generate a large variety of useful streamliners we employ a small set of rules, categorised into two classes:
1. First-order rules add constraints to reduce the domain of a decision variable directly.
2. Higher-order rules take another rule as an argument and lift its operation onto a decision variable with a nested domain, such as the complex set of functions presented in Fixed Length Error Correcting Codes (see Fig. 5). This allows for the generation of a rule such as enforcing that approximately half (with a softness parameter) of the functions in the set are monotonically increasing. Imposing extra structure in this manner can reduce search very considerably.
A selection of the first-order rules is given in Fig. 3, and a selection of the higher-order rules is given in Fig. 4. These rules cover all domain constructors in Essence and they can be applied recursively to nested domains. Some of the rules (e.g. approximately half) take a softness argument which can control how strict the generated streamliner constraint is going to be. As a convention, smaller values of the softness parameter produce comparatively strict streamliners (hence potentially causing greater reductions in the amount of search) and larger values produce more applicable streamliners. The reduction power and applicability are two of the criteria that we use when searching for effective streamliners (see Section 6.2).
Fig. 3. The first-order streamlining rules. For each rule we present the rule name, the rule's input and output, and the tag. The tags are used to filter trivially contradicting streamliners during streamliner selection. We choose up to one streamliner from each tag.
The first two sets of rules in Fig. 3 operate on a decision variable with an integer domain. They work by adding a unary constraint to limit values to the odd values, even values, values from the lower half of the domain, or values from the upper half of the original domain. The next five sets of rules operate on decision variables with function domains. Monotonically increasing (and decreasing) enforce entries in the function to be monotonically increasing (or decreasing). The smallest first rule is a subset of monotonically increasing: it only enforces the smallest value in the function's defined set to be mapped to the smallest value in the range set (similarly for largest first). Commutative, non-commutative and associative rules enforce the corresponding property on a function variable. All streamlining rules work on partial and total functions. The quasi-regular rule takes a softness parameter and enforces the partition decision variable to be almost regular. A regular partition is one in which all parts of the partition are of equal cardinality. For binary relations, Essence contains common binary relation properties (listed in the figure). These attributes can simply be turned on by adding them to the domain declaration. The last set of streamlining rules generates one of these attributes on decision variables with a binary relation domain.
The tags given in Fig. 3 are used to filter trivially contradicting streamliner rules. For example, since they share the same tag, our system would not generate commutative and non-commutative streamliners simultaneously. However, we would generate one of commutative or non-commutative together with associative.
Fig. 4. The higher-order streamlining rules. These rules lift existing first-order and higher-order streamlining rules to work on nested domain constructors of Essence. They do not introduce any additional tags, but they propagate the tags introduced by the rule they are parameterised on.
Fig. 4 lists the higher-order streamlining rules implemented in Conjure. These rules are used to apply the first-order streamlining rules to decision variables with nested domains. The first set of rules (all, half and at most one) works on sets, multi-sets, sequences and relations. They take another streamlining rule as an argument and apply it to the relevant members of the domain. The second set of rules applies a given streamlining rule to approximately half of the members of the decision variable's domain. The third set of rules applies a given streamlining rule to the defined set or the range set of a function variable. Finally, the last rule applies a given streamlining rule to the set of parts of a partition (returned by the parts function). The streamlining rule (R) that is given as a parameter to these rules can be a first-order or a higher-order rule. Moreover, they can be nested arbitrarily to match the nested domains that can be defined in the Essence problem specification. See Fig. 1 and Fig. 5 for the domains of the decision variables in the ten problem classes we use in this paper.
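To illustrate the lifting mechanism, the following Python analogue wraps a first-order predicate with higher-order combinators in the spirit of Fig. 4. It mimics the shape of the rule system and is not the Conjure implementation; in Conjure the rules produce Essence constraints rather than executable checks.

```python
def monotonically_increasing(f):
    """First-order check on a function represented as a list of range values."""
    return all(a <= b for a, b in zip(f, f[1:]))

def lift_all(rule):
    """Higher-order combinator: the rule must hold for every member."""
    return lambda members: all(rule(m) for m in members)

def lift_half(rule):
    """Higher-order combinator: the rule must hold for at least half the members."""
    return lambda members: sum(rule(m) for m in members) * 2 >= len(members)

# Nested use, e.g. on a set of functions as in Fixed Length Error Correcting Codes:
code = [[1, 2, 3], [3, 1, 2], [0, 0, 5], [4, 4, 4]]
print(lift_half(monotonically_increasing)(code))  # 3 of 4 increasing -> True
print(lift_all(monotonically_increasing)(code))   # [3, 1, 2] fails -> False
```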
Although rich, the set of Essence type constructors is not exhaustive. Graph types, for example, are a work in progress [39]. At present, therefore, we might specify such a problem in terms of a set of pairs or a relation domain, and the streamliner generator would produce candidate streamliners based on this representation. As further type constructors are added to Essence it is straightforward to extend our streamlining rules to accommodate them.
Problem classes
We now introduce the problem classes studied in this paper, in addition to the Car Sequencing Problem presented in Fig. 1. There are 10 problem classes in total. They will be used both to illustrate the remainder of our method and for our empirical evaluation. We selected these problems, presented in Fig. 5, to give good coverage of the abstract domains available in Essence, including matrices, sets, partitions, relations and functions. They also cover various types of problems encountered in practice, such as fundamental combinatorial design problems (Balanced Incomplete Block Design, Social Golfers), scheduling and manufacturing problems (Car Sequencing, Vessel Loading), timetabling problems (Balanced Academic Curriculum Problem), and transportation problems (Transshipment, Tail Assignment). Such problems have been considered in various prior works in both Constraint Programming and Operations Research.
The Balanced Academic Curriculum Problem (BACP) [40] (decision version) is to design a balanced academic curriculum by assigning periods to courses. The constraints include the minimum and maximum academic load for each period, the minimum and maximum number of courses for each period, and the prerequisite relationships between courses. This problem is specified as finding a function from courses to periods.
The Balanced Incomplete Block Design problem (BIBD) [41] is a standard problem from design theory, often used in the design of experiments. It asks us to find an arrangement of v distinct objects into b blocks such that each block contains exactly k distinct objects, each object occurs in exactly r different blocks, and every two distinct objects occur together in exactly λ blocks. This problem is naturally specified as finding a relation between objects and blocks.

The Covering Array problem [42] requires finding a matrix of integer values indexed by k and b such that any subset of t rows can be used to encode the numbers from 0 to g^t − 1. In addition to the covering constraint, row and column symmetries are broken using lexicographic ordering constraints [43].
The Equidistant Frequency Permutation Arrays problem (EFPA) [44,45] is to find a set (optionally of maximal size) of codewords, such that any pair of codewords is a certain Hamming distance apart. The decision version we consider here works with a given number of codewords. In comparison to the Fixed Length Error Correcting Codes problem, which only has a minimum distance requirement, this problem requires the distances to be exactly equal to a given value. In addition, each codeword must include each symbol a certain number of times.
The Fixed Length Error Correcting Codes problem (FLECC) [46,47] asks us to find a set of codewords of a uniform length such that each pair of codewords is at least a specified minimum distance apart, as computed by a given distance metric (e.g. Hamming distance).
The Transshipment problem [48] considers the design of a distribution network, which includes a number of warehouses and transshipment points to serve a number of customers. The cost of delivering items from each warehouse to each transshipment point and from each transshipment point to each customer, and the amount of stock available at each warehouse, are given. We are asked to find a delivery plan that meets customer demand within a cost budget. This is specified as a pair of functions describing the amount of demand supplied between each warehouse and transshipment point, and each transshipment point and customer.
The Tail Assignment problem [49] is the problem of deciding which individual aircraft (identified by its tail number) should cover which flight. The problem is represented using a nested function variable, where the outer function is total and the inner one is potentially partial. There are several constraints on this nested function variable ensuring a sensible sequence of flights and frequent enough visits to a maintenance depot.
The Social Golfers problem [50] is concerned with finding a schedule for a number of golfers over w weeks. The schedule for each week is a partitioning of the golfers such that, over the course of the entire schedule, no golfer plays in the same group as any other golfer on more than one occasion.
The Vessel Loading decision problem [51] is to determine whether a given set of containers can be positioned on a given deck, without overlapping, and without violating any of the separation constraints. The problem is modelled in Essence using four total functions capturing the location of each container.
Generating and selecting training instances
The core idea behind our methodology is that we can build a portfolio of streamliners on a training set and then employ that portfolio to solve unseen instances from the same problem class with substantially less effort than the unstreamlined model. This method relies upon the automatic evaluation of model candidates in order to construct a high-quality portfolio of streamlined models. In this section, we describe how a set of training instances representative of the problem instance space is generated automatically.
Generating candidate training instances
As the first step, we generate a large number of candidate training instances for each pair of problem class and solver via AutoIG [52,54,55], a constraint-based automated instance generation tool. AutoIG allows users to describe the generation of instances for a given problem class declaratively, as a constraint model, and supports the automated generation of new instances with certain properties required by the users. Fig. 6 shows (part of) an example generator model for the Transshipment problem. For this work, we use AutoIG to find satisfiable instances that are solvable by a chosen solver within the solving time range of [10, 300] seconds. The lower bound of 10 seconds is imposed to avoid trivially solvable instances, as the gain from applying streamliners on such instances is often negligible.
The internal working mechanism of AutoIG is as follows. Starting from a description of the problem and an instance generation model (either created by users or generated via an automated generator approach [52,54]), AutoIG produces new instances by searching the parameter configuration space of the generator using the automated algorithm configuration tool irace [56], and by sampling a new instance through solving each instance of the generator model (provided by irace) with the constraint solver Minion [53]. Every time a new instance is generated, it is evaluated using the chosen solver, and its quality (in terms of satisfying the properties specified by the users) is given to irace as feedback to update its sampling model. The instance generation process stops once a given tuning budget is exhausted, and all instances satisfying the required properties are then returned.
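To make the generate-evaluate-feedback loop concrete, the following is a minimal, self-contained Python sketch of the mechanism. It is a toy illustration only: the stand-in functions replace irace's sampling model, Minion's solving of the generator model, and the target solver, and none of them reflect the real AutoIG API.

```python
import random

# Toy sketch of the generate-evaluate-feedback loop described above. In the
# real pipeline irace proposes generator parameters, Minion solves the
# generator model, and the target solver evaluates each produced instance.

def toy_propose(mean):                 # stands in for irace's sampling model
    return max(1, int(random.gauss(mean, 5)))

def toy_evaluate(size):                # stands in for running the target solver
    return size * random.uniform(0.5, 1.5)   # pretend runtime in seconds

def generate(budget=200, t_min=10.0, t_max=300.0):
    accepted, mean = [], 50
    for _ in range(budget):
        size = toy_propose(mean)
        runtime = toy_evaluate(size)
        if t_min <= runtime <= t_max:
            accepted.append((size, runtime))
            # feedback: bias the sampling model towards parameters that
            # yield instances in the target difficulty range
            mean = 0.9 * mean + 0.1 * size
    return accepted

print(len(generate()), "instances in the [10, 300]s difficulty window")
```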
Training set construction
For some problems, the number of instances found by the automated instance generation process can be quite large. For example, we obtained 4647 instances for the FLECC problem with chuffed as the target solver (Table 1). Using all instances during the streamliner portfolio construction phase would require a significant amount of computation. In this section, we propose a method to select a small representative subset of training instances from the ones obtained in the previous step.
Streamliner performance footprint analysis
The aim of the instance selection process is not only to reduce the size of the training instance set, but also to make sure that the selected instances are as diverse as possible. The motivation behind the latter objective is to ensure the generalisation ability of the constructed portfolio.
We illustrate our motivation via a visualisation analysis of streamliner performance across the instance space. The visualisation is similar to the algorithm footprint analysis method [57]. An algorithm footprint is simply the part of the instance space where the algorithm performs well; in our context, a streamliner plays the role of an algorithm. We will show that the footprints of different streamliners can cover different parts of the instance space, which suggests the necessity of selecting training instances with good coverage across the space.
To visualise the instance space, we extract FlatZinc instance features via the fzn2feat tool (part of mzn2feat [58]). There are 95 features grouped into 6 categories (variables, constraints, domains, global constraints, objective, and solving features) [58]. The feature space is then projected into a 2-D space using Principal Component Analysis [59]. For each streamliner, we mark its performance on all generated instances with different colours. For simplicity, the performance is divided into three categories: (i) UNSAT (the streamliner eliminates all solutions of the instance), (ii) the streamliner offers less than a 50% reduction in solving time for the given instance, and (iii) the streamliner achieves more than a 50% reduction in solving time. Fig. 7 shows the performance of three example streamliners on the FLECC problem with chuffed. Looking at the group of instances at the bottom of each plot, we see that the performance achieved by different streamliners varies drastically. Streamliner 2 is mostly satisfiable across the group, but generally it only achieves a reduction of less than 50%. Streamliner 121, on the other hand, occasionally achieves good reductions but in general seems to be too strict and renders most instances infeasible. Note also that the footprint can change substantially when streamliners are combined, as with the combination of 2 and 8: for most instances in the bottom group this is a more effective combination, as the general reduction increases drastically to ≥ 50%; in other parts of the space, however, this combination makes the resulting instance too tight and as such negatively affects its feasibility. Therefore, during training set construction, it is important that we take this diversity into account. If our training set only includes instances from this particular group, this will directly limit the ability of our streamliner portfolios to generalise across the problem class.
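The following Python sketch outlines how such a footprint plot can be produced, assuming the fzn2feat feature matrix and the per-instance outcome categories have already been computed; the category labels are our shorthand for the three performance classes above.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Sketch of the footprint visualisation described above.
# X: (n_instances, 95) feature matrix from fzn2feat (assumed precomputed);
# outcomes: per-instance label, one of "UNSAT", "<50%", ">=50%".

def plot_footprint(X, outcomes, title):
    points = PCA(n_components=2).fit_transform(X)     # project to 2-D
    colours = {"UNSAT": "red", "<50%": "orange", ">=50%": "green"}
    for label, colour in colours.items():
        mask = np.array([o == label for o in outcomes])
        plt.scatter(points[mask, 0], points[mask, 1],
                    c=colour, label=label, s=10)
    plt.title(title)
    plt.legend()
    plt.show()
```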
Building a compressed training instance set via clustering
To select a diverse subset of instances for the training phase, the GMeans [60] clustering method is used on the instance feature space to detect the number of instance clusters (column 4 of Table 1). An example of the clustering results for the FLECC problem with chuffed is shown in Fig. 8. With those clusters, we can build a compressed version of the original training set by selecting a subset of instances per cluster.
To make sure that we have a sufficient number of representative training instances, we require a minimum of 50 instances in our compressed training set. This value was chosen based on the computational resources available for our experiments. If the number of clusters detected by GMeans is larger than this minimum size, one representative instance per cluster is selected. In scenarios where the number of detected clusters is less than 50, instances are chosen from each cluster until the minimum number (50) is met, i.e., the number of instances selected per cluster is proportional to the size of the cluster. In order to take instance difficulty into account when selecting representative instances, for each cluster, instead of purely random selection or selecting the instances closest to the centroid, we sample without replacement, preferring instances closest to the cluster's median solving time under the unstreamlined model.
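A sketch of this selection procedure, under the assumptions just described (GMeans cluster labels and unstreamlined solving times as inputs, a minimum set size of 50), is given below.

```python
import numpy as np

# Sketch of the compressed training-set construction described above.
# labels: per-instance cluster label from GMeans over the feature space;
# solve_times: per-instance solving time of the unstreamlined model.

def select_training_set(labels, solve_times, min_size=50):
    labels, solve_times = np.asarray(labels), np.asarray(solve_times)
    clusters = np.unique(labels)
    if len(clusters) >= min_size:
        quota = {c: 1 for c in clusters}          # one representative each
    else:
        sizes = {c: int(np.sum(labels == c)) for c in clusters}
        total = sum(sizes.values())
        quota = {c: max(1, round(min_size * sizes[c] / total))
                 for c in clusters}               # proportional to cluster size
    selected = []
    for c in clusters:
        idx = np.where(labels == c)[0]
        # prefer instances nearest the cluster's median solving time, taken
        # without replacement, so difficulty information guides the selection
        order = idx[np.argsort(np.abs(solve_times[idx]
                                      - np.median(solve_times[idx])))]
        selected.extend(order[: quota[c]].tolist())
    return selected
```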
Identifying effective combinations of streamliners
It has previously been observed that applying several streamlining constraints to a model simultaneously can result in larger performance gains than any of the constraints in isolation [17]; we give an example of this behaviour using the FLECC problem in Fig. 7. In order to find such combinations of constraints we must consider the power set of candidate streamliners, which forms a lattice: the root is the original Essence specification, and an edge represents the addition of a streamliner to the combination associated with the parent node. Finding effective streamliner combinations involves searching this lattice and evaluating the combination at each node the search reaches. In order to keep the size of the set of streamliners (and hence the search) manageable, we used a small number of softness parameter values for each rule that requires a softness parameter.
Pruning the streamliner lattice
For many of the problems considered a large number of singleton streamliners is generated (see Table 1), resulting in a space of streamliner combinations too large to be explored exhaustively in practice. Two forms of pruning are used to reduce the number of combinations to be considered, as illustrated in the sketch below.
1. If a set of streamliners fails, all of its supersets are excluded from consideration. To be considered failed, a streamliner must have zero applicability across the instance space, i.e. it removes all solutions for all instances. The effectiveness of this pruning strategy is largely dependent on the order in which the streamliner configurations are traversed. For instance, in the example of Fig. 9 the streamliner CD is unsatisfiable, which allows the supersets of that configuration (ABCD, BCD) to be pruned immediately. However, this pruning can only occur if the fact that CD is unsatisfiable is discovered before BCD and ABCD are evaluated. Thus different traversal orderings affect the amount of pruning that can be performed.
2. Trivially conflicting streamliners are not combined, such as streamliners A and B in Fig. 9. For example, we avoid forcing a set simultaneously to contain only odd numbers and only even numbers. We associate a set of tags with each of the rules in order to implement this pruning: rules applied to the same variable that share tags are not combined. This also removes the possibility of combining two streamliners that differ only in the values of their softness parameters.
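The following Python sketch illustrates both pruning rules on a toy candidate set; per-variable bookkeeping for the tag rule is omitted for brevity.

```python
from itertools import combinations

# Sketch of the two pruning rules over the lattice of streamliner combinations.
# Each candidate is (name, tags); `failed` holds sets of names already proven
# to remove all solutions on every training instance.

def admissible(combo, failed):
    names = frozenset(name for name, _ in combo)
    # Rule 1: exclude any superset of a failed combination
    if any(f <= names for f in failed):
        return False
    # Rule 2: never combine rules that share a tag (e.g. odd/even); this also
    # prevents combining one rule at two different softness values
    seen = set()
    for _, tags in combo:
        if seen & tags:
            return False
        seen |= tags
    return True

candidates = [("odd", {"parity"}), ("even", {"parity"}), ("mono-inc", {"mono"})]
failed = {frozenset({"mono-inc"})}
print([c for c in combinations(candidates, 2) if admissible(c, failed)])
# prints []: odd/even conflict on tags, and anything containing mono-inc
# is a superset of a failed combination
```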
Searching for a streamliner portfolio
The two pruning rules described above only remove combinations that are sure to fail, or that are equivalent to a smaller set of streamliners. Therefore, even after pruning, the number of combinations to consider is typically still too large to allow exhaustive enumeration. A traversal of the lattice allowing good combinations to be identified rapidly is desirable. In reality, streamliner generation has two conflicting goals: to uncover constraints that steer search towards a small and highly structured area of the search space that yields a solution, versus identifying streamliner constraints in training that generalise to as many instances as possible. These goals conflict because the search reduction a streamliner achieves is generally related to its tightness. The tighter a streamliner constraint, the more propagation it can achieve at each node of search, resulting in a more restricted search space; this is why combining different candidate streamliners can provide superior results, as with the addition of each streamliner the search space is further restricted. With two competing objectives, it is no longer feasible to find a single "best" streamlined specification: a streamliner combination may be optimal in relation to one objective, but at the expense of compromising the other.
To address these problems we adopt a multi-objective optimisation approach, where each point x in the search space X is associated with a two-dimensional (one dimension per objective) reward vector r_x in R^2. Our two objectives are: 1. Applicability: the proportion of training instances for which the streamlined model admits a solution. 2. Search Reduction: the mean reduction in solving time achieved by the streamliner on the satisfiable instances. With these two objectives, for each streamliner combination we define a partial ordering on R^2, and so on X, using the Pareto dominance relation of multi-objective optimisation: given x, x' ∈ X with vectorial rewards r_x = (r_1, r_2) and r_x' = (r'_1, r'_2), x dominates x' if and only if r_i ≥ r'_i for i = 1, 2 and r_x ≠ r_x'. To search the lattice structure for a portfolio of Pareto-optimal streamlined models we have adapted the Dominance-based Multi-Objective Monte Carlo Tree Search (MOMCTS-DOM) algorithm [61]. The algorithm has four phases, as summarised below; an illustrative example of the four phases is presented in Fig. 10.
1. Selection: Starting at the root node, the Upper Confidence Bound applied to Trees (UCT) [62] policy is applied to traverse the explored part of the lattice until an unexpanded node is reached.
2. Expansion: Uniformly select a random admissible child and expand it (see the simulation step below).
3. Simulation: The collection of streamliners associated with the expanded node is evaluated. The vectorial reward (Applicability, Search Reduction) across the set of training instances is calculated and returned. Simulation is the most expensive phase. To constrain the computational cost and improve iteration speed we perform instance filtering to avoid wasting time on evaluating runs known to be UNSAT. More specifically, if a streamliner combination is proved to be UNSAT on an instance, we know that any of its descendants will also be UNSAT on that same instance, without having to evaluate them. We make use of this knowledge to reduce the number of instances being unnecessarily evaluated at each lattice node. When we arrive at a given node in the lattice, the intersection of the sets of satisfiable instances from all available parents in the lattice is used to construct the evaluation set. For example, given three streamliners A, B, and C and a set of five instances, if we know AB is satisfiable only on instances {1, 2, 3} while AC is satisfiable only on instances {2, 3}, then for ABC we only need to evaluate the streamliner combination on instances {2, 3}. This reduces the computation at that node by 60%.
4. Back Propagation: The current portfolio of non-dominated streamliner combinations is used to compute the Pareto dominance test. The reward values of the Pareto dominance test are non-stationary, since they depend on the portfolio, which evolves during search. Hence, we use the cumulative discounted dominance (CDD) [61] reward mechanism during the reward update. If the current vectorial reward is not dominated by any streamliner combination in the portfolio, then the evaluated streamliner combination is added to the portfolio and a CDD reward of 1 is given; otherwise the reward is 0. Dominated streamliner combinations are removed from the portfolio. The result of the evaluation is propagated back up through all paths in the lattice to update the CDD reward values, as shown in Fig. 10.
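As a minimal illustration of the dominance test and the portfolio update that drives the CDD reward, consider the following Python sketch (our simplification: rewards are plain (applicability, reduction) pairs, and the discounting of the reward is omitted).

```python
# Sketch of the Pareto dominance test and the portfolio update used in the
# back propagation phase. Rewards are (applicability, search_reduction) pairs.

def dominates(r, s):
    """r Pareto-dominates s: no worse on both objectives, better on one."""
    return r[0] >= s[0] and r[1] >= s[1] and (r[0] > s[0] or r[1] > s[1])

def update_portfolio(portfolio, combo, reward):
    """Return (new_portfolio, cdd_reward) after evaluating one combination."""
    if any(dominates(r, reward) for _, r in portfolio):
        return portfolio, 0.0                 # dominated: reward 0
    # keep only members not dominated by the newcomer, then add it
    kept = [(c, r) for c, r in portfolio if not dominates(reward, r)]
    return kept + [(combo, reward)], 1.0      # non-dominated: reward 1

portfolio, cdd = update_portfolio([], "AB", (0.8, 0.4))
portfolio, cdd = update_portfolio(portfolio, "AC", (0.6, 0.6))  # kept: trade-off
```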
Improving portfolio complementary strength
In our initial implementation, the multi-objective search of the lattice is performed until either the computational budget is exhausted or the lattice is fully explored. During this time one portfolio of non-dominated streamliners is built, where domination is defined across the two objectives of Section 6.2. There are two deficiencies with this method, which can be highlighted through an example. Consider a setting with three instances {A, B, C} and two singleton streamliners {S1, S2}. S1 retains satisfiability on instances {A, C} with reductions of {50%, 25%} respectively, but renders instance B unsatisfiable, yielding {AvgReduction: 37.5%, AvgApplicability: 66.6%} in terms of our two objectives. S2 renders instance A unsatisfiable but retains satisfiability on instances {B, C} with {30%, 35%} reduction respectively, resulting in {AvgReduction: 32.5%, AvgApplicability: 66.6%}. The resulting portfolio at the end of search will only ever contain S1, as S2 is always Pareto dominated and so is disregarded. However, S2 actually possesses some interesting qualities that we might not want to overlook: it covers instance B, which is not covered by the current portfolio, and it also achieves a higher reduction on instance C. Given this, ideally we would like to retain streamliner S2 as part of our portfolio. Averaging the performance of a streamliner and maintaining just one Pareto front makes it difficult to distinguish cases like this, and often means that the resultant portfolio is suboptimal.
An alternative would be to create an objective per instance that records the performance of the streamliner on that individual instance. This would retain full information and allow us to distinguish the situation where a streamliner works on a complementary set of instances, or manages to attain a higher reduction on one particular instance. The difficulty with this approach is that with more objectives the size of the portfolio grows exponentially, and it becomes impractical to schedule the streamliners from the portfolio in any meaningful way. The other difficulty is that if each streamliner produces small improvements on one or more instances, then every path traversed in the lattice produces a reward, which makes it very difficult for our best-first search procedure to focus and essentially degrades it into random search.
In order to solve this issue, we adapted our search to incorporate elements of Hydra [63], a portfolio-builder approach that automatically constructs a set of solvers, or parameter configurations of solvers, with complementary strengths by iteratively configuring (a set of) algorithms. Instead of performing just one lattice search and building one portfolio, we now perform multiple rounds of search. In each round a portfolio is built to complement the strengths of the combined portfolios built in the prior rounds.
More specifically, in the first round, an MO-MCTS search with our original performance metric, which tries to optimise both applicability and solving-time reduction on the training set, is carried out and a portfolio of streamliners is constructed. In each subsequent round, a new MO-MCTS search is started using a modified performance metric. For each instance i, the best streamliner p for i (the one with the highest solving-time reduction on i) in the combined portfolios from previous rounds is identified, if one exists. For any new streamliner q being evaluated in the search of the current round, if it has a better reduction than p on i, or if i was not yet solved by any streamliner in the current combined portfolio, the performance of q on i is used; otherwise, the reduction value of p on i is used instead. This means that the new streamliner q will not be penalised for poor performance on an instance if the instance is already efficiently solved by the combined portfolio from the previous rounds. Therefore, the MO-MCTS can focus on trying to improve performance in regions of the instance space where the current portfolio is weak. The final result is a combined portfolio with complementary strengths that can perform well on all parts of the training instance set.
The performance of all evaluated streamliners is cached and re-used across rounds. In each round, at least M iterations (not including iterations using cached results) have to be completed. After that, the round is stopped if it spends N consecutive iterations without finding anything to add to the current portfolio, as this indicates that we might have reached the point of diminishing returns for the current round. The whole Hydra search is terminated if the current combined portfolio remains unchanged after a round. In our experiments M and N are set to 10 and 5 respectively. These values were chosen based on a small manual tuning experiment aimed at balancing the number of rounds against the amount of resources given to each round within the available computational budget.
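The modified per-instance metric can be sketched as follows; the dictionaries are our own illustrative encoding of reductions, not the implementation's data structures.

```python
# Sketch of the Hydra-style modified performance metric. best_so_far[i] is the
# highest solving-time reduction achieved on instance i by any streamliner in
# the combined portfolio from previous rounds (absent if i is still unsolved).

def hydra_reduction(new_reductions, best_so_far):
    """Per-instance reduction credited to a new streamliner q this round."""
    credited = {}
    for i, q_red in new_reductions.items():
        p_red = best_so_far.get(i)
        if p_red is None or q_red > p_red:
            credited[i] = q_red        # q improves on the portfolio here
        else:
            credited[i] = p_red        # no penalty: portfolio already covers i
    return credited

# e.g. instance 2 is already solved well, so q is judged only on instances 1, 3
print(hydra_reduction({1: 0.30, 2: 0.10, 3: 0.55}, {2: 0.70, 3: 0.40}))
```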
Independent solver search
For each problem class, we perform a streamliner search per solver (chuffed or lingeling). It might be expected that this is unnecessary, and that the streamliners that work well for chuffed will also work well for lingeling and vice versa, as we are generating the streamliners from the Essence specification of a problem class, which is solver independent. However, the intricacies of solvers, such as heuristics, propagation mechanisms and restarts, can be so different that the performance of a constraint can vary wildly. Also, the streamliners are defined in Essence, and how they are represented in a constraint or SAT model can be very different. A streamliner that can be represented efficiently in a constraint model might be very verbose in the SAT encoding, which may result in substantial overhead during search and as such affect performance. We illustrate this via Fig. 11: the performance of a streamliner when tested on the same instance sets can differ drastically between CP and SAT. In Transshipment, the streamliners that comprise the portfolio found via the chuffed search do elicit reductions in lingeling, albeit not as strong as their chuffed counterparts. In CarSequencing, however, almost all of the chuffed portfolio produces a negative reduction in solving time when applied in lingeling.
Applying a streamliner portfolio on unseen instances
Having constructed a streamliner portfolio for a particular problem class using our streamliner search on the training instance set, the next question is how to apply the given portfolio to an unseen instance of the same problem class. In this section, we describe the different streamliner portfolio application methods used in this work, ranging from simple instance-oblivious approaches (Section 7.1 and Section 7.2) to instance-specific methods taken from the literature on automated algorithm selection [64,30] (Section 7.3).
Single best streamliner (SBS)
The most basic streamliner application approach, namely the Single Best Streamliner (SBS), is to choose from the portfolio the streamliner that results in the lowest average solving time across all training instances, and to apply that chosen streamliner to any unseen future instance. The deficiency of this approach is that streamliners that do not perform well on average across the instance space are neglected, even if they exhibit good performance on a subset of instances.
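A minimal sketch of SBS selection follows; charging UNSAT runs a fixed time penalty is our own assumption for illustration, since the source does not specify how such runs enter the average.

```python
# Sketch of Single Best Streamliner (SBS) selection. times[s][i] is the solving
# time of streamlined model s on training instance i, or None where s renders
# the instance unsatisfiable (penalised with a fixed charge, our assumption).

def single_best(times, penalty=3600.0):
    def mean_time(s):
        runs = list(times[s].values())
        return sum(penalty if t is None else t for t in runs) / len(runs)
    return min(times, key=mean_time)

print(single_best({"S1": {1: 12.0, 2: None}, "S2": {1: 30.0, 2: 25.0}}))  # S2
```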
Lexicographic selection methods
It is possible to order the streamlined models in a portfolio lexicographically by, for example, prioritising Applicability, then Search Reduction. Given two objectives, there are two such orderings to consider. Thus two lexicographic selection methods are used herein: ApplicFirst, which prioritises applicability over search reduction, and ReducFirst, which has the reverse priority.
The selection process involves traversing the portfolio (using the defined ordering) for a given time period and applying each streamliner in turn to the given instance. The schedule is static in that it only moves to the next streamlined model when the search space of the current one is exhausted.
When traversing a schedule it is possible to dynamically filter streamliners based upon prior results. Suppose that for a given instance we evaluate the schedule containing the streamliners {S-1, S-3, S-1-2} in that order. If the first streamliner S-1 renders the test instance unsatisfiable, and this is proven within the given time limit, then we can filter the rest of the schedule and remove S-1-2, since any superset of S-1 is guaranteed also to render the test instance unsatisfiable.
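A sketch of this static schedule with dynamic superset filtering is given below; solve() is a hypothetical callback that runs one streamlined model on the instance within the time limit.

```python
# Sketch of lexicographic schedule traversal with dynamic filtering. Each
# portfolio entry is (combo, applicability, reduction), where combo is a
# frozenset of streamliner names; solve(combo, instance) is a hypothetical
# callback returning "SAT", "UNSAT" or "TIMEOUT".

def run_schedule(portfolio, instance, solve, app_first=True):
    order = (lambda e: (-e[1], -e[2])) if app_first else (lambda e: (-e[2], -e[1]))
    proven_unsat = []
    for combo, _, _ in sorted(portfolio, key=order):
        # any superset of a combination proven UNSAT is also UNSAT: skip it
        if any(u <= combo for u in proven_unsat):
            continue
        result = solve(combo, instance)
        if result == "SAT":
            return combo
        if result == "UNSAT":
            proven_unsat.append(combo)
    return None   # schedule exhausted: fall back to the unstreamlined model
```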
Automated algorithm selection methods
In many paradigms for solving combinatorial problems, such as SAT, CP and ASP (Answer Set Programming), there are multiple available algorithms or search strategies, all with complementary solving strengths. Automated Algorithm Selection (AS) techniques [65,64,30] aim to exploit this fact by utilising instance characteristics to select from a set of algorithms the one(s) expected to solve a given problem instance most efficiently. Algorithm selectors have had great success and have been shown empirically to improve the state of the art for solving heterogeneous instance sets [38,66]. This is a very similar setting to our portfolio of streamliners, with complementary solving strengths and no single dominating streamliner. In this work we employ the algorithm selection system AutoFolio [38]. Given a particular problem instance, the goal is to have AutoFolio predict, based upon the features of the instance, which streamliner from the generated portfolio will solve the instance most efficiently.
When applying algorithm selection to a new domain a number of questions arise. First, there are multiple different algorithm selection techniques, and there is the question of which particular AS technique is best for the current domain. Second, AS approaches generally contain several parameters, and these need to be set effectively to obtain good performance. The AS framework AutoFolio [38] addresses these questions by integrating several AS techniques and automatically choosing the best one, as well as configuring their hyper-parameters, using the automatic algorithm configuration tool SMAC [67]. AutoFolio also supports a pre-solving schedule, a static schedule built from a small subset of streamlined models. This schedule is run for a small amount of time; if it fails to solve an instance, the model chosen by the prediction model is applied. AutoFolio decides whether to use a pre-solving schedule during its configuration phase with SMAC.
Experimental results
In the preceding sections we have presented a completely automated approach to the generation and selection of streamliner constraints, hitherto a laborious manual task. In this section we present two experiments to evaluate the efficacy of this approach. The first one is designed to measure the frequency with which streamlining results in a reduction in search, and the magnitude of that reduction (Section 8.2). The second one situates our approach in the simplest practical setting, where an unstreamlined model is solved in parallel with a streamlined model, in order to measure the overall speedup obtained when solving a given set of instances (Section 8.3).

Table 2. Performance comparison between ApplicFirst, ReducFirst, AutoFolio, the Oracle and the SBS using the overall speedup on Distribution A, containing instances with unstreamlined solving times in [10, 300] seconds.
Experimental setup
The effectiveness of our streamliner approach is demonstrated on a wide range of 10 different problem classes (as described in Section 4) with two solving paradigms (chuffed for CP and lingeling for SAT). All experiments were run on compute nodes with two 2.1 GHz, 18-core Intel Xeon E5-2695 processors. The streamliner portfolio construction phase was run on a single core with a maximum time budget of 4 CPU days for each pair of problem class and solver.
The generated streamliner portfolios were evaluated on two different instance distributions, distinguished by the difficulty of the instances they comprise. The first one, denoted Distribution A, consists of instances of similar difficulty to those used during the portfolio construction phase (Section 6), i.e. satisfiable instances with a solving time within [10, 300] seconds under the unstreamlined model. We use this to analyse the generalisation performance of the streamliner portfolios on similarly difficult instances unseen during portfolio construction. The second test set, denoted Distribution B, includes instances generated by the same method (Section 5.1), but drawn from a different distribution with a solving time limit of (300, 3600] seconds (under the unstreamlined model). Distribution B allows us to study the ability of the portfolio to generalise to instances of greater difficulty.
As for the generation of training instances, the automated instance generation tool AutoIG [55] is used for generating instances of both distributions. Each AutoIG run is given a wall-time limit of 24 CPU hours. The number of instances generated for each problem class is listed in Table 2 and Table 3.
Our system makes use of Conjure (for generating streamliners and for producing streamlined models), Savile Row (for producing solver-specific inputs, including the FlatZinc input for the fzn2feat feature extraction tool), and the two solvers chuffed and lingeling for the evaluation. All of these tools were run with their default parameter settings. We report the performance of all streamliner scheduling/selection approaches described in Section 7, including the Single Best Streamliner (SBS), the two simple streamliner scheduling methods ApplicFirst and ReducFirst, and the automated algorithm selection approach AutoFolio [38]. Training AutoFolio for a pair of problem class and solver is given a tuning budget of one CPU day. We also report as a reference point the theoretically best performance, namely the Oracle, where we assume that the best solving model (either unstreamlined or one of the streamlined models from the constructed portfolio) is used for each instance. The construction of our streamliner portfolios is similar to previous works on manual streamlining [13,17]: we show that streamliners found using small instances can be highly effective when solving larger and more difficult instances. As described in Section 5.1 and Section 6, for each pair of problem class and solver, the lattice search is performed using instances with solving times similar to Distribution A. By limiting the search to small (yet non-trivial) instances, we can not only examine a large number of streamliners within a reasonable time budget, but also evaluate each streamliner on several instances, which allows the search to identify regularities and structures that are common among various instances.
The training cost of building a streamliner selection model on the constructed streamliner portfolio (i.e., AutoFolio) is generally much lower, since it only focuses on a limited number of streamliner combinations. Therefore, to ensure the selection is effective on larger and more difficult instances, we add a small number of instances drawn from Distribution B (separate from the test instances used for the evaluation) to the training of AutoFolio. This AutoFolio model is used for the evaluation on Distribution B. Note that when evaluating on Distribution A, we do not need to add those extra instances, i.e., the AutoFolio model is trained on instances from Distribution A only (again separate from the test instances used for the evaluation). As shown in the subsequent section, AutoFolio achieves the best overall performance in several cases, especially on the larger and more difficult instances (Table 3), which demonstrates the importance of effective portfolio-based streamliner selection.
Frequency and magnitude of search reduction
The effectiveness of our streamliner method in terms of improvement frequency (how often the selected streamlined models win over the original model) and the magnitude of the improvement (in terms of search reduction) is presented in Fig. 12 and Fig. 13. We begin by considering the setting where the instances are solved with the CP solver chuffed. The high improvement frequency of the Oracle on all problem classes demonstrates that there is almost always a streamliner in our portfolio that can be used to reduce search for a given unseen instance. As might be expected, the magnitude of the search reduction varies with problem class. On Distribution A, for BACP, Car Sequencing, and Transshipment it is most pronounced, approaching one hundred percent, which indicates a solution obtained with little or no search effort, while for other problems such as BIBD, CoveringArray and Social Golfers, the reduction is generally much smaller.
Performance across all problem classes significantly improves for Distribution B, suggesting that the impact of streamlining grows with the difficulty of the problem instance. This is as expected: the size of the search space typically increases with that of the instance, providing the opportunity for the selected streamliner to prune larger parts of the search space and reduce search further.

Fig. 12. Results with chuffed and lingeling on Distribution A. The top of each pair of charts shows how frequently the associated approach produces an improvement (% improved), and also indicates the reason for failure to improve on the remainder of the instances: the instance was rendered unsatisfiable (% UNSAT), or the search completed more slowly than the unstreamlined model (% non-improved). The bottom of each pair of charts shows the magnitude of the solving time reduction on those instances where an improvement was obtained (% reduction). Hence, care must be taken when comparing approaches, since an infrequently applicable approach may do well on the few instances it does improve (e.g. the lexicographic scheduling approaches on the Social Golfers Problem). The best approaches are both frequently applicable and result in a large search reduction.
In several cases, our automated streamliner selection approaches are able to deliver a substantial fraction of the performance of the Oracle in terms of the percentage of instances improved. The two simple scheduling approaches sometimes perform well, but for problem classes such as BIBD and CoveringArray (on both instance distributions) and SocialGolfers (on Distribution B), their performance is relatively weak. The Single Best Streamliner approach, which requires no further training following the streamliner search, offers a fairly good compromise between performance and cost before solving unseen instances, especially on the easy instances (Distribution A). However, the most robust performance comes from AutoFolio.
Performance in the SAT domain is generally less strong than for Constraint Programming. Although the performance of the Oracle again indicates that there is almost always a streamliner in our portfolio that can improve search, on Distribution A for 4 of the 10 problem classes (BIBD, CarSequencing, EFPA and SocialGolfers) the magnitude of this reduction is small. On the remaining problem classes the performance is stronger, and in some cases (CoveringArray and FLECC) exceeds the improvement delivered in the corresponding CP setting. Once again performance clearly improves on Distribution B relative to Distribution A, suggesting that for SAT the impact of streamlining also grows with difficulty.
A practical setting
By employing the algorithm selection techniques described in Section 7, our aim is to maximise the occasions on which streamlining produces a reduction in search effort. However, we cannot expect an aggressive technique such as streamlining to be universally applicable: in particular, the selected streamliner may render the instance under consideration unsatisfiable. Therefore, we envisage a practical setting in which a streamliner portfolio is a constituent of a wider portfolio containing other, more conservative approaches. The simplest such setting, which we employ here, is to run the streamliner portfolio in parallel with the unstreamlined model, which will produce a solution if the selected streamliner renders a satisfiable instance unsatisfiable.
We evaluated this parallel configuration on both Distribution A and B. Our results are summarised in Table 2 and Table 3, which present the Overall Speedup of each approach across the ten different problem classes with the two solving paradigms, CP and SAT. Overall Speedup is the total solving time of the original model divided by the total streamlined solving time across all instances. This metric gives an indication of the overall reduction in search effort across each instance distribution.
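In symbols (our notation), over the set I of test instances:

```latex
\mathrm{OverallSpeedup} \;=\; \frac{\sum_{i \in I} t_{\text{original}}(i)}{\sum_{i \in I} t_{\text{streamlined}}(i)}
```

where t_streamlined(i) is the elapsed time of the parallel configuration (the selected streamlined model running alongside the unstreamlined model) on instance i.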
Across both distributions, AutoFolio is clearly the best performing among our different streamliner selection approaches. It achieves geometric mean speedups of 2.01× and 1.57× for Distribution A and 3.74× and 2.99× for Distribution B, with maximum speedups on Distribution A of over 4× for both CP and SAT, and on Distribution B of over 40× for CP and over 10× for SAT. As in the results presented in Section 8.2, there is a pronounced increase in the speedups achieved for the more difficult Distribution B instances. Savile Row introduces a cost for translating models to solver input, which can increase in the presence of streamlining constraints. Since many of the instances in Distribution A are relatively easy to solve, this limits the speedup obtainable through streamlining. As difficulty increases in Distribution B, the benefits of streamlining become clear.
Overall Speedup is an aggregate metric, which can obscure the individual instance speedups being obtained. For example, if a streamliner is evaluated on 10 instances and its application makes half of them trivial, reducing their solving time to near zero, but renders the other half unsatisfiable, the Overall Speedup will be ≈ 2×. This does not convey that on half of the instance distribution the streamliner is decimating the search space to such a high degree. To visualise this better, Fig. 14 shows the distribution of speedup values across instance Distribution B for the Oracle, Single Best Streamliner and AutoFolio methods. The minimum speedup obtained for a streamliner in this setup is 1 (0 on the log10 scale shown), as these are the cases where the chosen streamliner is not satisfiable and the instance is therefore solved by the original model, providing no speedup. If we restrict our scope to individual problem classes, it can be seen that the application of streamliners can provide substantial speedups.
Using the chuffed CP solver, BACP and Transshipment are two problem classes where AutoFolio is able to obtain large speedups for a majority of the instance distribution. In the case of BACP the speedups range from ≈ 8× to ≈ 2568×, with the 1st and 3rd quartiles at q1 ≈ 103× and q3 ≈ 1458×. For Transshipment the speedups range from ≈ 1× to ≈ 1120×, with quartiles at q1 ≈ 6× and q3 ≈ 120×. These large speedups are not restricted to CP: under the SAT paradigm, CoveringArray and Transshipment are also examples where this search-space decimation can occur. For Transshipment the speedups range from ≈ 1× to ≈ 141×, with quartiles at q1 ≈ 25× and q3 ≈ 35×. For CoveringArray the speedups range from ≈ 1× to ≈ 244×, with quartiles at q1 ≈ 6× and q3 ≈ 25×.
Cumulative CPU consumption
We must be mindful that all results presented thus far in this section are in terms of the reduction/speedup in time of the portfolio approach versus the original model. Since the portfolio method utilises two cores, one for the original model and one for the portfolio, it consumes more resources to arrive at a solution. We present a further analysis in Fig. 15 to illustrate the cumulative time spent by the scheduling methods across the instances comprising Distribution B. Here the cumulative time of the portfolio approach to solve all instances, taking into account both cores, is compared against that of the original model. For the chuffed solver on 9 of the 10 problems, BIBD being the exception, AutoFolio is still able to reduce the cumulative time, in some cases substantially: speedups of ≈ 42.98× on BACP, ≈ 3.10× on CarSequencing, ≈ 1.59× on TailAssignment and ≈ 3.88× on Transshipment are attained. For lingeling, on every problem except BIBD and EFPA positive speedups in cumulative time are achieved, again with substantial results for some problems: ≈ 2.37× on BACP, ≈ 1.66× on CarSequencing, ≈ 5.96× on CoveringArray, ≈ 2.13× on FixedLengthErrorCorrectingCodes, ≈ 1.47× on TailAssignment and ≈ 2.7× on Transshipment.
It is useful to note that the cumulative time speedup of the portfolio approach is not always half that of the elapsed time speedup. The reason is that the portfolio approach does not always use twice the time of the original model. Consider an instance which takes 100 s under the original model. If during evaluation the selected streamliner is proven to be unsatisfiable at T = 50 s, then from T = 50 s to T = 100 s only one core is utilised, running the original model. The portfolio approach thus consumes 2 × 50 s + 1 × 50 s = 150 s of CPU time against the original model's 100 s, so its cumulative time speedup is 100/150 = 2/3, compared with a speedup of 1 in elapsed time.
Related work
Improving solving performance has been a major research goal in Constraint Programming. Streamlining is a very powerful method towards this goal, alongside methods such as adding implied constraints, symmetry breaking constraints and dominance constraints. Typically each of these methods is initially explored manually by experts on a small number of problem classes. Once their efficacy is shown, research moves to automating their application. Symmetry breaking has been the most successfully automated of these methods; implied constraints and dominance breaking constraints comparatively less so. Streamlining had been applied successfully, but manually, prior to our work; in this paper we present the first substantial step towards automating its application. This section provides the necessary context on prior work in streamlining and the related research areas.
Manual streamlining
Gomes and Sellmann, in their introductory work on streamlining, worked on problems from the field of combinatorial design [13]. One of the problems that they looked at was the construction of Diagonally Ordered Magic Squares (DOMS). They note that even for small sizes finding solutions to this problem was difficult: their base model could only find solutions up to size 9. They noticed a regularity in the solutions to the small instances: numbers within the magic square are quite evenly distributed. Intuitively this makes sense, since the numbers on each row, column and diagonal have to sum to the same value, and placing several large numbers on the same row is unlikely to lead to a solution. Once they had identified this regularity, they posted a streamliner constraint that disallows similar numbers from appearing near each other. The streamlined model was then able to solve instances up to size 18.
Using a similar methodology to Gomes and Sellmann [13], Kouril et al. applied streamlining constraints to the Van der Waerden numbers problem [68]. After observing patterns in the solutions to small instances, they added simple constraints that force or disallow certain sequences of values to occur in the solutions. Again, this led to a dramatic improvement in the run time of the solver, allowing much tighter bounds to be computed.
Le Bras et al. used streamlining to help construct graceful double wheel graphs [17]. Constraints forcing certain parts of the colouring to form arithmetic sequences allow for the construction of colourings for much larger graphs. These constraints led to the discovery of a polynomial time construction algorithm for such colourings, and eventually to a proof that all double wheel graphs are graceful.
Finally, Le Bras et al. made use of streamlining constraints to compute new bounds on the Erdős discrepancy problem [18]. Here the streamlining constraints were used to enforce periodicity in the solution, the occurrence of the improved Walters sequence in the solution, and a partially multiplicative property. Thanks to these streamlining constraints, new bounds were discovered for this problem. Streamlining proved to be valuable in all of these cases. Despite these successes, its application has been limited to mathematical and combinatorial design problems. The main reason preventing widespread adoption has been the need for a laborious manual component requiring domain expertise to analyse solution patterns and derive common structure. This motivated the automation of streamlining in this paper.
Streamlining is related to implied constraints, symmetry breaking constraints and dominance breaking constraints. All of these methods can be applied manually or (semi-)automatically to a given base model. Streamlining is orthogonal to these methods: it can fruitfully coexist with them in the same model.
Relation to model counting via XOR constraints
Gomes et al. [69] employ XOR constraints in order to obtain good quality lower bounds for model counting (here, a model means a solution) in SAT problems. Their approach involves repeatedly adding randomly chosen XOR constraints on the problem variables. The central idea is that each random XOR constraint cuts the search space approximately in half, which means that approximately half of the solutions survive the addition of an XOR constraint. By adding several XOR constraints they aim to bring the SAT problem to the boundary of being unsatisfiable. They provide a formal proof that, with high probability, the number of XOR constraints required in this process determines the solution count. They also empirically study the relationship between the number of XOR constraints and the solution count, and they report promising results including a good bound on the number of solutions.
The randomly chosen XOR constraints in this work are streamlining constraints, since they aim to remove some but not all solutions. However, the focus is very different from ours: they are concerned with counting solutions rather than with the efficiency of finding the first solution. In their empirical analysis they compare their XOR-based approximation method with exact solution counting methods. Exact solution counting methods require repeatedly solving the same problem, typically by adding solution-blocking clauses to avoid enumerating the same solution twice. The XOR-based method is computationally faster than exact solution counting overall, but it is unlikely to be faster for finding a single solution. Indeed, as the authors note, adding large XORs can make a SAT solver inefficient. Nonetheless, this scheme might work as an option in our current framework, although we would need to consider in future work how best to integrate it with our method of generating streamliners from the structure present in an Essence specification.
Relation to implied constraints
Streamlining constraints are most closely related to implied constraints. An implied constraint is an additional constraint that is added to the model to increase solving performance. They are also called redundant constraints, since the correctness of the model does not depend on them. There is a key difference between implied constraints and streamlining constraints: implied constraints are sound (they do not change the set of solutions). Some previous work focuses on manually adding implied constraints to a model to increase solving performance [1,2]. The scope for adding implied constraints is narrower due to the soundness requirement. Automated approaches [3,4,5] focus on deriving simple facts from the model constraints and using forward chaining to generate more complex constraints that still hold for a set of instances. Automatically proving the soundness of these implied constraints is a challenging task in general, which limits the impact of these approaches.
Relation to symmetry breaking constraints
Breaking modelling symmetry has been shown to have a substantial impact in improving solving performance. There are two main approaches: dynamic approaches that generate symmetry breaking constraints during search [70], and static approaches that work by adding constraints to the model to break symmetries [71,43]. Static symmetry breaking constraints may allow the derivation of additional implied constraints [1]. Symmetry breaking constraints are either derived from the data structures in the model [72] or detected automatically by applying graph automorphism detection methods to the constraint graph [73]. Conjure generates static symmetry breaking constraints thanks to its high-level variable domains [74,75], and our streamlining approach works in conjunction with these symmetry breaking constraints.
Relation to dominance breaking constraints
A generalisation of symmetry is dominance between solutions. In the context of optimisation problems, dominance breaking constraints can result in dramatic speedups [10,11,12]. A dominance breaking constraint disallows solutions that are known to be sub-optimal, as well as some optimal solutions, as long as at least one optimal solution remains. This is achieved by identifying a mapping between solutions and a condition under which this mapping will improve a solution. Preliminary work on automating parts of the derivation of dominance constraints shows promising results [76].
Portfolio approaches
In several contexts, no single algorithm has been seen to dominate all others on all instances of a computational problem. Algorithms often show complementary strengths, and the idea of making use of algorithm portfolios has been investigated in various fields with great success [30]. For SAT, perhaps the most successful example of a portfolio approach is SATzilla [77,78,66], the first portfolio approach shown to outperform stand-alone SAT solvers, which has exhibited strong performance in several SAT competitions. For CP, the portfolio approaches CPHydra [79] and SUNNY-CP [80,81] won the Second International Constraint Solver Competition [82] and the MiniZinc Challenges 2015-2017 (open track), respectively. We refer to [64,30] for an overview of portfolio approaches and their achievements. To the best of our knowledge, this work is the first to apply portfolio approaches in the context of streamliners for constraint modelling.
Conclusions and future work
Streamliner generation has been the exclusive province of human experts, requiring substantial effort in examining the solutions to instances of a problem class, manually forming candidate streamliners, and then testing their efficacy in practice. In this work we have presented the first completely automated method of generating effective streamliners, achieved through the exploitation of the structure present in abstract constraint specifications written in Essence, a best-first search among streamliner candidates, and a procedure to select and apply streamliners when faced with an unseen problem instance. Our empirical results demonstrate the success of our approach.
Streamlining constraints are typically used for constraint satisfaction problems. Streamlining in the context of optimisation is a challenge, because rather than rendering an instance unsatisfiable, an over-aggressive streamliner might disallow the best solution(s). Recently, the effective use of streamlining constraints for optimisation problems has been investigated in a preliminary study [83]. In that work, the authors present a way of automatically producing a portfolio of streamliners for optimisation problems. Each streamlined model in the portfolio represents a different balance between three criteria: how aggressively the search space is reduced, the proportion of training instances for which the streamliner admitted at least one solution, and the average reduction in quality of the objective value versus the unstreamlined model. This third criterion is important for optimisation problems because the goal is to find a solution with an optimal objective value, not just any solution as in satisfaction problems.
In support of that method, the study also presents an automated approach to training and test instance generation, and provides several approaches to the selection and application of the streamliners from the portfolio. Empirical results demonstrate drastic improvements both in the time required to find good solutions early and in the time to prove optimality on three problem classes.
A further important item of future work is to exploit the ability of Conjure to refine several alternative models from an Essence specification. Herein we have employed the default heuristic in Conjure in order to refine a single model from each streamlined specification; although this heuristic is known generally to select effective models [19], the chosen model is not necessarily the best among the set that Conjure can produce. We will add an extra degree of flexibility to the search for effective candidate streamliners by measuring the performance of several models for each streamlined specification. Selecting the best model of a streamlined specification among those available should lead to still greater performance gains. For SAT this approach could be extended further by searching over the possible SAT encodings of the Essence Prime models refined by Conjure. Preliminary results on a single problem class are promising [84].
Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Ozgur Akgun reports financial support was provided by Engineering and Physical Sciences Research Council. Nguyen Dang reports financial support was provided by Leverhulme Trust. Ian Miguel reports financial support was provided by Engineering and Physical Sciences Research Council.
Fig. 1. Essence specification of the Car Sequencing Problem [14], shown to be NP-complete [15]. A number of cars (n_cars) are to be produced; they are not identical, because different classes (n_classes) are available (quantity) as variants on the basic model. The assembly line has different stations

Fig. 2. Solving a problem instance using a streamlined Essence specification. Conjure is run once for a streamlined model, whereas Savile Row and the selected solver are run once per instance. The streamliner that is provided as input to Conjure is generated by a separate call to Conjure as part of Phase 1 (Candidate Streamliner Generation).

Fig. 5. Essence problem specification fragments for the problem classes used for evaluation, in addition to Fig. 1. Here we list the decision variable declaration statements for these problems, since these are what govern the generation of candidate streamlining constraints. The full models we use can be found in our accompanying GitHub repo.

Fig. 6. A snippet of the instance generator model for the Transshipment problem. For brevity, we omit the specification for n_customer, costTC, stock, demand and maxCost, as they are rewritten in exactly the same way as other parameters of the same types.

Fig. 7. Performance of three example streamliners on the training set of 4647 instances generated for the Fixed Length Error Correcting Code problem with chuffed. Each two-dimensional plot is a projection of the original multi-dimensional instance feature space via Principal Component Analysis. (For interpretation of the colours in the figure(s), the reader is referred to the web version of this article.)

Fig. 8. GMeans clustering results on the instance feature space (projected to 2-dimensional space by PCA) for the Fixed Length Error Correcting Codes problem with chuffed. Each colour represents a cluster.

Fig. 9. The power set of singleton candidate streamliners is explored to identify combinations that result in powerful streamlined specifications. Starting from an empty set of streamliners (the unstreamlined model), new streamliners are gradually added. If small sets of streamliners that fail to retain solutions are identified, such as CD, all supersets can be pruned from the search, vastly reducing the number of vertices to be explored. Streamliners A and B are tagged (Section 3) as mutually exclusive, and so no streamliner combinations containing both are evaluated.

Fig. 10. MOMCTS-DOM operating on the streamliner lattice. A, B and C refer to single candidate streamliners generated from the original Essence specification. As MOMCTS-DOM descends down through the lattice the streamliners are combined through the conjunction of the individual streamliners (AB, ABC). The nodes are labelled with the CDD reward value divided by the number of times visited.

Fig. 11. For Transshipment and CarSequencing the portfolios generated during the chuffed streamliner search are tested on lingeling on the same set of test instances. The Average Reduction across the two portfolios is represented for both paradigms. The same set of test instances is used so that any variation in the reductions of the streamliners is purely due to the different setting.

Fig. 13. Results with chuffed and lingeling on Distribution B. Details on the meaning of the plots are given in Fig. 12.

Fig. 14. Distribution of speedup values (in base 10 logarithmic scale) for the Oracle, AutoFolio and SingleBestSolver on Distribution B.

Fig. 15. The cumulative CPU time of AutoFolio in comparison to that of the Oracle and the Original model, totalling across the test instances of Distribution B. The x-axis shows the total number of instances, sorted in descending order of instance difficulty (solving time) according to the original model. The y-axis shows the corresponding total cumulative CPU time.

Table 1. For each problem class, the table shows the following fields: the number of candidate streamliners automatically generated by Conjure, the total number of training instances generated by the automated instance generation procedure, and the number of clusters detected by GMeans. The instance-related fields are different per solver, as instance generation is done separately for each solver.

Table 3. Performance comparison between ApplicFirst, ReducFirst, AutoFolio, the Oracle and the SBS using the overall speedup on Distribution B, containing instances with unstreamlined solving times in [300, 3600] seconds.
|
2023-03-31T15:17:32.067Z
|
2023-03-01T00:00:00.000
|
{
"year": 2023,
"sha1": "9317411177919c64f7ccd51a1777e87cdc4331da",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.artint.2023.103915",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b565422e0371a8f1f793fb6bae34c4ad7739aa91",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
2532187
|
pes2o/s2orc
|
v3-fos-license
|
On the study of force-balance percolation
We study models of correlated percolation where there are constraints on the occupation of sites that mimic force-balance, i.e. for a site to be stable requires occupied neighboring sites in all four compass directions in two dimensions. We prove rigorously that $p_c<1$ for the two-dimensional models studied. Numerical data indicate that the force-balance percolation transition is discontinuous with a growing crossover length, with perhaps the same form as the jamming percolation models, suggesting the same underlying mechanism driving the transition in both cases. In other words, force-balance percolation and jamming percolation may indeed belong to the same universality class. We find a lower bound for the correlation length in the connected phase and that the correlation function does not appear to be a power law at the transition. Finally, we study the dynamics of the culling procedure invoked to obtain the force-balance configurations and find a dynamical exponent similar to that found in sandpile models.
I. INTRODUCTION
Uncorrelated percolation and its associated geometric phase transition is arguably the most studied paradigm for a phase transition in a disordered system. Not only have physicists been able to nail down the universality class of percolation, more recently, mathematicians have been able to rigorously verify the universality class for at least one particular percolation model [1,2]. Even more recently, this work has been extended to a second two-dimensional percolation model [3]. Other two-dimensional models are expected to follow.
While uncorrelated percolation exhibits a continuous phase transition, there exists a new class of correlated percolation models that provably exhibit a discontinuous phase transition-a notable departure from the transition in uncorrelated models [4][5][6][7][8]. What do we mean by correlated percolation? We mean that there are constraints imposed on the occupation of sites such that correlations in the occupation arise regardless of whether or not there is a transition.
One of the simplest classes of correlated percolation models is k-core/bootstrap percolation [9][10][11][12][13]. It is defined as follows. Consider a regular lattice of coordination number Z_max, and some integer k with 2 ≤ k < Z_max. Initially, sites are independently occupied with probability p. Then, all occupied sites with fewer than k neighboring occupied sites are eliminated. This decimation process is repeatedly applied to the surviving occupied sites, until all surviving sites (if any) have at least k surviving neighbors. The surviving sites are called the k-core, and phases of the model are determined by the presence or absence of an infinite cluster of these survivors. The k-core percolation model has a number of physical realizations [14], including nonmagnetic impurities in a magnetic system [9] and the glass transition via a kinetically-constrained spin-flip model known as the Fredrickson-Andersen model [15,16].
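For concreteness, a minimal sketch of this decimation procedure (in Python, on a 4-neighbor square lattice with periodic boundaries; the lattice size, k, and occupation probability below are illustrative choices, not values taken from the text) is:

import numpy as np

def k_core_cull(occ, k):
    # Repeatedly remove occupied sites with fewer than k occupied neighbors
    # (4-neighbor square lattice, periodic boundaries) until every surviving
    # site has at least k surviving neighbors.
    occ = occ.copy()
    while True:
        nbrs = (np.roll(occ, 1, 0) + np.roll(occ, -1, 0) +
                np.roll(occ, 1, 1) + np.roll(occ, -1, 1))
        unstable = occ & (nbrs < k)
        if not unstable.any():
            return occ          # all survivors are stable: this is the k-core
        occ[unstable] = False   # cull all unstable sites in parallel

rng = np.random.default_rng(0)
L, p, k = 64, 0.7, 3            # illustrative parameters
core = k_core_cull(rng.random((L, L)) < p, k)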
Another well-known model of correlated percolation is the Kob-Andersen model [17], a particle-conserving counterpart to the Fredrickson-Andersen model. In the Kob-Andersen model, a particle can hop if and only if there are at least m empty neighbors before and after a particle hop. As the density of particles increases, it becomes more difficult to hop, and the density of frozen particles increases, resulting in slower dynamics. Eventually, the frozen particles percolate throughout the system, resulting in a glass transition. Based on numerical analysis of the percolation of frozen particles, it was initially thought that the Kob-Andersen model exhibited a glass transition at a value of p_c ≈ 0.84 in two dimensions. However, recent work by Toninelli, Biroli, and Fisher rigorously demonstrates that the thermodynamic p_c is actually unity for the range of m relevant to the glass transition [18,19].
Jamming percolation, a new class of two-dimensional correlated percolation models inspired by kinetically constrained models, was introduced by Toninelli, Biroli, and Fisher [4,5,8]. These models consist of the following occupation constraints: a site can remain occupied only if there exists at least one occupied site in set A and one occupied site in set B, or one occupied site in set C and one occupied site in set D. All sets are disjoint from each other and contain two sites. See Fig. 1 for the sets in a jamming percolation model called the spiral model. Jamming percolation models exhibit discontinuous phase transitions with a crossover length scale that diverges faster than any power law. It has been recently demonstrated that this behaviour prevails when more than four disjoint sets are introduced [20]. While there are rigorous results for some models in this class, it is unknown how generic this transition is and whether or not there are other unusual, or atypical, phase transitions of correlated percolation.
Here, we present a class of correlated percolation models denoted as force-balance percolation models. Force-balance percolation was originally introduced in Ref. [11] as a toy model for the jamming transition in finite dimensions. The jamming transition is a transition from a liquid-like to an amorphous solid-like state as some particular control parameter, such as the packing density, is varied. Examples of potentially related jamming systems are glass-forming liquids, colloidal suspensions, foams, emulsions, and granular matter [21]. Despite decades of study of the glass-forming liquids in particular, however, it is even unclear whether these transitions are true thermodynamic transitions or merely examples of kinetic arrest [22].
There has been some recent activity focusing on a zero-temperature jamming transition in a system of repulsive soft spheres as the packing density is increased. Numerical simulations by O'Hern, et al. considered repulsive soft particles in two and three dimensions [23,24]. For small packing fraction φ, the particles easily arrange themselves so as not to overlap with any other particle, and the total potential energy thus vanishes. As φ is increased, there is a particular value of φ_c (Point J) above which the particles can no longer avoid each other and the total potential energy becomes nonzero. The system jams in that it develops nonzero static bulk and shear moduli above φ_c. The average coordination number (the average number of overlapping neighbors per particle) jumps from Z = 0 to Z = Z_c at Point J, and then rises with increasing packing fraction φ as Z − Z_c ∼ (φ − φ_c)^β, where β = 0.5. This behaviour was recently observed in a two-dimensional system of glass beads [25]. Furthermore, as φ approaches φ_c from above, the singular part of the shear modulus vanishes with the exponent γ = 0.5; more recent simulations also find that there is a length scale that diverges with an exponent ν = 0.25 [26]. So the transition at Point J appears to have characteristics of both first-order and second-order phase transitions: at the transition, Z is discontinuous, but there are nontrivial power laws.
To understand this somewhat unusual transition, an analogy to k-core/bootstrap percolation was made in Ref. [11]. The scalar aspect of the principle of local mechanical stability, where particles need d + 1 contacts, maps to a requirement of k occupied neighbors. Surprising agreement was found between the mean-field k-core percolation exponents and the repulsive soft sphere simulations. We note that the two-and three-dimensional simulations observed the same exponents, suggesting a possible critical dimension of two since logarithmic corrections would be difficult to determine [27].
While there is agreement between the mean field k-core exponents and the low-dimensional repulsive soft sphere exponents, k-core in finite-dimensional spaces does not appear to have such agreement. Such systems seem to fall into one of two classes: the transition is either continuous and in the same universality class as normal percolation, or it does not occur until p_c = 1. In the first class, systems that allow finite clusters all exhibit continuous transitions [28,29]. In the second class, large voids are very likely to grow, and in the infinite system limit, with probability one there will be at least one void that will grow to empty the entire system [30,31]. This prevents k-core percolation for any p < 1. Unfortunately, neither category of k-core percolation describes the nontrivial discontinuous transition observed in the finite-dimensional simulations and experiments of jamming.
In an effort to try to capture the behaviour of jamming in finite dimensions, force-balance percolation was introduced in Ref. [11]. In this model, the k-core constraint is retained, but the vectorial constraints of the principle of local mechanical stability are mimicked by creating culling rules that take into account where the neighboring particles are located. Loosely speaking, if there is a neighboring contact to one side of the particle, there must be at least one neighboring contact on the other side of the particle to allow for force balance.
We must emphasize that force-balance percolation is a model with no explicit forces. We look only at connectivity, in contrast to models such as rigidity percolation where repulsive and attractive forces are defined on, say, a lattice of springs [32][33][34][35]. However, the nature and possible universality of the rigidity percolation transition is still up for debate, despite decades of study. Since force-balance percolation is a much simpler model there is ultimately a better chance of analyzing it beyond numerics.
Here, we explore several versions of force-balance percolation, both rigorously and numerically, to begin to answer the following questions: (0) Is there a force-balance percolation transition? (1) If there is indeed a transition, what are its properties? Is it continuous? (2) How generic is the transition among the various force-balance models? (3) Is there a link between force-balance percolation and jamming percolation? Are they in the same universality class?
The paper is organized as follows. In section II we review the force-balance percolation model introduced in Ref. [11], and introduce two related models. We present in section III a rigorous proof that the thermodynamic p_c is less than unity for at least two of these models. Earlier work on k-core percolation misinterpreted numerical results, finding transitions in novel universality classes, with p_c < 1 [30,36,37], when in fact the models studied had p_c = 1 [31,38,39]; our proof renders our subsequent interpretation of the numerical data presented in section IV on sounder footing. Finally, we close in section V with a summary of our findings and discuss their implications.
II. MODELS
For the first force-balance percolation model, we begin with a two-dimensional square lattice. Each site neighbors all sites except itself within a 5x5 square, so each site has 24 nearest neighbors. (For a 3x3 square, with the following rules, p_c = 1.) Since we are in two dimensions, we impose a 3-core constraint. However, we also impose the force-balance constraint, which is the following: there must be at least one occupied neighbor in set A, which in turn calls for at least one occupied neighbor in set B, and there must be at least one occupied neighbor in set C, which in turn calls for at least one occupied neighbor in set D. The four sets A, B, C, and D are defined in Fig. 2. The force-balance constraint can be succinctly stated as: (A and B) and (C and D), where each letter X is short for "at least one occupied site in set X". Note that the force-balance constraint is defined in such a way that vertical and/or horizontal lines of occupied particles are, by themselves, not stable. Fig. 3a demonstrates an allowed configuration and Fig. 3b demonstrates a prohibited configuration.
To enforce the force-balance and k-core constraints, we initially occupy sites on the lattice with independent occupation probability p, and then repeatedly remove occupied sites that violate either the k-core or force-balance constraints, until all remaining occupied sites are stable. Note that p is the occupation density before culling, and generically differs from the final occupation density. The model is abelian. In other words, the final configuration after the culling process is independent of the order in which sites are culled. It can be done in parallel, or in series, or some combination thereof. One can define another force-balance-like model that allows for horizontal and vertical lines of occupied neighbors. However, such a model would be nonabelian. Nonabelian models are less desirable, because in such models the final results depend on the order in which sites are culled. In our work, the culling procedure is merely an algorithm to achieve the force-balance percolation configuration.
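To illustrate how the combined k-core and force-balance constraints can be enforced by parallel culling, the following sketch checks the stability of every site against its 24 neighbors in the 5x5 window. The precise membership of the sets A-D is given in Fig. 2, which is not reproduced here; the quadrant-style sets used below are stand-ins for illustration only, and the structure of the rule, (A and B) and (C and D) together with the 3-core condition, is what the sketch is meant to convey.

import numpy as np

# Offsets of the 24 neighbors in the 5x5 window around a site.
OFFSETS = [(dx, dy) for dx in range(-2, 3) for dy in range(-2, 3)
           if (dx, dy) != (0, 0)]

# Illustrative stand-ins for the sets A, B, C, D; the actual sets of the
# 24NN force-balance model are those defined in Fig. 2.
SET_A = [o for o in OFFSETS if o[0] > 0]   # one side
SET_B = [o for o in OFFSETS if o[0] < 0]   # the opposite side
SET_C = [o for o in OFFSETS if o[1] > 0]
SET_D = [o for o in OFFSETS if o[1] < 0]

def shifted_count(occ, offsets):
    # Number of occupied sites among the given offsets, for every lattice
    # site, with periodic boundaries; roll by (-dx, -dy) so that the value
    # at (i, j) refers to the neighbor at offset (dx, dy).
    return sum(np.roll(np.roll(occ, -dx, 0), -dy, 1) for dx, dy in offsets)

def force_balance_cull(occ, k=3):
    # Parallel culling with the k-core constraint plus the
    # (A and B) and (C and D) force-balance constraint.
    occ = occ.copy()
    while True:
        stable = (occ
                  & (shifted_count(occ, OFFSETS) >= k)
                  & (shifted_count(occ, SET_A) >= 1)
                  & (shifted_count(occ, SET_B) >= 1)
                  & (shifted_count(occ, SET_C) >= 1)
                  & (shifted_count(occ, SET_D) >= 1))
        if stable.sum() == occ.sum():
            return occ           # no site was culled: configuration is stable
        occ = stable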
To determine whether or not the behaviour observed in the original force-balance model is generic, we define two additional abelian models. Our second model is defined on the square lattice, with 16 nearest neighbors, and quadrants as defined in Fig. 4. Again, k = 3 and the force-balance constraint is the same as above: (A and B) and (C and D). This model is obviously similar to the first one, though the ratio of k to Z_max is different.
The third model is a three-dimensional model with 26 nearest neighbors and six regions, color-coded in Fig. 5. Each of the six regions consists of a 3 × 3 square on a face of the 3 × 3 × 3 cube centered on the site whose stability is being analyzed. For this model, we impose a 4-core constraint (since d = 3), and the force-balance condition requires an occupied site in each of the six regions.
For comparison, along with numerical simulations of these three models, we also performed simulations on the spiral model. This model was introduced by Toninelli, et al. [7,8], and they have proved that it undergoes a jamming transition, and obtained rigorous results about the properties of the transition [8].
III. RIGOROUS RESULTS
A. Proof that p_c < 1 in the 2d force-balance models

To prove that p_c < 1 for the initial force-balance model on the square lattice, we will demonstrate that p_c < 1 for a more heavily constrained model with the same 24 nearest neighbors. In particular, we require (1) k = 6, (2) at least one occupied neighbor in the 4-site region to the northeast, and (3) at least one occupied neighbor in the 4-site region to the southwest. See Fig. 6 for the two regions. Since any sites stable under these conditions are automatically stable in the force-balance model, if p_c < 1 for the k = 6 NE-SW model, then also p_c < 1 for the force-balance model. Next, we prove that the origin has a non-zero probability of participating in an infinite cluster for the k = 6 NE-SW model. We divide the lattice into clusters of three sites, as shown in Fig. 7. Certain clusters are defined as adjacent, and connected by directed lines in Fig. 7. Sites not grouped in 3-clusters play no role in the proof, and can be ignored. Now, suppose that p > (p_c^DP)^{1/3}, where p_c^DP is the critical probability for two-dimensional directed percolation. Then each 3-cluster is occupied with probability p^3 > p_c^DP. The directed lines in Fig. 7 are isomorphic to two-dimensional directed percolation. Thus, the probability of the origin participating in an infinite chain of 3-clusters to the northeast is nonzero, provided the 3-cluster at the origin is occupied. Similarly, the probability of the origin participating in an infinite chain of 3-clusters to the southwest is also nonzero. Looking only at the infinite chain of 3-clusters, it is straightforward to check that every site in it is stable under the culling rules. To see this, observe that each 3-cluster in the infinite path has only four possible configurations of adjacent 3-clusters, shown in Fig. 8. For all four configurations, all sites in the central 3-cluster are stable under the k = 6, NE-SW stability condition. Therefore, p_c < (p_c^DP)^{1/3} < 1. This bound holds for k ≤ 6 for the force-balance model, since the force-balance constraints are a subset of the constraints in the NE-SW model.
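As a rough numerical illustration of the bound (taking, for the sake of the estimate, the threshold of directed site percolation on the square lattice, p_c^DP ≈ 0.7055; the precise value that enters depends on the directed process the 3-clusters map onto):

p_c < (p_c^DP)^{1/3} ≈ (0.7055)^{1/3} ≈ 0.89 < 1.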
A similar proof holds for the 16 nearest-neighbor force-balance model. For this model, one can construct 2-clusters to obtain a proof that for k ≤ 4 we have p_c ≤ (p_c^DP)^{1/2}. We note that a simple lower bound for p_c for both models is the p_c of the corresponding uncorrelated percolation models (i.e. the p_c of the model without culling). Regarding the three-dimensional model, as was shown in Ref. [20], for three-dimensional versions of the jamming percolation models, percolating structures along one direction can cross without having sites in common. This prevents an obvious mapping to directed percolation. We have therefore not obtained a rigorous bound for p_c in the three-dimensional model. While it can be shown that p_c < 1 for the two-dimensional models, there is no proof as to whether the transition is continuous or discontinuous. For the two-dimensional jamming percolation models, there exists a rigorous argument for a discontinuous transition assuming a conjecture regarding directed percolation [4,8]. The argument relies on having two independent directed percolation processes arising from the disjoint pairs of sets, each containing two sites. The jamming percolation transition can then be shown to occur at the directed percolation transition point. In the force-balance models considered here, the sets are composed of more than two sites, and are not disjoint. The critical occupation probability thus cannot be easily identified. This means that it is not obvious how to construct a rigorous argument along the lines of the jamming percolation models.
However, we first argue that the alteration of some properties of the jamming percolation models to make them look more like force-balance percolation should not change the nature of the jamming percolation transition. For example, the property of having two sites per set in the jamming percolation models can be extended to having more sites per set by investigating other lattice models with more nearest neighbors, but belonging to the same universality class as directed percolation. Surely, there exist other directed percolation processes with more than two neighbors per site.
On the other hand, the force-balance rules can be modified to be more like the jamming percolation models, where the "and" between overlapping sets is equivalent to "or" between various pairs or triplets of smaller disjoint sets. See Fig. 9. While the occupation of pairs of disjoint sets is the same as in the jamming percolation case (excluding the k-core constraint, where more than one site in the set must be occupied for the pairs), the occupation of triplets is more stringent. It remains to be seen whether or not a rigorous argument can be constructed for force-balance percolation.
Given these arguments, the transition for the force-balance percolation models may in fact behave similarly to the jamming percolation models and so we expect the transition in the force-balance case to be discontinuous as well. We will numerically test this hypothesis as well as others in the upcoming section.
IV. NUMERICAL RESULTS
A. 24 NN model
Culling dynamics
We begin our numerical simulations of the 24 NN (nearest neighbor) force-balance model by looking at the dynamics of the culling process. The lattice is size L by L. We surround it with a boundary of fully occupied sites, two layers thick, which are never culled. We denote this as wired boundary conditions. Then, at each timestep, we simultaneously cull every unstable site. We repeat this until all remaining sites are stable and record the culling time.
For any given system size, the mean culling time as a function of the initial occupation density peaks for some density; a sample curve for L = 2896 is shown in Figure 10. The density at which the curve peaks provides one possible definition of the critical density. The size of the peak is denoted by M, the maximum number of average culls. M grows with L. We plot M vs. L on a log-log scale in Figure 11. The results are well fit by a power law, although there is visible overall curvature. A best fit for L ≥ 32 gives M ∝ L^α, with α = 1.226 ± 0.027, which is suggestively close to 5/4. We note that 5/4 is the dynamical exponent for avalanches in the sandpile model [40]. Priezzhev has given an analytical argument that the dynamics exponent for the sandpile model should be exactly 5/4 [41]. More generally, Pietronero, et al. [40] have given a renormalization group argument that the avalanche exponent should be the same for a wide class of "sandpile-like" models [42]. They obtain an approximate exponent of 1.253 ≈ 5/4. "Sandpile-like" models are those in which energy is dissipated, so that instabilities propagate from site to site. A similar process may be responsible for the exponent of 5/4 seen in our models here: culling at a site may be analogous to toppling at a critical point of a sandpile-like model, in the sense that the culling of one site triggers adjacent cullings that can propagate long distances. The same data is also plotted for the spiral model.
Force-balance avalanches
Farrow, et al. [43] studied the dynamics of culling in k-core percolation on various lattices by looking at avalanches. More precisely, the culling process is iterated until a stable k-core configuration is obtained. Then, a random site in the stable k-core is removed. It triggers the removal of other sites until the k-core stabilizes once more. The number of sites removed during this removal is the culling avalanche size. The process is repeated until the lattice is completely empty. In cases where p_c = 1 and the stability rules allow no finite clusters, a power-law distribution of culling avalanche sizes was found. For these cases, the system goes from being unoccupied for p < 1 to being fully occupied at p = 1. Since all sites must eventually be removed for p below p_c = 1 and no finite clusters are allowed, the culling avalanches should become spatially long-ranged near p_c = 1. This result is to be contrasted with k-core cases with finite k-core clusters. There, no power law distribution of culling avalanche sizes was found [43].
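A minimal sketch of this avalanche measurement is given below (with the culling rule passed in as a function, e.g. the force_balance_cull sketch above; whether the deliberately removed seed site is counted in the avalanche size is a bookkeeping choice made here purely for illustration).

import numpy as np

def culling_avalanches(stable, cull, rng):
    # Repeatedly delete one random occupied site from a stable configuration,
    # re-cull, and record how many sites disappear in that step, until the
    # lattice is empty.
    occ = stable.copy()
    sizes = []
    while occ.any():
        xs, ys = np.nonzero(occ)
        i = rng.integers(len(xs))
        occ[xs[i], ys[i]] = False                 # remove one random site
        new = cull(occ)                           # re-stabilize
        sizes.append(occ.sum() - new.sum() + 1)   # +1 counts the seed site
        occ = new
    return sizes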
In the force-balance model, there are no finite force-balance clusters, so the culling avalanches should be spatially long-ranged near the transition. Whether or not there should be a broad distribution of sizes is not clear a priori. Given the sandpile-like behavior detected in the mean culling time required to obtain a force-balance cluster, one may expect to find a broad distribution as is found in sandpile models. However, if the force-balance percolation transition is discontinuous one would expect a well-defined avalanche size for those systems whose redundant sites have already been removed.
Numerically, we calculate the probability of having a culling avalanche size s, P(s), in the presence of periodic boundary conditions. See Fig. 12. The probability is broad near and above the transition for intermediate avalanche sizes. On a log-log plot, the slope of P(s) for the broadly distributed intermediate-sized avalanches depends somewhat on p and on L. More careful study is needed to determine whether or not these avalanche sizes are consistent with the measured −1.253 for sandpile models [42]. It is of note that for p = 0.45 and L = 512, for example, the slope is approximately −1.3.
As opposed to quantitative analysis of the intermediate-sized avalanches, we are more interested in first determining the qualitative nature of the curves. As opposed to a purely broad distribution, there is a prominent peak at the tail of the distribution. These well-defined avalanche sizes correspond to the last-occurring avalanches when the lattice ultimately empties out. These correspond to the marginal infinite cluster. The peak persists as p is decreased towards the p_c indicated by the position of the maximum in the mean culling times, suggesting a discontinuous transition for the onset of the infinite cluster. This behaviour is retained in the larger systems with the peak becoming more separated from the broad part of the distribution. See Fig. 13.
Spanning cluster
We next look at P_s, the probability of spanning. We define this, for wired boundary conditions, as the probability that the largest cluster connects either the top and bottom sides, or the right and left sides; for this definition, sites are defined as connected if one is within the 24-site neighborhood of the other (see Fig. 2). Fig. 14 depicts P_s for the 24 NN model with wired boundary conditions. Fig. 15 shows the same curves, but for periodic boundary conditions. For periodic boundary conditions, it is impossible to have any occupied sites without having a spanning cluster, so to check if we have a spanning cluster, we just need to check that at least one site is occupied.
For periodic boundary conditions, the P_s curves can be generated quickly by a simple modification of the Newman-Ziff algorithm [44]. For uncorrelated percolation, the Newman-Ziff algorithm passes once through every possible density, in increasing order, by adding occupied sites in random order, and updating the connectivity with the Hoshen-Kopelman algorithm [45]. This allows for efficient generation of P_s for every occupation density. This method is not available to us for the force-balance model with wired boundary conditions, because there sites are added by increasing the density, but also removed by culling. This prevents a single pass through all densities from being done in time O(L^2 ln L). (The Hoshen-Kopelman algorithm does not allow cluster identifications to be rapidly updated after removal of sites.) However, with periodic boundary conditions in a force-balance model, we do not need to run the Hoshen-Kopelman algorithm. We can instead start from a fully occupied lattice, and remove sites until the culling condition causes an empty lattice. This allows the calculation of P_s at every density in time O(L^2). From Figs. 14 and 15, we can define the critical probability for a specified system size, p_c(L), as the initial occupation density that gives P_s = 1/2. This definition differs from that based on the peak of the mean culling time, given in subsection IV A 1. However, as shown in figure 16, the two definitions have the same qualitative dependence on L, and approach each other for large system sizes.
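A direct (unoptimised) sketch of this downward sweep is given below; with periodic boundaries, "spanning" simply means that the culled configuration is nonempty, and the culling rule is passed in as a function (e.g. the force_balance_cull sketch above). The incremental bookkeeping that yields the amortised O(L^2) cost, re-examining only the neighborhood of removed sites, is omitted for brevity.

import numpy as np

def spanning_threshold(L, cull, rng):
    # Remove sites one at a time from a fully occupied periodic lattice,
    # re-applying the culling rule after each removal, and return the initial
    # density at which the culled configuration first becomes empty.
    occ = np.ones((L, L), dtype=bool)
    for n_removed, site in enumerate(rng.permutation(L * L), start=1):
        occ[np.unravel_index(site, occ.shape)] = False
        if not cull(occ).any():                   # culling empties the lattice
            return 1.0 - n_removed / (L * L)
    return 0.0

P_s(p) is then estimated as the fraction of independent sweeps whose returned threshold lies below p.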
We next consider the width W of the transition, which we define to be the difference between the occupation probabilities that yield P_s = 1/4 and P_s = 3/4. The width as a function of L is shown in figure 17 for both wired and periodic boundary conditions. The same renormalization group formalism that implies |p_c(L = ∞) − p_c(L)| ∝ L^{−1/ν} also implies W ∝ L^{−1/ν}. However, Fig. 17 shows clear deviations from this power law form. This trend was starting to emerge in Ref. [11], though it was masked by the large error bars for the larger systems. In other words, the fit to the above assumption in Ref. [11] was premature. Based on our above heuristic arguments for a connection between force-balance percolation and jamming percolation, we instead fit to the form for a diverging crossover length scale found by Toninelli, et al. [7,8] (TBF) for the spiral model. Approaching the transition from below, TBF identified a crossover length Σ, such that systems of size smaller than Σ are likely to have a spanning cluster, while those of size greater than Σ are exponentially unlikely to have a spanning cluster. TBF proved that Σ diverges at the transition faster than any power law. In Ref. [4], TBF argued that the upper and lower bounds scaled similarly, finding Σ ∼ exp(C(p_c − p)^{−µ}), where µ = (1 − 1/z)ν_|| ≈ 0.64. (Both 1/z ≈ 0.63 and ν_|| ≈ 1.73 are from directed percolation [46].) Based on the TBF formula, we propose that W ∝ (ln L)^{−1/µ}. The widths are replotted with the appropriate scales for this fit in Figure 18, and while the curves still have deviations from perfect straight line behavior, the agreement is significantly better than for W ∝ L^{−1/ν}. From the width data, for wired boundary conditions, we extract µ = 0.39 ± 0.01; for periodic boundary conditions, µ = 0.45 ± 0.02. The stated error bar is purely statistical and does not take into account systematic effects that occur given the small range of system sizes, so one cannot necessarily rule out a link with overlapping directed percolation processes. We also note that in Ref. [4], TBF stated that their numerical data was consistent with µ ≈ 0.64 given the large systematic error bars. There are two possible, but incompatible, explanations for the difference in the results for µ. One possibility is that the scaling of W and |p_c(L = ∞) − p_c(L)| is in fact the same in the thermodynamic limit, but that we obtain differing µ's for the finite system sizes studied due to the slow approach to the thermodynamic limit. This would be reminiscent of the extremely large finite-size effects seen in analogous systems, such as k-core percolation, the Kob-Andersen model, and glassy dynamics. Numerically, this possibility is quite reasonable, given that the TBF formula for the growing length scale has an extra power law factor in front. A second possible explanation is that W and |p_c(L = ∞) − p_c(L)| in fact scale differently. This would be possible if there are two diverging length scales, rather than one; this would change the standard picture of a renormalization group fixed point controlled by a single parameter, so that W and |p_c(L = ∞) − p_c(L)| would no longer be forced to scale in the same manner. There are certainly two diverging length scales in mean field k-core percolation: one associated with those in the infinite cluster and one associated with sites that get removed in response to one random site being removed.
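As a sketch of how µ is extracted from the width data, one can fit ln W against ln ln L, whose slope is −1/µ; the arrays below are synthetic placeholders generated from the TBF form itself, purely to make the example self-contained, and the real input is the measured W(L).

import numpy as np

L = np.array([32, 64, 128, 256, 512, 1024, 2048], dtype=float)
mu_true, A = 0.40, 0.5
W = A * np.log(L) ** (-1.0 / mu_true)     # placeholder widths

slope, _ = np.polyfit(np.log(np.log(L)), np.log(W), 1)
mu_fit = -1.0 / slope                     # recovers ~0.40 for these inputs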
Order parameter
We also investigate the order parameter, κ, the fraction of sites in the infinite force-balance cluster. This is obtained for a given system size and initial occupation density by generating configurations, keeping only those configurations in which the largest cluster is spanning, and then finding the average fraction of sites in the largest cluster for those configurations. Plots are shown for L ≥ 128 in figure 20. The curves lie on top of each other for large L, differing only in the minimum value of p needed for the numerical simulations to have a reasonable chance of obtaining occupied sites after culling.
The minimum value of κ is increasing, rather than decreasing, with system size. We therefore assume that there is a jump in the order parameter at the transition, denoted by κ_c. In other words, in the thermodynamic limit, as soon as there is a spanning cluster, it occupies a finite fraction of the system. From Fig. 20, and the already-obtained result of p_c = 0.414 ± 0.008, we estimate the thermodynamic κ_c to be κ_c = 0.399 ± 0.011. Furthermore, for each individual curve it appears that just above the transition, κ increases linearly with p, suggesting that the order parameter exponent is β = 1. This result is to be contrasted with the mean field k-core percolation results and the repulsive soft sphere simulations, where β = 1/2.
Correlation function
We next look at correlation functions for this model, now using only periodic boundary conditions. For these simulations, for a given system of size L and density p, we generate states with exactly pL^2 occupied sites (rather than occupying sites independently). We only keep configurations that were nonempty after culling, and in each of these, keep only the largest cluster (this cluster was automatically spanning, since we use periodic boundary conditions). We then calculate the correlation function C(i), the probability that sites a distance i apart are both occupied, minus an asymptotic constant, to be discussed shortly. Moreover, we calculate correlations at all angles (i.e. not merely horizontal and vertical correlations).
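One common way to obtain such an angle-averaged two-point function from a periodic configuration is an FFT autocorrelation followed by radial binning; the sketch below follows that route (the subtraction of the asymptotic constant, discussed next, is left to the caller, and the binning details are illustrative).

import numpy as np

def radial_pair_correlation(occ, n_bins=None):
    # Probability that two sites separated by distance ~r are both occupied,
    # for a periodic LxL boolean configuration, averaged over all directions.
    L = occ.shape[0]
    f = occ.astype(float)
    # Circular autocorrelation via FFT: corr[d] = mean over x of f(x) f(x+d).
    corr = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real / f.size
    # Minimum-image distances for the periodic lattice.
    d = np.minimum(np.arange(L), L - np.arange(L))
    r = np.hypot(d[:, None], d[None, :])
    n_bins = n_bins or L // 2
    bins = np.linspace(0.0, L / 2.0, n_bins + 1)
    which = np.digitize(r.ravel(), bins) - 1
    keep = which < n_bins
    num = np.bincount(which[keep], weights=corr.ravel()[keep], minlength=n_bins)
    cnt = np.bincount(which[keep], minlength=n_bins)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, num / np.maximum(cnt, 1)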
With uncorrelated percolation, it is easy to quickly produce precise plots of the correlation function over several decades of distance, and confirm that the correlation function has a power law form at the critical point. However, correlations in the force-balance model have significantly greater sources of uncertainty.
When we generate samples for uncorrelated percolation, each site occupation is independent, so the correlation function calculated for C(i) is largely (although not entirely) independent of that for C(j) with j ≠ i. So for uncorrelated percolation, when we look at different pairs of points in a single sample, we get somewhat independent estimates of the correlation function for a range of distances. That is, we quickly generate lots of independent data for the curve C(i). However, for correlated percolation models where there are no finite clusters, the results for C(i) and C(j), for j ≠ i, are highly correlated for a given sample, even for widely separated i and j.
For a given amount of computer running time, the error bars in the correlation function are thus substantially larger than for uncorrelated percolation, and because they are correlated, it is difficult to reliably extract the asymptotic value of C(i). This is important because to get C(i), we need to subtract off the probability that two distant sites (i → ∞) are occupied in the infinite system limit (L → ∞) after culling. For our models, we do not know the infinite system occupation probability after culling. (For uncorrelated percolation, it is trivially identical to the initial occupation probability, as there is no culling.) The most straightforward solution is to subtract off the asymptotic value of C(i), since we know that the connected correlation function should approach 0 as i → ∞. However, because the error bars at different i are correlated, there are also large error bars in the asymptotic value of C(i). The functional form of C(i) is highly sensitive to the constant subtracted off: over a limited distance range, a power law and an exponential can look very similar if the asymptotic value is shifted slightly. This means that it is difficult to reliably determine the form of the correlation function. The correlation length is also very sensitive to a small shift in C(i).
The end result is that it is difficult to get accurate results for large system sizes, and we have limited our simulations of correlation functions to L = 100, L = 200 and L = 400. The correlation length as a function of initial occupation density is shown for these three system sizes in figure 21. For densities ρ > 0.4, the correlations are clearly short-ranged, and are independent of the system size. For lower densities, the correlation length grows with L. The curve for L = 200 has a clear peak at p ≈ 0.375; the L = 400 data appears to have a peak at a larger p, although the largish error bars make its location unclear. Thus, the data is somewhat suggestive of a length scale that grows with system size.
The position of the peak gives yet another plausible measurement for p_c(L). The p_c(L) defined from the probability of spanning for L = 200 is p_c(L) = 0.3671 ± 0.0007, which is lower than the estimate of p_c(L) from the correlation length data. However, this discrepancy should vanish in the infinite system limit. Since the position of the peak is not so clear for L = 400 it is more difficult to discern the trend.
Because of the limited system sizes studied, it is somewhat difficult to determine with confidence whether the correlation functions have power law or exponential forms. The correlations are very short-ranged (ξ < 5) for p ≥ 0.4, indicating that the correlations are probably exponential (or a very steep power law). At lower densities, for most system sizes and densities, the correlation function is exponentially decaying at long distances; for example, see figure 22, showing the correlation functions for L = 400 and p = 0.37, 0.38, and 0.39. One exception to the generally observed exponential behavior is the L = 200 correlation function for p = 0.395, which appears somewhat power-law-like, as shown in figure 23. The density of p = 0.395 differs slightly from the peak in the plot of ξ for L = 200 in figure 21. For the L = 400 system, the power-law-like trend of the correlation function near the peak density is not as prominent and the correlation functions appear more exponential. These results should be compared with those of Parisi and Rizzo for k-core percolation in four dimensions [47]. While they found that ξ grew with decreasing ρ, they never obtained correlation lengths greater than 10. So they also found no power law correlations. They argue that the 4-core transition in four dimensions is an ordinary discontinuous transition with no diverging length scales. We note that their systems had finite stable clusters and so may have different properties than the systems considered in this paper. (The fact that their systems had finite stable clusters also presumably means that they had smaller numerical correlations between C(i) and C(j) for |i − j| large, allowing them to determine correlation functions for larger systems than here.)
B. 16 NN Model and Spiral Model
Now we present results for the 16 nearest neighbor force-balance model and the spiral model. We begin with the mean culling time. The plot of the mean peak culling time as a function of L shows a 5/4 exponent similar to that measured for the 24NN model, α = 1.240 ± 0.025. In fact, all three two-dimensional models (the 24NN model, the 16NN model and the spiral model) are in quantitative agreement with one another. We find α = 1.246 ± 0.027 for the spiral model. These results suggest that similar processes underlie the culling in all three two-dimensional models. See Fig. 24.
The probability of having a force-balance avalanche of size s, P(s), for the 16 NN model shows similar trends to the 24 NN model, with a broad distribution of intermediate sizes and a prominent peak for the largest sizes. See Fig. 25. The spiral model exhibits the same qualitative behaviour as well. Fig. 26 shows the probability of spanning for periodic boundary conditions. From this data we extract the width of the transition as a function of L, as depicted in Fig. 27. We plot both periodic and wired boundary conditions on a log-log scale to demonstrate the significant deviations from a power-law growing crossover length. We also plot the same data for the 24NN and spiral models for comparison. As with the 24NN model, the fitting form is clearly not a power law in L. Figure 28 shows the widths using the fitting form motivated by the TBF result. For the 16NN model, one can extract µ = 0.35 ± 0.01 for wired boundary conditions and µ = 0.42 ± 0.03 for periodic boundary conditions. In Fig. 29, similarly to Fig. 19, we choose p_c(∞) so that p_c(∞) − p_c(L) as a function of L is well fit by the TBF functional form. We obtain the best fit for p_c(∞) = 0.497 ± 0.007 for wired boundary conditions, which gives µ = 0.47 ± 0.07. For periodic boundary conditions, p_c(∞) = 0.502 ± 0.010 and µ = 0.70 ± 0.15. Once again, there is a discrepancy between the µ extracted from the width data and the µ extracted from the one-parameter fit.
A similar fit (not shown) for the spiral model finds p_c(∞) = 0.690 ± 0.008, and µ = 0.49 ± 0.08 for wired boundary conditions. For the spiral model, Toninelli, et al. [7,8] have proven that p_c(∞) is the same as for directed percolation, p_c(∞) ≈ 0.705, and found µ = 0.64. Our numerical results for p_c(∞) and µ are both within two error bars of their exact result, and a plot of ln(p_c(∞) − p_c(L)) versus ln(ln L) for the exact result of p_c(∞) ≈ 0.705 shows noticeable curvature. Also, the µ extracted from the width data for wired boundary conditions is µ = 0.32 ± 0.01 and µ = 0.38 ± 0.02 for periodic boundary conditions.
Finally, Fig. 30 shows the order parameter κ as a function of p for various system sizes for the 16NN model. As in the 24 NN model, the jump in κ increases with increasing system size suggesting that the transition is discontinuous. It also appears that β = 1 just above the transition for each individual curve (though there is some overall curvature for the set of curves).
C. Three-dimensional model
Our tests of the 26NN three-dimensional model are not as extensive as in the two-dimensional case, particularly because we have no proof that p_c < 1 in this case and our system size range is limited. From the probability of spanning data using periodic boundary conditions, we have extracted W and p_c(L). See Fig. 31. Determining p_c(∞) with the TBF functional form yields p_c(∞) = 0.433 ± 0.009 < 1 (not shown). Moreover, for the one-parameter p_c(∞) fit, µ = 0.75 ± 0.14; from the width data, µ = 0.37 ± 0.01. In the three-dimensional case the discrepancy between the two values of µ for the same boundary condition is greater than in the two-dimensional force-balance models, where larger system sizes can be explored. For this three-dimensional model, one may expect a double logarithm as opposed to a single logarithm, with potentially µ ≈ 0.52 using the values for three-dimensional directed percolation [46]. Given the small range of data, however, it is difficult to discriminate between the two forms. Finally, the three-dimensional transition appears to be discontinuous in the context of the onset of the infinite force-balance cluster, as in the two-dimensional cases, since the jump in the order parameter increases with increasing system size. See Fig. 32. Moreover, simulations of the probability of having a force-balance avalanche size s show the same trends as in Figs. 12 and 13.
V. SUMMARY AND DISCUSSION
While uncorrelated percolation is very well understood, models of correlated percolation are much less so. We have presented rigorous and numerical results on several force-balance percolation models to help narrow the gap. On the rigorous side, we have proven that p_c < 1 for the two-dimensional models. This result places our interpretation of the numerical data for the two-dimensional models on sounder footing. A rigorous argument that the force-balance percolation transition is discontinuous, at least in two dimensions, is more difficult than in the case of jamming percolation. In the jamming percolation models the underlying mechanism driving the transition is two disjoint, directed percolation processes. These two processes scaffold upon one another such that the infinite cluster is bulky at the transition. The underlying mechanism driving the transition in the force-balance models is presumably the same, as argued in Sec. III B. Models of newly-constructed directed percolation processes will be key in constructing a rigorous argument for discontinuity in the force-balance case. Our numerical results for the force-balance avalanches and the onset of the infinite force-balance cluster for all three force-balance models strongly suggest a discontinuous transition. For the force-balance avalanches, the probability of having an avalanche of size s is broad for intermediate avalanche sizes. There is also a well-defined avalanche size at the tail of the distribution that becomes more prominent as the system size is increased and as p is decreased towards the transition. This suggests a bulky, discontinuous transition. Looking at the usual order parameter also points towards a discontinuous transition in that the jump in the fractional size of the largest force-balance cluster at the transition increases with increasing system size. This trend was reported in Ref. [11] for the 24 NN model only. So now that we know there is a transition (at least for the two-dimensional models) and we surmise that the onset of the infinite force-balance cluster is discontinuous, just as in the case of jamming percolation, is there a quantitative connection with jamming percolation beyond the heuristic arguments provided in Sec. III B?
The results for the mean culling time quantitatively suggest that at least the two two-dimensional force-balance models and the spiral model are in the same universality class. We obtained mean culling time exponents of α = 1.226 ± 0.027, 1.240 ± 0.025, and 1.246 ± 0.027, for the 24 NN model, 16 NN model, and spiral model, respectively. Since all three exponents are within one standard deviation of 5/4, the equivalent exponent in the sandpile model, there is potential for a sandpile-model-like RG treatment, at least for the culling dynamics, for both sets of models. Further quantitative study of the distribution of force-balance and spiral avalanche sizes will provide further insights.
The crossover length data also indicates that for the system sizes studied, the mechanisms underlying jamming and force-balance percolation are the same for the two-dimensional models. The following table summarizes the values obtained for p_c(∞) for the three two-dimensional models, and the associated values of µ. We see that results for µ for all three models with wired boundary conditions are consistent with one another, although the error bars are large. So, while our measurements are not as precise, nor as accurate, as one would like (the former possibly indicated by the differing values of µ obtained from the width data), the consistency between the three different models is readily apparent. In other words, the models are most likely in the same universality class. We have also included results for periodic boundary conditions. For those, the results for p_c(∞) are within the error bars of the results for wired boundary conditions; for periodic boundary conditions, the plots of ln(p_c(∞) − p_c(L)) versus ln(ln L) appear linear for a wider range of p_c(∞), resulting in significantly larger error bars. Finally, the p_c's are independent of the boundary condition, as expected.
Our force-balance data is indeed inconsistent with a crossover length that diverges as a power law, and is just as consistent with the TBF fitting form as the spiral model data. With the present data, we are unable to conclude that the presumably two-dimensional, model-independent value of µ is independent of the boundary conditions, though with data for larger systems, and smaller error bars, such a trend should emerge. A stronger numerical test of the same underlying mechanism for jamming percolation and force-balance percolation would be to look for directed percolation using anisotropic finite-size scaling, as was done in Ref. [7]. We leave this for future work.
Our correlation length data above the critical point for the 24 NN model suggests a correlation length that grows with system size, as opposed to a finite one. However, for the largest system sizes, we do not generally see the expected power law correlations at the transition associated with this divergence. More work is needed to substantiate this trend, which would be inconsistent with a diverging correlation length. Of course, a study of larger systems might reveal a finite correlation length. Our current correlation length results are to be contrasted with 4-core percolation in four dimensions, where a finite correlation length of about 10 lattice spacings was found, suggesting a garden-variety type discontinuous transition driven by nucleation [47]. 4-core percolation in four dimensions contains finite clusters, which provide a backbone for nucleation. In force-balance percolation, however, there are no finite clusters so one may expect a more unusual discontinuous transition.
Therefore, the scenario for force-balance percolation that is most consistent with our data is that while the onset of the infinite force-balance cluster is discontinuous, there is an exponentially diverging crossover lengthscale, and perhaps a diverging correlation lengthscale. Our limited data for the standard correlation length defined in the connected phase makes it difficult to discern any trend for growth, exponential or otherwise. In continuous phase transitions, the crossover length and correlation length diverge in the same way. With this more unusual transition, it is not necessarily obvious that the same behaviour should apply. Moreover, we find quantitative agreement with the dynamical exponent for sandpile models not only for the force-balance models but for the spiral model as well, again, suggesting that they all are in the same universality class. Finally, we expect that three-dimensional versions of jamming percolation and force-balance percolation should exhibit similar behaviour as well, as our data suggests.
Though the usual order parameter, the fraction of sites in the infinite force-balance cluster, does not appear to be continuous, are there other candidates for an atypical continuous order parameter? A potential candidate is to look at the subset of the force-balance spanning cluster where the connectivity is marginal, i.e. 3-connected. However, we do not find any evidence for a fractal, spanning 3-cluster at the force-balance percolation transition, nor for a fractal, spanning 4-cluster. Given that p_c is around 0.4, each site has approximately 10 neighbors at the transition. The 3-core condition is thus completely superseded by the vectorial constraint, at least for this lattice model with many nearest neighbors. While the dynamics of culling suggest a critical sandpile-like model for the removal of redundant sites, the removal of the marginal infinite cluster does not. So at this time, we have not discovered an order parameter which is continuous at the transition.
While the jamming and force-balance percolation models lead to a discontinuous transition (provably in the first case, and most likely in the latter), the fraction of the sites in the infinite cluster grows linearly in p − p_c. It would certainly be interesting to uncover other finite-dimensional models that have a discontinuous transition in which the fraction of sites in the infinite cluster grows nonlinearly in p − p_c above the transition; such models would behave more like mean-field models. At this point, we know of no such models. It may be that finite-dimensional models of correlated connectivity percolation are too simple to capture this aspect of jamming, and that one has to define forces on the network, as in rigidity percolation. We are currently working in this direction.
MJ and JMS would like to acknowledge very helpful discussions with Andrea J. Liu and a helpful comment from Cris Moore. JMS would like to acknowledge the Aspen Center for Physics where part of this work was completed. Finally, we acknowledge support from NSF-DMR-0645373 and NSF-DMR-0605044.
|
2008-06-10T00:32:59.000Z
|
2008-06-10T00:00:00.000
|
{
"year": 2010,
"sha1": "8004e25c6743f6944f49e4eb55cfce63d7f2a1f1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0806.1552",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8004e25c6743f6944f49e4eb55cfce63d7f2a1f1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics",
"Medicine"
]
}
|
257656318
|
pes2o/s2orc
|
v3-fos-license
|
Electrochemical Determination of Chemical Oxygen Demand Based on Boron-Doped Diamond Electrode
A rapid and environment-friendly electrochemical sensor to determine the chemical oxygen demand (COD) has been developed. The boron-doped diamond (BDD) thin-film electrode is employed as the anode, which fully oxidizes organic pollutants and provides a current response in proportion to the COD value of the sample solution. The BDD-based amperometric COD sensor is optimized in terms of the applied potential and the solution pH. Under the optimized conditions, the COD sensor exhibits a linear range of 0 to 80 mg/L and a detection limit of 1.1 mg/L. Using a set of model organic compounds, the electrochemical COD sensor is compared with the conventional dichromate COD method. The result shows an excellent correlation between the two methods.
Introduction
Chemical oxygen demand (COD) is a critical parameter to evaluate water quality. COD is defined as the oxygen equivalents required to decompose the organic compound using a strong oxidizing agent [1]. The conventional COD methods involve the digestion of organic compounds by dichromate or permanganate, followed by the colorimetric detection of the oxidant consumption. This procedure is time consuming, taking 2-4 hours to achieve a complete oxidation of the sample [2,3]. Furthermore, the use of toxic reagents, such as chromium and mercury [4,5], provokes health and safety concerns. Therefore, efforts have been made to overcome these disadvantages by developing a fast and environment-friendly analytical method that can replace the conventional method.
Electrochemical advanced oxidation processes (EAOPs) have acquired high relevance for water remediation technology [6,7]. An EAOP can proceed by either a direct or an indirect pathway [8]. Whereas the direct oxidation involves a direct electron transfer between the electrode and organic compounds, the indirect oxidation is mediated by hydroxyl radicals and other active intermediates produced on the anode surface by the discharge of water or electrolyte [9]. The hydroxyl radical is a powerful oxidant [10] possessing a high standard potential (E°(•OH/H2O) = 2.38 V vs. SHE) [11-13], thus •OH can fully oxidize the organic compound into carbon dioxide [14]:

CyHmOjNkXq + (2y − j)H2O → yCO2 + kNH3 + qX− + (4y − 2j + m − 3k)H+ + ne− (1)

Here CyHmOjNkXq is the organic compound to be oxidized, in which X is a halogen atom. The number of electrons n required for the complete oxidation of an organic molecule is n = 4y − 2j + m − 3k − q. Note that the oxidation product of nitrogen atoms is NH3, not NO2, as the standard COD method using dichromate will not oxidize NH3 in strong acid solution [15].
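As a quick check on this electron bookkeeping, the short sketch below evaluates n for a few familiar molecules; the compound choices are illustrative and are not drawn from the measurements reported here.

```python
def n_electrons(y, m, j, k=0, q=0):
    """Number of electrons released by the complete oxidation of
    CyHmOjNkXq to CO2 (N ending as NH3, halogens as X-), per Eq. 1."""
    return 4 * y - 2 * j + m - 3 * k - q

print(n_electrons(6, 12, 6))  # glucose C6H12O6 -> 24 electrons
print(n_electrons(3, 4, 4))   # malonic acid C3H4O4 -> 8 electrons
print(n_electrons(6, 6, 1))   # phenol C6H6O -> 28 electrons
```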
In EAOPs, electrode materials play a vital role, directly influencing the reaction mechanism [16] and the process and energy efficiency [17]. Traditional anode materials, such as graphite and platinum, have been primarily employed for organics oxidation. However, these materials can be gradually deactivated due to the formation of an adsorbed layer on the electrode surface [18]. TiO2 has also been studied as a nontoxic, inexpensive, and photosensitive electrocatalyst for amperometric COD determination. The main disadvantage of TiO2 is that it requires a UV light source [16] to generate electron/hole pairs. Moreover, the high recombination rate of the active species results in a lower photocatalytic activity, a narrower working range, and irreproducibility [19]. Boron-doped diamond (BDD) has been suggested as a promising anode material for electrochemical incineration. It can produce and accumulate a high density of hydroxyl radicals and other active intermediates on its surface [20,21]. Its wide potential window in an aqueous medium [22] enables the material to exhibit a high overpotential for oxygen evolution [6,23], which is beneficial for organic oxidation. BDD electrodes also offer excellent stability over prolonged periods due to their chemical and physical robustness [24], high anodic stability, and good conductivity for electrochemical purposes [25]. BDD electrodes are very effective for the degradation of organic pollutants, such as phenols [26], aromatic amines [27], textile dyes [28], and refractory herbicides [29].
Previously, the feasibility of COD determination based on BDD electrodes was demonstrated [30,31]. However, more research is needed on the mechanistic aspects of the electrochemistry at the BDD electrode. In this work, we provide a theoretical explanation for the electrochemical COD determination based on the mass-transfer-limited reaction. In addition, the effect of essential parameters was investigated to achieve optimum analytical performance. In particular, since the applied potential and solution pH largely determine the kinetics of organics oxidation, and thus the efficiency of electrochemical COD measurements, their optimization is important. Sensor behavior was evaluated using a set of model organic compounds. Finally, the novel E-COD sensor was validated against the conventional dichromate method.
Chemicals
All chemicals were used as received (Daejung Chemicals) and solutions were prepared with ultrapure water (resistivity = 18 MΩ·cm).
Devices and Equipment
The electrochemical measurements were performed with a potentiostat (BioLogic SP-150) in a three-electrode cell configuration at ambient temperature. Experiments were conducted in an undivided electrochemical cell (PECC-2 Cells, Zahner) with a solution volume of ~7 cm3. A pre-treated BDD electrode with an exposed area of 2.54 cm2 (d = 18 mm) was employed as the working electrode. A Pt coil and a saturated calomel electrode (SCE) were used as the counter electrode and reference electrode, respectively.
Electrode preparation
The anode was a BDD thin film grown on a thick monocrystalline Si substrate via the hot filament chemical vapor deposition method (NeoCoat). The boron doping level was 5000 ppm, while the film thickness and resistivity were 3 µm and 100 mΩ·cm, respectively. Before loading, the BDD electrode was cleaned by sonication in acetone, isopropyl alcohol, and deionized water for 10 minutes. The degreased BDD was then treated in 1.0 M HNO3 for an hour to remove any impurities on the surface, rinsed with ultrapure water, and dried with N2 gas.
Standard COD method
For comparison, the standard COD method using dichromate digestion was performed following a previously reported procedure [32]. In short, a digestion solution was prepared containing 5.0 g/L K2Cr2O7 + 16.0 g/L Hg2SO4 in conc. H2SO4. A catalyst solution was separately prepared by dissolving 0.22 g Ag2SO4 in 40 mL conc. H2SO4. For digestion of organic compounds, 2.5 mL of sample solution was mixed with 1.5 mL of digestion solution and 3.5 mL of catalyst solution. The mixture was heated at 150 °C for 2 hours. After cooling down to room temperature, the absorbance of the digested solution at 440 nm was measured with a UV-VIS spectrometer.
Electrochemical procedures
The amperometric detection under well-stirred conditions was used to determine the COD value.
Before each E-COD measurement, potential cycling was conducted for 20 cycles to pre-condition the BDD electrode in blank 0.1 M KNO3. The solution pH was adjusted by adding a proper amount of 1.0 M HNO3 or 1.0 M KOH. A proper potential was applied to allow the background current to reach a steady state. Subsequently, aliquots of organic compounds were injected to obtain a current increase, which was measured as the response current.
Electrochemical characterization
Electrochemical characterizations were carried out to evaluate the anodic behavior of the BDD electrode toward organic compounds. As shown in Fig. 1(a), in a blank electrolyte (0.1 M KNO3), a very low background current is observed between +1.1 V and +2.2 V (vs. SHE), which is typical of a BDD electrode with a low double-layer capacitance. The anodic current begins to rise at ~2.3 V, marking the onset of water oxidation at the BDD electrode. This onset potential is close to the theoretical potential for hydroxyl radical generation (E°(•OH/H2O) = 2.38 V), so it is expected that hydroxyl radicals are generated by the BDD above the onset potential [3].
When malonic acid is added as a sample organic compound, an additional current is observed above the background, as shown by the red line in Fig. 1(a). We note that the oxidation of the organic compound occurs in the potential region for •OH generation (2.2-2.5 V). This indicates that the oxidation of malonic acid proceeds mainly via reaction with hydroxyl radicals rather than by direct oxidation on the BDD electrode. The additional current in the presence of the organic compound is the basis for E-COD measurements.
On the other hand, the voltammetry for phenol on the BDD in Fig. 1(b) exhibits a different behavior. The onset potential (~1.3 V) for phenol is much more negative than that for malonic acid, which can be attributed to the direct oxidation of the aromatic phenol on the BDD electrode [26]. However, at positive potentials (>2.3 V), where the BDD electrode begins to produce hydroxyl radicals, indirect oxidation (via hydroxyl radicals) should accompany the direct oxidation. Again, the current difference between the sample and the background offers the basis for the E-COD measurements.
Chronoamperometry of organic compounds
Fig. 2 shows the chronoamperometry of different concentrations of malonic acid at an anodic potential where hydroxyl radicals are generated and the organic compound is oxidized. Note that the electrolyte solution is stirred, so that mass transfer is both diffusive and convective. As long as constant stirring is applied to the solution, the measurement gives consistent and stable responses over an extended time regime. After the initial current spike, the current approaches a steady-state value. The background current (dotted line) is from pure water oxidation, while the sample current in the presence of the organic compound has an additional contribution due to the oxidation of the organic compound [33]. A linear increase of the steady-state current is observed with increasing malonic acid concentration.
The linear increase of the steady-state current with organic concentration can be understood as follows. We assume that (i) organic compounds are fully oxidized at the electrode surface; (ii) the overall reaction rate is determined by the mass transfer of the organic compound (mass-transfer-limited process); and (iii) the bulk concentration of organic compounds remains unchanged within the time scale of the experiment. Then, the steady-state current can be represented as [34]:

i_ss = nFADC_b/d (2)

where i_ss is the steady-state current, n the number of electrons, F the Faraday constant, A the electrode area, D the diffusion coefficient, d the thickness of the diffusion layer, and C_b the bulk concentration of organic compounds [3]. Eq. 2 states that the steady-state current i_ss is proportional to the bulk concentration of the organic compound. Note that the molar concentration can be converted to the equivalent COD value (mg/L of O2) using:

[COD] = 8000nC_b (3)

Substituting Eq. 3 into Eq. 2 gives:

i_ss = FAD[COD]/(8000d) (4)

If we assume that D and d are relatively constant for different organic compounds, we conclude that i_ss is solely determined by [COD].
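To make the scaling in Eq. 4 concrete, the sketch below evaluates the mass-transfer-limited current for an assumed diffusion coefficient and diffusion-layer thickness; both transport parameters are illustrative round numbers, not values measured in this work.

```python
F = 96485.0  # Faraday constant, C/mol

def i_ss_from_cod(cod_mg_L, A_cm2=2.54, D_cm2_s=1.0e-5, d_cm=0.04):
    """Eq. 4: i_ss = F*A*D*[COD] / (8000*d), with [COD] in mg/L of O2.
    The factor 8000 converts mg/L of O2 into electron-moles per litre
    (Eq. 3), and the extra factor of 1000 converts litres to cm^3 so
    the result comes out in amperes for A, D, and d given in cm units."""
    return F * A_cm2 * D_cm2_s * cod_mg_L / (8000.0 * 1000.0 * d_cm)

# For these assumed D and d, 100 mg/L COD gives roughly 0.8 mA, the same
# order of magnitude as the KHP calibration reported further below.
print(i_ss_from_cod(100.0) * 1e3, "mA")
```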
Effect of applied potential
In order to optimize the potential with respect to the sensor response, chronoamperometric measurements were repeated at a range of applied potentials. As shown in Fig. 3, I_net, which is the difference between the sample and the background currents, was highest at 2.5 V. At lower potentials, I_net was lower because the oxidation of organic compounds was slower. At higher potentials, I_net was lower, probably due to excessive oxygen evolution, which reduces the active electrode area. Furthermore, vigorous oxygen evolution at more positive potentials resulted in unstable and irreproducible responses. Consequently, 2.5 V was selected for the following experiments owing to its sensitive and stable current response.
Effect of solution pH
The influence of pH on the analytical performance was then investigated. Fig. 4 shows the I_net values as a function of solution pH. At low pH (pH = 1-2), the I_net value is small because the onset potential of hydroxyl radical generation (E(•OH/H2O)) becomes more positive.
When the pH was adjusted to a higher value, the I_net value was relatively independent of pH in the range of 3 to 10. At higher pH, water molecules are readily oxidized to generate hydroxyl radicals, which leads to the oxidation of organic compounds and a higher I_net. However, under strongly alkaline conditions (pH > 10), the main reaction turns into oxygen evolution, and a vigorous generation of oxygen bubbles is observed on the BDD surface, which results in irreproducible measurements. Accordingly, a pH in the neutral range was employed for the subsequent experiments.

Fig. 2. Chronoamperometric responses recorded in 0 to 100 mg/L COD of malonic acid (solid line). Supporting electrolyte: 0.1 M KNO3 (dashed line). Applied potential: 2.5 V (vs. SHE); pH: 5.

Fig. 3. Effect of applied potential on the I_net value of 100 mg/L COD of malonic acid. Supporting electrolyte: 0.1 M KNO3.
Linear response of the BDD electrode
A typical current response toward organic compounds under the optimized conditions is shown in Fig. 5. Initially, a stable background current is observed when the appropriate potential bias is applied. The injection of an organic compound causes a sharp increase of the anodic current, and a steady-state current is reached quickly, within ~10 seconds. As expected from Eq. 4, the steady-state current increases linearly with the concentration of the organic compound.
Measurement of model organic compounds
The applicability of the BDD sensor was evaluated through the E-COD determination of various organic compounds. A list of organic compounds was selected as model organic samples. The first type comprised organic pollutants often contained in wastewater (malonic acid, salicylic acid, and phenol), while the second comprised compounds often used as standards for COD analysis (glucose and potassium hydrogen phthalate, KHP). The measured I_net and theoretical COD (mg/L of O2) of the various organic compounds are presented in Fig. 6. As previously defined, I_net is the difference between the sample and the background currents, as demonstrated in Fig. 5, and the theoretical COD is obtained by converting the molar concentration of each organic compound into its COD equivalent using Eqs. 1 and 3. A strong linear relationship is found between I_net and the theoretical COD for the various organic compounds (correlation coefficient R2 = 0.97). The strong linearity for different organic compounds indicates that our assumption of relatively constant D/d values for different organic compounds in Eq. 4 is valid. From the measurements, the linear range of the E-COD measurements was found to extend up to ~80 mg/L and the detection limit to be 1.1 mg/L (S/N = 5). This compares with a previous report, which claims a linear range of 20 to 9000 mg/L and a detection limit of 7.5 mg/L [31].
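The linearity and R2 reported above amount to an ordinary least-squares fit of I_net against the theoretical COD. A sketch is given below; the data points are hypothetical placeholders, not the measurements of Fig. 6.

```python
import numpy as np

# Hypothetical (theoretical COD, I_net) pairs in mg/L and mA.
cod = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
i_net = np.array([0.08, 0.17, 0.32, 0.50, 0.64])

# Least-squares line and coefficient of determination.
slope, intercept = np.polyfit(cod, i_net, 1)
pred = slope * cod + intercept
r2 = 1.0 - np.sum((i_net - pred) ** 2) / np.sum((i_net - i_net.mean()) ** 2)
print(f"slope = {slope:.4f} mA per (mg/L), R^2 = {r2:.3f}")
```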
In conventional COD measurements, KHP is commonly used as a standard material for calibration. So, the E-COD sensor response for KHP in Fig. 6 is used for calibration, giving the calibration equation E-COD (mg/L) = 123 × I_net (mA). With the obtained calibration equation, the I_net values in Fig. 6 are converted into E-COD, as shown in Fig. S1.
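In code, applying this calibration is a one-line conversion; the example current below is a hypothetical reading, not a measured value.

```python
def e_cod_mg_L(i_net_mA, slope=123.0):
    """E-COD (mg/L) = 123 x I_net (mA), the KHP calibration given above."""
    return slope * i_net_mA

print(e_cod_mg_L(0.45))  # a hypothetical 0.45 mA net current -> ~55 mg/L
```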
Finally, in order to confirm that the novel E-COD sensor is compatible with the conventional COD methods, conventional COD measurements (using dichromate) of the model organic compounds were conducted. The model organic compounds were digested by dichromate in a strong acid solution and the COD values were measured by colorimetry, following the well-established procedure (KHP was used as the standard material for calibration). Fig. 7 shows the comparison of the E-COD and the conventional COD results for the model organic samples. A strong correlation is found between the two methods (slope = 1.0067, correlation coefficient R2 = 0.96). The average difference between the COD values from the two methods is ~7%. This indicates that the novel E-COD sensor produces measurement results that are compatible with the conventional COD method using a strong oxidant.
Conclusions
As an alternative method to measure COD in a faster and more environment-friendly way, an electrochemical method for COD determination was developed. The novel E-COD sensor is based on a conductive diamond electrode, which exhibits a wide potential window and high stability even at extreme anodic potentials. Sample organic compounds are completely oxidized on the BDD electrode, which produces a mass-transfer-limited current. Thus, the steady-state current is proportional to the bulk concentration of organic compounds, as demonstrated by the linear amperometric response of the E-COD sensor. The compatibility of the novel E-COD sensor with the conventional dichromate method was demonstrated. Providing a faster, safer, and more compact COD device, the E-COD sensor has the potential to replace the conventional lab-based, time-consuming COD method.
Fig. 4. Effect of solution pH on the I_net value of 100 mg/L COD of malonic acid. Supporting electrolyte: 0.1 M KNO3. Applied potential: 2.5 V (vs. SHE).
Fig. 6. A linear increase of I_net in response to the COD of model organic compounds. (E_applied: 2.50 V vs. SHE, pH: 5.)
Fig. 7. Strong correlation between the E-COD and the conventional COD using dichromate oxidant.
|
2023-03-22T15:17:34.448Z
|
2023-03-20T00:00:00.000
|
{
"year": 2023,
"sha1": "5429bc8bc1a477948acf2e2954f0b8dbbaa4dae1",
"oa_license": "CCBYNC",
"oa_url": "https://www.jecst.org/upload/pdf/jecst-2023-00017.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e25ae709924ab02ec9afcc09a1b3bd4160e3db5f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
}
|
221106522
|
pes2o/s2orc
|
v3-fos-license
|
Professional training on shared decision making with older adults living with neurocognitive disorders: a mixed-methods implementation study
Background Shared decision making with older adults living with neurocognitive disorders is challenging for primary healthcare professionals. We studied the implementation of a professional training program featuring an e-learning activity on shared decision making and five Decision Boxes on the care of people with neurocognitive disorders, and measured the program's effects. Methods In this mixed-methods study, we recruited healthcare professionals in family medicine clinics and homecare settings in the Quebec City area (Canada). The professionals signed up for training as a continuing professional development activity and answered an online survey before and after training to assess their knowledge of, and intention to adopt, shared decision making. We recorded healthcare professionals' access to each training component, and conducted telephone interviews with a purposeful sample of extreme cases: half had completed training and the other half had not. We performed bivariate analyses with the survey data and a thematic qualitative analysis of the interviews, as per the theory of planned behaviour. Results Of the 47 participating healthcare professionals, 31 (66%) completed at least one training component. Several factors restricted participation, including lack of time, fragmentation of training into several components, poor adaptation of training to specific professions, and technical/logistical barriers. Ease of access, ease of use, the usefulness of training content and the availability of training credits fostered participation. Training allowed healthcare professionals to improve their knowledge about risk communication (P = 0.02) and their awareness of the options (P = 0.011). Professionals' intention to adopt shared decision making was high before training (mean ± SD = 5.88 ± 0.99 on a scale from 1 to 7, with 7 high) and remained high thereafter (5.94 ± 0.9). Conclusions The results of this study will allow us to modify the training program to improve participation rates and, ultimately, the uptake of meaningful shared decision making with patients living with neurocognitive disorders.
Background
The care of older adults living with neurocognitive disorders (NCDs) requires making difficult decisions. For instance, the disabling and multi-morbid nature of this condition involves selecting services to reorganize daily life, choosing pharmaceutical or non-pharmaceutical treatments, and preparing advance care plans and directives. Because there are generally several acceptable options for these decisions, decision making should consider the experiences, preferences, and values of the older adult living with NCDs and their family or friend caregiver. The shared decision-making (SDM) process is ideal for guiding decision making in this context, as it relies on a discussion among all parties to balance evidence-based healthcare information, the expertise of the healthcare professional (HCP), and the experiential knowledge, values, and preferences of the person living with NCDs and their family/friend caregivers. A large systematic review recently established that SDM helps improve patients' knowledge of the options, congruency between their values and care choice, comfort with the decision, and engagement in decision making [1]; however, adoption of SDM by HCPs in their routine practice is still suboptimal [2]. SDM implementation in the context of caring for older adults with NCDs remains largely unexplored. Implementation studies have been conducted with nursing home residents living with NCDs [3,4], and a single study has been completed to date to implement SDM among the interprofessional care team, family/friend caregivers, and community-based older adults living with NCDs, for housing decisions [5]. Decision making in the context of NCDs is particularly challenging, as decisions are often emotionally laden [6] and complicated by the disease, ethical and legal dilemmas, and the presence of multiple stakeholders [7]. As a result, older adults living with NCDs are typically excluded prematurely from decision making [7]. Hence, there is a need for studies to inform the implementation of professional training and patient/caregiver decision aids to support SDM among community-based older adults living with NCDs, their caregivers, and interprofessional teams.
A recent systematic review of 15 studies highlights a lack of evidence on the effectiveness of different types of interventions to improve SDM adoption among HCPs, such as educational meetings, educational material, educational outreach visits, and reminders [8]. Furthermore, although an environmental scan described 168 validated professional training programs in SDM [9], the assessment of their effects and implementation is heterogeneous, and there is still a lack of evidence on the best practices to develop, implement, and assess these training programs [10]. Participation in continuing professional development (CPD) strategies is challenging for HCPs, especially for those who work in remote areas and need to travel long distances to take part [11,12]. A recent systematic review suggests that remote online training, or e-learning, could be more accessible than, and as effective as, face-to-face training [13].
We thus set out to study the factors influencing participation in a professional e-learning program on SDM, comprising an e-learning activity and five Decision Boxes on the care of older adults with NCDs, by addressing three specific questions: (1) To what extent did HCPs participate in the training program? (2) What factors influenced their participation? (3) What effect did the program have on their knowledge about, and intention to adopt, SDM with these patients?
Description of the training program
This professional training program included (1) a self-directed e-learning activity on SDM, lasting about 1 h, that participants could complete in several sittings at their work location or at home; and (2) five evidence summaries, or Decision Boxes (DBs), to support decision making at the point of care.
The generic e-learning activity included four successive training modules that aimed to (1) explain SDM and its implementation in daily practice; (2) describe strategies for determining patients' values and preferences; (3) describe strategies for communicating probabilities to patients; and (4) explain how to incorporate SDM into clinical encounters with patients. The minimum durations of Modules #1 to #4 were 9, 20, 6, and 28 min, respectively. The design of this activity was based on our team's earlier work on CPD training to support SDM for acute respiratory tract infections [14].
The series of five evidence summaries described the options available to older adults living with NCDs who are faced with five important and frequent decisions that we identified in an earlier study (Table 1) [15]. We designed these evidence summaries as Decision Boxes (DBs), which are meant to provide stakeholders with evidence in a format supportive of SDM (i.e., one that avoids biasing decision making by concisely setting out the pros and cons of all available options, in absolute risks) [16,17].
Professionals who completed training (arbitrarily defined as completing the four e-learning modules and any one of the five DBs) were entitled to training credits. Participation was otherwise voluntary, since we offered no other incentives.
In parallel with this professional training program, we also designed and evaluated patient decision aids for each of these decisions, a project that is reported elsewhere [18].
Study design
In this explanatory, sequential, mixed-methods study, we assessed participation in each training component using quantitative access data (Fig. 1). We also asked participants to complete quantitative questionnaires before and after training, to assess the effects of the training program on their knowledge and intention. Our hypothesis was that participants' knowledge and intention would increase after training. Then, we sought to understand the factors influencing participation using semi-structured individual interviews with a subset of participants selected based on the results of the quantitative phase.
We originally planned this study as a clustered randomized trial, which is described elsewhere [19]. In short, that study aimed to assess the effectiveness of the professional training program in increasing the empowerment of older adults living with NCDs and their caregivers to make health-related decisions. Unfortunately, we experienced low patient recruitment rates, and instead studied the implementation of this training program and its effectiveness in influencing variables at the HCP level.
Participants
We recruited a convenience sample of HCPs from various professions (e.g., family physicians, nurses, and social workers) who practiced in family medicine clinics and homecare services in the province of Quebec, Canada. We presented the study to the HCPs during one of their regularly scheduled team meetings. Those who agreed to participate signed an informed consent form, completed a study entry questionnaire on their sociodemographic and professional characteristics (age, gender, profession, years of practice, city size), and responded to a question to assess their interest in each DB topic, on a slider scale ranging from 0 to 100, with 100 high.
Implementation strategy
The participants received an email with the access codes to the e-learning activity, and to the DB. After this first email, we sent them four more emails, one every 2 weeks, to give them access to each DB and to remind them of the e-learning activity. Overall, the participants had access to the training program from February 2018 to May 2018. The DBs were distributed in the same order to all participants, starting with the topic that they rated as the most interesting, on average, and then in decreasing order of interest (Table 1).
Quantitative data collection

Survey
The study participants completed an online survey before and after the e-learning activity. The survey included 27 questions: (A) one question on their prior training in SDM; (B) one question inspired by the Ottawa Decision Support Framework to assess their knowledge about SDM [20]; (C) one question to assess their knowledge about risk communication; (D) two questions with case-based scenarios for each DB topic, to assess participants' perceived and actual awareness of the available options; (E) eight questions, including case-based scenarios, to assess clinical knowledge relative to the care of older adults living with NCDs; (F) five questions to evaluate participants' intention to use SDM with their next patient facing a preference-sensitive decision, and the determinants of this intention (attitude, beliefs about capabilities, moral norm, and social influence), using a brief 5-item version of the CPD-REACTION [21]; (G) eight questions to assess perceptions of their ability to adopt SDM using the novel IcanSDM scale [22]; and (H) one question to assess their preferred role in decision making [23].
Access data
The website that supported the e-learning activity allowed us to record participants' access to each training component, as well as the time spent on each module in the e-learning activity. We were also able to record participants' access to the DBs when they answered a questionnaire about their experience using the tool, the results of which are reported elsewhere [24].
Qualitative data collection

Selection of a participant subsample
We aimed to recruit a subsample of 16 people from among the participants, to interview them about the factors that encouraged or restricted their participation in training. We estimated that this sample size should allow saturation, since the sample was relatively homogeneous and the questions discussed were straightforward and practical. Among the 16 people, we planned to recruit eight who had fully participated in training and eight who had not. To recruit people who had participated in training, we included a question in the survey that they completed after training, asking their permission to contact them by phone for a 30-min individual interview. We recruited by email those who had not participated.
Procedure
We conducted the individual phone interviews (roughly 30 min in length) 1 month after participants had completed the professional training program. We used a semi-structured interview guide to elicit (1) their attitude towards the training program, and (2) their beliefs about their capabilities to complete the training program. The interview guide was based on the Theory of Planned Behaviour [25], according to which a behaviour may be predicted by a person's intention (motivation) to adopt it, and a person's intention may, in turn, be predicted by several determinants, including belief about consequences, social influence, and beliefs about capabilities. We also asked a few questions to explore how they used what they learned through the program in their encounters with patients.
We recorded the interviews using audio-digital recorders, and transcribed the discussions verbatim.
Analyses
We completed descriptive analyses of all quantitative data. We used simple logistic regressions to identify the factors influencing completion of the training program. To this end, we first performed univariate analyses of the association between completion and each of the potential factors (sociodemographic and professional characteristics, prior training in SDM, interest in each DB topic). We then tested a simple logistic regression model comprising all the factors that demonstrated a significant effect on completion. All quantitative statistical analyses were conducted using the SAS statistical package (SAS Institute Inc), and bilateral statistical tests were performed at a significance level of 0.05.
We also performed bivariate analyses to compare questionnaire responses before and after the training program. In addition, we used paired Student's t-tests to compare mean scores and the Fisher test to compare proportions before and after training.
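Although the analyses were run in SAS, an equivalent workflow can be sketched in Python as below; the file name and column names are hypothetical placeholders rather than the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("participants.csv")  # hypothetical file: one row per HCP

# Univariate logistic regressions of completion (0/1) on each candidate
# factor, retaining those significant at the 0.05 level for a combined model.
candidates = ["age", "years_of_practice", "prior_sdm_training"]
retained = []
for factor in candidates:
    fit = sm.Logit(df["completed"], sm.add_constant(df[[factor]])).fit(disp=0)
    if fit.pvalues[factor] < 0.05:
        retained.append(factor)
print("factors retained for the combined model:", retained)

# Paired Student's t-test comparing mean scores before and after training.
t, p = stats.ttest_rel(df["knowledge_before"], df["knowledge_after"])
print(f"paired t = {t:.2f}, P = {p:.3f}")
```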
For qualitative data, three researchers independently conducted a thematic content analysis using a deductive approach, based initially on the individual factors described by the Theory of Planned Behaviour [25], and then on the themes emerging from the discussions. The analyses aimed to describe the factors encouraging and restricting participation in the training program.
We then explored the qualitative results to discover any confirmation, contradiction, cross-validation, or corroboration of the quantitative results [26]. Drawing on the various sources, the research team met to suggest strategies to improve the training program and its implementation.
We obtained ethical approval for this project from the ethics review boards of the Ministère de la Santé et des Services sociaux (reference CCER15-16-05) and the Centre Hospitalier Universitaire de Québec (reference 2016-2521). All participants signed consent forms for the study.
Participants
Of the 114 HCPs invited to participate, 72 (63%) accepted (Fig. 2). Eighteen left the study before signing the informed consent and completing the study entry questionnaire. Of the 54 participants who signed the informed consent and completed the study entry questionnaire, 47 completed the baseline survey and received access to training. Of these 47 people, 17 (36%) completed the final survey after training. Most of the 47 participants who completed the baseline survey were women (83%) (Table 2). They represented several professions, but most were physicians (36%), nurses (21%), or social workers (21%). Most of them (81%) had never had any training in SDM. They reported a mean interest in the topics covered in the DBs of 80% (± SD 14%); they were most interested in DB#1 (Non-pharmacological treatment to manage agitation, aggression, or psychotic symptoms) (mean interest = 87 ± SD 11%) and least interested in DB#5 (Deciding whether or not to prepare a power of attorney for personal care) (mean interest = 67 ± SD 32%) (Table 1).
Level of participation in the training program
Of the 47 participants who completed the baseline survey, 17 (36%) completed the four modules of the e-learning activity in addition to reviewing a minimum of one DB; 10 (21%) completed the four e-learning modules and reviewed all five DBs (Fig. 2). If we consider the DBs exclusively, 26 of the 47 participants (55%) reviewed at least one DB.
Completion time of the entire e-learning activity ranged from 40 min to 9 h, for an average duration of 57 min. Some participants spent as long as 5 h on the introduction, while 68% of the 47 participants spent less than 30 min on it. These estimates could, however, reflect the time during which people were connected to the activity without being actively engaged in doing the training.
Of the 11 participants who accessed the e-learning activity and did not complete all the modules, nine stopped after less than 5 min on the Introduction or on Module 1 (82%), and two completed the first two modules only, in about 1 h (18%).
The logistic regression analysis of the factors influencing participation in the training program identified no statistically significant factor explaining completion.
Interview findings: factors influencing participation in the training program
We recruited and interviewed 11 participants instead of the 16 planned, since we reached data saturation, with no new themes emerging in the last interviews conducted [27]. Of these 11 participants, six had completed all the modules of the e-learning activity and one DB, and five had not. These interviews allowed us to identify several factors encouraging or restricting participation in training, with regard to participant attitudes and beliefs about their capabilities. The study of these factors then led us to pinpoint specific strategies for improving the training program and its implementation. These findings are described in the next paragraphs.
Factors encouraging participation
Both the participants who fully completed the training program and those who partially completed it had generally positive attitudes towards it (Table 3). They perceived the training program as useful for learning about SDM, for improving their management of the problems faced by older adults living with NCDs, and for improving their communication with them. Participants especially appreciated the fact that training made them aware of the DBs and other patient decision aids. Some participants mentioned that the DBs covered topics of interest for practice, and that they helped meet their clinical needs. Several participants also pointed to the usefulness of the DBs because they presented various interventions, with their pros and cons. A number of participants mentioned how completing the program trained them to communicate understandable information on all the options to patients, and to provide them with guidance.
The participants reported several factors encouraging their participation with regard to their beliefs about their capabilities to participate. Most mentioned ease of access to the training program as a factor encouraging their participation. They mentioned that it was easy to do, as it was concise and clear, and they appreciated the short modules of the e-learning activity, which made it easier to retrieve information. They also valued the DBs from a practical point of view, noting that accessing the information in the DBs is quick due to their brevity, the standardized presentation of information, and the availability of different DBs for each clinical situation. They further appreciated that DBs were printable, and mentioned their convenience as a source of information for patients/caregivers after the consultation. Participants also appreciated that completing training did not require any prerequisites. They also found that the training program was easy to understand and visually appealing, and they appreciated that it provided practical training. The participants reported that they found the DBs well explained and that they offered practical guidance. One participant also mentioned how the learning was applicable to other clinical situations. Participants also mentioned that extrinsic sources encouraged their participation, in particular the email follow-ups and the associated continuing education credits.
The participants made several suggestions to improve participation in the training program, the most important of which was to shorten the training and integrate it formally into participants' working schedules. Some also mentioned that it would be desirable to be able to adjust the speed of the narration. Regarding implementation, the participants suggested extending dissemination of the DBs, especially to employers and decision-makers.
Factors restricting participation in the training program
All of the factors mentioned by participants as limiting their participation were aspects of their beliefs about their capabilities to participate (Table 4). The factor most frequently reported was their lack of time, or the time required. Participants explained how it was generally difficult for them to find time to complete training. They also mentioned that the time required to adopt the tools may be longer for professionals lacking experience in the topic. A number of participants mentioned that the period selected to distribute the program was suboptimal because it was a particularly busy time.
Among the other factors limiting participation, we identified certain disadvantages of the training components themselves, technical or logistical barriers, social barriers, and difficulties in using the DBs. For example, some participants reported that spacing out the training elements over several weeks made the training more difficult to follow. The technical barriers reported consisted of difficulties accessing the Internet or navigating the website. Participants also indicated that their colleagues and employer influenced their determination to complete the training. Lastly, participants indicated that the use of DBs entailed certain costs and required adapting them to each patient individually before integrating them formally into their practice.
Strategies for improving the training program and its implementation
Based on the factors identified as encouraging and restricting participation in training, on the strategies proposed by participants themselves, on the authors' experience, and on scientific evidence, we identified a set of strategies for improving the learning components used in the training program, as well as strategies to improve its implementation (Table 5). A few of the proposed strategies address time constraints, such as officially incorporating training into the participant's schedule and adapting training length to individual needs and experience. Other proposed strategies address the inconvenience of the training components, such as facilitating access to the DBs, making the online activity available in print format for regions with limited Internet access, and including a user's guide for those learners who are less Internet-savvy.

Table 3. Factors encouraging participation in the e-learning program (each factor is followed by a sample citation and its source; citations were translated from French)
Attitude
The program is useful for learning about SDM: "For me it was an introduction to the concept of shared decision making, it raised my awareness about it." (Occupational therapist #29, home support service #3)

The program is useful for practice; provides ideas on ways to manage NCD problems: "It has all kinds of information sources that are useful for my practice. I've had a slew of interventions in BPSD (behavioural and psychological symptoms of dementia) recently, and it was a good source of inspiration." (Occupational therapist #29, home support service #3) "I thought the Decision Boxes were fun. They helped give us ideas on how to conduct interviews using shared decision making. Even though there are some topics that don't have Decision Boxes, they were still useful tools for understanding how to interact with patients." (Dietician #9, home support service #1)

Provides an introduction to DBs: "It was interesting to learn that decision aids exist. And they were well made. The training was well organized, and it included information on the Laval University and Ottawa sites where you can find them. It was good to know." (Dietician #9, home support service #1)

DBs cover topics of interest for practice: "I found the topics interesting and felt they could be very useful in my practice." (Physician #55, Clinic #1)

DBs help meet clinical needs: "The training informed me of alternative non-pharmacological treatments to meet the needs of clients." (Nurse #65, Clinic #1)

DBs present various interventions and their pros and cons: "It helps us move beyond the scope where the nurse has the answer, since it's the client, instead, who has to choose. But at the same time, to make a choice, they need to understand the pros and cons, so they can make an informed decision. In some cases, it's not as easy as that, but with these Decision Boxes, it really gives us a good idea of what the options are, as well as the pros and cons, in pretty simple terms. It's good." (Nurse #26, Clinic #4)

DBs facilitate the communication of information to patients: "It's something I consider necessary, that is, being able to provide information that is easily understood by our clientele, by the patient, and their loved ones. The training program gives us an appreciation of the work that's been done to help facilitate our task of delivering information in a format that's easy to understand, and that can be consulted by the patient's family not during the encounter, but afterwards." (Physician #73, Clinic #2) "The training gives us a good understanding of how to guide patients using evidence, according to the topic, like driving or how to provide support to caregivers, things like that. I think it helps us see all the possible avenues, with their pros and cons, and it helps us provide guidance to the people we work with." (Physiotherapist #46, home support service #2)

Beliefs about capabilities

Ease of access

DBs are printable: "I plan to print out the Decision Boxes in question. I posted them on the family medicine intranet because I, personally, find them very useful. Once I have access to a colour printer, I plan to collect them together in a binder, so I can access them in the clinic and use them in a teaching context with our resident doctors so that they, too, can use them in their interactions with patients. Plus, I'll make available the Decision Boxes designed for patients and/or their loved ones." (Physician #73, Clinic #2)
Value of having one version of the DBs for clinicians and a simpler version for patients: "What's interesting too is that there's a part that's really more for the professional, to guide their intervention, and a simpler part that's more for the patient." (Social worker #20, home support service #1)

Access to DB information in practice is quick due to their brevity, the standardized presentation of information, and a separate DB for each clinical situation: "What I found interesting with the modules is that you can seek out certain specific parts. For example, if I'm faced with Problem X, I can go straight to the Decision Box on that particular topic. It's easier than having to wade through a long module that's not divided into topics, and where you have to search to find your information. But with the short Decision Boxes, you can quickly find what you're looking for." (Social worker #20, home support service #1)

DBs are available to patients/caregivers after the consultation if they require more information: "What is also interesting is that the DBs are available in a format that can be consulted after the consultation by patients and their families." (Physician #73, Clinic #2)

Short modules make it easier to retrieve information from the e-learning activity: "The training module is interesting too. It's concise, not too long, and the sheets are pretty quick to complete. I think it's a winning formula. The fact that it's short and concise makes it easier to use." (Nurse #26, Clinic #4) "They're really easy to use and to find your way around." (Nurse #65, Clinic #1)

The training program is easy to do: brief, concise, clear, well explained: "I liked the fact that it's not too long, it's set out clearly, it's well explained. It was quick to use." (Nurse #65, Clinic #1)

Flexible nature of the training program: easy to access at the most convenient time for the learner, and at their own pace: "It's good that it is possible to access it at the moment we choose, at the right moment: it's the flexibility." (Physician #73, Clinic #2)
Ease of use
No prerequisites for the training program: "There are not really any prerequisites; I would say that anyone working in a clinic with a minimum level of experience would be able to complete it." (Nurse #26, Clinic #4)

Training program provides easy-to-understand, visual and practical training: "I found it visually appealing, and the fact there were examples gave me better ideas on how to interact with my patients." (Dietician #9, home support service #1)

Applicability of learnings to other clinical situations: "I think that it could be used afterwards for other types of clienteles. The Decision Boxes incorporate a slightly more standardized practice when it comes to sharing information with the clientele, and to shared decision making." (Nurse #26, Clinic #4)

DBs are well explained and provide concrete guidance: "The DBs are relatively short (2-3 pages), there's not too much information. They're easy to find your way around, easy to follow and to use in the workplace." (Social worker #20, home support service #1)

Extrinsic sources of motivation

Incentives: continuing education credits: "There's an incentive with the training units." (Occupational therapist #29, home support service #3)

Participation encouraged by reminders and follow-ups during training: "I thought the emails you sent to remind us and to inform us when new Decision Boxes were available was a good approach." (Dietician #9, home support service #1)

Table 4. Factors restricting participation in the e-learning program (each factor is followed by a sample citation and its source; citations were translated from French)

Time required to take ownership of the tool may be longer for professionals lacking experience in the topic: "Of course, taking ownership of the tool also takes time." (Physiotherapist #46, home support service #2)
Inconvenience of training components
Training fragmented into an e-learning activity and evidence summaries makes it difficult to follow; multiple stages: "I think I would have preferred to do it all at once." (Social worker #50, home support service #2)

DB content not well adapted to some professional fields of practice (e.g., dieticians): "The Decision Boxes are certainly interesting, but they don't apply to every situation. There weren't any in my area of practice. So for me, it's a bit less motivating. I don't think I'll be using any of the ..."

Negative influence of colleagues who did not complete the training program: "I suspect I'm not the only one at the clinic who didn't have time to complete the online training because we have a lot of reading to do, forms to fill out, a family life when we can, so adding this on top." (Physician #55, Clinic #1)

Lack of formal recognition of training by the employer: "It would have been good to receive more official recognition from the employer." (Physiotherapist #46, home support service #2)

Evidence summaries unknown to/not popular with colleagues: "There is a lack of awareness that there are decision-making tools. They should be promoted even more." (Physiotherapist #46, home support service #2)
Difficulty in using DBs
Costs associated with printing the DBs: "The printouts, especially the colour ones, which are more attractive, can generate costs, especially if you need them for the patients and their families as well." (Physician #73, Clinic #2)

Preparation required before using them during consultation (access, printing, etc.): "It takes a lot of steps to go find the link to access the tool, to go on the internet, to then be able to print it. It is a good tool, but it would be good to have it at your fingertips so it can be used. And since I didn't have it at my fingertips ..." (Physician #73, Clinic #2)

Some figures in the DBs difficult to interpret and less relevant: "There are, for example, times when the percentages are not easy to interpret or apply. There are some that are relatively easy, like indoor gardening. As advantages, we see that agitation decreases for 64% of seniors: it is relatively easy. But for others, it is sometimes less obvious. We understand that the therapeutic touch decreases restlessness in 28-54% of cases. We understand that it can reduce agitation, but I don't think I will use these figures, I will use the averages more." (Social worker #20, home support service #1)

DBs not available for all patients: "When we're at the clinic with patients, there are often a number of priorities to be addressed. It's rare that there's only one reason for consulting, and that that reason happens to be one of the ones addressed in the Decision Boxes." (Physician #73, Clinic #2)

Other tools already handed out to patients; DBs not yet incorporated into regular practice and can be cumbersome: "We give out lots of advice and we hand out all kinds of stuff, plenty of documents. We haven't gotten around to giving out additional tools. It just hasn't been part of our practice so far." (Occupational therapist #45, home support service #2)

Table 5. Strategies to improve the training program on shared decision making with older adults with neurocognitive disorders

To improve learning modalities

Offer the option of receiving the DBs in a single block rather than in sections.
Make the online activity available in print format for regions with limited Internet access.
Subdivide the longer modules.
Use podcasts.
Give participants the option of skipping the modules on topics they are already familiar with.
Clarify the availability of the tools throughout the training program, and promote their potential as a teaching aid for interns.
Create the possibility for learners to adjust the speed of the narration in the videos.
Make headphones available to learners in shared workspaces.
Make it easier to pick up training again after pausing.
Include a user guide for learners who are less tech-savvy.
To improve implementation
Include targeted messages to help promote the training program:
- By clarifying learners' preferred objectives (understanding SDM, learning about the tools, understanding the evidence about the different interventions)
- By highlighting the clinical issues covered by the DBs, since they are practice-oriented
- By promoting the usefulness of DBs to communicate information to patients

Maintain training credits as a source of motivation, enhance them if possible, and add other possible sources of motivation.
Make the training program shorter.
Officially incorporate the training into the participant's schedule by negotiating with immediate superior.
Provide training at a more convenient time of the year, e.g., in summer.
Adapt training length to individual needs and experience.
Make DBs easier to access:
- Facilitate patients' access to online DBs, e.g., by giving them the website address

Create DBs for all of the themes addressed in clinical encounters, and expand the practice areas covered.
Offer learners the chance to choose the DBs they wish to review, at the beginning of the training program.
In the online activity, present examples, clinical cases, or role-plays relating to various scopes of professional practice.
Simplify the data presented in the DBs.
In the online activity, explain how to present the wide confidence intervals associated with effect estimates.
Promote the tools with decision makers and employers (nursing or multidisciplinary department heads, professional bodies, universities), via webinar, for example.
Address the barriers mentioned during the learning program with presentations and credited workshops, in collaboration with officially recognized public authorities.
To improve dissemination of the tools, make them available in clinics, health institutes, libraries, and other public places.
Promote the option of doing the training as a group.
Offer incentives to participate, in the form of gifts, money, or meals.
Promote shared decision making in the population and directly support patients and their caregivers in participating in the clinical decision-making process.
Promote shared decision making at the level of government.
Survey results: effects of the training program

We partially confirmed our hypothesis that participants' knowledge and intention would increase between before and after training.
We did not observe any change in participants' knowledge about SDM after training (Table 6). By contrast, knowledge about risk communication improved significantly (P = 0.02). Moreover, we observed statistically significant improvements in HCPs' awareness (P < 0.001) and perceived awareness (P < 0.01) of the options after training. Training had variable effects on clinical knowledge, depending on the topic. Participants' level of intention to adopt SDM did not change between before and after training: intention to adopt SDM was high at baseline, and remained high thereafter. None of the determinants of intention measured through the CPD-REACTION (beliefs about capabilities, beliefs about consequences, social influence, moral norm) changed between before and after training. There was also no significant difference in participants' perceived ability to adopt SDM or in their preferred role in decision making (P = 0.82) before and after the training program. Before and after training, respectively, the preferred roles were distributed as follows: "I make the decision alone, relying on the best scientific evidence available", 0 (0.0%) and 0 (0.0%); "I make the decision, but strongly considering the opinion of the patient", 0 (0.0%) and 3 (17.7%); "The patient and I make the decision together equally", 4 (23.5%) and 4 (23.5%); "The patient makes the decision, but strongly considering my opinion", 5 (29.4%) and 2 (11.8%); and "The patient makes the decision alone, after obtaining information on the best available scientific evidence". About half (53%) of the participants who completed the questionnaire after training reported having consulted the training material again after training to answer questions that came up.
The trends regarding these different variables before training were similar between participants who fully completed the training and those who partially completed the training.
Integration of findings
The interviews highlighted certain factors that help explain the low participation rates in training, and the high attrition during training, such as lack of time and social support, technical and logistical barriers, and difficulties using the DBs. Although our quantitative results suggest that knowledge about SDM did not change between before and after training, we observed positive attitudes towards SDM in participants who had completed it. We also note that, during the interviews, participants reported using the training material to answer questions after the training was over, and printing out and sharing the DBs with their colleagues and the residents under their supervision. Some participants also reported that they intended to integrate the DBs into their clinical and teaching practices. These findings concur with the high levels of intention to adopt SDM that we measured before and after training.
Additionally, our findings from the interviews about participants' appreciation of the usefulness, ease of access, and use of the training program converge with our quantitative results demonstrating an improvement in participants' clinical knowledge after training.
Discussion
In this mixed-method study, we described the level of participation of HCPs in a multi-component training program on SDM in the context of NCDs, and the factors influencing their participation. We found that, among those who had initially agreed to participate, only 24% (17/72) completed the training program. Qualitative interviews with HCPs revealed several factors restricting their participation, such as their lack of time to complete training and the fragmentation of training into several components. They also mentioned a number of factors that encouraged them to complete the training program, such as ease of use, the availability of continuing professional development credits, and the usefulness of content. We also found that training helped improve participants' knowledge about risk communication and clinical knowledge.
A large proportion of the participants who committed to completing the training did not even access it. The literature suggests that the relevance of the topic, the quality of content and the provision of CPD credits are important incentives to participate in SDM training activities [12,28]. These factors were likely not responsible for the observed limited access to the program, as we ensured that learners had a high interest in the content of the training program, and the training earned them CPD credits. Instead, the qualitative interviews suggest that a lack of time and logistical barriers caused the observed limited access to training. Cook et al. [28] similarly concluded that the time required to complete the activity is an important determinant of learners' selection of a CPD activity. An online learning activity broken into several smaller pieces might therefore be preferable to the one-hour module we designed. Indeed, the Pew Research Centre on Journalism and Media examined 15 months' worth of the most popular news videos on YouTube and concluded that the optimal length for engagement with online videos is between 2 and 5 min [29]. A modified version of the training program could therefore reflect these numbers, with modules as short as 2 min, to improve access.
Among the people who participated, a large proportion did not complete the training activities in their entirety, which agrees with several other reports of low retention in e-learning activities [30,31]. We found the high dropout rates surprising, since we had tailored the training content and components to the needs and preferences of learners [24], and our findings suggest that the learners found the training program useful and supportive of their clinical activities. High dropout rates may then be a consequence of the barriers participants mentioned in the interviews, such as lack of time, issues with Internet access, inconvenience of the training method, difficulty in using the tools, or low peer and employer support. Similar barriers to physician engagement in self-directed e-learning in CPD were reported in a scoping review of 17 studies [32]. Resource requirements (including time, cost, and labour) and lack of information-technology skills were also reported as barriers to e-learning in health sciences education in a recent systematic review [33]. Additionally, research in education suggests that learners' isolation [34] and their inability to engage autonomously and actively in the learning process [35] might be important determinants of their participation in self-directed e-learning activities. We did not, however, assess these particular aspects in the current study. Because low completion rates of e-learning programs may undermine their effectiveness [30,36], the factors influencing participation should be considered and addressed by CPD developers to ensure the best possible learning outcomes.
We proposed several strategies to improve the learning modalities and implementation of the studied training program on SDM with older adults living with NCDs. The use of examples and role-plays has proven effective in training healthcare providers in SDM [37,38]. Mamary et al. [39] recommend providing a user's guide to learners who are less Internet-savvy. These authors also reported that computer training and dedicated time in the workplace for self-directed methods encouraged participation in self-directed continuing medical education. Some of the proposed strategies have been reported to support the participation of distance learners in CPD activities, such as providing access to a print version of the training material, lengthening the time available to complete training, offering individual profiles and follow-ups, proposing online collaboration, dividing modules into shorter sections, and supporting teamwork [40]. Another report suggests introducing a learning agreement between the learner and the university, offering support material, creating frequently asked questions (FAQs), using discussion boards, and monitoring learners' opinions for continuous improvement [41].
Monetary incentives have also been demonstrated to influence HCPs' participation in e-learning activities [42]. Podcasts could also be considered, as this technology is becoming increasingly popular for CPD training and information dissemination [43].
Although this training program improved some aspects of participants' knowledge, it did not increase their intention to adopt SDM, which was already high at baseline, nor their perceived capacity to adopt SDM. We also observed some improvement in clinical knowledge, but this is only a secondary benefit of a training program that aimed at improving the adoption of SDM. Outcomes at the level of the older adults living with NCDs and their caregivers would have been required to draw conclusions about the impact of this training program on patient care and quality of life from their own perspectives [44,45]. However, we could not recruit enough older adults to assess whether training had actual impacts on the adoption of SDM. Studies with seniors with dementia consistently report high dropout rates among seniors, caregivers and healthcare providers [46], and we were unsuccessful in addressing these challenges.
Trustworthiness of the findings is enhanced by our detailed description of the educational context and intervention, and by the use of multiple data sources (access data, surveys and interviews), methods (qualitative and quantitative), and researchers. Moreover, by collecting data both from the participants who completed training and from those who did not, we were able to provide an accurate picture of the factors at play. However, we did not ask participants for feedback on the qualitative data or on our interpretation of the data (member checking). Nor did we use a control group in our quantitative evaluation of the impact of the training program on knowledge and intention, so the results are prone to confounding bias, since extraneous events or changes in context around the time of the intervention could have influenced the outcomes. The use of a non-random study sample and the high participant dropout rates may also have affected the results by introducing selection bias. In addition, given the one-month delay between participation in the training activities and the interviews, recall bias is likely, and may have led to missing some important determinants of participation in the training program.
Conclusions
Our study allowed us to identify important improvements for the development and implementation of this training program. As a next step, we plan to modify the program and implement it in a scaling-up experiment. The proposed list of strategies to counter the factors that hinder the participation of HCPs in interventions to improve SDM may be applied to several clinical contexts. These findings may support researchers in planning interventions targeting HCPs, especially those who practice in primary care contexts and those caring for older adults living with NCDs. More studies that focus on actual SDM adoption following the implementation of professional training are now required.
Multi-Omics Analyses Reveal the Mechanisms of Early Stage Kidney Toxicity by Diquat
Diquat (DQ), a widely used bipyridyl herbicide, is associated with significantly higher rates of kidney injuries compared to other pesticides. However, the underlying molecular mechanisms are largely unknown. In this study, we identified the molecular changes in the early stage of DQ-induced kidney damage in a mouse model through transcriptomic, proteomic and metabolomic analyses. We identified 869 genes, 351 proteins and 96 metabolites that were differentially expressed in the DQ-treated mice relative to the control mice (p < 0.05), and showed significant enrichment in the PPAR signaling pathway and fatty acid metabolism. Hmgcs2, Cyp4a10, Cyp4a14 and Lpl were identified as the major proteins/genes associated with DQ-induced kidney damage. In addition, eicosapentaenoic acid, linoleic acid, palmitic acid and (R)-3-hydroxybutyric acid were the major metabolites related to DQ-induced kidney injury. Overall, the multi-omics analysis showed that DQ-induced kidney damage is associated with dysregulation of the PPAR signaling pathway, and an aberrant increase in Hmgcs2 expression and 3-hydroxybutyric acid levels. Our findings provide new insights into the molecular basis of DQ-induced early kidney damage.
Introduction
Pesticides are the leading cause of poisoning-related accidental deaths in China. Following the discontinuation of paraquat, diquat (DQ) has become the preferred bipyridyl herbicide. However, cases of DQ poisoning have continued to increase in recent years, and the predominant route of exposure is the gastrointestinal tract [1]. The kidney is the main excretory organ as well as the primary target of DQ, and its toxic effects mainly involve the renal tubules, eventually leading to acute kidney injury (AKI) [2]. The incidence of AKI in patients with DQ poisoning is 73.3%, significantly higher than that caused by paraquat or other pesticides.
Previous studies have shown that DQ is selectively toxic to the kidneys, and has a similar chemical structure to that of the highly nephrotoxic orellanine [2]. Renal tubular dysfunction is the initial manifestation of DQ toxicity [3], and obvious renal tubular epithelial cell damage has been observed during autopsy [4]. The offspring of DQ-intoxicated rats exhibit renal duct damage. Furthermore, the prognosis of patients with DQ poisoning is closely related to AKI, which is usually reversible in the early stage. However, given the narrow time window for treatment, the incidence of endpoint events (death or uremia) exceeds 30%. Therefore, early detection and prevention of AKI are crucial in cases of DQ poisoning [5][6][7].
The clinical diagnosis of AKI is currently based on elevated serum creatinine (Scr) and blood urea nitrogen (BUN), along with low urine output [7]. However, Scr and BUN only rise once renal function has already declined by nearly 50%, and urine output is susceptible to multiple factors such as diuretics and blood volume. Moreover, Scr and BUN are easily cleared by continuous renal replacement therapy (CRRT), and the urine volume varies with the ultrafiltration volume of CRRT; thus, none of these indicators can accurately reflect changes in renal function during CRRT [8]. It is therefore unclear whether using high Scr and oliguria as the clinical criteria for the initiation of CRRT delays the clearance of nephrotoxic substances such as DQ, and whether hemoperfusion (HP) combined with early CRRT improves prognosis [2,8]. Consequently, it is crucial to identify novel biomarkers and effector molecules for the early detection and monitoring of kidney injury, and to guide hemodialysis treatment.
In this study, we used integrated metabolomics, transcriptomics and proteomics to explore the molecular mechanisms underlying DQ-induced nephrotoxicity at the very early stage. Based on multi-omics analyses, we found that DQ induced aberrant gene expression at the mRNA, protein, and metabolite levels. Our findings provide novel insights into DQ-induced kidney injury and identify novel biomarkers.
Animals and Chemical Reagents Treatments
Male C57BL/6J mice aged 28 weeks and weighing 25-30 g were purchased from Nanjing Medical University (NYD-L-2020082601). The mice were kept in a specific pathogen-free environment (22-26 °C, 40-60% humidity, 12 h light/dark cycles) with food and water provided ad libitum. The feed used in this experiment met the national standard and mainly provided energy, protein, fat, amino acids, and minerals; all mice were given the same food. After one week of acclimatization, the mice were randomly divided into the control, low-dose DQ (200 mg/kg) and high-dose DQ (350 mg/kg) groups (N = 30 per group). DQ and saline (control) were administered via the intragastric route. The mice were euthanized on days 1, 3 and 7 after administration, and kidney tissue samples were collected from 10 mice of each group; ten kidney samples were used for metabolomics analysis, three for proteomics analysis, and three for transcriptomic analysis. Diquat (DQ) was purchased from Aladdin (D101258-100 mg).
Transcriptome Analysis
RNA sequencing (RNA-seq) was performed on three biological replicates of the DQ-treated and control group kidney tissues by Biotree Biotech Co., Ltd. (Shanghai, China). Briefly, total RNA was extracted and reverse transcribed, and the double-stranded cDNA was used to construct libraries. After quality control, the libraries were pooled and sequenced on the Illumina NovaSeq 6000 platform. The clean reads were filtered from the raw sequencing data after checking the sequencing error rate and the distribution of GC content. Gene expression levels were calculated as the number of fragments per kilobase of transcript per million mapped reads (FPKM). The expression matrix of all samples was generated, and differentially expressed genes (DEGs) between the control and DQ-treated samples were screened using the edgeR program with Padj < 0.05 as the criterion. The DEGs were then functionally annotated by gene ontology (GO) analysis in terms of molecular functions (MF), biological processes (BP) and cellular components (CC), as well as by Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis using the clusterProfiler program (http://www.bioconductor.org/packages/release/bioc/html/clusterProfiler.html, accessed on 31 December 2021).
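For readers unfamiliar with the FPKM normalization mentioned above, a minimal Python sketch is given below. The count matrix and gene lengths are hypothetical inputs; the published analysis used the edgeR pipeline rather than this code.

```python
import numpy as np

def fpkm(counts, gene_lengths_bp):
    """Fragments Per Kilobase of transcript per Million mapped reads.

    counts          : genes x samples array of raw fragment counts
    gene_lengths_bp : per-gene transcript length in base pairs
    """
    counts = np.asarray(counts, dtype=float)
    lib_size_millions = counts.sum(axis=0) / 1e6       # per-sample depth
    length_kb = np.asarray(gene_lengths_bp, float)[:, None] / 1e3
    return counts / lib_size_millions / length_kb

# Toy example: 3 genes x 2 samples, with gene lengths of 1, 2 and 1.5 kb.
print(fpkm([[10, 20], [40, 10], [5, 5]], [1000, 2000, 1500]))
```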
Proteomics Analysis
Total protein was extracted from the kidney tissues of three biological replicates from the control and DQ-treated groups, quantified and stored at −80 °C. Proteomic sequencing and analysis were conducted by Biotree Biotech Co., Ltd. (Shanghai, China). Briefly, the extracted proteins were first quantified by the BCA assay, precipitated using acetone, and then subjected to reduction, alkylation, digestion, tandem mass tag (TMT) labeling, SDC cleanup, peptide desalting and high-pH pre-fractionation. For nanoLC-MS/MS analysis, 2 µg of total peptides from each sample were separated and analyzed using a nano-UPLC (EASY-nLC1200) coupled to an Orbitrap Exploris 480 (Thermo Fisher Scientific) with a nano-electrospray ion source. Data-dependent acquisition (DDA) was performed in profile and positive mode with the Orbitrap analyzer for 90 min. Proteins were identified from the TMT data based on unique peptides and screened with p < 0.05 (Student's t-test) and fold change > 1.5 as the criteria. The proteins were subjected to principal component analysis (PCA), volcano plot analysis, hierarchical clustering analysis, GO and KEGG analyses, and protein-protein interaction (PPI) network analysis.
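As a rough illustration of the screening criteria just described (Student's t-test p < 0.05 and fold change > 1.5), here is a sketch in Python. The abundance matrices are hypothetical, the actual analysis was performed by the sequencing provider's pipeline, and treating down-regulation symmetrically (fold change < 1/1.5) is an assumption on my part.

```python
import numpy as np
from scipy import stats

def screen_deps(control, treated, p_cut=0.05, fc_cut=1.5):
    """Flag proteins meeting the stated criteria: t-test p < 0.05 and
    fold change > 1.5 (down-regulation handled symmetrically, assumed).

    control, treated : proteins x replicates abundance matrices
    Returns a boolean mask of proteins passing both filters.
    """
    control = np.asarray(control, dtype=float)
    treated = np.asarray(treated, dtype=float)
    _, p = stats.ttest_ind(treated, control, axis=1)   # per-protein t-test
    fc = treated.mean(axis=1) / control.mean(axis=1)   # per-protein fold change
    return (p < p_cut) & ((fc > fc_cut) | (fc < 1.0 / fc_cut))

# Toy example: 2 proteins x 3 replicates per group.
print(screen_deps([[1.0, 1.1, 0.9], [5.0, 5.2, 4.8]],
                  [[2.0, 2.2, 1.9], [5.1, 4.9, 5.0]]))
```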
Untargeted LC-MS Metabolomics Analysis
The kidney tissue samples from the control and DQ-treated groups (10 biological replicates per group) were prepared as previously described [9]. Metabolomic sequencing and analysis were performed by Biotree Biotech Co., Ltd. (Shanghai, China). The metabolic profiles were acquired using a Quadrupole-Electrostatic Field Orbitrap mass spectrometer (Thermo Fisher Scientific). The single peak corresponding to each metabolite was filtered, and missing values in the original data were imputed. An internal standard was used for normalization, and outliers were filtered based on the relative standard deviation. Partial least squares discriminant analysis (PLS-DA) and unsupervised principal component analysis (PCA) were used to identify the differential metabolites between the two groups, with VIP > 1 and p < 0.05 as the criteria. The differential metabolites were subjected to correlation analysis, KEGG pathway analysis, and hierarchical clustering.
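A minimal sketch of the unsupervised PCA step used to compare metabolic profiles between groups is shown below. The data matrix is a made-up placeholder (20 samples x 96 metabolites, mirroring the group sizes and metabolite count reported here); the supervised PLS-DA and VIP scoring steps are omitted, as they require a dedicated PLS implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder intensities: 10 control + 10 DQ-treated samples (rows),
# 96 metabolite features (columns). Real data would come from the LC-MS
# peak table after normalization and outlier filtering.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 96))

X_scaled = StandardScaler().fit_transform(X)   # unit-variance scaling
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)           # sample coordinates on PC1/PC2
print("explained variance ratio:", pca.explained_variance_ratio_)
```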
Statistical Analysis
Data were processed and visualized using GraphPad Prism 5, and are expressed as the mean ± standard deviation (SD). Mean values were compared using unpaired t-tests, and significant differences among multiple groups were assessed by a non-parametric test. Differences were considered statistically significant at p < 0.05.
Establishment and Validation of DQ-Treated Mouse Model
We established a mouse model of DQ-induced kidney injury to study the early stages of AKI (Figure 1a). On day 1, DQ did not affect serum Scr levels, and serum BUN levels were likewise unaffected by either 200 mg/kg or 350 mg/kg DQ; serum urea levels, however, were significantly higher in mice treated with 350 mg/kg DQ compared to the control group, whereas 200 mg/kg DQ had no significant effect on the urea level. Subsequently, both Scr and BUN rose, and significant differences were observed on the 3rd and 7th days (Figure 1b,c). Furthermore, while no substantial lesions were observed in the kidney tissues of the DQ-treated mice on the first day of exposure, the renal tubules exhibited vacuolation and necrosis 3 days later (Figure S1). Based on these results, we selected the dose of 200 mg/kg to simulate early stage DQ-induced kidney damage.
Transcriptomic Analysis of DQ-Treated Mice
As shown in the UpSet graph in Figure 2a, 16,927 genes were expressed in all samples. Furthermore, 869 genes were differentially expressed in the DQ-treated samples relative to the control, of which 473 genes were downregulated and 396 genes were upregulated (Figure 2b and Table S1). The DEGs were enriched in GO terms related to fatty acid metabolism, extracellular structure organization, sulfur compound metabolism (Figure 2c), extracellular matrix, collagen-containing extracellular matrix (Figure 2d), extracellular matrix structural constituent, and sulfur compound binding (Figure 2e). Furthermore, KEGG analysis revealed that these DEGs were significantly associated with pathways of drug metabolism, drug metabolism-cytochrome P450, glutathione metabolism and retinol metabolism (Figure 2f). These results indicate that DQ might dysregulate numerous pathways in the kidneys.
Proteomic Analysis of DQ-Treated Mice
We used TMT-based quantitative proteomics analysis to identify the differentially expressed proteins (DEPs) that might be linked to DQ-induced kidney damage. PCA revealed notable differences in protein abundance between the DQ and control groups (Figure 3a). There were 351 DEPs between the two groups, of which 133 proteins were upregulated and 218 proteins were downregulated in the DQ-treated mice (Figure 3b and Table S2). The DEPs were mainly enriched in pathways associated with Parkinson's disease, Salmonella infection, chemical carcinogenesis, PPAR signaling, phagosome, tuberculosis, ribosome, bile secretion and retinol metabolism (Figure 3c). According to the GO enrichment analysis, DEPs were primarily associated with terms such as intracellular, intracellular part, organelle, intracellular organelle, cytoplasm, membrane-bounded organelle, intracellular membrane-bounded organelle, cytoplasm part, organelle part and intracellular organelle part (Figure 3d).
Integrated Transcriptome and Proteome Datasets
Integration of the transcriptome and proteome datasets revealed that 34 genes were substantially altered by DQ exposure (Table S3). KEGG pathway analysis showed that these genes are significantly associated with the PPAR signaling pathway, retinol metabolism, asthma, cholesterol metabolism, fatty acid degradation, valine/leucine and isoleucine degradation, fatty acid metabolism, and kidney injury caused by DQ (Figure 4a). Furthermore, GSEA consistently demonstrated that these DEGs and DEPs were substantially enriched for metabolism-related pathways, including drug metabolism-cytochrome P450, the PPAR signaling pathway, retinol metabolism, metabolism of lipids, amino acid metabolism, glutathione metabolism and fatty acid metabolism (Figure 4b). Taken together, the aforementioned pathways are likely targeted by DQ during kidney injury.
Metabolomic Analysis of DQ-Treated Mice
The metabolic by-products that may contribute to DQ-induced kidney injury were identified by untargeted LC-MS. The results of PCA and OPLS-DA clearly showed distinct metabolic patterns in the control and DQ-treated mice (Figure 5a,b). Overall, 96 metabolites were differentially expressed between the control and DQ-treated groups (adjusted p < 0.05), of which 40 were elevated and 56 were decreased in the latter (Figure 5c, Table S4). Furthermore, five of these differentially regulated metabolites are involved in purine metabolism, three in biosynthesis of unsaturated fatty acids, two in primary bile acid biosynthesis, one in fatty acid biosynthesis, and one in fatty acid metabolism (Figure 5d). To ascertain which metabolic pathways were most affected by DQ exposure, we performed KEGG pathway enrichment analysis. As shown in Figure 5d, the top 10 pathways were those related to purine metabolism, biosynthesis of unsaturated fatty acids, primary bile acid biosynthesis, nicotinate and nicotinamide metabolism, taurine and hypotaurine metabolism, fatty acid metabolism, amino sugar and nucleotide sugar metabolism, glycine/serine/threonine metabolism, porphyrin and chlorophyll metabolism, and fatty acid elongation in mitochondria.
Discussion
DQ is a highly nephrotoxic bipyridine herbicide that primarily targets the renal tubules and induces AKI. The molecular basis of DQ-induced kidney injury is cell death due to excessive production of reactive oxygen species (ROS) formed during lipid peroxidation [2,10]. The prognosis of DQ poisoning is highly correlated with AKI. Although AKI is reversible in its early stages, the therapeutic window is narrow. Therefore, it is crucial to identify the biomarkers and effectors of the incipient stages of AKI for early diagnosis of kidney damage.
We identified the time window of DQ-induced kidney damage by analyzing different time points and dosages. There was no evident renal parenchymal damage, or any changes in serum Scr or BUN levels after 24 h exposure to 200 mg/kg DQ, which corresponded to the early stage of the DQ-induced kidney damage. To identify the molecular mechanisms of DQ-induced renal damage at this stage, we used an integrated multi-omics approach, which revealed that exposure to DQ significantly affects the PPAR signaling pathway and fatty acid metabolism.
According to the integrated multi-omics data, the PPAR signaling pathway and fatty acid metabolism were associated with upregulation of Hmgcs2, Cyp4a10 and Cyp4a14, and the downregulation of Lpl mRNA and proteins in the DQ-treated kidneys. PPAR, a lipid-activated nuclear receptor, is abundantly expressed in tissues with high fatty acid metabolism, such as the kidney [11]. PPAR-deficient mice accumulate more lipids in their kidneys, which increases production of inflammatory mediators, eventually leading to kidney injury [12,13]. In addition, PPAR is also a transcription factor that controls genes involved in lipid metabolism and the mitochondrial fatty acid oxidation pathway [14], which fulfills a significant portion of the body's energy needs [15,16]. Integrated proteomic and transcriptomic analysis revealed that the fatty acid oxidation pathway, and subsequently fatty acid metabolism, were downregulated in the DQ-treated group.
Hmgcs2 (3-hydroxy-3-methylglutaryl-CoA synthase 2) is the primary rate-limiting enzyme for ketogenesis. It is related to fatty acid metabolism and resides mainly in the mitochondria. The HMG-CoA it generates is converted into acetoacetic acid by HMG-CoA lyase, and acetoacetic acid can in turn be converted into hydroxybutyric acid and acetone; these products are collectively called ketone bodies. Ketogenesis is an important part of fatty acid metabolism, and acetyl-CoA, the product of fatty acid oxidation, is the raw material for ketone body formation. Therefore, Hmgcs2 may regulate changes in fatty acid metabolism by regulating ketogenesis. Upregulation of Hmgcs2 in the glomeruli of high fructose-fed rats and in high fructose-treated differentiated podocytes enhanced ketone body levels, particularly that of hydroxybutyrate (3-OHB), which blocks histone deacetylase (HDAC) activity [17]. Hmgcs2 is likely upregulated through the PPAR-α pathway [18]. These findings imply that enhanced renal ketogenesis due to Hmgcs2 overexpression may be significant in the pathogenesis of diabetic nephropathy (DN) in patients with type 2 diabetes, indicating that Hmgcs2 is a potential therapeutic target for the management of diabetic renal complications [19]. We found that Hmgcs2 gene and protein expression levels increased in the kidney tissues after DQ exposure, indicating its role in DQ-induced renal damage as well.
CYP4A (cytochrome P450, family 4, subfamily A) catalyzes the hydroxylation of medium- and long-chain fatty acids [20]. One of the pathways for fatty acid degradation is ω-oxidation, in which dicarboxylic acids are formed and subsequently undergo β-oxidation from the omega end. This pathway is catalyzed by CYP450 enzymes and the peroxisomal β-oxidation machinery, which are regulated by PPARα [21]. The mouse genome contains four Cyp4a genes, Cyp4a10, Cyp4a12a, Cyp4a12b, and Cyp4a14, all of which are localized on chromosome 4 [22]. Murine Cyp4a10 and Cyp4a14 (homologous to human CYP4A22 and CYP4A11, respectively) are highly expressed in the liver and kidneys, and are known to convert arachidonic acid to its metabolite 20-hydroxyeicosatetraenoic acid (20-HETE), which regulates the inflammatory response through the generation of ROS [15,22]. As a result, the aberrant expression of Cyp4a10 and Cyp4a14 observed in our study may contribute to fatty acid breakdown.
LPL (lipoprotein lipase) catalyzes the hydrolysis of triglycerides (TAG), which is the rate-limiting step in the lipolysis of chylomicrons and VLDL. LPL is synthesized by myocytes, adipocytes, and other cell types, and is then stored in the Golgi apparatus for either intracellular breakdown or secretion onto the cell surface. Patients with nephrotic syndrome often have hyperlipidemia due to a lack of LPL activators. Furthermore, the high levels of free fatty acids in the bloodstream of these patients upregulate ANGPTL4, which may inactivate LPL either by converting the active LPL dimers into inactive monomers or by acting as a reversible non-competitive inhibitor of LPL [23]. In this study, LPL expression was downregulated in the DQ-treated kidney tissues, indicating its role in DQ-induced nephrotoxicity.
We identified eicosapentaenoic acid, linoleic acid, palmitic acid and (R)-3-hydroxybutyric acid as significant metabolites involved in DQ-related kidney injury. Eicosapentaenoic acid and linoleic acid are polyunsaturated fatty acids (PUFAs), and together with palmitic acid they have been linked to a number of renal disorders. One study showed that retinoic acid signaling mediates production of toxic PUFAs [24]. Increased PUFA peroxidation by ROS initiates ferroptosis, an iron-dependent form of programmed cell death. Fatty acid oxidation in the liver produces high levels of 3-hydroxybutyric acid, which is then transferred to extrahepatic tissues including the heart, brain and muscle to be used as fuel. As one of the ketone bodies, 3-hydroxybutyric acid directly promotes 3-hydroxybutyrylation of some proteins, and functions as an endogenous inhibitor of histone deacetylases as well as an agonist of Gpr109a [25]. β-OHB is an intermediate metabolite of fatty acid oxidation. In addition to being a functional vector that transfers energy from the liver to peripheral tissues under starvation stress, β-OHB is also an important signaling and epigenetic regulatory molecule in vivo, regulating many aspects of life function. One study showed that, in a model of glomerular podocyte damage and albuminuria caused by fructose intake, β-OHB began to increase at week 8 of modeling and continued to rise until the study endpoint at week 16 [17]. Therefore, β-OHB is a key metabolite in the occurrence and development of kidney injury. Taken together, dysregulated fatty acid metabolism may be induced by the nephrotoxic effects of DQ.
Overall, upregulation of Hmgcs2 may subsequently increase 3-hydroxybutyric acid levels and dysregulate the PPAR signaling pathway. Our findings offer new insight into the mechanisms underlying DQ-induced nephrotoxicity.
Conclusions
Our study is the first to investigate the mechanism of the early stage of DQ-induced kidney injury using a multi-omics approach. Our findings lay the foundation for diagnosing and treating renal damage following DQ exposure, and offer new insights into the molecular basis of DQ-induced kidney damage.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/toxics11020184/s1, Figure S1: Representative results of pathologic staining. Scale bars represent 100 µm in 20× images. Black arrowheads indicate renal tubules exhibiting vacuolation and necrosis; Table S1: Differential genes in the control and DQ-treated groups; Table S2: Differential proteins in the control and DQ-treated groups; Table S3: Differential genes/proteins in both the transcriptome and proteome; Table S4: Differential metabolites in the control and DQ-treated groups. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available because some data are still being analyzed.
JOURNAL OF AGRICULTURE AND APPLIED BIOLOGY
Conventional lowland rice cultivation involves flooding the paddy from planting until close to harvest, together with high N fertilization. This practice leads to large amounts of methane emissions. We studied the effect of soil water regime control on methane gas emissions and the growth of several rice varieties on clayey soil. The experiment was arranged according to a Split Plot Design. The main plot was water regime, i.e. continuous flooding and intermittent flooding; the subplots were three rice varieties.
Introduction
Climate change is a global phenomenon characterized by an increase in weather variations, such as temperature, rainfall, and wind (Ali and Erenstein, 2017). This phenomenon is expected to worsen in the future and to have a very large impact in developing countries that are very vulnerable to climate change (Session & Maskrey, 2007). Ali and Erenstein (2017) further explain that greater temperature fluctuations and changes in precipitation patterns that reduce water availability will affect agricultural commodity production and income, and may even lead to poverty. The influence of climate change on the agricultural sector is very broad, covering aspects such as resources, agricultural infrastructure, agricultural production systems, food security and self-sufficiency, as well as farmer welfare (Rejekiningrum et al., 2018). The effect of climate change on agriculture can furthermore be divided into two indicators: 1) vulnerability to climate change, i.e. the decreased ability of humans, plants and livestock to adapt and carry out their functions under the stress of climate change, and 2) the impact of climate change, i.e. the physical, social and economic losses or benefits caused by the stress of climate change.
The impact of climate change on agriculture has proven to be very large, but it cannot be denied that agriculture is also a major contributor to global warming. In 2010 the contribution of non-carbon dioxide (CO2) greenhouse gases (GHG), such as methane (CH4) and nitrous oxide (N2O), was estimated at 5.2-5.8 gigatons of CO2 equivalent per year (Tubiello et al., 2013). Methane and nitrous oxide are two GHGs with strong global warming potential, 25 and 298 times greater than that of carbon dioxide on a 100-year mass basis (IPCC, 2013). These GHG emissions are expected to continue to increase if crop cultivation is still managed conventionally. Lipper (2014) describes that GHG emissions from agriculture generally come from agricultural land, namely from the use of synthetic fertilizers, rice cultivation, and mass biomass burning, and these emissions are estimated to continue to increase under conventional management. In Indonesia in particular, the agricultural sector took third position as a GHG contributor in 2010, with a total of 110.5 Mt (Kartikawati & Sopiawati, 2017). One conventional agricultural practice suspected to be a source of large GHG emissions is the cultivation of rice with water supplied by flooding, from planting until close to harvest. Rice cultivation with continuous inundation produces high GHG emissions (Linquist et al., 2012).
Several strategies to reduce GHG emissions have been implemented. However, several factors that influence GHG emissions, such as soil type, rice varieties, and their interactions with water regime regulation, have not received much attention, and information is still very limited. Based on this description, research on methane gas emissions and the growth of several rice varieties on Alfisol soils under soil water regime regulation becomes an interesting subject for further study. The aim of this research was to determine the effect of soil water regime regulation on methane gas emissions and the growth of several rice varieties cultivated on Alfisol soils.
The site
The research was carried out at the Gowa Agricultural Development Polytechnic (Polbangtan) glasshouse, while the methane emission analysis was carried out at the Pati Agricultural Environmental Research Institute Laboratory in Central Java, Agricultural Research and Development Agency, Ministry of Agriculture.
Research design
The research was arranged according to a Split Plot Design. The main plot was water regime, namely continuous flooding (2 cm inundation) and intermittent flooding. The subplots consisted of three varieties, namely Inpari 32, Mekongga, and Cisadane. Hence, there were 6 treatment combinations, each replicated 4 times (Figure 1).
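To make the split-plot structure concrete, below is a hedged Python sketch of how such a design could be analyzed with a mixed model; the paper itself used ANOVA followed by DMRT, which this sketch does not reproduce, and all data values and column names here are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data mimicking the design: 2 water regimes x
# 3 varieties x 4 replicate blocks. Responses are made-up numbers.
rng = np.random.default_rng(1)
rows = [(w, v, b, rng.normal(10 if w == "A1" else 6, 1))
        for w in ("A1", "A2")
        for v in ("Inpari32", "Mekongga", "Cisadane")
        for b in range(1, 5)]
df = pd.DataFrame(rows, columns=["water", "variety", "block", "ch4_flux"])

# In a split plot, whole plots (block x water regime) form the random
# grouping, so the subplot factor (variety) is tested against residual
# error while the main-plot factor (water) sees whole-plot variation.
df["whole_plot"] = df["block"].astype(str) + "_" + df["water"]
fit = smf.mixedlm("ch4_flux ~ water * variety", df, groups="whole_plot").fit()
print(fit.summary())
```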
Preparation of planting media
The soil was sieved through a sieve with 1 cm holes and put into pots at 19 kg pot-1. The soil was mixed with cow bokashi fertilizer at a dosage of 10 tons ha-1. The planting medium was then incubated for 1 week.
Water regime settings
Two water regimes were applied: continuous flooding (the soil was inundated 2 cm above the soil surface, and the water level was kept constant daily), and intermittent flooding (the soil was flooded to 2 cm, then left to dry until it started to crack, before water was added again to 2 cm inundation). This water control was carried out repeatedly until the seed filling phase.
Planting
Rice seeds were sown in trays (dapog) containing a 1:1 mixture of soil and manure; after 2 weeks, the seedlings were transplanted into the prepared pots.
Parameter measurement and data analysis
The parameters measured included: 1) methane gas emission, measured using a closed chamber; methane gas samples were taken from the closed chamber at 52 and 73 days after planting (DAP), and the GHG flux was calculated using the formula of Khalil et al. (1991); 2) plant height, measured from 1 week after transplanting until the beginning of the generative phase; 3) number of tillers per clump, counted from 1 week after transplanting until the beginning of the generative phase; 4) number of productive tillers, counted at panicle emergence when the plants were 11 weeks old after transplanting; and 5) root volume, measured at harvest by cleaning the roots of each rice clump and submerging them in a measuring cup containing a predetermined volume of water, the increase in water volume being equal to the root volume. The data obtained were analyzed using analysis of variance, and if there was a significant effect, the analysis was continued with the Duncan Multiple Range Test (DMRT).
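The Khalil et al. (1991) flux formula is cited but not reproduced in the text. The Python sketch below implements a generic closed-chamber flux calculation of the same family, with illustrative inputs; it should not be read as the exact published formula.

```python
def ch4_flux(d_conc_dt_ppm_h, chamber_height_m, temp_c,
             mol_mass=16.04, molar_vol_0c=22.41):
    """Generic closed-chamber CH4 flux estimate in mg CH4 m^-2 h^-1.

    d_conc_dt_ppm_h  : slope of CH4 concentration vs time (ppm per hour)
    chamber_height_m : effective chamber height (volume / footprint area)
    temp_c           : air temperature inside the chamber (deg C)
    """
    density = mol_mass / molar_vol_0c        # g L^-1 of CH4 at 0 deg C
    temp_corr = 273.15 / (273.15 + temp_c)   # ideal-gas density correction
    # ppm h^-1 x m x g L^-1 works out numerically to mg m^-2 h^-1
    return d_conc_dt_ppm_h * chamber_height_m * density * temp_corr

# Illustrative values only: 2.5 ppm/h rise, 1 m chamber, 30 deg C.
print(ch4_flux(d_conc_dt_ppm_h=2.5, chamber_height_m=1.0, temp_c=30.0))
```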
Plant maintenance
Plant maintenance included fertilization, pest/disease control, and weed removal. For fertilization, the doses used followed the fertilizer recommendation for Kabupaten Gowa when 10 tons ha-1 of organic fertilizer are added, namely: Urea 250 kg ha-1, SP-36 50 kg ha-1, and KCl 50 kg ha-1. Pest and disease control was carried out by spraying biopesticides when symptoms of pest or disease attack appeared, while weeds were removed manually by hand-pulling.
Methane gas emissions
The results of the measurements and the DMRT test for rice methane gas emissions at 52 and 73 DAP are presented in Figures 2 and 3. In general, the flooding (A1) treatment gave higher emission values than the intermittent water supply (A2) treatment at both 52 and 73 DAP. At 52 DAP, Mekongga gave the smallest emission value and was significantly different from the other varieties; at 73 DAP, Cisadane gave the smallest emission value and was significantly different from the other varieties.
The ability to release methane gas varies greatly among varieties, depending on the nature, age and activity of the plant roots. This is in accordance with Xu et al. (2015), who found that the amount of CH4 gas emitted into the atmosphere from irrigated rice fields is influenced by plant age, water management during planting, input of organic and inorganic materials, soil type, temperature and variety.
(Figures 2 and 3 compare CH4 emissions across water regimes for the varieties Inpari 32 (V1), Mekongga (V2) and Cisadane (V3).) An ideal variety, besides being adapted to its environment, is also low in GHG emissions as a climate change mitigation measure. Setyanto et al. (2017) further stated that, in the future, rice varieties are needed that are low in GHG emissions but still give high production, suit the ecosystem conditions, and are resistant to pests, diseases and extreme conditions. Plants need sufficient available water in the soil for their growth. The intermittent irrigation system in this study produced lower methane emissions. This is in accordance with Xu et al. (2015), who reported that intermittent irrigation can reduce CH4 emissions by up to 60%. It is also in line with Haque et al. (2016), who found that intermittent irrigation in mid-season significantly reduces global warming potential by 46-56% compared to conventional inundation, and reduces CH4 flux by 50-53% without any difference in production.
Plant height
The results of the measurements and the DMRT test for rice plant height at 1-8 weeks after planting are presented in Table 1. Table 1 shows that at weeks 1-8 the water regime had no effect on plant height in any variety, while differences in plant height between varieties within each water regime became apparent from the 3rd to the 8th week. Under flooding (A1), the Cisadane variety gave the highest values and differed significantly only from Inpari 32 at the 3rd and 4th weeks. Under intermittent irrigation (A2), Cisadane differed significantly from the other varieties at weeks 3, 4 and 5, but did not differ significantly from Inpari 32 at weeks 6, 7 and 8. Note: numbers in the same column followed by different letters are significantly different (DMRT, α = 0.05). Differences among varieties appear from the 3rd week to the 8th week because the plants are in the vegetative growth stage, developing according to the character of each variety, where each variety has a different genetic potential for responding to its growing environment. The ability of a plant or variety to adapt to its environment can be seen from its growth rate: a more adaptive variety will show better growth. This is in accordance with the opinion of Utama et al. (2009), who state that plants tolerant of environmental stresses have the ability to adapt morphologically and physiologically.
Number of tillers
The results of the measurements and the DMRT test for the number of rice tillers at 1-8 WAP are presented in Table 2. Note: numbers in the same column followed by different letters are significantly different (DMRT, α = 0.05). Based on the research results, the water regime treatments had no significant effect on the number of tillers in any variety. Differences in the number of tillers are basically influenced more by genetic traits. This is in accordance with Husna and Ardian (2010), who state that the number of tillers is maximized if the plant has good genetic characteristics coupled with environmental conditions favorable to plant growth and development. The intermittent water supply (A2) was still an ideal environment for the growth of rice plants, so the plants were still able to produce a maximal number of tillers, not significantly different from the flooding treatment (A1).
The development of the number of rice tillers is not only affected by genetic factors, but also by unfavorable environmental conditions, as described by Ikhwani et al. (2013). They noticed that the closer the spacing or the more plant population per unit area, the greater the competition between rice clumps in capturing solar radiation, absorption of nutrients and water. As a result, plant growth is inhibited, the number of tillers is reduced and the yield of rice plants is decreased.
Number of productive tillers
The results of the measurements and the DMRT test for the number of productive tillers of rice at 11 WAP are presented in Figure 4. In general, the number of productive tillers of the Mekongga (V2) variety was higher than, and significantly different from, Cisadane and Inpari 32 in all water regime treatments. Figure 4 also shows that the flooded water regime (A1) gave higher productive tiller numbers than the intermittent water supply (A2), especially for the Mekongga and Cisadane varieties. Under A1, Mekongga gave the highest value of 14.50 tillers and was significantly different from the other varieties; the same occurred under A2, where Mekongga gave the highest value of 12 productive tillers and was also significantly different from the other varieties. As with the total number of tillers, the number of productive tillers is affected by genetic factors. Husna and Ardian (2010) explain that the maximum number of tillers is achieved if the plant has good genetic characteristics and environmental conditions favorable to plant growth and development. Both the flooding (A1) and the intermittent water supply (A2) regimes were still suitable environments for rice growth, so the plants were still able to produce the maximum number of productive tillers. The increase in the number of productive rice tillers is directly proportional to the increase in the total number of tillers. Riyani et al. (2013) stated that the formation of productive tillers is closely related to the total number of tillers: the greater the total number of tillers, the greater the number of productive tillers.
Root volume
The results of the measurements and the DMRT test for rice root volume at 105 DAP are presented in Figure 5. In general, the root volume of the Cisadane (V3) variety was higher in all water regimes, and differed significantly from Mekongga and Inpari 32 under intermittent water supply (A2). Under flooding (A1), Cisadane gave the highest value, 105 cm3, differing significantly only from Inpari 32, while under A2, Cisadane gave the highest value of 112 mL, differing significantly from the other varieties. Measurement of rice root volume is very important in relation to methane gas emissions. Roots have an important role in the release of methane gas into the atmosphere, because they can enhance methanogenesis through the release of root exudates rich in available carbon. With increasing oxygen concentration, methane production can be reduced, because methane is oxidized biologically by methanotrophic bacteria (Setyanto et al., 2017). The roots of rice plants are able to exchange oxygen, forming a thermodynamic balance in which about 60-90% of the methane produced in the rhizosphere layer passes through the aerenchyma vessels of the rice plant. This is supported by Nisha & Arief (2018), who note that during vegetative growth plants produce a lot of aerenchyma tissue.
Methane gas emissions in various varieties of rice plants are determined by differences in their physiological and morphological properties. The ability of varieties to emit methane depends on the aerenchyma cavity, number of tillers, biomass, root pattern and metabolic activity (Setyanto, 2017).
Conclusion
Intermittent flooding suppresses methane emission compared to continuous flooding: continuously flooded soil yielded methane emissions two to five times greater than intermittently flooded soil. However, the amount of methane emitted depends on the rice variety; certain varieties produce more methane than others. There was no statistically significant difference in rice growth between the two water regimes examined. Intermittent flooding can therefore be used as a means of reducing methane emission from paddy fields without affecting rice growth. Further studies on the effect of this water regime on rice production in the field are necessary.
The circadian clock goes genomic
Large-scale biology among plant species, as well as comparative genomics of circadian clock architecture and clock-regulated output processes, have greatly advanced our understanding of the endogenous timing system in plants.
Introduction
Plants rely on an endogenous timekeeper to optimally prepare for the recurrent cycles of day and night, light and darkness, energy production and energy consumption, activity of pollinators, as well as seasonal changes that tell them when to flower or shed their leaves [1,2]. The 'circadian' clockwork (from Latin circa diem, about one day) is entrained to the periodic light regime of the environment: plants use this information to control internal processes so that they take place at the most appropriate time of day for maximal output and performance. This global system works at various genomic levels.
The core clockwork consists of negative feedback loops through which clock proteins sustain their own 24-h rhythm [3][4][5][6]. In the model plant Arabidopsis thaliana, the Myb-type transcription factors LATE ELONGATED HYPOCOTYL (LHY) and CIRCADIAN CLOCK ASSOCIATED 1 (CCA1) oscillate with a peak around dawn (Figure 1a). LHY and CCA1 activate the expression of four PSEUDO-RESPONSE REGULATORs (PRRs) that are sequentially expressed, starting with PRR9 in the morning, followed by PRR7, PRR5 and TOC1/PRR1. This activation occurs indirectly via inhibition of the evening complex (EC), which is a repressor of the PRRs (Figure 1b); three proteins, LUX ARRHYTHMO (LUX)/PHYTOCLOCK1 (PCL1) and the plant-specific proteins EARLY FLOWERING 3 (ELF3) and ELF4, interact to form the EC. The PRRs induce the EC in the late evening, whereas CCA1 and LHY repress EC expression. The EC, in turn, indirectly activates CCA1 and LHY by directly inhibiting the repressive PRRs. These and other clock proteins regulate rhythmic molecular and biochemical processes in the cell (Figure 1c) (see section 'From a single oscillating mRNA to the rhythmic transcriptome'). These molecular-genetic events have been integrated into quite sophisticated systems models (reviewed at a systems level in Bujdoso and Davis [7]).
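To illustrate how a transcriptional negative feedback loop of this kind can generate sustained rhythms, here is a minimal Goodwin-type oscillator in Python. It is a generic textbook model with illustrative parameters, not a fitted model of the Arabidopsis clock or of any of the specific loops named above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def goodwin(t, y, v=1.0, k=1.0, n=10, d=0.2):
    """Three-stage negative feedback: clock mRNA (m) is translated into
    protein (p), which matures into a repressor (r) that shuts off its
    own gene. With sufficiently steep repression (Hill coefficient n),
    the delay around the loop produces self-sustained oscillations."""
    m, p, r = y
    dm = v / (1 + (r / k) ** n) - d * m   # repressible transcription
    dp = m - d * p                        # translation
    dr = p - d * r                        # repressor maturation
    return [dm, dp, dr]

# Integrate for 240 h; after a transient, all three variables oscillate
# with a period set by the parameters (illustrative, not 24 h exactly).
sol = solve_ivp(goodwin, (0, 240), [1.0, 0.5, 0.1], dense_output=True)
print(sol.y[:, -1])
```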
Overall, the principles of rhythm generation in plants are the same as in mammals or Drosophila, but the components involved are largely different, pointing to independent origins of the timekeeping mechanisms. In mammals, the core loop comprises the transcription factors CLOCK and BMAL1, which activate the expression of Cryptochrome and Period genes. The PERIOD/CRYPTOCHROME complex, in turn, represses BMAL1/CLOCK-mediated transcription of their own genes. Additional feedback loops consisting of transcriptional activators and repressors interlock with this central loop to regulate the expression of the core clock genes (for a detailed description, see Zhang and Kay [8], Staiger and Köster [9], and Dibner et al. [10]).
In this review, we summarize recent insights into the blueprint of the circadian clock and the function of clock proteins based on genomic studies in Arabidopsis and other plant species (Figure 2). Furthermore, we describe how large-scale biology has greatly advanced our understanding of how timing information is translated into rhythmic processes in the plant cell.
From a single oscillating mRNA to the rhythmic transcriptome
Genome-wide transcript profiling studies were opportunely performed just after the compilation of the Arabidopsis genome [12,13]. Cycling gene clusters could thus be linked to nearby non-coding DNA, and conserved elements in the upstream regions revealed phase-specific promoter elements [12,[14][15][16]. These studies provided valuable insights into the genome-wide mechanism of clock outputs for the first time. Groups of genes that are co-ordinately directed to certain times of the day pointed to entire pathways that were not previously known to be clock-regulated, such as the phenylpropanoid pathway [12].
Subsequently, many homologous genes were found to be clock-regulated and phased to similar times of day in poplar and rice, as they are in Arabidopsis [17]. Furthermore, the same three major classes of cis-regulatory modules of Arabidopsis were found in poplar and rice. The morning module consists of the morning element (CCACAC), which confers expression at the beginning of the day, and a ubiquitous G-box (CACGTG) regulatory element associated with regulation by light and by the phytohormone abscisic acid. The evening module consists of the evening element (AAAATATCT), which confers expression at the end of the day, and the GATA motif, which is associated with light-regulated genes. The midnight modules come in three variants, ATGGCC (PBX), AAACCCT (TBX) and AAGCC (SBX). This points to a strong conservation of clock-regulated transcriptional networks between mono- and dicotyledonous species [17]. As shown in Figure 1c, oscillations of the output genes can be accomplished through direct binding of rhythmically expressed clock proteins to phase modules in the promoters of output genes, or via intermediate transcription factors.
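A minimal Python sketch of scanning a promoter sequence for the phase modules just listed is shown below. Only the motif strings come from the text; the promoter fragment in the usage example is made up, and real analyses would scan both strands and allow degenerate positions.

```python
# Phase modules named in the text (exact-match strings only).
PHASE_MODULES = {
    "morning element": "CCACAC",
    "G-box":           "CACGTG",
    "evening element": "AAAATATCT",
    "GATA motif":      "GATA",
    "PBX (midnight)":  "ATGGCC",
    "TBX (midnight)":  "AAACCCT",
    "SBX (midnight)":  "AAGCC",
}

def scan_promoter(seq):
    """Return each phase module found in a promoter with its 0-based positions."""
    seq = seq.upper()
    hits = {}
    for name, motif in PHASE_MODULES.items():
        positions = [i for i in range(len(seq) - len(motif) + 1)
                     if seq[i:i + len(motif)] == motif]
        if positions:
            hits[name] = positions
    return hits

# Usage with a made-up promoter fragment containing an evening element
# and a G-box:
print(scan_promoter("TTAAAATATCTGGCACGTGAA"))
```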
The information from numerous microarray experiments conducted under different light and temperature regimes by the community was assembled into the easy-to-use DIURNAL database [18]. This site is widely consulted to check for rhythmic transcript patterns, reflecting the growing awareness of the importance of temporal programs in gene expression [18].
Rhythmically expressed genes in Arabidopsis were found to be over-represented among phytohormone- and stress-responsive pathways. This revealed that endogenous or environmental cues elicit reactions of different intensities depending on the time-of-day [15,19]. This so-called 'gating' is thought to optimize the response to a plethora of stimuli impinging on the plant, and may be of particular relevance for sessile organisms [2]. An example of this is how the PRR5, PRR7 and PRR9 proteins contribute to the cold stress response [20]. These PRRs also contribute to coordinating the timing of the tricarboxylic acid cycle [21]. In this way, one set of regulators directly links global gene expression patterns to rhythmic primary metabolism and stress signaling.
A similar systems-based approach identified the circadian clock as a key player in other facets of metabolism, since CCA1 regulates a network of nitrogen-responsive genes throughout the plant [22]. CCA1 also has a role in coordination of the reactive oxygen species response that occurs each day as part of light harvesting for photosynthesis and the reaction to abiotic stress, such as the response to high salt [23]. Another clock-optimized process is the regulation of plant immunity. The defense of Arabidopsis against Pseudomonas syringae or insects depends on the time-of-day of pathogen attack [24][25][26]. Furthermore, genes that are induced upon infection with the oomycete Hyaloperonospora arabidopsidis, which causes downy mildew disease, have more CCA1 binding sites in their promoters than expected [27]. cca1 mutants show reduced resistance when infected at dawn. Since lhy mutants are not impaired in disease resistance, this points to a specific effect of the CCA1 clock protein rather than a general effect of the clock [27]. Similarly, the RNA-binding protein AtGRP7 (Arabidopsis thaliana glycine-rich RNA binding protein 7), which is part of a negative feedback loop downstream of the core oscillator, plays a role in immunity [28][29][30].

Microarray analysis has also contributed to the question of whether there is one clock for all parts of the plant. Plants, unlike animals, do not have their circadian system organized into a master clock situated in the brain and 'slave' clocks in peripheral organs [31]. However, the differential oscillatory patterns of core clock genes in Arabidopsis shoots and roots point to a distinct clock in roots that runs only on the morning loop [32].

[Figure 1 legend fragment: (b) In the evening loop, the evening complex (EC), a protein complex consisting of ELF3, ELF4 and LUX, inhibits expression of PRR9 and perhaps other PRRs. EC components are themselves rhythmic through repression by CCA1 and LHY. Additional transcription factors, such as RVE8 and CHE, modulate these interconnected loops. (c) Oscillations in the output genes can be accomplished through direct binding of rhythmically expressed clock proteins to phase modules in their promoters or via intermediate transcription factors (TF). In this way, transcripts are directed to different times of the day. As one example, components involved in metabolizing sugars produced through photosynthesis peak early in the day, and components involved in starch degradation, in turn, peak in the middle of the night [12].]
Post-transcriptional control contributes to rhythms of the transcriptome
Soon after discovering the effect of the clock on transcription, it became apparent that clock-controlled promoter activity does not always lead to detectable oscillations in mRNA steady-state abundance. This was attributable to a long half-life of the transcripts [33]. In Arabidopsis, a global search for short-lived transcripts identified a suite of clock-controlled transcripts. For some of these, the mRNA stability changes over the circadian cycle [34]. Corresponding factors that may coordinately regulate the half-life of sets of transcripts are yet to be identified, although candidates include RNA-binding proteins that themselves undergo circadian oscillations [35].
A prominent role for post-transcriptional control in circadian timekeeping was suggested by the long period phenotype of the prmt5 mutant defective in PROTEIN ARGININE METHYLTRANSFERASE 5 [36][37][38]. Among the protein substrates of PRMT5 are splicing factors, and thus PRMT5 has a global impact on splicing. Alternative splicing of the clock gene PRR9 is affected by loss of PRMT5 and the transcript isoform encoding functional PRR9 is barely detectable in prmt5 mutants, suggesting that the circadian defect may partly be caused by changes in PRR9 splicing [36]. Additional splicing factors that affect circadian rhythms are SPLICEOSOMAL TIMEKEEPER LOCUS1, the SNW/Ski-interacting protein (SKIP) domain protein SKIP, and the paralogous RNA-binding proteins AtGRP7 and AtGRP8 [39][40][41]. Notably, AtGRP7 and AtGRP8 form a feedback loop through unproductive alternative splicing and decay of transcript isoforms with a premature termination codon, associating for the first time nonsense-mediated decay with the circadian system [42,43].

[Figure 2 legend fragment: Ostreococcus tauri contains single homologs of CCA1 and TOC1, respectively [71]. The PRR ortholog PPD, most similar to PRR7, in Hordeum vulgare (PPDH1) [72] and Triticum aestivum (PPDA1, PPDB1 and PPDD1, designated after the location they derive from) [73] is important for flowering time control. The PRR7-like BvBTC1 in beet (Beta vulgaris) regulates bolting time [74]. Hordeum vulgare contains an ELF3 ortholog, EAM8 [75]. Brassica rapa retains a suite of clock genes after polyploidization and subsequent gene loss [80].]
In another approach, a high-resolution RT-PCR panel based on fluorescently labeled amplicons was used to systematically monitor alternative splicing of the core oscillator genes [44]. Alternative splicing events were observed 63 times, and of these, at least 13 were affected by low temperature. This suggested that alternative splicing might serve to adjust clock function to temperature changes. More recently, RNA-Seq analyses identified alternative splicing of many clock genes, and an event leading to the retention of an intron in CCA1 was conserved across different plant species [45]. In the future, a systematic comparison of alternative splicing networks (both for core clock genes and clock output genes) to the corresponding transcriptional programs will unravel the contribution of alternative splicing to the rhythms in transcript and protein abundance.
To date, the extent to which proteins undergo circadian oscillations in the plant cell has not been systematically studied. An initial proteomic study in rice revealed a difference in expression phases between mRNAs and proteins, suggesting regulation at the post-transcriptional, translational and post-translational levels [46]. Uncoupling of protein rhythms from mRNA rhythms has also been observed in mouse liver, where 20% of soluble proteins show a rhythm in protein abundance but only half of them originate from rhythmic transcripts [47].
Noncoding RNAs and the plant clock - a not-so-well defined connection
A prominent class of small noncoding RNAs are micro-RNAs (miRNAs), which are 19 to 22 nucleotide long single-stranded RNAs that base-pair with mRNA targets and thereby control the level of target transcripts or the level of translation of these mRNAs [48]. miRNAs that oscillate across the circadian cycle have been widely described in mammals and Drosophila. In these organisms, miRNAs target clock components and play a role in entrainment or regulation of clock output [49,50].
In Arabidopsis, a suite of miRNAs was interrogated for rhythmic expression. Using tiling arrays, miR157A, miR158A, miR160B and miR167D were found to be clock-controlled [51]. On the other hand, miR171, miR398, miR168 and miR167 oscillate diurnally but are not controlled by the clock [52]. The functional implications of these miRNA oscillations are not yet clear. Based on the prominent role miRNAs play in modulating the circadian clock in Drosophila or mammals, such a function is to be expected in plants, where miRNAs so far have a demonstrated role only in clock output, such as seasonal timing of flowering [53].
Another class of noncoding RNAs is naturally occurring antisense transcripts (NATs). In Arabidopsis, rhythmic NATs were detected for 7% of the protein coding genes using tiling arrays [51]. Among these were the clock genes LHY and CCA1, TOC1, PRR3, PRR5, PRR7 and PRR9. In the bread mold Neurospora crassa, NATs have been implicated in clock regulation. Suites of large antisense transcripts overlap the clock gene frequency (frq) and oscillate in opposite phase to the sense frq transcript. These NATs are also induced by light and thus appear to play a role in entrainment by light signals [54]. A causal role for noncoding RNAs in the plant circadian system has yet to be established.
Forward and reverse genetics to define the core oscillator mechanism
Forward genetic screens of mutagenized plants carrying clock-controlled promoters fused to the LUCIFERASE reporter for aberrant timing of bioluminescence were instrumental in uncovering the first clock genes, TOC1, ZEITLUPE and LUX/PCL1 [55][56][57][58]. Likely because of extensive redundancy in plant genomes, most other clock genes were identified by reverse genetic approaches and genome-wide studies. In fact, up to 5% of transcription factors have the capacity to contribute to proper rhythm generation [59]. A yeast one-hybrid screen of a collection of transcription factors for their binding to the CCA1/LHY regulatory regions revealed CIRCADIAN HIKING EXPEDITION (CHE) as a modulator of the clock [60].
These CHE studies attempted to bridge TOC1 with the regulation of CCA1/LHY, but failed to fully explain the effect of TOC1 on CCA1/LHY expression. Subsequently, chromatin immunoprecipitation (ChIP)-Seq showed that TOC1 directly associates with the CCA1 promoter, and this interaction is not dependent on CHE [61,62]. Thus, while CHE is not generally seen as a core clock component, its analysis revealed that genomic approaches can feasibly interrogate the capacity of a given transcription factor to modulate clock performance. Genome-wide analysis of cis-elements in clock-controlled promoters should identify the motifs that control rhythmic RNA expression of a clock-controlled gene, and this facilitates the identification of the trans factors that create such rhythms (Figure 1c).
ChIP-Seq revealed that PRR5 functions as a transcriptional repressor to control the timing of target genes [63]. It can be expected that the global DNA-binding activity of all core-clock components will be rapidly assembled and this will be associated with the roles of each factor in regulating global transcription, accounting for up to 30% of all transcripts [64].
Epigenetic regulation - a facilitator of rhythmic gene expression?
Rhythmic clock gene transcription is accompanied by histone modification at the 5' ends. For example, in mammals transcriptional activity of the promoters of the Period clock genes coincides with rhythmic acetylation of histone H3 lysine 9 that is dependent on the histone acetyltransferase activity of CLOCK [65]. In Arabidopsis, it was shown that acetylation of H3 at the TOC1 promoter is rhythmically regulated, and this positively correlates with TOC1 transcription [66]. Later, the chromatin of other clock genes, including CCA1, LHY, PRR9, PRR7 and LUX, was additionally found to be rhythmically modulated by multiple types of histone modification [67,68] (Figure 3). The level of the transcription activating marks, acetylation on H3 (H3ac) and tri-methylation on H3 lysine 4 (H3K4me3), increases when these clock genes are actively transcribed, whereas the level of the transcription repressing marks H3K36me2 and H3K4me2 reach their peak when the genes are at their trough [67,68]. These histone modifications are found to be dynamically controlled such that H3 is sequentially changed as H3ac → H3K4me3 → H3K4me2 within a rhythmic period [68]. The level of other chromatin marks such as H4Ac, H3K27me3, H3K27me2 and H3K9me3 at the clock gene promoter region does not change rhythmically [67,68].
So far, a number of clock components have been shown to be required to modify histones at the appropriate time. For example, CCA1 antagonizes H3Ac at the TOC1 promoter [66]. In contrast, REVEILLE8 (RVE8), a MYB-like transcription factor similar to CCA1 and LHY, promotes H3Ac at the TOC1 promoter, predominantly during the day [69]. However, it is unclear if CCA1 and RVE8 cause the histone modification at the TOC1 promoter, or if histone modification allows CCA1 or RVE8 to actively participate in regulation of TOC1 transcription, respectively. The underlying molecular mechanism of the temporal histone modification and the components involved are currently elusive. Furthermore, it remains to be shown whether other histone modifications, such as phosphorylation, ubiquitination or sumoylation [70], also contribute to clock gene expression and change across the day.
Comparative genomics
The availability of an ever-increasing number of sequenced plant genomes has made it possible to track down the evolution of core clock genes. The Arabidopsis core oscillator comprises families of proteins that are assumed to have partially redundant functions [1,3]. The founding hypothesis was that the higher-land-plant clock derived from algae. The green alga Ostreococcus tauri, the smallest living eukaryote with its 12.5 Mb genome (10% of Arabidopsis), has only a CCA1 homolog, forming a simple two-component feedback loop with a TOC1 homolog, the only PRR-like gene found in Ostreococcus [71]. This supported the hypothesis that the CCA1-TOC1 cycle is the ancestral oscillator (Figure 2).
Recent efforts to clone crop-domestication genes have revealed that ancient and modern breeding has selected variants in clock components. The most notable examples include the transitions of barley and wheat as cereals and alfalfa and pea as legumes from the Fertile Crescent to temperate Europe. This breeding and seed trafficking was arguably the greatest force in Europe driving the transition from nomadic to civilized lifestyles. It is known that ancestral barley and wheat are what are now called the winter varieties. The common spring varieties arose as late-flowering cultivars, which profit from the extended light and warmth of European summers over that of the Middle East. That arose from a single mutation in barley (Hordeum vulgare) in a PRR ortholog most similar to PRR7, termed Ppd-1 (Photoperiod-1) (Figure 2) [72]. In wheat (Triticum aestivum), since it is polyploid and recessive mutations rarely have any phenotypic impact, breeders selected promoter mutations at PPD that led to dominant late-flowering [73]. Interestingly, in the beet Beta vulgaris, a PRR7-like gene named BOLTING TIME CONTROL1 (BvBTC1) is involved in the regulation of bolting time, mediating responses to both long days and vernalization [74]. Evolution at PRR7 is thus a recurrent event in plant domestication.
As barley (Hordeum vulgare) moved north, early flowering was selected in a late-flowering context due to the presence of the spring allele at ppdh1. Mutations in the barley ELF3 ortholog, termed EAM8 (Figure 2), were selected [75]. Interestingly, the migration of bean and alfalfa to temperate Europe also coincided with ELF3 mutations [76]. In Asia, domestication traits of rice varieties have also mapped to the ELF3 locus [77]. It will be intriguing to assess the genome-wide population structure of clock gene variation as a possible driving force in species migration over latitude and altitude. Genome-wide efforts to explore this show that such studies have merit [78].
One identifying feature of plants within clades of multicellular organisms is the possibility of fertile polyploids. It is speculated that, over evolutionary time, all higher-land plants were at one time polyploid, and indeed, it has been estimated that up to 80% of extant plant species are in a non-diploid state [79]. This raises several confounding features on the genome. For one, in autopolyploids, derived from an expansion of genomes derived from one species, the process of going from 2× to 4× obviously increases the copy number of all genes by twofold. One report to examine this comes from the comparison of the Brassica rapa oscillator repertory [80]. On average, this species can carry threefold more copies of an individual gene than Arabidopsis. However, this is not always the case, as gene loss of these redundant copies has occurred at numerous loci [81]. By examining the probability of gene presence, it has been shown that the retention of clock genes has been more highly favored than the retention of genes randomly sampled from the genome [81]; this was not a linkage disequilibrium effect, as even the neighboring genes, as known by synteny, were retained at a lower rate. Thus, Brassica rapa has gained fitness by keeping additional copies of clock genes (Figure 2). Why that is awaits testing.
In allopolyploids that arise from the intercrossing of species, the clock confronts allele choice issues between the potentially conflicting parental genomes. Allopolyploids are common in nature, are often easy to recreate in the lab, and are often more vigorous than the parents. Using a newly generated allopolyploid, the role of the clock in providing genome-wide fitness was assessed [75,76]. Epigenetic modification at two morning clock genes was found to associate with vigor through regulation of metabolic processes [82]. In subsequent studies, this was further related to stress response pathways in a genome-wide analysis of mRNA decay [83]. Thus, genome-wide polyploidy acts early on clock genes to partition metabolism and stress signaling.
Outlook
High-throughput approaches have greatly advanced our understanding of the pervasive effect of the clock on the transcriptome and the molecular underpinnings of rhythms in promoter activity. However, our knowledge of rhythms in protein abundance conferred by subsequent layers of regulation and of small RNA regulation in the plant circadian system is underdeveloped. Comparative genomics among different plant species has pointed to divergences in clock-output processes, and perhaps in the clock mechanism itself. Relating the orthologous function of a given clock protein across plant genomes will undoubtedly continue to require large-scale genomics.
Hypoxemia and hypertension in obstructive sleep apnea: the forgotten variable
TO THE EDITOR:
The apnea-hypopnea index (AHI), expressed as events/h, is used in order to define normality and to classify the severity of obstructive sleep apnea (OSA). (1) AHI measures the frequency of respiratory events throughout the night and the degree of sleep fragmentation, since apnea/hypopnea events are frequently followed by an electroencephalographic reaction. In addition, AHI gives us indirect information about hypoxemia, because respiratory events are often accompanied by a variable decrease in SpO2. (2) The proportion of time spent in SpO2 ≤ 90% (T90) is an accurate parameter to assess nighttime hypoxemia, since a 90% SpO2 at sea level equals a PaO2 of approximately 60 mmHg according to the oxygen-hemoglobin dissociation curve. (2) The correlation between AHI and T90 is moderate (r = 0.6-0.7), because not all respiratory events are followed by a drop in SpO2 ≤ 90%.
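As a concrete illustration of the T90 definition, the following Python sketch computes T90 from an evenly sampled oximetry trace; the SpO2 values are hypothetical, and equal epoch durations are assumed.

```python
import numpy as np

# T90 = percentage of valid recording time spent at SpO2 <= 90%,
# here computed from hypothetical, evenly sampled oximetry epochs.
spo2 = np.array([96, 95, 91, 89, 88, 92, 90, 87, 94, 96])  # one reading per epoch
t90 = 100.0 * np.mean(spo2 <= 90)
print(f"T90 = {t90:.1f}% of recording time")  # -> 40.0%
```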
OSA has been identified as a risk factor for hypertension in population-based studies. (3,4) Underlying mechanisms would be related to sympathetic activity secondary to hypoxia/hypercapnia cycles, increased intrathoracic pressure, and microarousals that follow apneas/hypopneas, which favor an increase in blood pressure. (4) Both animal and human studies have shown that intermittent hypoxemia can trigger hypertension. (4) In line with other authors, our hypothesis was that T90 might be associated with hypertension in OSA patients.
In a preliminary and retrospective study based on the systematic collection database from the Hospital Británico of Buenos Aires sleep unit between 2011 and 2019, we included consecutive adult patients who underwent home-based respiratory polygraphy due to suspicion of OSA according to the results in the Berlin questionnaire (high risk), the Epworth Sleepiness Scale (ESS; > 10 points), or the STOP-Bang questionnaire (> 3 components). The study was approved by the institutional research ethics committee and the Plataforma de Registro Informatizado de Investigaciones en Salud de Buenos Aires in accordance with the standards of the Declaration of Helsinki, as amended (protocol #1242).
The diagnosis of hypertension was considered when it was self-reported, documented in the medical records, or when the patient was on antihypertensive medication. These diagnostic strategies for hypertension have been validated and showed a good performance. (5) Automatic signal analysis was performed by trained physicians and was followed by manual corrections based on international criteria. (6) We calculated the T90 (in %) and the number of oxygen desaturations ≥ 3% (ODI, oxygen desaturation index) over valid recording time, after sequential manual revision. Multiple logistic regression analysis was used in order to establish the relationship between hypertension (dependent variable) and age, sex, BMI, AHI, and T90 (independent variables). For this purpose, study physicians performed a ROC analysis to establish the best cutoffs to differentiate between patients with and without hypertension. Predictive models also relied on traditional cutoffs, such as AHI (≥ 10 and ≥ 15 events/h).
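The exact criterion used to pick the "best" cutoffs is not stated; a common choice is the point on the ROC curve that maximizes Youden's J (sensitivity + specificity - 1). The Python sketch below illustrates this on simulated data and should be read as an assumption about the procedure, not the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Hypothetical T90 values for normotensive vs. hypertensive groups (n = 100 each).
t90 = np.concatenate([rng.exponential(2.0, 100), rng.exponential(6.0, 100)])
hypertension = np.concatenate([np.zeros(100), np.ones(100)])

fpr, tpr, thresholds = roc_curve(hypertension, t90)
j = tpr - fpr                        # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))
print(f"ROC-derived cutoff: T90 >= {thresholds[best]:.1f}% (J = {j[best]:.2f})")
```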
The best area under the ROC curve (AUC-ROC) to differentiate between patients with and without hypertension included the following cutoffs: age ≥ 52 years; BMI ≥ 30 kg/m2; AHI ≥ 14 events/h; and T90 ≥ 3%. Our main finding was that nighttime hypoxemia defined as T90 ≥ 3% was independently associated with the development of hypertension in OSA patients. This highlights the importance of nighttime hypoxemia as an independent risk factor for hypertension in patients with OSA, who represent the population of patients treated in a sleep unit. This observation was consistent after adjusting for other covariates (age, sex, BMI, and AHI), which is in line with experimental models that have established the role of hypoxemia as a mechanism of hypertension in OSA. (7,8) Two large studies reported that AHI ≥ 30 events/h and T90 ≥ 12%, (9) or quartiles 3 and 4 of ODI ≥ 4% (ODI4) (10) were independently associated with hypertension: T90 ≥ 12% (OR = 1.46; 95% CI: 1.12-1.88) and ODI4 (OR = 2.01; 95% CI: 1.6-2.5). In a study involving patients with moderate-to-severe OSA in use of CPAP, (8) CPAP was withdrawn for two weeks, and the patients were randomized to receive supplemental oxygen or air (sham) during sleep. Those who received supplemental oxygen had the rise in morning blood pressure virtually abolished. (8) Dean et al. (9) demonstrated that every standard deviation of increment in log-transformed hypoxic burden was associated with a 1.1% increase in systolic blood pressure and a 1.9% increase in diastolic blood pressure among those patients not using antihypertensive medications. Using a large cohort of moderate-to-severe OSA patients in South America, Labarca et al. (10) developed predictive models of cardiometabolic risk from indicators of hypoxemia (T90, minimum SpO2 and ODI) and showed that a T90 > 10% was a predictor of arterial hypertension.
The limitation of the present study was that the diagnosis of hypertension was based on self-reporting, medical history records, or the history of antihypertensive drug use. However, despite this limitation, our observations are in line with the important body of experimental evidence in animals and humans that links intermittent hypoxemia with the development of hypertension.
In conclusion, nighttime hypoxemia defined as T90 ≥ 3% was an independent risk factor for hypertension in a clinical population of patients with suspected sleep apnea. This preliminary finding must be confirmed in prospective longitudinal studies.
Immunogenicity of a plasmid DNA vaccine encoding G1 epitope of bovine ephemeral fever virus G glycoprotein in mice
The aim of this study was to investigate the immunogenicity of a plasmid deoxyribonucleic acid (DNA) vaccine encoding the G1 epitope of bovine ephemeral fever virus (BEFV) G glycoprotein in mice. A plasmid DNA carrying the G1 gene was constructed and designated as pcDNA3.1-G1. The expression of the target gene was confirmed in human embryonic kidney 293 (HEK 293) cells transfected with pcDNA3.1-G1 by indirect immunofluorescent staining. Immunisation experiments were intramuscularly carried out by vaccinating 6-week-old female mice in four groups, including the pcDNA3.1-G1 construct, pcDNA3.1 (+) plasmid alone, BEF-inactivated vaccine and phosphate-buffered saline (PBS) (1X) three times with 2-week intervals. Fourteen days after the last immunisation, the animals were bled and the resulting sera were tested for anti-G1-specific antibodies by immunoblotting analysis, indirect enzyme-linked immunosorbent assay (ELISA) and virus neutralisation (VN) test. Serological assays showed that the pcDNA3.1-G1 construct expressing G1 protein was able to elicit specific antibodies against this antigen. Virus neutralisation test showed that pcDNA3.1-G1 could induce anti-BEFV-neutralising antibodies in mice. Our findings indicated that a new dimension can be added to vaccine studies for bovine ephemeral fever (BEF) using eukaryotic expression plasmids encoding the G1 antigen in the future.
Introduction
Bovine ephemeral fever (BEF) is a viral disease of cattle and water buffalos seen in Africa, the Middle East, Australia and Asia. Infected cattle can show a wide spectrum of clinical signs, including a sudden onset of fever (41 °C - 42 °C) with loss of appetite, increased breathing and heart rate, stiffness, lameness, cessation of rumination and constipation (Walker 2005;Zheng et al. 2009). Bovine ephemeral fever is an economically important disease, which can spread rapidly and lead to considerable losses in the cattle industry, through reduced milk production in dairy herds, loss of condition in beef cattle and the immobilisation of draught animals (Aziz-Boaron et al. 2013;Walker 2005). Bovine ephemeral fever is caused by BEF virus (BEFV) and transmitted through mosquitoes or biting midges. Bovine ephemeral fever virus is classified as a member of the genus Ephemerovirus in the family Rhabdoviridae. Bovine ephemeral fever virus has a bullet-shaped morphology and contains a 14.9 kb single-stranded, negative-sense ribonucleic acid (RNA) genome, which encodes five structural proteins, including a nucleoprotein (N), a polymerase-associated protein (P), a matrix protein (M), a large RNA-dependent RNA polymerase (L) and a glycoprotein (G) spanning the viral envelope, and a non-structural glycoprotein (GNS). G protein is the main protective antigen of the virus and the target of anti-BEFV-neutralising antibodies, and harbours five distinct antigenic sites (G1, G2, G3a, G3b and G4) on its surface (Cybinski et al. 1992;Dhillon et al. 2000;Kongsuwan et al. 1998). Epitope-G1 is a linear site (Y487-K503) in the C-terminal region of the ectodomain (Trinidad et al. 2014) that only reacts with sera against BEFV, but other antigenic sites have cross-reactions with the sera against the related viruses besides BEFV (Yin & Liu 1997).
The prevention and control of BEF infection can be achieved through vaccination and treatment of affected cattle (Aziz-Boaron et al. 2013;Wallace & Viljoen 2005). Various studies have been conducted to develop an efficient vaccine for BEF, including live attenuated, inactivated, subunit G protein-based and recombinant vaccines (Walker & Klement 2015). An effective vaccination has been obtained using the BEFV G glycoprotein split from a semi-purified virus in cattle (Bai et al. 1993). In addition, the BEFV G glycoprotein delivered in recombinant virus vectors has induced specific neutralising antibodies and cell-mediated immune responses in cattle (Hertig et al. 1996;Wallace & Viljoen 2005). Therefore, it appears that the recombinant expressed BEFV G protein may serve as a useful vaccine antigen (Johal et al. 2008). Deoxyribonucleic acid (DNA) immunisation is a promising approach for vaccination by injection of an isolated eukaryotic expression plasmid encoding the antigen (Watts & Kennedy 1999). DNA vaccination has been used successfully to immunise various animal species against many infectious agents and has several advantages over other vaccination approaches (Corr et al. 1996;Fynan et al. 1993;Robinson, Hunt & Webster 1993;Sakaguchi et al. 1996). However, no effort has been made so far regarding the evaluation of the efficacy of a DNA vaccine based on BEFV G glycoprotein against BEF. Hence, the purpose of this study was to investigate the immunogenicity of a plasmid DNA vaccine encoding the G1 epitope of BEF virus G glycoprotein in mice.
Virus, cell lines, bacterial strain and vector
The strain of BEF virus used in this study was procured from Razi Vaccine and Serum Research Institute (Hesarak, Karaj, Iran). Basic local alignment search tool (BLAST) analysis based on G gene sequence showed that this strain had the highest identity with the YHL strain isolated in Japan's Yamaguchi prefecture in 1966. Hamster lung (HmLu-1) cells were used to propagate the BEFV using Roswell Park Memorial Institute (RPMI) medium (Bio Idea, Iran) supplemented with 5% fetal bovine serum (Gibco, UK).
Construction and preparation of expression vector
The 420 base pairs (bp) fragment of BEFV G1 gene was previously cloned into the pcDNA3.1 (+) vector under the control of the human cytomegalovirus (CMV) promoter using the G1-specific primers G1-fwd-KpnI: 5'-GTGGGTACCGCCACCATGGTGAGAGCTTGGTGTGAATACA-3' and G1-rev-BamHI: 5'-CATTGGATCCTCACCAACCTACAACAGCAGATA-3'. Then, the pcDNA3.1-G1 construct containing the 420 bp fragment was transfected into the HEK 293 cell line to assess protein expression. Finally, the expression efficiency was verified by indirect immunofluorescent staining (Pasandideh et al. 2018). After verification of protein expression, the pcDNA3.1-G1 construct was amplified in E. coli DH5α and purified with the Endofree Plasmid Purification Kit (Qiagen, Germany), according to the manufacturer's instructions, and then used for immunisation of mice.
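As a quick computational sanity check of the cloning design, the engineered KpnI (GGTACC) and BamHI (GGATCC) recognition sites can be located within the primer sequences given above. The following minimal Python sketch does exactly that; only the primer strings from the text are used.

```python
# Locate the engineered restriction sites in the primers listed above
# (KpnI recognition site GGTACC; BamHI recognition site GGATCC).
G1_FWD = "GTGGGTACCGCCACCATGGTGAGAGCTTGGTGTGAATACA"   # G1-fwd-KpnI
G1_REV = "CATTGGATCCTCACCAACCTACAACAGCAGATA"          # G1-rev-BamHI

for name, primer, site in [("G1-fwd-KpnI", G1_FWD, "GGTACC"),
                           ("G1-rev-BamHI", G1_REV, "GGATCC")]:
    pos = primer.find(site)
    assert pos != -1, f"{site} missing from {name}"
    print(f"{name}: {site} found at position {pos}")  # positions 3 and 4
```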
Mice and immunisation
Six-week-old female mice of N-MARI strain were intramuscularly inoculated in four groups of five each. Group A mice were immunised with 100 μg/animal of the endotoxin-free pcDNA3.1-G1 construct in an appropriate rate of phosphate-buffered saline (PBS) (1X) in the anterior quadriceps muscle. Group B mice were vaccinated with 200 μL/animal of BEF-inactivated vaccine (Kyoto Biken Laboratories, Japan). Control groups were only inoculated with 100 μL/animal of PBS (1X) or 100 μg/animal of empty pcDNA3.1 (+) plasmid. Immunisation was repeated two more times with 2-week intervals. Fourteen days after the third immunisation, the animals were bled and the resulting sera were tested for anti-G1-specific antibodies by immunoblotting analysis, indirect enzyme-linked immunosorbent assay (ELISA) and virus neutralisation (VN) test. A purified prokaryotic G1 protein with ~18 kDa molecular weight, produced in our previous study, was used as a coating antigen to develop immunoblotting and indirect ELISA in this study.
Detection of anti-G1-specific antibodies by immunoblotting
Immunoblotting analysis was used to investigate the immunogenicity of pcDNA3.1-G1 and detect anti-G1-specific antibodies in the serum of mice. For this purpose, the 18-kDa prokaryotic G1 protein was electrophoresed on 15% sodium dodecyl sulphate (SDS)-polyacrylamide gel and then transferred to a nitrocellulose membrane by electroblotting at 60 volts (V) for 3 hours. The blotted membrane was blocked by PBS with 0.05% Tween-20 (PBST) containing 5% skim milk overnight at 4 °C and then washed three times with PBST. The nitrocellulose membrane was cut into strips and incubated with the mouse sera diluted 1:20 in PBST containing 5% skim milk for 2 h at room temperature, individually. After washing with PBST, an anti-mouse IgG (H+L)-HRP (Bio-Rad Laboratories, USA) diluted 1:3000 in PBST containing 5% skim milk was added to the strips for 1 h at room temperature. After extensive washing, the strips were dipped in PBS containing H2O2, 4-chloro-1-naphthol and methanol to view the result of the reaction. The colour developing reaction was stopped with distilled water.
Evaluation of anti-G1-specific antibody titers by indirect enzyme-linked immunosorbent assay
Anti-G1-specific antibody titers in the serum of each immunised mouse were determined by an indirect ELISA. Ninety-six-well immunoplates were coated with the prokaryotic G1 protein diluted 1:200 in carbonate coating buffer (0.5 M NaHCO3/Na2CO3, pH 9.3) for 16 h at 4 °C. The final concentration of coating antigen was 0.25 μg/well by calculation. After washing three times with PBST to remove the unbound antigen, the plates were blocked with 300 μL of PBST containing 5% skim milk for 3 h at 37 °C. The plates were washed again with PBST and then incubated with the mouse sera (50 μL/well) diluted 1:100 in PBST containing 5% skim milk for 45 minutes at room temperature. After washing as previously mentioned, an anti-mouse IgG (H+L)-HRP (Bio-Rad) diluted 1:3000 in PBST containing 5% skim milk was added for 45 min at room temperature. The plates were washed four times with PBST and finally 50 µL of the chromogen substrate (tetramethylbenzidine 1%, 0.1 M sodium acetate [pH 6] and H2O2 3%) was added to each well and incubated at room temperature in the dark for 10 min.
The reaction was stopped by adding 50 µL of hydrochloric acid (0.1 M) and the absorbance was read immediately at 450 nanometers (nm) by an ELISA spectrophotometer.
Detection of anti-bovine ephemeral fever virus neutralising antibodies
Mouse sera were tested for the presence of anti-BEFV-neutralising antibodies by VN assay. Briefly, the sera were heat-inactivated at 56 °C for 30 min, and then 50 µL of twofold serial dilutions of each serum was mixed with 50 µL of 100 TCID50 of BEFV in a 96-well tissue culture plate and then incubated for 1 h at 37 °C in 5% CO2. After the incubation period, 50 µL of the Vero cell suspension containing 15 000 cells was added to each well and the plate was incubated for 4 days at 37 °C in a humidified incubator with an atmosphere of 5% CO2. After the incubation, the cells were examined for BEFV-specific cytopathic effects (CPEs) using an Olympus IX71 inverted optical microscope (Olympus Australia, Mt. Waverley, Australia).
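Reading a VN titer out of such a twofold dilution series is a simple computation. The Python sketch below uses hypothetical CPE readings and assumes the common convention of reporting the reciprocal of the highest dilution in the unbroken CPE-free run; the exact read-out rule is not stated above, so this is illustrative only.

```python
# Hypothetical CPE readings per serum dilution (True = CPE observed in Vero cells).
readings = {12.5: False, 25: False, 50: False, 100: True, 200: True}

def vn_titer(cpe_by_dilution: dict) -> float:
    """Reciprocal of the highest dilution in the unbroken CPE-free run."""
    titer = 0.0
    for dilution, cpe in sorted(cpe_by_dilution.items()):
        if cpe:
            break
        titer = dilution
    return titer

print(f"VN titer = 1:{vn_titer(readings):g}")  # -> 1:50
```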
Statistical analysis
SAS software (version 9.1; SAS Institute) was used for the analysis of ELISA data. Least significant difference (LSD) test was used to compare the mean of each group of mice, and a p-value of less than 0.01 was considered statistically significant.
Ethical consideration
The authors declare that the project underwent ethical review and was given approval by an institutional animal care and use committee or by appropriately qualified scientific and lay colleagues. The care and use of experimental animals complied with local animal welfare laws, guidelines and policies. Animal studies have been approved by the appropriate ethics committee of the Khuzestan Agricultural Sciences and Natural Resources University (1/411/1189).
Verification of anti-G1-specific antibodies using immunoblotting
Immunoblotting analysis was used to evaluate the immunogenicity of the pcDNA3.1-G1 construct in mice. For this purpose, the 18-kDa prokaryotic G1 protein was blotted onto a nitrocellulose membrane and exposed to serum of inoculated mice, separately. As shown in Figure 2, the appearance of a distinct band with an approximate molecular weight of 18 kDa for immunised mice by pcDNA3.1-G1 and BEF-inactivated vaccine indicated that specific antibodies against BEFV G1 protein were induced in these groups.
Anti-G1-specific antibody titers after DNA vaccination
Two weeks after the last immunisation, anti-G1-specific antibody titers in serum of inoculated mice were assayed using ELISA. It was observed that anti-G1 antibody titers in mice immunised with pcDNA3.1-G1 were significantly higher than those in the control groups receiving empty plasmid or PBS 1X (p < 0.01). The highest anti-G1-specific antibody titers were elicited in mice vaccinated with the BEF-inactivated vaccine (p < 0.01) (Table 1).
Induction of neutralising antibodies against bovine ephemeral fever virus
The presence of anti-BEFV-neutralising antibodies in the sera of immunised mice was investigated by VN assay. The sera collected from mice immunised with pcDNA3.1-G1 and BEF-inactivated vaccine neutralised all the virus activity up to 1:50 dilution and prevented BEFV-specific CPEs from developing in Vero cells (Figure 3). Therefore, our results indicated that immunisation with the pcDNA3.1-G1 construct could elicit neutralising antibody responses against BEF virus. As expected, the CPEs caused by the virus proliferation were observed in the cells treated with the sera collected from the control groups for plasmid backbone and PBS (1X).
Discussion
Natural BEF infection leads to long-term immunity in affected animals (Mackerras, Mackerras & Burnet 1940). Hence, vaccination can be considered an effective approach of prevention against the disease. So far, several studies have been conducted to produce diverse vaccines for BEF, including live attenuated, inactivated, subunit G protein-based and recombinant vaccines. Live attenuated, inactivated and subunit vaccines are being used in the field (Walker & Klement 2015). However, according to our knowledge, no study has been performed to design a DNA vaccine based on the G1 gene for immunisation against BEF, and this was the first study in this area. In this study, a eukaryotic expression construct for the G1 epitope of the BEFV G glycoprotein gene was designed in order to evaluate its immunogenicity and efficacy in mice. Serological assays showed that the pcDNA3.1-G1 construct expressing G1 protein was able to induce specific immunity and produce antibodies against this antigen. However, the anti-G1-specific antibody titers against pcDNA3.1-G1 were significantly lower than those against the BEF-inactivated vaccine. This may be because only one antigenic site of the BEFV G glycoprotein gene was used in the pcDNA3.1-G1 construct. The virus neutralisation test showed that pcDNA3.1-G1 could induce anti-BEFV-neutralising antibodies in immunised mice.
In previous studies, it was found that BEFV G protein could induce virus-specific neutralising antibodies and confer passive protection against intracerebral infection of suckling mice (Cybinski et al. 1990) and protect cattle against experimental intravenous BEFV challenge (Hertig et al. 1996;Johal et al. 2008;Uren et al. 1994). The nucleotide and amino acid sequences of the G1 antigenic site of G glycoprotein have been highly conserved among all isolates, except for an amino acid substitution at position 499 for a few strains (Kato et al. 2009;Zheng & Qiu 2012). However, amino acid variations detected in the main neutralisation sites (G1, G2 and G3) of the G protein did not affect the neutralisation properties of these epitopes (Trinidad et al. 2014). High immunogenicity of G glycoprotein and the fact that G1 epitope has been genetically and antigenically conserved among various isolates of BEFV allow the use of G1 as a useful vaccine antigen. Therefore, G1 antigen was chosen for application as a possible DNA vaccine for immunisation of mice in this study. Today, DNA vaccines are widely considered because of many advantages, such as safety, stability, low costs and longer immunogenicity (Porter & Raviprakash 2017).
The induction of anti-BEFV-neutralising antibodies in mice by the pcDNA3.1-G1 construct expressing G1 protein in our research was consistent with the immunogenicity of subunit and recombinant vaccines based on G glycoprotein in previous studies. For example, vaccination using the BEFV G protein split from a semi-purified virus induced a neutralising antibody response and protected 50% of cattle in China (Bai et al. 1993). In Australia, administration of a G protein subunit vaccine with Quil A adjuvant protected 100% of cattle against experimental challenge (Uren et al. 1994). Vaccination with the recombinant New York Board of Health (NYBH) strain of vaccinia virus expressing the BEFV G protein could elicit specific neutralising antibodies in cattle, but the protection experiment was inconclusive (Hertig et al. 1996). In the same experiment, four doses of the Neethling strain of lumpy skin disease virus expressing the BEFV G protein induced specific neutralising antibody and cell-mediated immune responses in cattle but protection failed 10 weeks after the last dose (Wallace & Viljoen 2005). However, there are few reports about the assessment of the vaccines under conditions in the field and their usage rates are often low. It appears that protective immunity for most of these vaccines continues for a limited period and their efficiency may be insignificant unless additional booster doses are administered at intervals of 6 months to 1 year (Walker & Klement 2015). Therefore, other advanced technologies are needed to decrease the required number of doses and prolong the duration of protection. On the other hand, cell-mediated responses may also be involved in protection against BEF, especially for the longer-term sequelae that occur in some animals (Della-Porta & Snowdon 1979;Walker & Klement 2015). Regarding the ability of DNA vaccines to induce protective humoral and significant cellular immune responses to the expressed antigens (Khan 2013), it seems DNA vaccination can be an appropriate approach against BEF. However, as found in this study, eukaryotic expression plasmids encoding the antigens usually induce weaker responses compared to live or inactivated pathogen immunisation (Shah et al. 2011). We suggest the use of genetic adjuvants such as cytokine genes with the G1 antigen to improve the efficacy of the pcDNA3.1-G1 construct in future studies.
Conclusion
This study demonstrated that the pcDNA3.1-G1 construct could induce immunity and protection against the BEFV in an animal model. Our findings indicated that a new dimension can be added to vaccine studies for BEF using eukaryotic expression plasmids encoding the G1 antigen in the future. Obviously, further studies are needed to improve this type of vaccines and obtain more comprehensive information about their performance in the main hosts.
Lucid dreaming increased during the COVID-19 pandemic: An online survey
The COVID-19 pandemic changed people’s lives all over the world. While anxiety and stress decreased sleep quality for most people, an increase in total sleep time was also observed in certain cohorts. Dream recall frequency also increased, especially for nightmares. However, to date, there are no consistent reports focusing on pandemic-related changes in lucid dreaming, a state during which dreamers become conscious of being in a dream as it unfolds. Here we investigated lucid dreaming recall frequency and other sleep variables in 1,857 Brazilian subjects, using an online questionnaire. Firstly, we found that most participants (64.78%) maintained their lucid dream recall frequency during the pandemic, but a considerable fraction (22.62%) informed that lucid dreams became more frequent, whereas a smaller subset (12.60%) reported a decrease in these events during the pandemic. Secondly, the number of participants reporting lucid dreams at least once per week increased during the pandemic. Using a mixed logistic regression model, we confirmed that the pandemic significantly enhanced the recall frequency of lucid dreams (p = 0.002). Such increase in lucid dreaming during the pandemic was significantly associated with an enhancement in both dream and nightmare recall frequencies, as well as with sleep quality and symptoms of REM sleep behavior disorder. Pandemic-related increases in stress, anxiety, sleep fragmentation, and sleep extension, which enhance REM sleep awakening, may be associated with the increase in the occurrence of lucid dreams, dreams in general, and nightmares.
Introduction

We hypothesized that changes in lucid dreaming during the COVID-19 pandemic would be associated with an increase in dream recall frequency, nightmares, anxiety, sleep fragmentation, and sleep extension.
Participants
The study enrolled a convenience sample of 1,903 respondents from all the 26 Brazilian states and the Federal District. Participants had access to the survey through university communication channels, links on social media, and email groups, during the period from May to June 2020. We excluded from the statistical analysis underage (<18 years old) volunteers (n = 35; 1.84%) and those participants who classified their genders as other than male or female (n = 11; 0.6%), due to the very small sample size of this particular cohort. Thus, the final sample consisted of 1,857 respondents (75% women and 25% men; mean age = 32.7 ± 12.1 years; S1 Fig in S1 File). Participation in the study was voluntary and unpaid. All respondents completed a term of consent, which described the study in detail and provided ethical permission. The need for consent was waived by the ethics committee of the Brain Institute of the Federal University of Rio Grande do Norte, Natal, Brazil, and the study was conducted according to the principles expressed in the Declaration of Helsinki.
Questionnaire
We used the International COVID Sleep Study Questionnaire (ICOSS) [43], which aimed to understand the possible effects of COVID-19 on sleep, dream, and wakefulness. The questionnaire included the following sections: socio-demographic data (e.g. state of residence, age, gender, level of education, race, and ethnicity); pandemic and COVID-19 (e.g. was financial status affected by the pandemic?; did the respondent have COVID-19?; was the respondent living with other people during the period of social isolation?); psychological aspects (e.g. anxiety and depression); habits and general health (e.g. smoking and drinking habits; diagnosed diseases); sleep (e.g. at what time the respondent usually falls asleep?; how is the quality of sleep?); dreams (e.g. with which frequency the respondent remembers dreams); sleep and dreams before and during the pandemic (e.g. how often the respondent has nightmares currently vs. during the pandemic?). An additional question about lucid dreams was included in the Brazilian version of the questionnaire, which asked about people's frequency of lucid dreaming recall before and during the pandemic.
Descriptive analysis
The descriptive analysis was performed using RStudio, and graphics were created from correlation tests. The variables were coded on a Likert scale, and those analyzed to verify a possible correlation with lucid dream recall frequency were divided into three types: 1) Sociodemographic variables: gender (male; female), age (>18 y), marital status, number of residents in the house, among others; 2) Pandemic variables: stress, sleep extension, anxiety, etc.; 3) Sleep variables: dream recall frequency, nightmare recall frequency, REM sleep behavior disorder symptoms, sleep quality, and others.
Pre-processing
At this stage, redundant information was pruned out from the dataset according to the distance in a hierarchical clustering of variables. Whenever the similarity between a pair of variables turned out to be too high, the variable with lower general polychoric correlations with the remaining variables was kept in the dataset, whereas the other was excluded. The amount of missing data was used as an additional criterion to filter redundant information, except when deductive replacement of non-available information was possible. The question addressing the occurrence of diseases was segregated into individual binary variables for each disorder of interest. In addition, the maximum number of residents in the same household was truncated at 10.
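A minimal sketch of these pre-processing steps is given below. The variable names and toy data are hypothetical, and the plain pairwise-correlation check merely stands in for the authors' hierarchical clustering on polychoric correlations.

```python
import pandas as pd

# Toy data with hypothetical variable names.
df = pd.DataFrame({
    "residents": [1, 3, 14, 2],
    "dream_recall": [2, 4, 3, 1],
    "dream_recall_alt": [2, 4, 3, 2],   # near-duplicate of dream_recall
})

# Truncate household size at 10 residents, as described above.
df["residents"] = df["residents"].clip(upper=10)

# Drop the redundant member of a highly similar pair of variables.
if df["dream_recall"].corr(df["dream_recall_alt"]) > 0.9:
    df = df.drop(columns=["dream_recall_alt"])
print(df)
```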
To investigate the effect of the COVID-19 pandemic, we dichotomized the frequency of lucid dreams into two categories: 1 - Low frequency (people reporting lucid dreaming less than monthly); and 2 - High frequency (at least monthly, i.e., the union of categories "less than weekly", "1-2 times a week", "3-5 times a week" and "almost every night"), according to the Gackenbach criterion [53]. In turn, when the analysis aimed to test the factors influencing the change in lucid dream frequency over the course of the pandemic, the response reflected whether the recurrence of lucid dreams increased or not during the pandemic, as compared to before COVID-19 had emerged.
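The dichotomization itself reduces to a category lookup, as in the following Python sketch; the category labels are those quoted above, while the function name is ours.

```python
# Category labels as used in the questionnaire analysis above.
HIGH_FREQUENCY = {"less than weekly", "1-2 times a week",
                  "3-5 times a week", "almost every night"}

def gackenbach_dichotomy(response: str) -> int:
    """1 = high frequency (lucid dreams at least monthly), 0 = low frequency."""
    return int(response in HIGH_FREQUENCY)

print(gackenbach_dichotomy("1-2 times a week"))   # 1
print(gackenbach_dichotomy("less than monthly"))  # 0
```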
Statistical analysis
To test the effect of the pandemic on the frequency of lucid dream recall, the recurrence of these oneiric events was modeled by a mixed logistic regression model as a function of the pandemic while controlling for relevant demographic predictors. In this context, the respondents were considered the random effect, with an individual intercept fitted to each participant. For the analysis of the factors influencing an increase in the frequency of lucid dreams during the pandemic, we fitted a logistic regression model to predict lucid dream frequency enhancement in terms of characteristics and events related to sleep. The statistical significance of the coefficients from the model describing each effect was tested by the Wald test, whose significance level was kept at 0.05. The presented models, which provide statistical grounds for the inference of effects, were validated by diagnostic measures, confirmatory and residual analysis. All statistical analysis was performed in R software.
Sociodemographic characterization of the sampled population
The analyses were based on the 1,857 subjects who voluntarily replied to the online questionnaire and met eligibility criteria. The demographic profile of the sample is shown in Table 1.
Descriptive analysis of lucid dreaming recall frequency
As the main substratum for the present work, we report the frequency of lucid dreams in Fig 1. Accordingly, the number of respondents declaring to have lucid dreams progressively decreased as recall frequency increased, i.e., whereas the majority declared having lucid dreams less than monthly, few subjects declared having lucid dreams almost every night (Fig 1A). That is, the number of people declaring to have lucid dreams almost every night (47 before and 76 during the pandemic) was much smaller than the number of people declaring to have lucid dreams less than once a month (935 before and 876 during the pandemic). This analysis evidences the rarity of the phenomenon of lucid dreaming. However, a different scenario appears when using the Gackenbach criterion [53] of classification of low and high frequency of lucid dreams. According to this criterion, we dichotomized the data into two categories: 1 - Low frequency (people having lucid dreams less than monthly); and 2 - High frequency (at least monthly, i.e., the union of categories "less than weekly", "1-2 times a week", "3-5 times a week" and "almost every night"). With this approach, we observed that the proportion of high-frequency lucid dreamers before the pandemic (50.4%) was almost the same as, and slightly higher than, that of low-frequency lucid dreamers before the pandemic (49.6%) (Fig 1C).
Besides that, in contrast with remembrance before the pandemic, the total number of participants reporting having lucid dreams at least once per week increased during the pandemic (Fig 1A). In line with this, a considerable fraction (22.62%) of them informed that lucid dreams became more frequent, although most participants (64.78%) maintained the declared frequency of lucid dreams, and a smaller subset (12.60%) rather reported a decrease in such oneiric events during the COVID-19 pandemic (Fig 1B). It is worth noting that this qualitative exploratory analysis does not take into account the correlation structure of the data (i.e., its repeated-measures nature). Thus, at least in qualitative terms, it sounds reasonable to hypothesize that the pandemic increased lucid dream recall frequency. This was confirmed when we used the Gackenbach criterion [53] of classification of low and high frequency of lucid dreams, as shown in Fig 1C: the percentage of frequent lucid dreamers was slightly higher during the pandemic (52.8%) compared to before the pandemic (50.4%).
The effect of the pandemic on the frequency of lucid dreams
To test this hypothesis, we fitted a mixed logistic regression model that could account for the putative effect of the pandemic on the frequency of lucid dreams while controlling for (otherwise) confounding variables of a demographic nature. Such statistical modeling was supported by the descriptive analysis (Fig 1) and, for this purpose, the recurrence of lucid dreams was measured on a binary scale: low (less than monthly) or high (at least monthly). In this context, the effect of the pandemic on lucid dream recall frequency was found to be statistically significant (Table 2; Wald test: estimate = 0.3031, std. error = 0.1019, z value = 2.976, p = 0.00292). In other words, the proportion of subjects recalling their lucid dreams more often (at least monthly) was found to increase during the pandemic, an increment supported by its statistical significance. Accordingly, the strength of the association between the pandemic and lucid dream frequency could be measured by an odds ratio equal to 1.3541, indicating that the odds for a high frequency of lucid dreams during the pandemic were estimated as 35.41% higher than the equivalent odds before COVID-19. This scenario emerges as a consequence of a modest (although significant) increase in the proportion of respondents with a high lucid dream frequency during the pandemic as compared to before it (Fig 2A). Since only demographic predictors were controlled, this association measure can represent a quantification of the total effect of the pandemic on lucid dream frequency.

Fig 1. (A) The number of subjects decreases as the frequency of lucid dream recall increases, but the number of frequent lucid dreamers increased during the pandemic. (B) Most participants maintained the level of declared frequency of lucid dreams despite the pandemic, but a considerable fraction of them reported that lucid dreams became more frequent. (C) When using the Gackenbach criterion for low and high frequencies of lucid dreaming, the proportion of frequent and infrequent lucid dreamers is almost the same, and the percentage of frequent lucid dreamers is slightly higher during the pandemic.

On modeling the effect of the pandemic, the only demographic predictors with enough influence on the frequency of lucid dreams to demand being controlled were the respondents' age and the household size (in terms of the number of residents). In this context, the impact of age on the frequency of lucid dreams was found to be statistically significant (Table 2; Wald test: estimate = -0.0448, std. error = 0.0090, z value = -4.974, p = 0.00000066). Accordingly, when other predictors were fixed, the odds for a high frequency of lucid dreams decreased by 4.38% for every one-year increment in age (odds ratio = 0.9562). The magnitude of this effect is depicted in Fig 2B in terms of the proportion of high-frequency lucid dreamers. In addition, the influence of the household size on lucid dream frequency was also statistically significant (Table 2; Wald test: estimate = -0.1745, std. error = 0.0748, z value = -2.332, p = 0.01971). In this case, while other predictors are kept constant, the odds for a high recurrence of these oneiric events decrease by 16.01% for every unit increase in household size (odds ratio = 0.8399). This effect is represented in Fig 2C, suggesting a subtle decrease in the proportion of high-frequency lucid dreamers as the number of residents increases.
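The reported odds ratios follow directly from exponentiating the estimates in Table 2, which can be verified in one line each:

```r
# Odds ratio = exp(estimate); percent change in the odds = 100 * (OR - 1)
exp(0.3031)   # pandemic:       ~1.3541  (+35.41% odds of high frequency)
exp(-0.0448)  # age:            ~0.9562  (-4.38% odds per year of age)
exp(-0.1745)  # household size: ~0.8399  (-16.01% odds per extra resident)
```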
Other demographic predictors, such as ethnic group, gender, and perception of economic loss because of the pandemic, lacked enough evidence to endorse their statistical significance and hence were pruned from the statistical model for the sake of parsimony. In turn, education and marital status could not be jointly tested along with age because of considerable correlations among these variables (S2 Fig in S1 File). Finally, the effect of the respondents (i.e., the random component of the statistical model) was estimated to vary in the population with a standard deviation equal to 3.256. The statistical model substantiating the inference was validated by residual analysis (S3 Fig in S1 File).
Sleep variables influencing the increase in lucid dreams frequency
After corroborating the statistical significance of the association between the pandemic and the frequency of lucid dreams, we investigated factors that could help to elucidate the increment of these oneiric events. Besides the information from the exploratory analysis (S4 Fig in S1 File), we used AIC (Akaike information criterion)-based stepwise selection to choose the predictors for a logistic regression model predicting the probability of an increased frequency of lucid dreams during the pandemic. Pairs of predictors with a short distance in the hierarchical clustering (S5 Fig in S1 File) or with substantial polychoric correlations were avoided to protect the inference against bias from multicollinearity.
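A sketch of this selection step in R follows; the data and column names are hypothetical stand-ins chosen only to mirror the predictors discussed below.

```r
# AIC-based stepwise selection for the model predicting an increase in
# lucid-dream frequency during the pandemic (hypothetical data).
set.seed(2)
n  <- 300
lv <- c("stable", "reduced", "increased")  # "stable" as the reference level
survey <- data.frame(
  increase_ld          = rbinom(n, 1, 0.23),
  nightmare_change     = factor(sample(lv, n, replace = TRUE), levels = lv),
  dream_recall_change  = factor(sample(lv, n, replace = TRUE), levels = lv),
  sleep_singing_change = factor(sample(lv, n, replace = TRUE), levels = lv),
  sleep_quality        = factor(sample(c("neutral", "poor", "good"), n,
                                       replace = TRUE),
                                levels = c("neutral", "poor", "good"))
)
full <- glm(increase_ld ~ ., data = survey, family = binomial)
step(full, direction = "both")  # the default penalty k = 2 corresponds to AIC
```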
The resulting model has its coefficients exhibited in Table 3. Accordingly, the increase in the recurrence of lucid dreams during the pandemic was found to be associated with an increment in the frequency of remembering nightmares (Table 3; Wald test: estimate = 1.1877, std. error = 0.1384, z value = 8.584, p < 2e-16). Among the respondents who remembered less than one nightmare a month, 10.81% experienced an increment in lucid dream frequency. This proportion progressively increased as remembering nightmares became more usual, reaching a maximum of 41.62% among participants whose nightmares were remembered three to five times a week. Among subjects declaring an (almost) daily frequency of nightmares, the proportion that experienced an increase in the frequency of lucid dreams was slightly lower, 37.78%. The strength of this association, measured by an odds ratio, is equal to 3.2795. In this context, the odds for an increase in lucid dream frequency were 227.95% higher when the recurrence of nightmares also increased during the pandemic, compared to the odds among respondents whose frequency of nightmares remained stable over the same period. This association is illustrated in Fig 3A, which indicates that the proportion of respondents declaring an increased frequency of lucid dreams was substantially greater (3.62 times higher) among those who also experienced an increment in remembering nightmares, compared to peers whose frequency of nightmares remained stable during the pandemic. The association between an increased recurrence of lucid dreams and nightmare frequency lacked enough evidence to endorse its statistical significance when respondents declaring a reduction in nightmare frequency were compared to those reporting stability during the pandemic (Table 3; Wald test: estimate = 0.3309, std. error = 0.2805, z value = 1.180, p = 0.23807).
In addition, an increment in the frequency of remembering dreams during the pandemic was also found to be associated with an increase in the recurrence of lucid dreams over the same period (Table 3; Wald test: estimate = 1.3511, std. error = 0.1388, z value = 9.38, p < 2e-16). Among participants reporting remembering less than one dream a month, 9.35% experienced an increase in lucid dream frequency during the pandemic. This proportion rose to 32.45% among those who remembered three to five dreams a week, then declined to 26.50% for participants remembering dreams (almost) daily. The association was characterized by an odds ratio equal to 3.8617, indicating that the odds for an increased lucid dream frequency were 286.17% higher in cases where remembering dreams became more usual during the pandemic, compared to subjects whose frequency of recalling dreams remained unchanged. On the other hand, a reduction in the frequency with which dreams were remembered during the pandemic had no statistically significant consequence for the probability of an increased recurrence of lucid dreams (Table 3; Wald test: estimate = 0.3046, std. error = 0.1985, z value = 1.534, p = 0.12503). As a result, when compared to participants whose remembrance of dreams remained constant during the pandemic, the proportion of respondents declaring an increased lucid dream frequency was considerably (3.64 times) higher among those who also experienced an increment in remembering dreams in that period (Fig 3B). In turn, the difference in this proportion was much less expressive between the sets of respondents with a reduced and a stable frequency of remembering dreams during the pandemic.
Likewise, we could only find enough evidence to corroborate an association between an increase in lucid dream frequency and the recurrence of singing during sleep in cases where this behavior increased during the pandemic (Table 3; Wald test: estimate = 1.2010, std. error = 0.2288, z value = 5.249, p = 1.57e-07), but not when the frequency of sleep singing decreased (Table 3; Wald test: estimate = -0.3664, std. error = 0.3768, z value = -0.972, p = 0.33083). Thus, we have no evidence that the odds ratio departs from unity when respondents experiencing a reduction in the frequency of sleep singing are contrasted with those who declared stability for this behavior during the pandemic, in terms of the probability of an increased recurrence of lucid dreams. In turn, the odds for an enhanced lucid dream frequency were found to be 232.34% higher in cases where the frequency of singing while sleeping increased during the pandemic, compared to peers who did not experience changes in this behavior over the same period (odds ratio = 3.3234). This association is reflected in Fig 3C, which indicates that the proportion of respondents declaring an increased frequency of lucid dreams during the pandemic was much (about 3.07 times) higher among those who also experienced an increment in the frequency of sleep singing.
Lastly, sleep quality was also found to be significantly associated with lucid dream frequency. However, this association appears when good sleep is compared to neutral quality (Table 3; Wald test: estimate = -0.5064, std. error = 0.1596, z value = -3.173, p = 0.00151). In this context, it is worth noting that the five-point scale used to assess sleep quality was compressed into a three-point scale while preserving the neutral point. Thus, the categories for poor and considerably poor sleep were collapsed into poor, in the same way that good and considerably good were collapsed into good. Under these circumstances, the odds for an increased frequency of lucid dreams were 39.73% lower among respondents declaring good sleep during the pandemic than the corresponding odds for subjects reporting neutral sleep over the same period (odds ratio = 0.6027). In turn, when compared to neutral quality, any consequence of poor sleep lacked evidence to endorse its statistical significance (Table 3; Wald test: estimate = -0.2532, std. error = 0.1564, z value = -1.619, p = 0.10540), hence keeping the corresponding odds ratio at unity. This scenario is illustrated in Fig 3D, where differences in the proportion of respondents declaring an increased frequency of lucid dreams become noticeable only when the other categories of sleep quality are contrasted with good sleep (about a 45% reduction).
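The scale compression described here amounts to a simple recoding, e.g. in R (with hypothetical level labels):

```r
# Collapse the five-point sleep-quality scale into three points,
# preserving the neutral midpoint.
sq5 <- c("considerably poor", "poor", "neutral", "good", "considerably good")
sq3 <- ifelse(sq5 %in% c("considerably poor", "poor"), "poor",
       ifelse(sq5 %in% c("good", "considerably good"), "good", "neutral"))
table(sq5, sq3)  # each original level maps to exactly one collapsed level
```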
When all these predictors (changes in the frequency of recalling nightmares, remembering dreams, singing during sleep, and sleep quality during the pandemic) were controlled, respondent's age and the number of residents lacked enough evidence to support their statistical significance, hence were removed from the model for the sake of parsimony. Needless to say, as compared to the model introduced in Table 2, the model described in Table 3 addresses a different hypothesis concerning a different aspect of the relation between the pandemic and lucid dreams. Thus, it is not surprising that different predictors were found to be relevant for these models. In addition, statistical models must be appreciated in the context of the limited information provided by the sample. Under these circumstances, each model aims to make the best use of available information to incorporate the predictors with the most prominent effect so as to describe them, because such predictors contribute most to the understanding of the phenomenon. In this regard, when a predictor is not included in a model, it does not imply that it has no effect, but rather it means that its effect is smaller (relative to the natural variability) and, after identifying the big effects, the evidence provided by the sample turned out to be insufficient to endorse its statistical significance. Finally, it is worth noting that the statistical model (introduced in Table 3) was further validated according to diagnostic measures (S7 Fig in S1 File), confirmatory (S1 Table in S1 File), and residual analyses (S8 Fig in S1 File), thus endorsing the inferences derived from it.
Discussion
We observed that a considerable fraction of the subjects reported that lucid dreams became more frequent during the pandemic. In line with that, the number of participants reporting lucid dreams at least once per week increased during the pandemic. Using a mixed logistic regression model, we confirmed that the COVID-19 pandemic significantly increased lucid dreaming recall frequency. We also observed that this increase in lucid dreaming during the pandemic was significantly associated with an enhancement in both dream and nightmare recall frequencies, as well as with sleep quality and symptoms of REM sleep behavior disorder. To our knowledge, this is the first consistent report focused on lucid dreaming during the COVID-19 pandemic.
Lucid dreaming recall frequency
As expected, the number of respondents reporting lucid dreams progressively decreased with frequency. That is, the number of people reporting lucid dreams almost every night was much smaller than the number reporting lucid dreams less than once a month (Fig 1A). A similar pattern of lucid dreaming recall frequency was observed in a German sample [54] and in a meta-analysis conducted by Saunders et al. [55], which suggested that 55% of the world population has had a lucid dream episode at least once in a lifetime, while only 23% reported experiencing lucid dreaming at least once a month. It is essential to note that the scales used in these studies differ from the scale used here. In our scale, the lowest frequency considered, "less than monthly", represents a large window, because it does not allow us to distinguish clearly between rare lucid dreamers (e.g., one lucid dream episode per year, or two in a lifetime) and moderate lucid dreamers (e.g., a lucid dream every two or three months). Despite that, it is still clear that the number of subjects decreases as the frequency of lucid dream recall increases (Fig 1A).
Nevertheless, using the Gackenbach criterion [53] for classifying low and high frequencies of lucid dreams (Fig 1C), we observed that the proportion of high-frequency lucid dreamers before the pandemic (50.4%) was almost the same as that of low-frequency lucid dreamers (49.6%). Thus, the proportion of high-frequency lucid dreamers in our study is much higher than the expected incidence of 23% reported by Saunders et al. [55]. In that meta-analysis, only one study, Stumbrys et al. [56], presented a prevalence of frequent lucid dreamers higher than 32%, at 49.8%. In a previous study with a Brazilian sample, we observed that about 16% of participants experienced lucid dreaming every week or almost every day [57].
We consider two main hypotheses to explain the high incidence of frequent lucid dreamers in the present study. First, our sample consists mainly of young women, who comprise the cohort with the highest dream recall frequency, which is closely associated with high lucid dreaming recall frequency [58]. Women are also more prone to report dreams and tend to be more interested in dreams than men [59]. Second, since respondents accessed the survey through university communication channels, social media, and email groups, the recruitment method could have biased our sample: the people who agreed to participate are probably interested in dreams and lucid dreaming, and it is known that people more interested in dreams also tend to have more lucid dreams [60].
The pandemic effect
We found that the COVID-19 pandemic significantly increased lucid dreaming recall frequency. On modeling the effects of the pandemic, the only demographic predictors with enough influence on the frequency of lucid dreams to demand being controlled were the respondents' age and the household size, in terms of the number of residents, both having a statistically significant influence on the frequency of lucid dreams (Table 2, Fig 2).
As far as we know, there are no previous consistent studies focused on the investigation of lucid dreams during the pandemic. Nevertheless, Scarpelli et al. [24] investigated the impact of the end of COVID confinement on dreams in an Italian sample and found an increase in lucid dreaming frequency during the pandemic, compared to the end of the lockdown. Scarpelli et al. [61] also investigated the dream activity in a sample of narcoleptic patients in Italy, during the COVID-19 lockdown, and found higher lucid dreaming frequency compared to controls. However, the authors found no evidence of the influence of the pandemic on these changes in lucid dreaming frequency.
There are many elements of the COVID-19 pandemic that could be driving this increase in lucid dreaming. The pandemic has been a period characterized by social isolation, fear of one's own death, and fear of losing family and friends, and is thus characterized by increased stress levels [5-7]. Studies conducted during the pandemic suggest that the higher levels of stress and tension had a major influence on patterns of sleep and dreaming [32,33], including heightened dream recall frequency [34,35,61], increased nightmare frequency [25], and more intense and negative dreams [41].
Bottary et al. [62] reported anecdotal evidence regarding an increase in dream recall frequency and more vivid dreams during the pandemic and hypothesized that it may be attributable to increased sleep fragmentation caused by the sleep extension many people are experiencing. A recent study identified that 73.9% of people with low levels of stress during the lockdown experienced an increase in the duration of sleep when compared with the pre-pandemic period [25]. Some studies, however, bring results contrary to those obtained so far.
Brand et al. [63], for example, found that increased dream recall frequency was independently predicted by factors such as female gender, sleep quality, and creativity, whereas perceived stress, awakenings during the night, and sleep duration had no predictive value.
We previously observed, also in a Brazilian sample but not in a pandemic scenario, that stress and sleep with no predetermined time to wake up were pointed out by the subjects as some of the main factors influencing the occurrence of lucid dreaming [57]. This could be explained by an increase in awakenings from REM sleep, which is associated with stress [64]. Due to the lockdowns, the consequent interruption of school classes, and the adoption of work-from-home schedules, some people are spending more time at home and thus sleeping more [16,23]. This may have increased the duration of REM sleep, which is the sleep stage most associated with lucid dreams [42,43], although lucid dreams can also happen during the N1 and N2 sleep stages [65]. Thus, stress and sleep extension likely explain the increase in lucid dreaming frequency during the COVID-19 pandemic. In the model presented here, however, stress and total time slept had no statistically significant effect on the increase in the frequency of lucid dreams. Despite that, there is a possibility that stress and total time slept interfere with the frequency of lucid dreams indirectly, through their influence on sleep variables such as dream and nightmare recall frequencies and sleep quality. Nevertheless, there is no concrete data to sustain the hypothesis of an indirect association between stress, time slept, and lucid dreaming recall frequency.
Time in quarantine and knowing someone with COVID-19 also appear to have no association with the frequency of lucid dreams. The number of residents in the same household correlated negatively with lucid dream frequency, but this appears to play no significant role in the pandemic-related increase in lucid dream frequency once the sleep variables were controlled. Given the circumstances, it remains unclear which particular pandemic factors drive the increase in the frequency of lucid dreams. The variables that were significantly associated with this increase are discussed below.
Factors influencing lucid dream recall frequency: Sleep variables as a bridge for the pandemic effect
The following variables showed an association with the increase in lucid dreaming recall frequency during the pandemic: dream recall frequency, nightmare recall frequency, singing during sleep (as a possible symptom of REM sleep behavior disorder), and sleep quality (Table 3, Fig 3). It is important to look at these factors carefully in order to explain their associations with lucid dream recall frequency, since they are also closely related to one another and bear a complex (and not well understood) relation to each other.
We expected that dream and nightmare recall frequencies would be the main factors (in the dataset) influencing the probability of an increment in lucid dreaming recall frequency during the pandemic. This expectation is reinforced by the hierarchical clustering of variables, showing that (along with respondents' age and the number of residents) dream and nightmare recall frequencies have the shortest distances to the increase in lucid dream recall frequency (S5 Fig in S1 File). In other words, compared to the binary scale measuring the increase in lucid dreams during the pandemic, dream and nightmare recall frequencies present the highest similarities, hence carrying more similar information and a higher capacity for prediction. In the next sections, we discuss the influence of these sleep variables on lucid dreaming frequency in the context of the COVID-19 pandemic.
Dream recall frequency.
In our study, we found that the increment in dream recall frequency during the pandemic was associated with an increase in the recurrence of lucid dreams over the same period. Other studies also reported an increase in dream recall during the pandemic [34,35,37]. The association between dream and lucid dream recall has already been extensively documented, i.e., the more a person remembers dreams in general, the more they will be able to remember lucid dreams (for reviews, see [56,58,60,66,67]). An example of this is the dream diary, which consists of keeping a daily report of the dreams one remembers having during the night [68]. The dream diary is an instrument primarily designed to increase dream recall and has become one of the most used and most efficient methods to induce lucid dreams. In accordance with that, Aspy [69] reported a significant increase in lucid dreaming frequency in a one-week diary compared to the participants' retrospectively estimated lucid dreaming frequency, i.e., focusing on dreams not only increased dream recall, but also lucid dreaming recall.
However, studies suggest that the use of a dream diary only helps to increase lucid dreaming frequency significantly if the person intends to enhance lucid dreaming [69,70]; i.e., an increase in dream recall frequency alone is not sufficient to significantly increase lucid dreaming frequency: the intention to have a lucid dream is also essential. In that manner, we can say that a higher dream recall frequency may be a sign of higher interest in the dream world. People who are more interested in their own dreams may spend more time trying to remember them [71,72], including lucid dreams. Since our study relied on volunteers who spontaneously agreed to participate in a survey they found online, there is a possibility that our sample is self-selected for people who are interested in dreams, which partly explains the association between dream and lucid dream recall frequencies found in the present study.
Finally, an exploratory evaluation of our data suggests that the frequency of remembering dreams is related to the recurrence of lucid dreams. Especially during the pandemic, the proportion of respondents who declared an increment in lucid dream frequency gradually increased as remembering dreams became more usual. Among participants reporting remembering less than one dream a month, 9.35% experienced an increase in lucid dream frequency during the pandemic. This proportion rose to 32.45% among those who remembered three to five dreams a week, then declined to 26.50% for participants remembering dreams (almost) daily. Furthermore, our sample was mainly constituted of women, who are the group with higher dream recall [41,58,73-76], mainly when undergoing stress [77]. However, in the present study, gender lacked enough evidence to endorse its statistical significance regarding the increase of lucid dreaming during the pandemic.

Nightmare recall frequency.

We found a significant association between the increase in nightmare recall frequency and lucid dreaming recall frequency during the pandemic. The association between those variables is already known [78-81], and their relationship could be mediated by dream recall frequency [81]. Nightmares may even trigger lucidity [82,83]: when subjects are experiencing a nightmare, they try to figure a way out of it, and the thought "this is just a dream" is a relieving strategy to get through a nightmare, since it consists of realizing they are dreaming [81,84]. Besides that, heightened physiological arousal during sleep has been reported for both nightmares [85] and lucid dreaming [86,87]. Thus, physiological arousal seems to be an important intervening factor between nightmare and lucid dreaming recall frequencies.
Furthermore, higher nightmare recall is associated with emotionally relevant events for the dreamer, such as stress [88-90] and trauma [91]. Certainly, the collective trauma generated by the COVID-19 pandemic influenced the increase of this phenomenon in the population. Accordingly, an increased frequency of nightmares was found during the pandemic [92], as well as an increased frequency of threatening events in dreams [93]. Both studies investigated patients with PTSD whose symptoms had previously (before the pandemic) been in remission, and one found the re-emergence of PTSD symptoms [92]. Nightmares are considered one of the most frequent symptoms of PTSD, affecting up to 80% of patients [94]. In the context of PTSD, the constant interruption of REM sleep may be associated with an increase in nightmares and thus inadequate processing of emotional experiences, since REM sleep is associated with a reduction in subjective emotional intensity the next day [95]. It is important to note that the increase in nightmares did not necessarily involve COVID-19 themes, but rather pre-existing traumas.
According to other authors, however, the contents and emotions of dreams seem to be related to the difficulties imposed by the new reality of the pandemic [32,33,96,97]. In this context, some studies have shown that women respond more intensively to stressful events, also tending to incorporate these events more into dreams [73,98]. Some studies found that, during the pandemic, female participants had considerably high rates of negative feelings and emotions (such as anxiety, anger, and sadness) in their dreams [99,100]. In a study carried out with students at the University of Toronto, there was an increase in nightmares among women during the pandemic [101]. This study drew attention to the fact that the aggressive content of dreams increased. Another stressful factor for women is domestic violence, which increased significantly during the pandemic [102]. By comparison, a previous study [103], conducted outside the pandemic, found that men usually had dreams with more aggressive themes. As said before, however, in the present study the effect of gender on the (binary) frequency of lucid dreams appears negligible.
The Threat Simulation Theory [104] proposes that dreaming prepares the dreamer for future experiences. In that way, some authors interpret the increase in nightmare recall during periods of stress as a mechanism of emotional regulation [105]. Similarly, lucid dreams have a therapeutic function, being important for treating recurrent nightmares [84], promoting resilience [106], decreasing depression symptoms [107], and contributing to psychological growth [108]. In this sense, the increase in nightmares and lucid dreams during the pandemic may have a therapeutic/emotion-regulation function, helping people deal with the stress of this period. In the present study, stress did not show a significant association with the increase in lucid dreaming. However, this lack of association with stress may itself reflect the therapeutic role of dreams and lucid dreams.
REM sleep behavior disorder symptoms.
We observed an association between singing during sleep (as a possible manifestation of REM sleep behavior disorder) and lucid dream recall frequency. Sleep singing was quite rare for most respondents, leaving the sample considerably unbalanced across the levels of this variable and thus eroding much of the confidence in this speculative analysis. Nevertheless, the proportion of respondents declaring an increase in the frequency of lucid dreams seemed to grow with the frequency of sleep singing during the pandemic, from 20.57% among participants who used to sing while asleep less than monthly up to 62.50% among those reporting singing while asleep three to five times a week.
REM sleep behavior disorder is a parasomnia characterized by physically acting out dreams with vocal sounds and violent movements [109]. REM sleep behavioral events, such as vocalization and motor behavior, are associated with dreaming [110], since these behaviors reflect the actions of the dreamer in the dream. Both REM sleep behavior disorder and lucid dreaming [111] are considered REM sleep dissociated phenomena [112]. Thus, since singing or sleep-talking are signs that a dream is occurring, it makes sense that sleep singing is also associated with the occurrence of a lucid dream and, consequently, with a higher lucid dreaming frequency, as we found in our study.
Sleep quality.
We observed that self-reported sleep quality was significantly associated with lucid dreaming recall frequency (particularly during the pandemic). Many studies reported poorer sleep quality during the pandemic [15-17,61], but while some studies found no link between lucid dreaming and sleep quality [113,114], others reported a significant relationship between lucid dream frequency and poor sleep quality [115]. However, this relation was explained by the association between nightmares and poor sleep quality [116,117]. Lucid dreams were also associated with positive emotions in the morning [118,119]. In our study, it is possible that the association between sleep quality and lucid dreams is due to the positive morning mood provided by the experience of having a lucid dream.
Limitations
The present study utilized self-reported measures, which are prone to bias. In addition, as the survey was administered via the internet, there is no certainty that the subjects answered the questions carefully or understood them completely. Other limitations include the uneven distribution of women and men in our sample and a brief assessment of psychological variables. Finally, the scale used to measure lucid dream frequency has its limitations, as the window for low-frequency lucid dreaming (less than monthly) was too large and represented by only one option. Future studies should consider a more balanced scale, including separate options for low-frequency lucid dreaming (e.g., "never", "less than once a year", "less than monthly", etc.).
Conclusions
Lucid dreaming recall frequency significantly increased during the pandemic. This increase in lucid dreaming was significantly associated with an enhancement in both dream and nightmare recall frequencies, as well as with sleep quality and sleep singing. A possible explanation for this increase in oneiric activity in general (including dreams, lucid dreaming, and nightmares) during the COVID-19 pandemic is an increase in stress, sleep fragmentation, and sleep extension, which can increase awakenings, especially from REM sleep, the sleep stage most associated with lucid dreaming, dreams, and nightmares. On the other hand, more specifically regarding lucid dreams, given that the pandemic has been a period of intense negative emotions and enhanced stress, the increase in lucid dreaming recall may be an important factor for the promotion of resilience [106] and the reduction of depression symptoms [107]. In this sense, the increase in lucid dreaming during the pandemic may have helped people deal with pandemic-related negative impacts. More research is necessary to clarify this relationship.
Non-minimal coupling of the phantom field and cosmic acceleration
Motivated by the recent interest in phantom fields as candidates for the dark energy component, we investigate the consequences of the phantom field when it is non-minimally coupled to gravity. In particular, the necessary (but insufficient) conditions for the acceleration and superacceleration of the universe are obtained when the non-minimal coupling term is taken into account. Furthermore, the necessary condition for cosmic acceleration is derived when the phantom field is non-minimally coupled to gravity and baryonic matter is included.
Introduction
Nowadays there is a general consensus that the universe is experiencing an accelerated expansion. Recent observations of type Ia supernovae [1], together with Large Scale Structure [2] and Cosmic Microwave Background anisotropies [3], have provided strong evidence for this cosmic acceleration. To interpret the cosmic acceleration, a so-called dark energy component has been proposed. The nature as well as the microphysics of the dark energy component are still ambiguous. The simplest candidate for dark energy is a cosmological constant, with equation-of-state parameter ω = −1. Notwithstanding, this scenario suffers from serious problems such as the huge fine tuning and the coincidence problem [4]. The field models that have been widely discussed in the literature consider: (a) a canonical scalar field named quintessence [5], (b) a phantom field, that is, a scalar field with a negative sign for the kinetic term [6,7], or (c) the combination of quintessence and phantom in a unified model named quintom [8]. Alternative approaches to explain the universe's late-time acceleration include k-essence [9], the tachyon [10], holographic dark energy [11], and many others.

Therefore, scalar fields play a crucial role in modern cosmology [12]. In the inflationary scenario, scalar fields generate an exponential rate of evolution of the universe as well as density fluctuations due to the vacuum energy. There is considerable theoretical evidence suggesting the incorporation of an explicit non-minimal coupling (NMC) between a scalar field and gravity in the action [13,14]. A nonzero non-minimal coupling arises from quantum corrections and is also required by the renormalization of the corresponding field theory. Remarkably, it has been proven that the phantom divide line crossing of dark energy described by a single minimally coupled scalar field with a general Lagrangian is either dynamically unstable with respect to cosmological perturbations, or realized on trajectories of zero measure [15].

Furthermore, inflation is the theorized exponential expansion of the universe at the end of the grand unification epoch, driven by a negative-pressure vacuum energy density [16]. As a direct consequence of this expansion, all of the observable universe originated in a small causally connected region. Inflation answers the classic conundrum of big bang cosmology: why does the universe appear flat, homogeneous, and isotropic, in accordance with the cosmological principle, when one would expect, on the basis of the physics of the big bang, a highly curved, heterogeneous universe? It is well known from the inflationary theory of the primordial universe [17] that a scalar field φ slowly rolling in a nearly flat section of its self-interaction potential V(φ) can fuel a period of accelerated expansion of the universe. The inflaton field φ satisfies the Klein-Gordon equation in a curved spacetime, and one needs to introduce, in general, a non-minimal coupling term that accommodates the scalar field φ and the Ricci curvature of spacetime R. The classical works on inflation have neglected the coupling term (hereafter, this theory is called ordinary inflation), as opposed to generalized inflation, which includes the non-minimal coupling term.
In the present paper we study the cosmic acceleration, and thus the inflation, when a non-minimally coupled phantom scalar field is included. The remainder of the paper is organized as follows. In Section 2, we present the field equations when the phantom field is present and non-minimally coupled to the Ricci scalar. We have chosen a specific formulation of the Einstein equation for which the corresponding energy-momentum tensor is covariantly conserved. In Section 3, when the non-minimal coupling term is taken into consideration, the necessary condition for the universe to accelerate, and thus inflate, is derived. In Section 4, due to the presence of the non-minimal coupling term, extra conditions, compared to the minimally coupled case, have to be satisfied in order for the universe to superaccelerate. In Section 5, apart from the non-minimally coupled phantom field, baryonic matter is included and the necessary condition for the cosmic acceleration is obtained. Finally, in Section 6 a brief description of the conclusions is given.
Field equations for the phantom field
The action for a phantom field φ which is non-minimally coupled to gravity is given by

S = ∫ d⁴x √(−g) [ R/(16πG) + (1/2) g^{ab} ∇_a φ ∇_b φ − V(φ) − (ξ/2) R φ² ],   (1)

where g is the determinant of the metric g_{ab}, G is Newton's gravitational constant, R is the Ricci curvature of the spacetime, V(φ) is the phantom field potential, and −(ξ/2)Rφ² depicts the non-minimal coupling of the phantom field to the Ricci scalar; the reversed sign of the kinetic term relative to the canonical case is what characterizes the field as phantom. The dynamical equation for the phantom field is obtained from the variation of the action (1) with respect to the phantom field, i.e.

δS/δφ = 0,   (2)
and takes the form

□φ + ξRφ + dV(φ)/dφ = 0.   (3)

In the case that one wants to include matter fields ψ_m in the system under consideration, the action (1) becomes

S = S_EH + S_int + S_φ + S_m,   (4)

where κ ≡ 8πG, S_EH is the Einstein-Hilbert action, which is the purely gravitational part of action (1), S_int is the part of the action that denotes the coupling between the gravitational and the phantom field, S_φ is the action of the phantom field, and S_m describes the matter fields other than φ. Thus, action (1) can explicitly be written in the expanded form of Eq. (5). The variation of Eq. (5), or equivalently of Eq. (4), with respect to the metric field g_{ab} yields the Einstein equation, slightly modified due to the presence of the coupling (Eq. (6)), in which T̃_{ab} collects the contributions of the phantom field and the coupling, while the energy-momentum tensor of the matter fields is given by Eq. (8). The modified Einstein equation (6) can be rewritten in the form of Eq. (9), in terms of a total energy-momentum tensor T^(total)_{ab} (Eq. (10)) and the quantity

C(ξ, φ) ≡ 1 − κξφ².   (11)

At this point a couple of comments are in order. First, the specific formulation, Eq. (9), of the modified Einstein equation adopted here is chosen due to the fact that the corresponding energy-momentum tensor, i.e. T^(total)_{ab}, is covariantly conserved as a result of the contracted Bianchi identities, i.e. ∇^a G_{ab} = 0 [14]. Second, one has to be very careful with the values of the coupling constant ξ. In particular, when ξ ≤ 0 the quantity C(ξ, φ) is definitely positive, and the possible formulations of the modified Einstein equation, mentioned in the previous point, are all equivalent, except that the covariant conservation of the energy-momentum tensor has to be investigated case by case. However, when ξ > 0 there is the possibility that the phantom field equals the roots of the equation C(ξ, φ) = 0, namely

φ_± = ± φ_c , with φ_c ≡ (κξ)^(−1/2).   (12)

In this case the corresponding energy-momentum tensor T^(total)_{ab} diverges. Thus, the phantom field has to avoid these values, i.e. φ ≠ φ_±. Furthermore, when |φ| < φ_c, or equivalently φ_− < φ < φ_+, the quantity C(ξ, φ) is positive, i.e. C(ξ, φ) > 0, while when |φ| > φ_c, or equivalently φ > φ_+ or φ < φ_−, the quantity C(ξ, φ) is negative, i.e. C(ξ, φ) < 0. However, one has to take into account that in one of the formulations of the modified Einstein equation, C(ξ, φ) appears to be inversely proportional to an effective Newton's gravitational constant. Therefore, C(ξ, φ) has always to be positive, and this is what we adopt here from now on. Finally, it is evident that in the case of minimal coupling, i.e. ξ = 0, C(ξ, φ) = 1.
Phantom field and the acceleration of the universe
In this section, we are interested in the consequences of the presence of the phantom field for the cosmic evolution of the universe, in particular during the phase of cosmic acceleration, and hence of inflation. For this purpose, we consider the spatially flat Friedmann-Robertson-Walker (henceforth abbreviated to FRW) universe. The line element of the FRW cosmological model is given as ds² = −dt² + a²(t)(dx² + dy² + dz²), where we have used the comoving coordinates (t, x, y, z). The Friedmann equations are given by

H² ≡ (ȧ/a)² = (κ/3) ρ,   (16)
ä/a = −(κ/6) (ρ + 3p),   (17)

where ρ and p are the energy density and the pressure of the phantom field, respectively. These two quantities are the diagonal components of the energy-momentum tensor T_{ab}[φ] and, using (10), one obtains their explicit expressions, Eqs. (18) and (19). It is straightforward to rewrite the first and second Friedmann equations, namely Eqs. (16) and (17) respectively, in terms of the scale factor a(t), the phantom field φ(t), and the coupling constant ξ. Thus, substituting Eqs. (18) and (19) into Eqs. (16) and (17), the Friedmann equations take the form of Eqs. (20) and (21).

Cosmic acceleration means ä(t) > 0 which, in turn, using Eq. (17), translates into the condition ρ + 3p < 0. Employing Eq. (21), this condition for acceleration can be rewritten as Eq. (22). At this point one needs to eliminate the second time-derivative of the phantom field from the aforementioned condition. Therefore, one employs the Klein-Gordon equation (3), which for the spatially flat FRW model takes the form of Eq. (23). Using this equation, as well as Eq. (18), the condition for the cosmic acceleration, i.e. Eq. (22), becomes Eq. (24).

Since the condition for the acceleration as given by Eq. (24) is quite cumbersome, one makes a first assumption, which is to adopt the weak energy condition, i.e. ρ > 0. Therefore, the necessary (but not sufficient) condition for the cosmic acceleration when the phantom field is non-minimally coupled becomes Eq. (25). Now, we recall that the Ricci scalar for the spatially flat FRW universe is of the form

R = 6 ( ä/a + ȧ²/a² ),   (26)

which is a positive quantity if and only if the FRW universe is accelerating, i.e. ä > 0. At this point, one has to make a second assumption and demand ξ > 0. Thus, one obtains the necessary (but not sufficient) condition (27) that the phantom scalar potential has to satisfy for the cosmic acceleration, and thus for the inflation. It is noteworthy that in the case of minimal coupling the necessary condition becomes V > 0. It is obvious that in the case of the non-minimally coupled phantom field it is "easier" for the acceleration, and hence for the inflation, to be achieved. However, it should be stressed that condition (27) is not sufficient.
Phantom field and the superacceleration
It is known that the spatially flat FRW universe superaccelerates, i.e. Ḣ > 0, when a phantom field is minimally coupled. In particular, when the phantom field is minimally coupled, the two Friedmann equations are written as

H² = (κ/3) [ −(1/2)φ̇² + V(φ) ],   (28)
ä/a = (κ/3) [ φ̇² + V(φ) ].   (29)

As has already been mentioned, acceleration means (ä/a) = Ḣ + H² > 0. However, if Eqs. (28) and (29) are combined, one obtains

Ḣ = (κ/2) φ̇²,   (30)

thus Ḣ > 0, and we say that the universe superaccelerates whenever φ̇² ≠ 0, or equivalently that the phantom field is a form of superquintessence. It is therefore interesting to see whether the phantom field continues to be a form of superquintessence when it is non-minimally coupled. From Eq. (21), one obtains Eq. (31) and, if Eq. (20) is employed, one obtains the equivalent of Eq. (30) for the case of non-minimal coupling, namely Eq. (32). It is evident that, when the phantom field is non-minimally coupled, the spatially flat FRW universe does not superaccelerate unless extra conditions are satisfied.
Phantom field, matter and the acceleration
It is of interest to add baryonic (ordinary) matter, which is pressureless, i.e. p_m = 0, with energy density ρ_m ∝ a⁻³, to the content of the universe. The total energy density of the universe is now given as

ρ = ρ_m + ρ_φ.   (33)

Following the analysis of Section 3, the necessary and sufficient condition for the acceleration of the universe (and thus for the inflation), i.e. ρ + 3p < 0, takes the form of Eq. (34). Implementing Eq. (33), the necessary and sufficient condition takes the form of Eq. (35) or, equivalently, Eq. (36). If one makes again the same assumptions, namely that the energy densities ρ_m and ρ_φ are positive and that the coupling constant takes only positive values, then the necessary condition for the acceleration of the universe is the same as before, namely condition (27).
Conclusions
It is widely believed that our universe is in a phase of accelerating expansion. This belief is mainly supported by observational data that have appeared in the last ten years. One way to interpret the acceleration is through the presence of a dark energy component whose nature and microphysics are still unknown. Scalar fields are a key player in this scenario. In particular, the phantom scalar field is one of the main candidates for dark energy.
In this work we investigate the consequences of the presence of the phantom field when it is non-minimally coupled to gravity. The necessary condition for the acceleration, and so for the inflation, of the universe is derived. It is easily seen that the acceleration is more easily achieved when the non-minimal coupling term is present. However, this is not the case for the superacceleration. It is known that when the phantom field is minimally coupled the universe superaccelerates. Nevertheless, the corresponding condition in the presence of the non-minimal coupling term is quite complicated, and extra conditions should be satisfied in order to get superacceleration. Furthermore, we consider the case where baryonic matter is present. The necessary condition for the cosmic acceleration is the same as the one in the case where there is no matter at all. Finally, it should be stressed that all the aforesaid conditions are necessary but not sufficient, and thus one has to be very careful when handling them, since one is not entitled to claim that it is easier to get cosmic acceleration, and thus inflation, when the non-minimal coupling of the phantom field is employed.
Fallen Sanctuary: A Higher-Order and Leakage-Resilient Rekeying Scheme
This paper presents a provably secure, higher-order, and leakage-resilient (LR) rekeying scheme named LR Rekeying with Random oracle Repetition (LR4), along with a quantitative security evaluation methodology. Many existing LR primitives are based on a concept of leveled implementation, which still essentially requires a leak-free sanctuary (i.e., differential power analysis (DPA)-resistant component(s)) for some parts. In addition, although several LR pseudorandom functions (PRFs) based only on bounded DPA-resistant components have been developed, their validity and effectiveness for rekeying usage still need to be determined. In contrast, LR4 is formally proven under a leakage model that captures the practical goal of side-channel attack (SCA) protection (e.g., masking with a practical order) and assumes no unbounded DPA-resistant sanctuary. This proof suggests that LR4 resists exponentially many invocations (up to the birthday bound of the key size) without using any unbounded leak-free component, which is the first of its kind. Moreover, we present a quantitative SCA success rate evaluation methodology for LR4 that combines the bounded leakage models for LR cryptography and a state-of-the-art information-theoretical SCA evaluation method. We validate its soundness and effectiveness as a DPA countermeasure through a numerical evaluation; that is, the number of secure calls of a symmetric primitive increases exponentially by increasing a security parameter under practical conditions.
Background
Side-channel attacks (SCAs) are physical attacks on cryptographic implementations [KJJ99]. SCA countermeasures are roughly categorized into three types: masking, hiding, and leakage-resilient (LR) cryptography. Both masking and hiding are basically designed to suppress/eliminate the leakage for a given algorithm/device. However, it has been shown experimentally [BS21] and theoretically [DFS15,IUH22a,MRS22] that it might be difficult for masking to achieve secure implementation on some low-end devices with trivial noise. Hiding (e.g., a secure logic style like WDDL [TV04]) is also sometimes unsuitable because its effectiveness depends heavily on the given device/technology. In contrast, LR cryptography features cryptographic algorithms capable of secure computation up to a specific, predetermined level of leakage. For developing practically secure cryptographic modules, it is essential to investigate the possibility and limitations of LR cryptography as well as masking and hiding. Note that masking can remain effective against higher-order attackers, as the number of traces for key recovery increases exponentially with the masking order [DFS15,IUH22a,MRS22].
On the other hand, the security of LR-PRFs has been discussed using a bounded data or trace complexity model, which means that the number of plaintexts/traces available in attacking the PRF with a secret key is bounded. Here, a secure LR-PRF implementation requires a component resistant to DPA with m plaintexts or traces (i.e., m-bounded DPA resistance). This is a more relaxed and practical condition than unbounded DPA resistance. Nonetheless, the validity of the bounded complexity model needs to be clarified in practice. At the least, the model is valid only if it works with a dedicated protocol, but there has been little discussion about its practical use. In addition, it is not trivial to determine m for a given device in terms of the SCA success rate on the LR-PRF. See Appendix A for the details of existing LR-PRFs.
In summary, the existing LR cryptography schemes have some non-trivial limitations in their practical use, and the relation and combination of the aforementioned LR cryptography schemes have not been comprehensively discussed. It remains an open problem to determine how far away LR cryptography is from a leak-free sanctuary.
Our contributions
We present a cryptographic scheme and its security evaluation to address the abovementioned challenges.
New LR rekeying scheme. We propose a provably secure, higher-order, and LR rekeying scheme, named LR Rekeying with Random oracle Repetition (LR4). For this purpose, we introduce a new leakage model for rekeying and provide a formal security proof of LR4. We then show the validity of the leakage model and how to utilize LR4 in practice. We also discuss its practical aspects and analyze its implementation cost, efficiency, and latency. We compare LR4 to state-of-the-art LR encryption schemes in Section 3.3 and confirm the advantage and effectiveness of LR4.
From a technical viewpoint, our major contributions include the new definition of the leakage function, rather than the development of a security proof technique or security notion. The definition of the leakage function for security notions/proofs has been extensively studied, but its link to bounded trace complexity is largely unexplored. Currently, Accumulated Interference (AI) in [DMP22] (see Section 2.2) is the state of the art for it. In this paper, we define another leakage function regarding the trace complexity bound, which captures practical features of side-channel leakage and overcomes some drawbacks of existing leakage functions. We then prove the LR4 security using a promising security notion. Thus, LR4 achieves preferable features for practical rekeying (see Section 3), compared to existing LR schemes.
Evaluation methodology for practical usage. We propose an information-theoretical methodology for evaluating the attack cost and success rate on LR4 for a given device/condition by utilizing and extending state-of-the-art SCA evaluation methods [dCGRP19,IUH22a,MRS22]. So far, the success rate has been commonly used for evaluating SCA capability/resistance [SMY09], and many studies have been devoted to its practical and feasible estimation on symmetric primitives [MOS11, DFS15, dCGRP19, IUH22a, MRS22, BCG+23], while there are few studies on success rate evaluation of modes of operation (rather than primitives). Thus, we formally define the attack success rate on LR4, and then formally analyze the relationship between attacks on LR4 and on the underlying symmetric primitive, which enables a quantitative evaluation of the attack cost in terms of success rate and the number of attack traces. Our methodology is able to determine the rekeying order d and the trace bound m as the rekeying interval from a measured quantity of the target device (i.e., mutual information or signal-to-noise ratio (SNR)).
Validation. Using the proposed methodology, we show a numerical evaluation of the attack cost on LR4 instantiated with AES. The results confirm that the number of secure calls of a symmetric primitive increases exponentially with the rekeying order d under practical conditions. We also discuss the properties required for secure rekeying.
Technical challenges. The main technical challenge in our work is the simultaneous pursuit of high practicality and strong provable security of the leakage resilience of the rekeying mechanism. The former excludes several conventional techniques of LR cryptographic schemes, most notably a leak-free component and the generation and transmission of true random values (where the generation must also be secure against leakage). Our LR4 could be viewed as an alternative interpretation of the classical GGM PRF with a counter; however, we need to show its leakage resilience both in theory and in practice. This requires us to develop a dedicated security model capturing practical protection methods applicable to each module in LR4, such as higher-order masking of a practical order. Meanwhile, we should consider how to determine the parameter in the security proof (i.e., the trace bound m), given a device, for the practical usage of LR4 with a quantitative security guarantee. For evaluating LR4's practical security, we develop a formal definition of the SCA success rate on LR4 and extend a state-of-the-art information-theoretical evaluation method [dCGRP19] to its evaluation. Consequently, we are able to show that LR4 has exponential security with respect to the parameters (number of modules and leakage resistance of each module) from both practical and theoretical viewpoints.
Conventional studies on rekeying
Rekeying is one of the primordial countermeasures against DPA, suggested by Kocher et al. [KJJ99]. The basic form of rekeying is illustrated in Figure 1. Rekeying schemes exploit the fact that most SCAs on symmetric primitives require a number of traces (i.e., calls of the target primitive) for key recovery. The basic idea behind rekeying is to use a temporal key (i.e., session key) k_tmp generated from a master key k_mst, and then update the session key using a deterministic rekeying function (g in Figure 1) and a (random) IV r at a frequency that does not allow any attacker to succeed in recovering the temporal key. Here, the target primitive is assumed to have a minimal resistance against SCAs with a certain number of traces (m-bounded DPA resistance, or SPA resistance if the number of traces is one), because the temporal key is discarded after that number of calls (or after a single call). In contrast, the rekeying function should be DPA-resistant or leak-free because it is called many times with the master or internal temporal key.
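In the notation of Figure 1, one rekeying epoch can be summarized schematically as follows (a plain restatement of the description above, not the exact formalization used later for LR4):

k_tmp ← g(k_mst, r),    c_i = E_{k_tmp}(p_i)  for i = 1, ..., m,

after which a fresh IV r is drawn and k_tmp is recomputed, so that no single temporal key is ever exposed through more than m traces.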
The above idea was formalized by Medwed et al. in 2010 (MSGR) [MSGR10] as Fresh Rekeying. Fresh rekeying and its variants have been extensively studied from various viewpoints, such as efficient instantiations and formal or practical security, both with and without leakage [GFM13, MPR+11, BDSH+14, DEMM14, MS14, PM16, DFH+16, DMMS21]. In particular, the realization of the key derivation function (g in Figure 1) is one of the central research topics in Fresh rekeying. MSGR suggested using a non-cryptographic operation (ring/field multiplication) for the key derivation, based on the observation that the key derivation needs no black-box security as the derived temporal key is never given in the clear. Dobraunig et al. [DEMM14] pointed out a problem with this instantiation by showing a chosen-plaintext (master) key recovery attack. Their attack is a simple time-memory tradeoff using a set of precomputed ciphertexts with guessed temporal keys and a fixed plaintext. Dziembowski et al. [DFH+16] proposed rekeying components based on lattice cryptography backed by a certain theoretical guarantee. This direction has been further explored by Duval et al. [DMMS21]. Despite reports on some attacks [DEMM14, PM16], in reality, the root of security, namely an unbounded DPA-resistant/leak-free module, is rarely implemented with sophisticated SCA countermeasures (e.g., masking), to the best of the authors' knowledge.
Paper organization
Section 2 introduces notations and existing attack/leakage models for LR cryptography. Section 3 proposes our higher-order and LR rekeying scheme, named LR4. Section 4 formally proves the security of LR4, with a formalization of the leakage model for rekeying. Section 5 presents a quantitative and information-theoretical evaluation methodology based on a formal analysis of the attack cost and success rate on LR4, and describes how to use LR4 in practice. This is followed by Section 6, which demonstrates the validity of LR4 through numerical evaluations and discussion. Section 7 discusses the relation, comparison, and compatibility of LR4 with existing LR cryptographic schemes. Finally, Section 8 concludes this paper.
Basic notations
Let [i] denote {1, 2, . . . , i} for any positive integer i. We define {0, 1}^i and {0, 1}^* as the set of i-bit strings and the set of all arbitrary-length strings, respectively. Let log denote the binary logarithm.
A tweakable block cipher (TBC) is a keyed function E : K × T_w × M → M, where K is the key space, T_w is the tweak space, and M = {0, 1}^n is the message space, such that for any (K, T_w) ∈ K × T_w, E(K, T_w, ·) is a permutation over M. We interchangeably write E(K, T_w, M), E_K(T_w, M), or E^{T_w}_K(M). The decryption routine is written as (E^{-1})^{T_w}_K(·). When T_w is a singleton, it is essentially a block cipher and is written as E : K × M → M. For sets X, Y, and T_w, Func(X, Y) denotes the set of all functions from X to Y, Perm(X) denotes the set of all permutations over X, and TPerm(T_w, X) denotes the set of all functions f : T_w × X → X such that for any T_w ∈ T_w, f(T_w, ·) is a permutation over X. A tweakable uniform random permutation (TURP) with a tweak space T_w and a message space X, P : T_w × X → X, is a random tweakable permutation with uniform distribution over TPerm(T_w, X). The decryption is written as (P^{-1})^{T_w}(·) for the TURP given tweak T_w. A random oracle (RO) is a random function that is uniformly distributed over Func({0, 1}^*, {0, 1}^n) for some fixed n (which can be implemented with lazy sampling). An ideal cipher (IC) is a random block cipher that is uniformly distributed over TPerm(K, X) for some fixed finite sets K and X (that is, the set of all block ciphers with key space K and message space X). These are assumed to be publicly accessible when involved in the game. An IC accepts both encryption and decryption queries with a chosen key.
Attack/leakage models for LR cryptography
Data and trace bounded attacker. In an m-bounded data complexity model, the attacker can call the underlying symmetric primitive with m different plaintexts using an identical secret key [MR04, MSJ12]. If the data complexity bound m is sufficiently small, we expect that the attacker cannot obtain sufficient information about the secret intermediate variable (e.g., an Sbox output) and the secret key. Thus, a key-recovery SCA is (believed to be) difficult if the number of available plaintexts is sufficiently bounded. However, Medwed et al. reported in [MSJ12] that, even if m = 2, the attacker can recover the secret key from low-noise devices (e.g., a low-end microcontroller) with a non-trivial success probability. Accordingly, they suggested bounding the number of available traces, rather than plaintexts, for an attack. In this paper, we suppose that an m-trace bounded attacker can utilize no more than m traces. In the rekeying context, m corresponds to the rekeying interval. Note that a malicious attacker may call the primitive with a fixed input many times, especially if attacking the decryption module of a nonce/IV-based (authenticated) encryption scheme. This observation means that we should consider how to implement a symmetric primitive under an m-bounded trace complexity in practice (which may require considering a high-level protocol). LR4 handles this consideration, as discussed in Section 3.2.
Bounded leakage function.
In LR cryptography, secret information (e.g., a temporal key and internal value/state) leaks through each call of the underlying symmetric primitive. Let s^(i) be the i-th state (which may include the i-th temporal key). At the i-th call, information about s^(i) is leaked as L_i(s^(i)), where L_i is the i-th leakage function. Owing to practical constraints, we consider a non-adaptive leakage model: there exists a fixed L such that L = L_i for any i [SPY+10]. The leakage function is given by, for example, some bits of s^(i) or its Hamming weight with noise; alternatively, it is defined using the number of leaked bits so that a state leaks at most λ bits. Note that the leakage function is usually bounded somehow: the attacker trivially wins if he/she obtains all the information about the secret key or the intermediate value/state through leakage.
Accumulated interference (AI).
Related to the bounded leakage function, Dobraunig et al. presented the concept of Accumulated Interference (AI) [DMP22] in 2022, which models leakages of permutation-based LR-AEs through both SCAs and fault attacks. This model allows evaluating the attacker's advantage by the accumulated gain (AG). AG is defined using an input dataset for a given SCA, and AI is related to the trace complexity bound. However, AG has been evaluated only experimentally and empirically for specific SCAs; therefore, the evaluated advantage value cannot be an upper bound over all possible (even theoretically optimal) SCAs, which is essential for leakage resilience. Although an asymptotic approximation of the attacker's advantage for a given SCA is important for evaluating the SCA resistance of a device in some applications, a theoretical upper-bound evaluation is essential for both the theory and practice of LR cryptography. Dobraunig et al. then presented an LR encryption scheme asakey and its variant strengthened asakey based on AI/AG. In particular, the variant utilizes a caching strategy similar to LR4 for improved leakage resilience. In Section 3.3, we compare LR4 and asakey to demonstrate the significance of LR4.
Adaptive vs. non-adaptive leakages.
Major existing LR cryptography schemes adopted a non-adaptive leakage model (e.g., [DP10, YSPY10, FPS12, DMP22]). Actually, the practical validity of the non-adaptive leakage model/assumptions was discussed in [SPY+10, YSPY10, FPS12]: power/EM attackers must fix the leakage function before obtaining any leakage/output because of physical constraints of power/EM measurement (e.g., on-board pins/connectors and EM probes). Although extreme attackers might move, for example, an EM probe adaptively during measurement, we follow the non-adaptive model as in the prior works above.

3 Proposed scheme
Basic concept
As discussed in Section 1, while (fresh) rekeying is a promising approach for LR cryptography, its full practicality is questionable due to the need for a leak-free function. We present a solution to this problem, dubbed LR4, and detail its construction below. Let a positive integer d be the rekeying order. For each i ∈ [d], let G_i : {0, 1}^{n_k} × {0, 1}^{n_ctr} → {0, 1}^{n_k} be a function called a rekeying component, which takes an n_k-bit key and an n_ctr-bit counter and outputs an n_k-bit key. We require that each G_i behave like an independent RO; that is, for any input, the output looks random, and the outputs of G_i and G_j on the same input are uncorrelated for i ≠ j. Let E, D : {0, 1}^{n_k} × {0, 1}^{n_bc} → {0, 1}^{n_bc} denote the encryption and decryption routines of an n_bc-bit block cipher with an n_k-bit key. The LR4 rekeying scheme with order d consists of an encryption function LR4.E and a decryption function LR4.D such that LR4.E, LR4.D : {0, 1}^{n_k} × ({0, 1}^{n_ctr})^d × {0, 1}^{n_bc} → {0, 1}^{n_bc}. LR4.E (resp. LR4.D) takes an n_k-bit master key k_mst, a d-tuple of n_ctr-bit counters ctr^d = (ctr_1, ctr_2, . . . , ctr_d), and an n_bc-bit plaintext (resp. ciphertext) as inputs, and outputs an n_bc-bit ciphertext (resp. plaintext). Figure 3 shows LR4.E and LR4.D using a temporal key derivation function R, such that R : {0, 1}^{n_k} × ({0, 1}^{n_ctr})^d → {0, 1}^{n_k}. R takes an n_k-bit master key k_mst and a d-tuple of n_ctr-bit counters ctr^d as inputs, and generates a temporal (or session) key k_tmp (as defined in Figure 3). See also Figure 2 for an illustration.
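To make the construction concrete, the following minimal Python sketch derives a temporal key by chaining d rekeying components. Each G_i is instantiated here with SHA3-256 truncated to n_k bits (one possible RO instantiation along the lines of the SHA-3 example in Section 3.3); the byte lengths, counter encoding, and domain separation by the index i are illustrative choices rather than a normative specification.

```python
import hashlib

N_K = 16      # key length in bytes (n_k = 128 bits), illustrative
N_CTR = 3     # counter length in bytes (n_ctr = 24 bits), illustrative

def G(i: int, key: bytes, ctr: int) -> bytes:
    """Rekeying component G_i, modeled as an RO via truncated SHA3-256.
    The index i provides domain separation so that G_1, ..., G_d behave
    like independent random oracles."""
    data = key + ctr.to_bytes(N_CTR, "big") + i.to_bytes(1, "big")
    return hashlib.sha3_256(data).digest()[:N_K]

def derive_temporal_key(k_mst: bytes, counters: list[int]) -> bytes:
    """Temporal key derivation R: chain the d rekeying components.
    counters = (ctr_1, ..., ctr_d); each G_i consumes the key output by
    G_{i-1} (or the master key for i = 1) together with ctr_i."""
    k = k_mst
    for i, ctr in enumerate(counters, start=1):
        k = G(i, k, ctr)
    return k  # k_tmp, used as the block-cipher key for payload encryption

# Example: order d = 3, counters (0, 5, 17)
k_tmp = derive_temporal_key(b"\x00" * N_K, [0, 5, 17])
print(k_tmp.hex())
```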
For LR security, we assume that each G_i does not leak anything up to m ≤ 2^{n_ctr} encryption calls with the same key under SCAs. Similarly, we assume that E does not leak up to m′ encryption and decryption calls with the same temporal key. See Section 4 for the formal treatment and Section 5 for how to determine m (and m′).
A sound SCA countermeasure should increase the number of traces required for a successful attack as the security parameter(s) increase. LR4 can generate m^d temporal keys securely under the m-bounded trace complexity model. Thus, the number of secure E calls increases exponentially in d, from m to m^d, under the bounded trace complexity model (although m should be determined depending on d, as discussed in Section 5). Formal and rigorous security with a leakage model is defined and proven in Section 4. Note that the proposed scheme does not contribute to decreasing the SCA success rate itself, although we confirm the exponential increase of the number of secure E calls under some practical conditions, as demonstrated in Section 6.1. Thus, the proposed scheme can be another direction for provably secure SCA countermeasures, apart from masking.
LR4 has a structural similarity to the classical GGM PRF, as the temporal key generation is represented by an m-ary tree whose nodes correspond to counter values, as shown in Figure 4. As mentioned in Section 1, the GGM PRF has also been adopted by several LR-PRFs [FPS12, MSJ12, MS14], although our objective, an LR rekeying scheme without an (unbounded) leak-free component, makes LR4 different from these LR-PRFs in terms of the interface and the security notion under leakage. See Appendix A for GGM and existing LR-PRFs.
On implementation efficiency. Compared to the existing related LR cryptography (e.g., GGM, the LR-PRFs described in Appendix A, and asakey), LR4 achieves the advantage of a high-rate construction with provable security. For example, asakey, a state-of-the-art nonce-based and sponge-based LR encryption scheme from 2022 [DMP22], has a nonce-processing part with bit-by-bit absorption (represented by a binary tree like GGM) for its leakage resilience. Due to the leakage function definition, asakey is forced to use a nonce-processing part with a very low rate (i.e., 1-bit absorption per permutation) for provable security with leakage resilience. In other words, it is difficult to achieve a high-rate construction with provable security under its leakage function, as is the case for the existing LR-PRFs. In contrast, LR4 is the first GGM-like LR scheme that achieves both provable security and a high rate. For example, if we instantiate the ROs using SHA-3 as G_i(k_i, ctr_i) = SHA-3(k_i ∥ ctr_i ∥ i), LR4 readily absorbs more than 128 bits together with the temporal key using only one SHA-3 computation, whereas asakey requires n permutations to absorb an n-bit nonce. See Section 3.3 for quantitative comparisons.
Caching intermediate keys for improved SCA security and computational efficiency
We propose that the computation of LR4 utilizes caching of all intermediate keys as long as they can be used later in order to improve the computational efficiency and SCA security.
For example, if we increment the counter from 0 to 1 in Figure 2, k_{2,1} (k_{i,j} denotes the j-th intermediate/temporal key at depth i), which has been computed by processing counter 0, is cached to derive k_{3,2} for counter 1, and the computing device releases it when the counter becomes 3. This is essential because re-computation of an intermediate key would lead to unexpected side-channel leakage violating the trace bound (see also Remark 1).
Figure 5 shows the cache-based temporal key derivation function R_C. As in Figure 2 and Figure 3, we require an additional counter ctr_{d+1} to guarantee that E is called no more than m′ times with an identical k_tmp (note that the value of ctr_{d+1} has no influence on the output, except for ⊥). Here, we consider the total counter value Σ_{i=1}^{d+1} 2^{n_ctr(i−1)} ctr_{d+2−i}. This indicates that the counter is incremented from ctr_{d+1}. In R_C, we first check the validity of the input counters to detect counter replay attacks at Line 1. If the counter value is smaller than the cached one (i.e., a replayed value) or out of range (i.e., ctr_i ≥ m for 1 ≤ i ≤ d or ctr_{d+1} ≥ m′), we abort the encryption/decryption. Otherwise, we compute the temporal key from the counters and the master key with minimal computations. If ctr_i = ch_i and flag = 0, the computation is omitted and the cached key is used to avoid the leakage (where flag represents the necessity of computation). Note that ch_{d+1} and ctr_{d+1} are known to the attacker, which indicates that the side-channels (e.g., power/EM and timing) of the if branches in R_C leak no secret information.
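The following sketch models the caching logic of R_C in Python, reusing G and N_K from the previous sketch: cached counters are compared against the queried ones, the chain is recomputed only from the first position where the counters differ, and replayed (decreasing) or out-of-range counters are rejected. It is a simplified illustration of the behavior described here (the extra counter ctr_{d+1} and the flag are omitted), not the exact pseudocode of Figure 5.

```python
class CachedLR4KeyDerivation:
    """Cache-based temporal key derivation (R_C), simplified.

    Keeps kh_1..kh_{d+1} (intermediate keys) and ch_1..ch_d (counters);
    recomputes G_i only for positions after the first changed counter."""

    def __init__(self, k_mst: bytes, d: int, m: int):
        self.d, self.m = d, m
        self.kh = [k_mst]                     # kh[0] = kh_1 = k_mst
        self.ch = [0] * d                     # cached counters ctr_1..ctr_d
        for i in range(1, d + 1):             # initial caches for counter 0
            self.kh.append(G(i, self.kh[i - 1], 0))
        self.g_calls = 0                      # bookkeeping only

    def derive(self, counters: list[int]) -> bytes:
        # Reject out-of-range counters and replays (strictly decreasing order).
        if any(c >= self.m for c in counters) or counters < self.ch:
            raise ValueError("invalid or replayed counter")
        # Find the first position where the counter differs from the cache.
        first = next((i for i, (c, h) in enumerate(zip(counters, self.ch))
                      if c != h), self.d)
        # Recompute only kh_{first+1}..kh_{d+1}; earlier cached keys are reused.
        for i in range(first, self.d):
            self.kh[i + 1] = G(i + 1, self.kh[i], counters[i])
            self.g_calls += 1
        self.ch = list(counters)
        return self.kh[self.d]                # temporal key kh_{d+1}
```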
Latency and computational efficiency. The caching strategy also improves the latency and computational efficiency, as stated in Proposition 1. This is a substantial improvement over the straightforward computation requiring d calls of G.
Proposition 1. For a rekeying interval m, we need to run G only 1 + 1/(m − 1) < 2 times per temporal key on average.

Proof. Let d and m denote the rekeying order and interval, respectively. By definition, LR4 takes at most m^d distinct counter values. With caching, each intermediate/temporal key in the tree of Figure 4 is computed exactly once while these m^d values are processed in increasing order, so the total number of G calls is Σ_{i=1}^{d} m^i. The average number of G calls per temporal key is therefore Σ_{i=1}^{d} m^i / m^d = Σ_{i=0}^{d−1} m^{−i} = (1 − m^{−d}) / (1 − 1/m) < m/(m − 1) = 1 + 1/(m − 1). This completes the proof.

Figure 5 describes R_C, which is invoked with the caches kh^{d+1} and ch^{d+1} instead of R. The caches are initially given as ch^{d+1} = (0, 0, . . . , 0), kh_1 = k_mst, and kh_{i+1} = G_i(kh_i, 0) for each 1 ≤ i ≤ d. Here, kh_i should be read at Line 9 only when it is actually used.
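As a quick sanity check of Proposition 1, the following snippet (reusing the cached derivation sketch above) processes all m^d counter values in increasing order and compares the measured average number of G calls per temporal key against the bound 1 + 1/(m − 1); the parameter values are arbitrary.

```python
from itertools import product

d, m = 3, 4
kdf = CachedLR4KeyDerivation(b"\x00" * N_K, d, m)
# Counters in increasing (lexicographic) order, as produced by incrementing.
for ctr in product(range(m), repeat=d):
    kdf.derive(list(ctr))

# Include the d calls made when initializing the cache for counter (0, ..., 0).
avg = (kdf.g_calls + d) / m ** d
print(f"average G calls per temporal key: {avg:.4f}")
print(f"bound 1 + 1/(m - 1)             : {1 + 1 / (m - 1):.4f}")
```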
Memory overhead. The cache-based LR4 requires a non-volatile memory (NVM) to cache the intermediate keys. The memory overhead is given by (n_k + n_ctr)(d + 1) bits. For a practical parameter of n_k = 128 and n_ctr = 20 (see Section 6.1), the memory overhead is 148(d + 1) bits (e.g., 888 bits (= 111 bytes) when d = 5), which is efficient and sufficiently practical compared to existing schemes (see Section 3.3). For example, even low-end microcontrollers such as the Atmel AVR Xmega128D series, which may be a major target of SCA countermeasures, have a 16K-128K byte flash memory and a 1K-2K byte EEPROM. The memory overhead of LR4 is less than 10% of the very low-end ones. Thus, we confirm the practicality of cache-based LR4.
Explicit synchronization and replay detection. As the nonce (i.e., the counters) forms a total order and LR4 caches its internal counters, LR4 detects replayed queries by comparing the order of the queried and internal counters at the validity check at Line 1 of R_C(ctr^{d+1}; kh^{d+1}, ch^{d+1}), before performing encryption/decryption. Hence, the attacker cannot perform any replay (for trace averaging and trace complexity bound violation). In contrast, if receiving a forwarded nonce (maliciously or accidentally), the LR4 module updates the internal counter to the nonce at Line 11 (as for a valid counter) and performs encryption/decryption using the temporal key corresponding to the values. This is because such counter-forwarding never yields a violation of the trace bound. Thus, LR4 offers secure explicit synchronization for cases of malicious/accidental synchronization failures. Note that preventing denial-of-service attacks that exhaust the key lifetime by counter forwarding remains an open problem, while LR4 can prevent key-recovery SCAs.

Figure 6: The rekeying function of asakey. K is a master key, and p is an underlying permutation. N_1, . . . , N_k are one-bit split nonces, where an input nonce N is written as N = N_1 ∥ · · · ∥ N_k.
Remark 1 (SCA security and potential threat). This paper focuses on a countermeasure against non-invasive power/EM analysis, in which a cached key does not leak unless it is called [MOP07]. We consider the trace complexity bound as the number of times the cached key is called; thus, the caching strategy essentially improves the SCA security of LR4.
In contrast, a (semi-)invasive attacker may utilize static leakage directly from memory components [MOP07, SSAQ02]. Such a (semi-)invasive attacker attempts to directly read secret/cached keys to bypass SCA countermeasures. For such cases, the number of calls and the trace complexity bound are no longer meaningful. Such (semi-)invasive attacks are outside our scope, as in many existing studies on LR cryptography and SCA countermeasures. (Semi-)invasive attacks should be prevented by, for example, tamper-resistant memory and memory encryption, rather than SCA countermeasures.
Comparison with state-of-the-art
The high rate and leakage function of LR4 yield an efficient implementation with provable security, in comparison with existing LR cryptography such as asakey [DMP22] and ISAP [DEM+20]. asakey and ISAP are sponge-based encryption and AE schemes, respectively, and thus they are functionally different from LR4. However, LR4's temporal key derivation function R and asakey/ISAP's rekeying functions are functionally compatible; therefore, we compare them. As depicted in Figure 6, asakey uses a nonce-processing part as a rekeying function, which consists of a bit-by-bit absorption of the nonce, followed by a sponge-based encryption. The intermediate state derivation in the nonce absorption of asakey is representable as a binary GGM tree. The strengthened asakey, which is a cache-based variant with an up-counter nonce, caches the intermediate states during the nonce-processing part.
To validate the effectiveness of LR4, we show its comparison to the strengthened asakey.
We instantiate the strengthened asakey using Keccak-p[1600, 12] with a 128-bit key according to [DMP22], while we instantiate the RO in LR4 using Keccak-p[1600, 24] (i.e., SHA-3). We mainly discuss (strengthened) asakey in the following paragraphs because ISAP has a rekeying function similar to that of asakey. Note that ISAP has more parameters than asakey, such as the length of nonce absorption per permutation call and the number of rounds of the underlying permutations. However, all instances of ISAP set the length of nonce absorption to one, which is the same as asakey. The number of rounds of the underlying permutations varies in each instance (mainly 1 round or 12 rounds), but we leave out the schemes using 1-round permutations from our comparison since we focus on schemes having provable security (see also the last paragraph of the computational cost/latency comparison).
Memory overhead.
The strengthened asakey requires an NVM to cache all its internal states, whose total bit length is the product of the permutation length (1,600 here) and the key/nonce length. In the above instantiation, the memory size of asakey is 1600 × 128 = 204800 bits. Although it can be reduced by limiting the number of calls to m, the required NVM is still 1600 log m bits, which is a very high cost for practical m. In contrast, cache-based LR4 requires an NVM of only (n_k + n_ctr)(d + 1) bits, which is far smaller for realistic d (e.g., 888 bits for d = 5 with a practical parameter of n_k = 128 and n_ctr = 20). Thus, LR4 can be adopted even on low-end microcontrollers with a 16K-128K byte flash memory and a 1K-2K byte EEPROM, as mentioned before.
Computational cost/latency. Moreover, to improve computational cost and latency, asakey and ISAP specify instantiations that use reduced-round permutations. For example, two instances of ISAP employ a 1-round Keccak/Ascon permutation in the nonce-processing part, except for the first and the last permutations. This significantly reduces computation cost and latency. To the best of our knowledge, no practical weaknesses have been reported for these instances. However, in the asakey/ISAP security proofs, the underlying primitive should be a public random permutation, which is interpreted in practice as the non-existence of a structural distinguisher. The use of a 1-round permutation implies a deviation from this assumption.
Provable security. The LR notions of asakey and LR4 share a common principle: consider a distinguishing game involving a leaky oracle, a non-leaky oracle, and an idealized primitive oracle. However, asakey's leakage model and ours are different and incomparable. Our model captures features of practical SCA countermeasures (e.g., higher-order masking) and is dedicated to tree-based rekeying schemes. As Section 4.3 shows, LR4 can be used as a replacement for leak-free/fresh-rekeying components in some of the existing LR-AEs (e.g., [Men20]), which helps make these schemes practical.
Leakage evaluation. In [DMP22], asakey's leakage resilience was evaluated for specific SCAs (e.g., CPA) to derive the AG value. In other words, asakey's leakage resilience can be evaluated for the SCAs feasible to the evaluator/designer. However, an advanced attacker may mount stronger SCAs, which makes asakey's practical security unclear. In contrast, LR4's security proof and leakage evaluation method (in Section 5.2) consider theoretically optimal SCAs (i.e., the most advanced SCA attacker) and capture the practical aspects of (higher-order) masking. Thus, LR4's leakage resilience covers practical security and all possible SCA attackers, including the ones evaluated in [DMP22].
Security definition
We introduce a formal security notion under leakage for rekeying schemes, including LR4. The core idea of our security model is the same as [BMOS17]. We define the security of an LR rekeying scheme as the probability that an adversary querying some leakage oracles successfully distinguishes two worlds: real and ideal. Following Mennink [Men20], we model a rekeying scheme as a TBC, where a tweak corresponds to the IV (r in Figure 1) of a rekeying scheme. Our model allows the adversary to choose tweaks arbitrarily in the game and hence is more general than assuming a random value or a counter (although a practical choice would be a counter). In the real world, the adversary accesses LR4.E and LR4.D, while, in the ideal world, it accesses a TURP P with tweak space ({0, 1}^{n_ctr})^d and message space {0, 1}^{n_bc}. Regarding leakage oracles, we define LR4-L.E and LR4-L.D as the leaky versions of LR4.E and LR4.D, respectively; we will detail them later. We also assume that G_1, G_2, . . . , G_d are independent ROs, and that E and D form an IC. We define the leakage-resilience of Fresh Rekeying (LFR) advantage for the security of LR4 as
Adv^LFR_LR4(A) = | Pr[A^{LR4±, LR4-L±, G, E±} = 1] − Pr[A^{P±, LR4-L±, G, E±} = 1] |,
where LR4± denotes the pair of oracles LR4.E and LR4.D, and LR4-L± denotes LR4-L.E and LR4-L.D. Also, P± (resp. E±) denotes P (resp. E) and its inverse P^{−1} (resp. D), and G := {G_1, G_2, . . . , G_d}. We call LR4± and P± construction oracles and call queries to them construction queries. Similarly, we call LR4-L± the leakage oracle and call queries to it leakage queries. The use of idealized primitives, such as ROs, can be found in the theoretical analysis of LR schemes, particularly for obtaining efficient schemes [YSPY10, BGP+19, DJS19, DM19, GSWY20, FPS12]. See also [BBC+20] for an overview and discussion.
Leakage oracle. We here define LR4-L± to capture the m-bounded trace complexity model introduced in Section 3.1, including the key caching shown in Section 3.2. We assume that LR4-L.E and LR4-L.D have the same input/output as LR4.E and LR4.D, except that they additionally output a leakage Leak ∈ ({0, 1}^{n_k} ∪ {⊥})^{d+1}. For the definition of Leak, recall that we assume each rekeying component and the block cipher can securely perform m and m′ calls with the same key, respectively. Also, we assume m = 2^{n_ctr} to simplify the proof. To capture this leakage assumption, we define that LR4-L.E and LR4-L.D leak an overused key, which we detail below. We first assume that the leakage oracle records all the invoked intermediate and temporal key values of the underlying G and E that appeared in the queries to the leakage oracle. For a query to the leakage oracle, when the first i ∈ [d] counters (ctr_1, . . . , ctr_i) are the same as in some previous query, the leakage oracle merely refers to the memorized value of the (i + 1)-th intermediate key. When G_i is invoked with the same key (e.g., k_i ∈ {0, 1}^{n_k}) more than m times, we define k_i as an overused key and set the i-th element of Leak to its value k_i. Otherwise, we set it to ⊥. Similarly, when E is invoked with the same temporal key k_{d+1} m′ times, we define k_{d+1} as an overused key and set the (d + 1)-th element of Leak to k_{d+1}; otherwise, we set it to ⊥. We also write Leak = ⊥ to mean that no keys are overused, i.e., Leak = ⊥ = (⊥, ⊥, . . . , ⊥). Here, our model regards the side-channel leakage of m traces as covering the computation of all intermediate values and outputs. Thus, our leakage model and proof consider the best possible SCAs (including the optimal SCA in Section 5.1).
Query rules. We assume the adversary can query the same counters up to m′ times in the leakage queries to prevent a trivial leak of a temporal key. Note that there is no restriction on repeating the same counters in the construction queries. Also, we suppose the adversary does not perform repeating/replaying and forwarding queries; it does not repeat a query across different oracles or to the same oracle. In construction queries, we assume that the adversary does not query (ctr, C) to LR4.D after querying (ctr, M) to LR4.E and obtaining C, and vice versa. The same assumption applies to the leakage queries. Note that the adversary can query any oracle in any order and can query counters in any order in construction and leakage queries.
Relation to existing leakage models. Our security notion under leakage is defined with a distinguishing game consisting of real and ideal worlds involving leaky and non-leaky (classical) oracles, where the former is present in both worlds. The latter performs real encryption in the real world, whereas in the ideal world it is idealized and always returns random values. We also allow the adversary to query the primitive oracles (G and E) in both worlds. This framework itself is identical to those in the literature [BMOS17, KS20, DEM+20, DMP22]. The main difference is the definition of the leakage function (the response of the leaky oracle). A leakage function in existing models is stateless, namely it does not reflect the query/response history, while ours depends on the previous queries, as we care about how many times the same key has been used in each rekeying module. In addition, the leakage functions in existing LR-AEs, such as [BMOS17, KS20], are a direct composition of those defined for internal components (e.g., key/tag derivation functions, the encryption function, and the message hashing function). The leakage functions for leak-free components (typically key/tag derivation functions) are vacuous, and those for leaky components leak everything about their computation. In the case of sponge-based LR-AEs, the internal primitive is typically a single cryptographic permutation, and the leakage function determines the input and output leakage for every permutation call occurring in an encryption/decryption query to the leaky oracle. LR4 consists of two components, G and E, and defines different leakage functions that depend on the query histories. So our model shares some basic principles with existing works; however, the leakage function is dedicated to capturing what is aimed at by practical protection methods, e.g., higher-order masking.
Security bound for LR4
Theorem 1. Let A be an adversary in the LFR game. Let q be the total number of construction queries, and q_L the total number of leakage queries. For i ∈ [d], assume A queries p_i times to G_i and p_I times to E. Then we have the following bound, where p = Σ_{i=1}^{d} p_i and q ≤ 2^{n_bc − 1}; we also assume q_L ≤ m′ 2^{n_ctr d}.

This theorem shows that LR4 has birthday-bound security with respect to the internal key length and almost optimal security with respect to the block cipher length, since m′ is small. Also, the term (q + q_L)(p + p_I)/2^{n_k} indicates the relationship between the upper bounds of the online and offline complexities that the adversary requires to attack LR4.
Proof. We use the H-Coefficient technique [Pat08, CS14] for the proof; see Section B.1 for the technical background. We first define transcripts, i.e., the sets of input/output values of the oracles in the LFR game that the adversary obtains.
Let Q_C, Q_{G_1}, . . . , Q_{G_d}, Q_E, and Q_L be the transcripts consisting of the input/output values of the construction oracle; of G_1, . . . , G_d; of E and D; and of the leakage oracle, respectively. For Q_{G_i}, we write IK_{i,•} for the input key, IV_{i,•} for the input counter, and OK_{i,•} for the output key. To simplify the security proof, we assume that, after A finishes its interactions with all oracles, the leakage oracle (LR4-L.E and LR4-L.D) reveals all the keys (the master key, intermediate keys, and temporal keys) involved in computing its outputs. Then the construction oracle also reveals the keys involved in its output computations. Note that the construction oracle in the ideal world uses P and P^{−1} and hence has no keys to reveal; instead, the oracle outputs dummy values sampled uniformly at random from {0, 1}^{n_k}. To prevent a trivial win of A, the construction oracle (of both worlds) does not reveal keys already revealed by the leakage oracle. For example, in Figure 4, assume that the adversary queries three counters (Lctr^d_1, Lctr^d_2, Lctr^d_3) = ((0, 0), (1, 0), (1, 1)) to the leakage oracle and three counters (ctr^d_1, ctr^d_2, ctr^d_3) = ((0, 1), (1, 1), (2, 0)) to the construction oracle. In this case, as shown in Figure 7, the leakage oracle reveals (k_{1,1}, k_{2,1}, k_{2,2}, k_{3,1}, k_{3,4}, k_{3,5}), and then the construction oracle reveals only (k_{3,2}, k_{2,3}, k_{3,7}), since k_{1,1}, k_{2,1}, k_{2,2}, and k_{3,5} are already revealed by the leakage oracle. Let Q_K = {k^{(a)}_{i,j}} be the transcript of keys revealed by the leakage and construction oracles, where a indicates which oracle reveals the key: a = 0 means that the leakage oracle reveals the key, and a = 1 means that the construction oracle does. The index i indicates the depth of the key, and j indicates the index of the key among the i-th depth keys, as shown in Figure 4 and Figure 7. We introduce four bad events. Roughly, if the transcripts defined above fulfill any bad event, the adversary can distinguish the two worlds with high probability.
Bad1: A collision between elements of Q_K at the same depth. That is, the event that there exist i ∈ [d + 1], j_1 ≠ j_2, and a_1, a_2 ∈ {0, 1} such that k^{(a_1)}_{i,j_1} = k^{(a_2)}_{i,j_2}. This event also includes the event that A obtains a Leak other than ⊥.
Bad2: A collision between a revealed key at the i-th depth and an input key of G_i, where i ∈ {1, . . . , d}. That is, the event that there exist i ∈ {1, . . . , d}, j ∈ [2^{n_ctr(i−1)}], a ∈ {0, 1}, and a query to G_i whose input key equals k^{(a)}_{i,j}.

Bad3: A collision between a revealed temporal key and an input key of E. That is, the event that there exist j, a, and a query to E (or D) whose key equals k^{(a)}_{d+1,j}.

Bad4: A ciphertext (resp. plaintext) collision between construction queries and leakage queries when the counters are the same and the plaintexts (resp. ciphertexts) are distinct. That is, the event that there exist i ∈ [q] and j ∈ [q_L] such that the i-th construction query and the j-th leakage query use the same counter and distinct plaintexts (resp. ciphertexts) but yield colliding ciphertexts (resp. plaintexts).

An upper bound of Adv^LFR_LR4(A) corresponds to an upper bound of p_bad := Pr[Bad1 ∪ Bad2 ∪ Bad3 ∪ Bad4] in the ideal world. This argument holds because the second part of the H-Coefficient technique, the so-called good transcript probability ratio, is lower bounded by 1 (see, e.g., [CS14] for details). We move the derivation of this part to Appendix B because it is a typical one for birthday-secure constructions and is tedious but rather straightforward. For readers unfamiliar with the H-Coefficient technique, we refer to [CLS15, Theorem 1] to get an idea of how typical H-Coefficient proofs are conducted in the case that the good transcript probability ratio is larger than one.

Figure 7 illustrates the revealed keys in the above example: the keys circled in blue are revealed, following the tree in the real world and sampled uniformly at random in the ideal world, except for those already revealed (circled in red); thus, the latter step reveals only k_{3,2}, k_{2,3}, and k_{3,7}.

We now evaluate p_bad in the ideal world. We start by evaluating Pr[Bad1]. For each i ∈ [d + 1], let nk_i be the number of elements k^{(•)}_{i,•} in Q_K (i.e., the number of revealed keys at the i-th depth). Now, we have nk_1 = 1 assuming q + q_L ≠ 0, and nk_1 ≤ nk_2 ≤ · · · ≤ nk_{d+1} ≤ q + q_L. In the ideal world, the elements k^{(1)}_{•,•} are chosen uniformly at random and independently, and the elements k^{(0)}_{•,•} are derived from the ROs. Thus, we obtain a birthday-type bound on Pr[Bad1] from the numbers of revealed keys at each depth. For Bad4, we define Cnc as the number of distinct counters in construction queries, and ctr^d_1, . . . , ctr^d_Cnc as the distinct counters. Let q_1, . . . , q_Cnc be the numbers of construction queries whose counters are ctr^d_1, . . . , ctr^d_Cnc, respectively; recall that the adversary queries LR4-L with the counter ctr^d_i at most m′ times. Assuming that none of Bad1, Bad2, and Bad3 happens, for each i ∈ [Cnc], the probability of a plaintext/ciphertext collision between the construction and leakage queries with counter ctr^d_i can be bounded, and summing over all counters yields at most 4m′q/2^{n_bc} using q ≤ 2^{n_bc − 1}. We obtain Theorem 1 by summing up the probabilities of the four bad events.

Tightness of Theorem 1. The bound in Theorem 1 is tight, as we have two matching attacks. We here present two distinguishing attacks to show the tightness of Theorem 1. The attacks try to invoke the events corresponding to Bad1 and Bad4 defined in the proof.
The first attack shows the tightness of the term dq_L^2/2^{n_E + 1}, and it corresponds to Bad1. The attacker first repeats queries to LR4-L.E with the same plaintext and distinct counters. With a sufficient number of queries, the attacker finds a key collision as defined in Bad1 with high probability, by obtaining a Leak other than ⊥ or by finding collisions of some ciphertexts. Once the attacker finds the key collision, it can distinguish the two construction oracles LR4 and P by querying the non-leaky oracle twice with the same plaintext and the counters where the key collision occurs. If a ciphertext collision occurs, the attacker concludes that it is querying LR4 with high probability; otherwise, it concludes that it is querying P. This attack requires q = O(1) and a sufficiently large q_L ≈ O(2^{n_E/2}), since the probability of the key collision is at most dq_L^2/2^{n_E + 1}. Note that we can show the tightness of the term dq^2/2^{n_E + 1} in the same manner. The attacker repeats construction queries with the same plaintext and distinct counters. If a ciphertext collision occurs for some counters, the attacker can distinguish the two worlds by checking whether other plaintexts also collide for those counters.
The second attack shows the tightness of the term 4m′q/2^{n_bc}, and it corresponds to Bad4. The attacker repeats construction queries and leakage queries with the same counters and distinct plaintexts. If the attacker queries the real world, the output ciphertexts cannot collide. However, if it queries the ideal world, the probability of a ciphertext collision is at most 4m′q/2^{n_bc}, as aforementioned.
Applications of LR4
As a primary application of fresh rekeying, Medwed et al. considered a challenge-response (CR) protocol with low-cost devices (e.g., RFID) [MSGR10]. LR4 can be used for CR protocols, utilizing counters instead of random challenge values. A more practically useful application is LR-AE (e.g., [DJS19, KS20, Men20, BGP+19]). However, one line of research has presented various LR-AEs based on different leakage models for different security goals, utilizing different (LR and non-LR) primitives. Pinpointing how known LR-AEs can benefit from our proposal is not easy due to this wide variety of problem settings. In a very general sense, if an LR-AE scheme uses a nonce-based rekeying component which is assumed to be leak-free (e.g., ISAP [DEM+20]), it could be replaced with LR4 (but again, this depends on the details of the scheme and needs ad-hoc security analysis).

Some examples. With the aforementioned limitations in mind, we describe example applications of LR4 to existing LR-AEs in more detail. The first is the proposal of Mennink at Asiacrypt 2020 [Men20]. Mennink proposed a class of LR-AEs based on ΘCB (a TBC-based idealized version of OCB) [KR11]. He proposes to instantiate ΘCB by replacing the internal TBC with a fresh rekeying scheme, where the input value r of Figure 1 is used as a tweak. Mennink presented the black-box security of the proposal but did not clearly show what leakage-resilience security would be possible. Still, the crucial point of his argument is that any TBC-based AE can use a fresh rekeying scheme as long as each encryption takes distinct tweaks determined by the nonce and the lengths of the input variables (rather than the value of an input variable itself). If this holds and the nonce is a counter, each tweak is unique and determined incrementally, and we can use LR4 as the underlying TBC of ΘCB efficiently. As a result, the encryption of ΘCB does not leak anything from the TBC up to the bound of Theorem 3. We should point out that Mennink's proposal does not strictly follow the above-mentioned rule of tweak update in the processing of AD, as a TBC encrypts each AD block without taking a nonce (see, e.g., [Men20, Fig. 5]). However, this can be easily fixed by involving the nonce in each AD block encryption. This fix does not harm the security under the standard model, namely, without leakage. Unlike many existing LR-AEs, the resulting scheme is parallelizable and its rate is one; namely, it needs just one TBC (realized by LR4) call to process one input block, and thus it is quite efficient.
ΘCB has a relatively large state size due to its parallel structure. If we want to reduce the implementation size, serial counterparts such as PFB_plus and PFBω by Naito et al. [NSS20] could be used instead. These modes are specifically designed with (higher-order) masking implementations in mind. Romulus [IKMP20], a finalist of the NIST Lightweight Cryptography project [Nat23], could also be used, with a modification to the AD processing similar to the one mentioned above for Mennink's scheme. Meanwhile, some TBC-based LR-AEs, such as HOMA [NSS22] and TEDT [BGP+19], do not follow the above condition on the tweak values used in encryption. LR4 is not suitable for these because the tweak cannot be updated in an incremental manner. Designing efficient LR rekeying schemes (or TBCs) that suit these LR-AEs remains an interesting open problem.
The second example is FGHF [DJS19] or its improvement [KS20]. These LR-AEs are encryption-then-MAC compositions, where the encryption and MAC functions use single-block leak-free PRFs. The first PRF is directly replaceable with LR4, as it takes the nonce N as an input (which becomes a counter of LR4; the input block of E could be a fixed constant). The second leak-free PRF takes the (key-less) hash value V of the tuple (ciphertext, nonce, associated data) and does not take a nonce N. Using LR4, this PRF can be modified so that it takes V as E's input and the next nonce (N + 1) as the counter of LR4. We do not go into the details here; however, the security proof with leakage could be obtained in a manner similar to that of [KS20] (given certain restrictions on the decryption leakage imposed by m′).
5 Quantitative success rate evaluation methodology for rekeying schemes
SCA backgrounds and success rate
Notations for SCA. We introduce notations for the discussion of SCA. An uppercase letter (e.g., X) denotes a random variable/vector on a set denoted by the calligraphic character (e.g., X), and a lowercase character (e.g., x) denotes an element of the set (i.e., x ∈ X), unless otherwise defined. Let Pr be the probability measure and p be the density or mass function. A side-channel trace is defined as x ∈ X ⊂ R^ℓ, where ℓ is the number of sample points. Let m be the number of traces available for an attack. Let X and T be the random variables for a side-channel trace and an n_b-bit partial plaintext/ciphertext, respectively. We suppose that m side-channel traces X^m = (X_1, X_2, . . . , X_m) and plaintexts/ciphertexts T^m = (T_1, T_2, . . . , T_m) are independent and identically distributed (i.i.d.). A secret variable utilized in SCA is denoted by Z. If we need to specify a secret key k, we denote it by Z^(k). For example, Z^(k) = Sbox(T ⊕ k) and n_b = 8 for major AES software implementations.
Optimal SCA. In SCA on symmetric ciphers/primitives, we usually compute the rank of key candidates from side-channel traces and partial plaintexts/ciphertexts, and estimate the correct key according to a score. Let S : K × X^m × T^m → R be a score function and let δ_S : X^m × T^m → K be an SCA distinguisher using S. For example, correlation power analysis (CPA) utilizes the absolute value of Pearson's correlation coefficient, combined with a leakage function, as a score function [BCO04]. The maximum-likelihood distinguisher using the negative log-likelihood (NLL) score L(k; X^m, T^m) = −Σ_{i=1}^{m} log p_{Z|X}(Z_i^{(k)} | X_i) is proven to provide an optimal attack, where p_{Z|X} denotes the true conditional probability distribution of Z given X. In other words, δ_L(X^m, T^m) = arg min_k L(k; X^m, T^m) is an optimal distinguisher. The function L is called the NLL. For a given device, considering such an optimal attack is sufficient to evaluate the SCA resistance (and leakage resilience) against all possible SCAs.
Success rate. The Success Rate (SR) has been commonly used for evaluating SCA performance and the validity of SCA countermeasures for various cryptographic implementations [SMY09]. The SR obtained from m traces, denoted by SR_m, is defined as the probability that the rank of the correct key k* becomes one:
SR_m := Pr[rank(k*, m) = 1],    (2)
where rank(k*, m) denotes the correct key rank in an optimal SCA, defined as
rank(k*, m) := 1 + Σ_{k ∈ K \ {k*}} 1[ L(k; X^m, T^m) ≤ L(k*; X^m, T^m) ].
Here, 1 denotes the indicator function. Note that, for simplified notation, we here omit the inputs X^m and T^m of rank and L; therefore, rank and the NLLs are random variables in this context. A sound SCA countermeasure should exponentially increase the number of traces required to achieve a given SR as the security parameter(s) increase. For example, masking schemes are proven to satisfy this property: the SR of an SCA on a masked implementation decreases exponentially with an increase of the masking order, which correspondingly increases the number of traces needed to achieve a given SR, under a condition about mutual information [DFS15, IUH22a, MRS22].
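To make the rank and SR definitions concrete, the following self-contained simulation estimates SR_m for a toy 4-bit target with Hamming-weight leakage and Gaussian noise, using the NLL-based distinguisher described above; the S-box, noise level, and trace counts are arbitrary toy choices and do not model any particular device.

```python
import numpy as np

rng = np.random.default_rng(1)
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])  # PRESENT 4-bit S-box
HW = np.array([bin(v).count("1") for v in range(16)])
SIGMA = 1.0   # Gaussian noise standard deviation

def estimate_sr(m: int, trials: int = 2000) -> float:
    """Empirical SR_m: fraction of trials where the correct key ranks first."""
    successes = 0
    for _ in range(trials):
        k_star = int(rng.integers(16))
        t = rng.integers(16, size=m)                        # known plaintexts
        x = HW[SBOX[t ^ k_star]] + rng.normal(0, SIGMA, m)  # leakage traces
        # Under a Gaussian model, the NLL is (up to constants) the squared residual.
        nll = np.array([np.sum((x - HW[SBOX[t ^ k]]) ** 2) for k in range(16)])
        rank = int(np.sum(nll <= nll[k_star]))              # ties counted pessimistically
        successes += (rank == 1)
    return successes / trials

for m in (1, 2, 5, 10, 20):
    print(f"m = {m:2d}: SR_m ~ {estimate_sr(m):.3f}")
```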
SR upper-bound evaluation.
As the true probability distribution is usually unknown and unavailable, it is quite difficult to evaluate the SR of an optimal SCA in practice. Currently, one of the most popular methods for evaluating an optimal SR is to use DL-SCA: the evaluator profiles the device under test (i.e., trains an NN to imitate the true probability distribution p_{Z|X}) and repeats an attack with m traces using the trained NN to evaluate SR_m empirically [ZBHV19, PHJ+19, dCGRP19]. However, such an empirical approach incurs a non-negligible computational cost, and the soundness and validity of the evaluation result are sometimes uncertain due to NN approximation errors, NN hyperparameter variations, and the stochastic aspects of learning. Alternatively, an inequality evaluation is sometimes useful for estimating the theoretically achievable SR from a quantity (e.g., mutual information or SNR) for a given device/implementation, as stated in Theorem 2.
Theorem 2 (SR upper-bound [dCGRP19, IUH22a]). Let I(Z; X) be the mutual information between the secret intermediate value Z and the side-channel trace X. Let SR_m be the success rate of an SCA with m traces. SR_m is upper-bounded through
ξ(SR_m) ≤ m · I(Z; X),    (3)
where ξ : [0, 1] → R_{≥0} is the function defined as
ξ(SR_m) = H(K) − (1 − SR_m) log(|K| − 1) − H_2(SR_m).    (4)
In Equation (4), H(K) denotes the entropy of K, log is the binary logarithm, and H_2 is the binary entropy function, namely H_2(x) = −x log x − (1 − x) log(1 − x). Usually, it holds that H(K) = n_b and |K| = 2^{n_b}, where n_b denotes the bit length of the partial secret key targeted by the SCA. Inequality (3) bounds the SR of an optimal attack, and every SCA (on Z) must satisfy Inequality (3). Note that an SR upper-bound conversely represents a lower-bound on the number of traces required to achieve a given SR.
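The following minimal sketch shows how Theorem 2 can be used numerically, assuming the reconstructed form ξ(SR_m) ≤ m·I(Z; X) of Inequality (3): given a mutual-information value, it finds the largest SR_m consistent with the bound by bisection (the MI value and n_b = 8 are arbitrary examples).

```python
import math

def h2(p: float) -> float:
    """Binary entropy function (in bits)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def xi(sr: float, n_b: int = 8) -> float:
    """xi(SR) = H(K) - (1 - SR) log(|K| - 1) - H_2(SR), with H(K) = n_b."""
    return n_b - (1 - sr) * math.log2(2 ** n_b - 1) - h2(sr)

def max_sr(m: int, mi: float, n_b: int = 8) -> float:
    """Largest SR_m satisfying xi(SR_m) <= m * I(Z; X), found by bisection.
    xi is non-decreasing on [1/2^n_b, 1], so bisection is well defined there."""
    lo, hi = 1 / 2 ** n_b, 1.0
    if xi(hi, n_b) <= m * mi:
        return 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if xi(mid, n_b) <= m * mi:
            lo = mid
        else:
            hi = mid
    return lo

# Example: I(Z; X) = 0.05 bit per trace
for m in (10, 50, 100, 500):
    print(f"m = {m:4d}: SR_m <= {max_sr(m, 0.05):.4f}")
```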
Formalization
In the following, we consider the cache-based LR4. This section presents a methodology to evaluate the overall success rate of attacking LR4 (AR for short) in a quantitative manner. Our methodology combines/unifies the trace complexity bound and the bounded leakage from the underlying primitive(s) through an information-theoretical SR evaluation to quantify the attack cost and AR, whereas existing studies utilize only the trace complexity bound as a threshold value for attack success/failure. For this purpose, we represent the bounded leakage by the mutual information I(Z; X), which is a common leakage representation as in many previous studies, and extend Inequality (3) for the numerical evaluation of the attack cost/AR. This unification is essential for the practical usage of LR4 with a guarantee of quantitative security. Hereafter, we refer to the success rates of the overall attack on LR4 and of a partial key recovery as AR and SR, respectively.
In this paper, as a common case, we suppose that the rekeying components for LR4 have a construction similar to an AES-like block cipher; that is, the round function consists of n_s parallel evaluations of an n_b-bit Sbox applied to the key-plaintext XOR (corresponding to SubBytes following AddRoundKey in AES). We also suppose that LR4 instantiates the rekeying components using an identical primitive. Recall that the d-th order LR4 consists of d rekeying components with m-bounded traces (i.e., a rekeying interval of m). Here, the computations of the rekeying components under the m-bounded trace model are performed for Σ_{i=1}^{d} m^{i−1} different keys, as the i-th rekeying component is evaluated under m^{i−1} different keys and generates m^i different intermediate/temporal keys for the (i + 1)-th rekeying component. This indicates that the attacker has m^{i−1} chances/trials for a key-recovery SCA with m traces on the i-th rekeying component. It is sufficient for the attacker to recover at least one key among all the SCA trials. Here, we consider AR as the probability that an attacker achieves the full recovery of at least one intermediate/temporal key of a rekeying component over all possible SCA trials with m traces. Using the rank metric as for SR, AR is formally defined as follows.
Definition 1 (Success rate of SCA on LR4). Let AR_{d,m} be the probability that an attacker succeeds in at least one full key recovery by attacking the d-th order m-bounded LR4 instantiated using rekeying component(s) with n_s parallel Sboxes. Using the rank metric, AR_{d,m} is defined by
AR_{d,m} := Pr[ ⋃_{i=1}^{d} ⋃_{j=1}^{m^{i−1}} ⋂_{h=1}^{n_s} { rank(k*_{i,j,h}, m) = 1 } ],    (5)
where k*_{i,j,h} denotes the h-th n_b-bit correct partial key of the j-th temporal key at the i-th rekeying component of LR4.
In Definition 1, the right-most intersection ⋂_{h=1}^{n_s} means that the attacker should recover a full key from SCAs on the n_s parallel Sboxes; the center union ⋃_{j=1}^{m^{i−1}} means that the attacker can mount an m-bounded-trace SCA (i.e., a trial) on the i-th rekeying component m^{i−1} times with different keys; and the left-most union ⋃_{i=1}^{d} means that the attacker can mount these trials on d different rekeying components. It is sufficient for the attacker to succeed in at least one full-key recovery among all trials. Hence, we take the union of the events ⋂_{h=1}^{n_s} {rank(k*_{i,j,h}, m) = 1} over i and j, whereas the full-key recovery of a trial is represented as the intersection over h.
Remark 2 (On payload encryption). LR4 generates m^d temporal keys, and we call the payload encryption m′ × m^d times in total. Definition 1 considers SCAs on the rekeying components, excluding the payload encryption. If we use the same primitive for the payload encryption as for the rekeying components (implying m = m′), it is sufficient for the evaluation including the payload encryption calls to consider AR_{d,m}, as in the numerical evaluation in Section 6.1. Even if we use a distinct primitive for the payload encryption, we can readily evaluate its AR by a union of full-key recovery events for that primitive involving m′ in a similar manner, which also extends to Theorem 3 below.
Remark 3 (Relation to multi-user security). Our definition is similar to the security notion in the multi-user encryption setting [DLMS14, BT16, LMP17, HTT18, DGGP21, NSSY22], which has been used to determine the rekeying frequency in real-world cryptographic protocols such as (D)TLS and QUIC [Res18, RTM18, TT21]. Security analysis of LR4 is related to cryptanalysis in the multi-user setting. This is because we perform multiple rekeying component evaluations using different keys, which corresponds to the case where multiple users evaluate a block cipher with distinct keys. Note that, in attacking LR4, the rekeying component outputs are available only through leakage, different from common cryptanalyses. The numbers of queries and users correspond to the trace bound m and the key lifetime (or the number of SCA trials) described below as σ_{d,m}, respectively. In other words, the above definition and the following theorem(s) are used to evaluate the success rate (or advantage) and the rekeying frequency in a setting similar to multi-user encryption with SCA leakage.
Information-theoretical evaluation
We next formally provide the relation between AR and SR to derive a concrete and quantitative AR evaluation method, under some standard and realistic assumptions.
Lemma 1 (Relation between AR and SR). Let AR_{d,m} be the overall success rate of SCA on the d-th order m-bounded LR4. Suppose that all temporal keys are mutually independent. Suppose that, for the rekeying components, the SR of an SCA on n_b-bit partial key recovery is identical for all Sboxes/partial keys. Let SR_m be the partial key recovery success rate of an SCA with m traces. It holds that
AR_{d,m} = 1 − (1 − SR_m^{n_s})^{σ_{d,m}},    (6)
and equivalently
SR_m = (1 − (1 − AR_{d,m})^{1/σ_{d,m}})^{1/n_s},    (7)
where σ_{d,m} = Σ_{i=1}^{d} m^{i−1} denotes the number of SCA trials available to the attacker.
Proof. We first show that Equation (6) holds. Let (Ω, F, Pr) be a probability space. Let [A]^c denote the complement of a set A ∈ F (i.e., [A]^c = Ω \ A). Let A^m_{i,j,h} denote the event rank(k*_{i,j,h}, m) = 1. By De Morgan's law, Equation (5) is transformed into
AR_{d,m} = 1 − Pr[ ⋂_{i=1}^{d} ⋂_{j=1}^{m^{i−1}} [ ⋂_{h=1}^{n_s} A^m_{i,j,h} ]^c ].    (8)
Here, the events ⋂_{h=1}^{n_s} A^m_{i,j,h} (i.e., success of full-bit temporal key recovery in a trial) for all i and j are mutually independent, as the temporal keys are supposed to be mutually independent owing to the RO. In addition, the rekeying components are performed on an identical device, which indicates that the events ⋂_{h=1}^{n_s} A^m_{i,j,h} are i.i.d. in terms of i and j. Therefore, Equation (8) is followed by
AR_{d,m} = 1 − ( 1 − Pr[ ⋂_{h=1}^{n_s} A^m_{i,j,h} ] )^{σ_{d,m}}.
Due to the assumption on the SR of an SCA on n_b-bit partial key recovery, the events A^m_{i,j,h} for all h are also mutually independent, which indicates that Pr[⋂_{h=1}^{n_s} A^m_{i,j,h}] = Π_{h=1}^{n_s} Pr[A^m_{i,j,h}] for any i and j. In addition, owing to the assumptions, we consider an identical SR for all i, j, and h because SR_m = Pr[A^m_{i,j,h}] as in Equation (2). Thus, we conclude AR_{d,m} = 1 − (1 − SR_m^{n_s})^{σ_{d,m}}, as required. Equation (7) is derived from Equation (6) as (1 − AR_{d,m})^{1/σ_{d,m}} = 1 − SR_m^{n_s}, from which we finally conclude SR_m = (1 − (1 − AR_{d,m})^{1/σ_{d,m}})^{1/n_s}.

Lemma 1 states how much/little SR is required to achieve a given AR, and Equation (7) is used for deriving the SR corresponding to a given AR. In designing a cryptographic module with SCA countermeasure(s), an acceptable AR is determined in advance. For LR4, the parameters (rekeying order d and interval m) should be determined for a required AR and a given device with mutual information I(Z; X) as the leakage amplitude (or an SNR, which upper-bounds I(Z; X) via the Shannon-Hartley theorem). For the evaluation, we introduce an upper-bound of AR as Theorem 3.
Theorem 3 (AR upper-bound). Let AR_{d,m} be the success rate of an m-bounded-trace attack on the d-th order LR4, as in Definition 1. Let I(Z; X) be the mutual information between the secret intermediate value Z and the side-channel trace X. Under the same assumptions as Lemma 1, AR_{d,m} is upper-bounded through
ξ( (1 − (1 − AR_{d,m})^{1/σ_{d,m}})^{1/n_s} ) ≤ m · I(Z; X),    (9)
where σ_{d,m} = Σ_{i=1}^{d} m^{i−1} and ξ is the function defined in Equation (4) of Theorem 2.
Proof. It is obvious from Inequality (3) in Theorem 2 and Equation (7) in Lemma 1.
Theorem 3 states the relation between the AR with m-bounded traces and the mutual information I(Z; X), considered as the leakage from each rekeying component; thus, it is a unified security metric of the bounded trace model and the underlying primitive leakage. Using Theorem 3, we can determine the security parameters d and m for a required AR and a given device with I(Z; X) or SNR.
Meanings of Theorem 3.
According to [dCGRP19, IUH22b, IUH22a], the function ξ represents the number of bits of information required to achieve a given SR. For example, SR = 1/2^{n_b} implies that the attacker has no advantage in the attack, as represented by ξ(1/2^{n_b}) = 0. Conversely, SR = 1 implies that the attacker obtains the full-bit information of the secret key, as represented by ξ(1) = n_b. In contrast, m·I(Z; X) represents the amount of information that the attacker receives through m traces. Note that n_s·I(Z; X) = λ, where λ is the bounded leakage and n_s denotes the number of parallel Sboxes, and m is the trace bound. Thus, Theorem 3 reveals the relation between the bounded leakage function and the trace complexity bound in an analytical and quantitative manner. In practice, Inequality (9) evaluates an upper-bound of the SR_m needed to achieve a given AR_{d,m} (i.e., the attack cost), which is used to determine an appropriate trace bound m (i.e., the rekeying interval).
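Continuing the previous sketch, the AR bound of Theorem 3 can be evaluated numerically for given d, m, n_s, and I(Z; X) by combining the SR bound with σ_{d,m} = Σ_{i=1}^{d} m^{i−1} (again under the reconstructed forms of Inequalities (3) and (9); all parameter values are illustrative).

```python
def ar_upper_bound(d: int, m: int, mi: float, n_s: int = 16, n_b: int = 8) -> float:
    """Upper bound on AR_{d,m}: 1 - (1 - SR*^{n_s})^{sigma}, where SR* is the
    largest SR_m allowed by xi(SR_m) <= m * I(Z; X) and sigma = sum_i m^{i-1}."""
    sr_star = max_sr(m, mi, n_b)          # from the previous sketch
    sigma = sum(m ** (i - 1) for i in range(1, d + 1))
    p_full = min(sr_star ** n_s, 1.0)     # probability of one full-key recovery trial
    return 1.0 - (1.0 - p_full) ** sigma

for d in (1, 2, 3):
    for m in (16, 64, 256):
        print(f"d = {d}, m = {m:4d}: AR <= {ar_upper_bound(d, m, 0.01):.3e}")
```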
Remark 4 (On SR range). In Inequality (9), SR_m is defined on the range [0, 1]. However, the minimum meaningful value of SR_m is 1/2^{n_b}, which means that the attacker has no advantage in guessing the secret key. In other words, it makes no sense to consider the case SR_m ∈ [0, 1/2^{n_b}), because any attacker trivially achieves SR_m = 1/2^{n_b} by a random guess. Therefore, we should determine d and m such that they satisfy SR_m ≥ 1/2^{n_b} in addition to Inequality (9). Conversely, if SR_m ≥ 1/2^{n_b} is not achievable for a given AR and I(Z; X), such an AR cannot be reached on the device.
Remark 5 (Relation between Theorem 1 and Theorem 3). Theorem 1 proves the security bound against an SCA attacker under a theoretical leakage model (yet one reflecting the idea of practical protection methods, e.g., higher-order masking) with idealized primitives. In contrast, Theorem 3 states a bound on the overall success rate of an optimal and real SCA on actual symmetric primitive(s). Each theorem captures different and essential aspects of LR4.
Practical usage of LR4
We can utilize LR4 as an SCA countermeasure with a guarantee of quantitative security evaluated by our methodology in Section 5.2. The proposed design flow is as follows.
Step 1: Device profiling. We first need to know the value of the mutual information I(Z; X) or the achievable SNR of side-channel measurement by profiling the target device. For example, the deep-learning-based profiling method in [IUH22a, Section 6] is useful to evaluate a tight upper-bound of I(Z; X). In addition, according to the Shannon-Hartley theorem, I(Z; X) is upper-bounded by the SNR as I(Z; X) ≤ 1/2 log(1 + SNR) (assuming that the noise is additive Gaussian). This indicates that it would be sufficient to evaluate the SNR, which may be easier than evaluating I(Z; X).
Step 2: Determination of master key lifetime and acceptable key recovery success rate. In this paper, we define the master key lifetime as the number of temporal keys generated under a given I(Z; X) or SNR. The master key lifetime should be chosen in view of the number of calls to the target cipher or LR-AE required by the application. At the same time, we determine an acceptable full-key recovery success rate as a threshold value AR ∈ (0, 1].

Step 3: Determination of security parameters. We then determine the security parameters, including the rekeying order d and rekeying interval m (i.e., the appropriate trace complexity bound for the situation), using Theorem 3 such that, for the given AR, the key lifetime exceeds the value determined in Step 2. Namely, for a given AR, we should determine d and the corresponding maximum value of m satisfying Inequality (9) and SR_m ≥ 1/2^{n_b} such that the master key lifetime requirement is met. If the requirement cannot be met with practical d and m (see also Remark 4), we need to mitigate/reduce the leakage by other SCA countermeasures such as masking and hiding.
Here, if we adopt a masking scheme, we do not have to profile masking gadgets, because we can precisely estimate the resulting leakage from the masked implementation using the aforementioned profiling method in [IUH22a]. Alternatively, under some conditions (see [IUH22a]), we can use another inequality instead of Inequality (9), given in the following corollary.

Corollary 1. Let e be the masking order, and let I(S; L) be the mutual information between a masking share S and its corresponding leakage L. Then Inequality (10) holds, where log and ln denote the binary and natural logarithms, respectively.
Proof. It is proven by combining Theorem 3 and a lemma in [IUH22a, MRS22].

Here, I(S; L) is equal to the I(Z; X) of a non-masked implementation in some settings; therefore, Inequality (10) can be used to evaluate the AR of a masked implementation from the profiling result on a non-masked implementation, without actual evaluation of masking gadgets [IUH22a].

Remark 6 (Conditions for Inequality (10)). Inequality (10) is meaningful for a non-trivially low I(S; L) (i.e., a worse SNR) and/or a large masking order e, as mentioned in [IUH22a, Remark 5.1]. At least, I(S; L) < 1/(2 ln 2) ≈ 0.72 should hold to use Inequality (10). If I(S; L) is relatively high and e is relatively small, we need to actually profile the adopted masking gadgets or to use the aforementioned method in [IUH22a]. It should be noted that Béguinot et al. recently proved another bound in [BCG+23], in which they claim a more precise evaluation than [IUH22a, MRS22]. It would be useful for a practical and more precise evaluation, although we used Corollary 1 based on [IUH22a, MRS22] for the proof-of-concept evaluation in this paper. In other words, for masked implementations, we can achieve a more precise evaluation if we use a more precise inequality about masked implementations.
Step 4: Actual design/implementation. After determining the parameters that satisfy the master key lifetime requirement, we conduct an actual design and implementation of LR4. Here, if we adopt no SCA countermeasure other than LR4, it is sufficient to use a common non-protected implementation, such as a naïve or reference implementation as it is, which may have been used in Step 1. Otherwise, a sound masked implementation should be utilized. The masking scheme used here should be provably secure under a practical leakage model (e.g., [NRS11, RBN+15, GMK16, GM17, BBD+16]), and the implementation should be done by carefully considering the physical defaults that cause security order degradation (e.g., coupling, cross-share interaction, and glitches), which have been shown and analyzed in many studies [RSVC+11, BGG+14, dCBG+17, DCEM18, FGP+18, GMPO20, SCS+21, SSB+21, MKSM22]. Usage of leakage detection/verification tools, design automation tools, and/or open-source implementations is promising for achieving such a provably secure masked implementation (e.g., [Rep16, UHMA17, UHMA21, BBC+19, KSM20, SCS+21, SSB+21, KMMS22, BMRT22]).
Numerical evaluation
We show the validity of LR4 through a numerical evaluation of the key lifetime for a given d and mutual information.
First, we virtually determine the mutual information in Step 1. We set the acceptable full-key recovery success rate to 1% as an example, and then evaluate the master key lifetime for various rekeying orders d (and masking orders e) using Inequality (9) (or Inequality (10)) with the achievable trace bound m for AR_{d,m} = 0.01. Here, we assume that each rekeying component in LR4 is implemented using one AES encryption call (namely, each G_i is realized by one call of E, where E is AES) for a proof-of-concept evaluation, although such a plain AES encryption is not an RO. See Section 6.2 for a discussion about the instantiation of an RO using AES or other symmetric primitives. Note that the evaluation result under the assumption of one AES encryption call would be consistent with actual RO instantiations. In addition, we suppose that AES is utilized for encrypting payload data using a temporal key generated by LR4. The AES encryption for payloads should be trace-bounded by m, so the key lifetime is given by m × m^d = m^{d+1}, as m^d corresponds to the number of generatable temporal keys.
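The lifetime arithmetic m × m^d = m^{d+1} is easy to check numerically; the trace bound used below is an arbitrary illustrative value, not one derived from Inequality (9).

```python
def key_lifetime(m: int, d: int) -> int:
    """Secure payload encryptions under d-th order rekeying with
    trace bound m: m**d temporal keys, each used for m traces."""
    return m ** (d + 1)

# Illustrative trace bound; real values come from Inequality (9).
m = 100
for d in range(4):
    print(f"d = {d}: lifetime = {key_lifetime(m, d):,}")
# d = 0: 100 (no rekeying), d = 1: 10,000, d = 2: 1,000,000, ...
```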
Table 1 and Table 2 list the evaluation results of the key lifetime lower bounds of LR4 with non-masked and masked implementations, respectively, where I(Z; X) is the virtually determined mutual information value; SR denotes the SR required to satisfy AR_{d,m} = 0.01 with a maximum value of m for a given d (evaluated using Equation (7)); m denotes the maximum value of m under the conditions evaluated using Inequalities (9) and (10) for non-masked and masked implementations, respectively; and m^{d+1} denotes a lower bound of the maximum number of secure encryption calls with generated temporal keys. "N/A" means that we cannot derive the value due to computational difficulty (that is, the evaluation requires extremely high-precision floating-point arithmetic). Note that rekeying and masking are not applied if d and e are zero, respectively (if d = 0, the master key is used for the payload encryption as it is).
The results demonstrate the validity of LR4 as an SCA countermeasure: the key lifetime increases exponentially (i.e., digit-wise) with an increase of the rekeying order d in most parts of the tables. In addition, from Table 2, we confirm that a combination with masking is more effective for improving key lifetimes if I(Z; X) is sufficiently small (i.e., the leakage is sufficiently noisy). It should be noted that, in Table 2, the key lifetime of the first-order masked implementation for I(Z; X) = 0.01 is smaller than that of the non-masked one. This is because Inequality (10) evaluates a lower bound of the key lifetime for a given AR_{d,m}, but does not necessarily precisely/tightly represent the actual value under some conditions. As mentioned in Remark 6, Inequality (10) cannot provide a meaningful evaluation of the bound if I(S; L) is too low and e is too small. I(S; L) = 0.01 and e = 1 may be such a condition, but the actual key lifetime would be longer than that of the non-masked implementation.
Practical instantiations
Standard instantiation using SHA-3. The provable security analysis of LR4 assumes that G_i is an RO and E is an IC. A straightforward instantiation would be, e.g., SHA-3 for each G_i and AES for E. If one wants to avoid multiple distinct primitives for implementation efficiency, E can also be permutation-based, say using Keccak-p with an adequate domain separation and output truncation (but the resulting function is non-invertible, so it limits the applications). In principle, instead of E we can use more complex functions, such as nonce-based encryption or AE, possibly using a permutation. What security/efficiency benefit is expected depends on the scheme we use, and exploring such combinations would be an interesting future direction.
Instantiation using AES. Our SR evaluation in Section 6.1 assumes a naïve use of AES for G for ease of evaluation, but this instantiation has a gap from the proof. It is important to consider secure instantiations using AES owing to its ubiquity and maturity as a symmetric primitive. We briefly discuss secure instantiations of E and G based solely on an ideal cipher E_base. For simplicity, let E_base have a key length one bit longer than E, so that we can generate two independent ideal ciphers, E′ and E″, from E_base by using this extra key bit for domain separation. The problem is how to instantiate G from E′. What we need for G is indifferentiability [MRH04] from the fixed-length RO. Note that classical block cipher-based compression functions (e.g., Davies-Meyer) are not indifferentiable [KM07]. We present two secure examples here. First, if E′ is a block cipher of n-bit block and n-bit key, we can instantiate G by Mennink's F3 construction [Men17], which has n/2-bit indifferentiability from the (fixed-length) RO and needs three calls of E′. Second, if E′ is a block cipher of n-bit block and 2n-bit key, G can be a 2n-bit indifferentiable hash function using Hirose's double-block-length compression function [Hir06] with a proper domain extension. For example, we can use MDPH [Nai19], which has (n − log n)-bit indifferentiability [GIM22]. Both examples utilize different primitives and offer different security levels, so the proper choice will depend on the security goal, application, and the available primitives.
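A minimal sketch of the domain-separation step, using AES-256 from pycryptodome as a stand-in for E_base; note that it extends the key by a full byte rather than a single bit so the key stays byte-aligned, a simplification made purely for illustration.

```python
from Crypto.Cipher import AES  # pip install pycryptodome

def make_domain_separated_ciphers(key: bytes):
    """Derive two 'independent' ciphers E' and E'' from one base
    cipher by prefixing a domain-separation byte to the key.
    AES-256 plays E_base here: its 32-byte key is a 31-byte
    master key plus 1 domain byte (a byte rather than a bit,
    purely so the key length stays byte-aligned)."""
    assert len(key) == 31

    def e_prime(block: bytes) -> bytes:
        assert len(block) == 16
        return AES.new(b"\x00" + key, AES.MODE_ECB).encrypt(block)

    def e_dprime(block: bytes) -> bytes:
        assert len(block) == 16
        return AES.new(b"\x01" + key, AES.MODE_ECB).encrypt(block)

    return e_prime, e_dprime
```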
We should point out that the average computation cost of LR4 is at most two G calls plus one E call (Section 3), which means the impact of G's cost on the total computation is limited. In addition, security evaluation for the aforementioned (secure) instantiations could be done in the same manner as in Section 6.1, as long as we use AES as E_base.
Conditions for exponential increase of key lifetime.
Interestingly, in Table 1 and Table 2, SR and the trace bound m become more severe for larger d. For larger d, the attacker can mount a larger number of SCA trials (i.e., σ_{d,m} increases), which enables at least one full-key recovery with a smaller value of SR. This is also represented by Equation (7), which shows a monotonic decrease in terms of σ_{d,m} for a fixed AR_{d,m}. In other words, the key lifetime gets longer by increasing d only if the gain of m^{d+1} is greater than the decrease of SR and m. In fact, if d is very large, the key lifetime no longer increases (exponentially) and the AR no longer decreases by increasing d, which implies that LR4 is valid as an SCA countermeasure for sufficiently small d.
In contrast, the security of masking is guaranteed as the SR decreases exponentially by increasing the masking order e. In particular, the security of masking is asymptotically proven; that is, SR_m → 1/2^{n_b} as e → ∞, although the exponential decrease may not be guaranteed for a small e [IUH22a]. Thus, rekeying and masking have opposite features to each other. A higher-order SCA countermeasure usually incurs a large performance overhead. If LR4 is available, its adoption can be one of the best choices to counter SCAs, as it is very efficient for small d and its overhead is practically small (as shown in Section 3).
On upper-bound of master key lifetime for general rekeying schemes
In the above, we mentioned that the master key lifetime does not increase exponentially by increasing d if d is (very) large, although it is effective for practical values of d. In general, the increasing rate gets slower for larger d, and the increase of the master key lifetime eventually stops for a certain value of d, depending on I(Z; X). Intuitively, this is because the minimum value of SR_m should be 1/2^{n_b} (as mentioned in Remark 4), although SR_m is monotonically decreasing in terms of d and SR_m ∈ [0, 1] according to the definition. In fact, any rekeying scheme, including LR4, has an upper bound of the master key lifetime according to Proposition 2. Proposition 2 implies that, for a given TR value τ, there always exists a number of SCA trials σ such that TR_σ ≥ τ, and the implementation cannot achieve a master key lifetime of more than σ with regard to the overall success rate τ. In other words, Proposition 2 implies that, for any given AR, we should make SR_m approach zero as d → ∞, although SR_m should be greater than 1/2^{n_b}. Here, the master key lifetime σ_{d,m} is maximized by d and m that tightly satisfy Inequality (9) with SR_m = 1/2^{n_b}. Thus, for a given I(Z; X) (or I(S; L) and masking order) and AR, there exists an upper bound of the master key lifetime, distinct from a brute-force/cryptanalysis bound on the master key. Theorem 3 gives an upper bound of the master key lifetime with regard to SCA, and it is a case study of LR4. Our discussion emphasizes the (trivial) fact that an ultimate goal of SCA countermeasures, including rekeying, is to achieve a (master) key lifetime as long as the lifetime against pure cryptanalysis, such that SCA leakage is no longer useful for the attacker.
Related to Proposition 2, the convergence rate of TR → 1 is very important and represents the achievable security of rekeying schemes. It depends on the value of I(Z; X). Fortunately, for LR4, we experimentally confirmed that the convergence is slow for practical d and a wide range of I(Z; X), which indicates the validity of LR4 security for many practical conditions. In contrast, for a very high I(Z; X) (e.g., I(Z; X) = 1), the leakage is not sufficiently bounded at all, and it is impossible for any SCA countermeasure to protect such a leaky device (see also [BS21]). Discussion of the convergence rate for a wider range of I(Z; X) would be useful for the achievable security of rekeying schemes. The purpose/goal of a rekeying scheme is to improve the master key lifetime in general; hence, investigating (the existence of) tighter upper bounds and the convergence rate for rekeying schemes is an important future work for making rekeying security more concrete.
Resilience against fault attacks
We here briefly discuss the resilience of LR4 against fault attacks [BDL97]. A major fault attack would be differential fault analysis (DFA) [BS97]. DFA recovers the secret key from pair(s) of correct and faulty ciphertexts for an identical input, where a faulty ciphertext means that an error (e.g., a bit flip) is induced in an intermediate value of the encryption/decryption. For example, in the case of AES encryption, a one-bit fault in the eighth-round input is sufficient for the full-key recovery if the corresponding correct ciphertext is available [PQ03]. However, the DFA attacker should observe the output of the symmetric primitive to obtain the pair(s) of correct and faulty ciphertexts. As the output of the ROs of LR4 is not (directly) available to the attacker, DFA is inapplicable to an LR4 implementation (except for the payload encryption). In addition, DFA requires querying an identical plaintext twice to obtain a pair of correct and faulty ciphertexts. If LR4 is correctly implemented in such a way as to detect replayed queries, as mentioned in Section 3.2 (and the payload encryption is nonce-based), the ROs (and payload encryption) never evaluate an identical input more than once, which also indicates the inapplicability of DFA. Some other fault attacks have also been developed, such as fault sensitivity analysis (FSA) [LSG+10, MMP+11], differential fault intensity analysis (DFIA) [GYTS14], and persistent fault analysis (PFA) [ZLZ+18, ZHF+23]. These fault attacks utilize statistical means for the key recovery, like DPAs. Hence, LR4 can offer leakage resilience by determining appropriate trace bounds m and m′ similarly to Section 5 (which may be trivial for some attacks). Moreover, we want to stress that FSA requires querying an identical plaintext many times to observe the leakage of fault sensitivity; DFIA utilizes the output ciphertext; and PFA is a chosen-plaintext fault attack. Neither such chosen-plaintext strategies nor RO outputs are available when attacking the ROs in LR4; thus, it would be difficult to apply these fault attacks to the ROs in LR4 (in a naïve manner). Meanwhile, the payload encryption part can be protected using an appropriate trace bound m′, a fault-attack-resilient mode of operation, and/or countermeasures against fault attacks (e.g., fault detection schemes).
Note that fault attacks on hash functions (e.g., SHA-3), which may be a natural choice for RO instantiation, would frequently be more difficult than on block ciphers, although we basically discuss fault attacks on AES in this section.
Comparison to LR-PRG/stream cipher
An LR-PRG or a stream cipher may be used for generating a key stream as a temporal key. For example, Pietrzak's LR stream cipher [Pie09] is based on a weak PRF F (whose outputs are pseudorandom as long as inputs are random); its initial state is one uniformly random input of F, in addition to two keys of F that form the master key, and the random input is sent in the clear. F is called in an alternating manner, using an internal state consisting of two keys and one input to F. The leakage model is different from ours; namely, it is assumed that F leaks a certain amount of bits for each invocation via a leakage function restricted in the output size. The security is proved in terms of the pseudorandomness of a single (possibly long) output sequence with leakage (hence the game does not consider multiple initializations).
One of the advantages of LR4 over an LR-PRG/stream cipher is that LR4 offers explicit synchronization. LR4 can immediately generate arbitrary temporal keys, which indicates that LR4 can redeem the communication whenever the synchronization fails (maliciously or accidentally) and can start a new session without reset. In contrast, an LR-PRG and a stream cipher require a reset with a new initialization vector in cases of synchronization failure or a new session beginning. As mentioned in Section 1, an attacker may mount an SCA on the LR-PRG/stream cipher during some first state updates if the attacker can trigger resets repeatedly. Thus, explicit synchronization is essential for an LR rekeying, which makes LR4 more suitable.
Relation and comparison to LR-PRFs
As mentioned, known LR-PRFs [MSJ12, FPS12] have structural similarities to our approach in terms of the use of GGM, but LR4 and [MSJ12, FPS12] are basically incomparable due to the different leakage models and the assumptions on the primitives. In [FPS12], the authors assume non-adaptive bounded leakage and a weak PRF as an underlying primitive. They show how to construct a leakage-resilient non-adaptive PRF, in which the adversary non-adaptively chooses the inputs of the PRF. In [MSJ12], a formal security proof is not given, but the authors show that parallel implementation improves the security and efficiency of a GGM-like LR-PRF. In terms of the constructions, the two LR-PRFs [MSJ12, FPS12] use independent public randomness for each node on the path. These public random values (IVs) are crucial for their security proofs and significantly increase the bandwidth. Moreover, the generation of these random values must be secure even under leakage, which can be quite costly in practice. For completeness, we briefly describe GGM and [MSJ12, FPS12] in Appendix A.
Applicability of LR4 to LR-AEs
As discussed in Section 4.3, many LR-AE proposals are designed with leveled implementation [BBC+20] in mind [PSV15, BGP+19, DJS19, KS20, DEM+20, BBB+20, SPS+22]. Although their security assumptions and leakage models vary (as discussed in [BBC+20]), they share the core idea of combining a leak-free/DPA-resistant component for, for example, the derivation of a temporal key, and SPA-resistant component(s) using the derived temporal key for the rest of the encryption routine. As we have discussed in Section 4.3, LR4 could be used as a component of existing LR-AEs if they meet certain conditions. However, these conditions are not always met, particularly when it comes to the tag-generation function (TGF) (see [BBC+20]). As discussed in Section 4.3, extending LR4 to handle such cases would be an interesting future direction.
Ultimately, the goal of LR-AEs is to improve the temporal key lifetime and change the key lifetime unit from the number of (tweakable) block cipher calls to AE calls. When applicable, an LR rekeying scheme contributes to this goal.
Summary
This paper studied rekeying as a power/EM SCA countermeasure and presented a new higher-order and LR rekeying scheme named LR4. We developed a leakage model for rekeying to formally prove the security of LR4, and analyzed its performance overhead and practical use cases. In addition, we defined the success rate of attack on rekeying schemes and developed a methodology for evaluating the success rate quantitatively through a unification of bounded trace complexity and bounded leakage. This is useful for determining the rekeying frequency for a bounded leakage (defined here as a mutual information value for a given device) regarding a success rate, which is mandatory for the practical usage of LR4. Through a numerical evaluation, we confirmed the validity and effectiveness of LR4 as an SCA countermeasure (as well as masking), as the number of secure encryption/decryption calls increases exponentially with an increase of rekeying order under practical conditions.
Future works
Relaxing security assumption. Our current security proof relies on idealized primitives. A standard model-based proof would provide additional confidence (see, e.g., [BGPS21]). It might be possible to remove a (pseudo)random property for G, as observed by MSGR. Moreover, it should be noted that our SR definition is related to the multi-user analysis [DLMS14, BT16, LMP17, HTT18, DGGP21, NSSY22] (as in Remark 3). Clarifying the relationship would be an interesting future direction.
Investigation of other possible rekeying constructions. In this paper, we discussed the evaluation of LR4 in Section 5.2, but Definition 1 and our evaluation methodology are readily and naturally generalizable and extendable to other rekeying schemes (as in Proposition 2 in Section 6.2.3). It is an important future work to investigate efficient rekeying constructions that make the master key lifetime longer.
Extension to SCAs other than power/EM attacks. The focus of this paper was power/EM SCAs, for which we developed the models, security proofs, and an evaluation methodology. It would be valuable to extend our theory and methodology to utilize rekeying in a provably secure manner against other SCAs such as timing and cache attacks.
wPRF. Let f_k(r; 0) and f_k(r; 1) be the lower and upper n_k bits of f_k(r), respectively. Using n_F n_E-bit public randomness values r_1, r_2, ..., r_{n_F}, the FPS scheme evaluates F_k(t) by iterating f along the bits of t, using r_i as the public input at the i-th level and selecting the lower or upper half of the output as the next key according to the i-th bit of t. Faust et al. proved that the above PRF is leakage resilient if both the leakage function and the inputs are non-adaptive.

Medwed-Standaert-Joux (MSJ) scheme [MSJ12]. The MSJ scheme consists in a multi-ary tree for an improved computational cost, whereas GGM and FPS employ a binary tree. Let n_b denote a positive integer dividing n_E, and let n_s = n_E/n_b. Typically, n_b is defined as the bit length of the S-box (e.g., n_b = 8 and n_s = 16 for AES). Let t^{(n_b)}[i] (1 ≤ i ≤ n_s) denote the i-th digit of t in the 2^{n_b}-ary number representation. For example, t^{(n_b)}[i] is typically given by two hexadecimal numbers for AES, as 2^{n_b} = 16^2. Using 2^{n_b} distinct public values r_0, r_1, ..., r_{2^{n_b}−1}, the MSJ scheme evaluates F_k(t) using n_s iterative block cipher calls, each keyed by the previous intermediate key and encrypting the public value selected by the corresponding digit of t. The MSJ scheme performs one F_k(t) evaluation with n_s encryption calls, which is a significant reduction from the n_E = n_b n_s calls of the GGM scheme. For the MSJ scheme, Medwed et al. suggested determining the i-th public value r_i as r_i = i ∥ i ∥ ⋯ ∥ i, which increases the trace complexity bound (i.e., decreases the SR or increases the number of traces). Some improvements and practical evaluations of MSJ have been presented in [MSNF16, USS+20, BMPS21].
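A compact sketch of the MSJ-style iterative derivation described above, with AES from pycryptodome as the block cipher; the digit-extraction order and parameter defaults are illustrative assumptions.

```python
from Crypto.Cipher import AES  # pip install pycryptodome

def msj_prf(k: bytes, t: int, n_s: int = 16, n_b: int = 8) -> bytes:
    """MSJ-style LR-PRF sketch: n_s iterative AES calls, each
    encrypting the public constant selected by one 2**n_b-ary
    digit of t, keyed by the previous intermediate key."""
    key = k
    for i in range(n_s):
        digit = (t >> (n_b * i)) & (2**n_b - 1)
        r_i = bytes([digit]) * 16        # r_i = i || i || ... || i
        key = AES.new(key, AES.MODE_ECB).encrypt(r_i)
    return key

temporal_key = msj_prf(b"\x00" * 16,
                       t=0x0123456789ABCDEF0123456789ABCDEF)
```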
B Proof of Theorem 1

B.1 H-coefficient technique
Assume that a computationally unbounded adversary A queries the two worlds, real and ideal, denoted by O_re and O_id, and tries to distinguish them. The H-coefficient technique [Pat08, CS14] is a general technique to evaluate the distinguishing probability of A. We define a transcript as a set of input/output values that A obtains during the interaction with the world. Let T_re (resp. T_id) denote the probability distribution of the transcript induced by the real world (resp. the ideal world). By extension, we also use the same notation to refer to a random variable distributed according to each distribution. We say that a transcript τ is attainable if Pr[T_id = τ] > 0 holds with respect to A. Let Θ denote the set of attainable transcripts. The following is the fundamental lemma of the H-coefficient technique; see, e.g., [CS14] for the proof.
Lemma 2. Let Θ = Θ_good ⊔ Θ_bad be a partition of the set of attainable transcripts. Assume that there exists ε_1 ≥ 0 such that, for any τ ∈ Θ_good, one has Pr[T_re = τ] / Pr[T_id = τ] ≥ 1 − ε_1, and that there exists ε_2 ≥ 0 such that Pr[T_id ∈ Θ_bad] ≤ ε_2. Then, the advantage of A in distinguishing O_re and O_id is at most ε_1 + ε_2.
B.2 Evaluation of good transcript probability ratio
In Section 4, we defined bad events to show how to partition the set of attainable transcripts and then showed the evaluation of ε_2, i.e., Pr[T_id ∈ Θ_bad] ≤ d(q + q_L)²/2^{n_k+1} + (q + q_L)(p + p_I)/2^{n_k} + 4m′q/2^{n_bc}. All that remains is evaluating the good transcript probability ratio, i.e., ε_1 in Lemma 2.
Lemma 3. For any τ ∈ Θ_good, we obtain the lower bound on Pr[T_re = τ]/Pr[T_id = τ] required for ε_1 in Lemma 2. Let T_E, T_L, and T_K denote the random variables of each part of the transcript, and let * ∈ {re, id}. In both the real and ideal worlds, the transcript probability factors into three terms P1_*, P2_*, and P3_*, corresponding to the primitive queries, the revealed keys, and the construction/leakage queries, respectively. To prove Lemma 3, we evaluate the lower bound of (P1_re · P2_re · P3_re)/(P1_id · P2_id · P3_id).
Evaluation of P1_re and P1_id. We first obtain P1_re = P1_id, because the probability distributions of transcripts defined by the interactions with the oracles G_1, ..., G_d, and E^± are identical in both worlds.
Evaluation of P2_re and P2_id. In the ideal world, the keys revealed by the construction oracle (i.e., k^{(1)}_{•,•}) are chosen at random and independently from {0, 1}^{n_k}. Regarding keys revealed by the leakage oracle (i.e., k^{(0)}_{•,•}), recall that the transcript τ is good; thus, there is no collision between the revealed keys at the same depth in Q_K (i.e., Bad1), and there is no collision between revealed keys of depth i, k^{(0)}_{i,•}, and the input keys of the RO G_i, where i ∈ [d] (i.e., Bad2). Therefore, we obtain P2_id = (1/2^{n_k})^{Σ_{i=1}^{d+1} nk_i}, where nk_i is the number of elements k^{(•)}_{i,•} in Q_K (i.e., the number of revealed keys at the i-th depth). In the real world, keys revealed from the construction oracle are all real, unlike in the ideal world. However, due to Bad1 and Bad2, we obtain P2_re = (1/2^{n_k})^{Σ_{i=1}^{d+1} nk_i} in the same manner as in the above discussion of keys revealed by the leakage oracle in the ideal world. Therefore, we obtain P2_re = P2_id.
Evaluation of P3_re and P3_id. In the ideal world, the construction oracle is the TURP P^±; thus, T_C and T_L are independent, and P3_id factors as P3_id = P4_id · P5_id, where P4_id and P5_id correspond to T_C and T_L, respectively. We obtain the following evaluations.
Recall that Cnc is the number of distinct counters in the construction queries, and ctr^d_1, ..., ctr^d_{Cnc} are the distinct counters. Also recall that q_1, ..., q_{Cnc} are the numbers of construction queries whose counters are ctr^d_1, ..., ctr^d_{Cnc}, respectively. Since the adversary queries P^±, which is independent from the other oracles, we obtain P4_id = ∏_{i=1}^{Cnc} ∏_{j=0}^{q_i−1} 1/(2^{n_bc} − j).
Similarly, we define Lnc as the number of distinct counters in the leakage queries, and Lctr^d_1, ..., Lctr^d_{Lnc} as the distinct counters. Also, let q_{L,1}, ..., q_{L,Lnc} be the numbers of leakage queries whose counters are Lctr^d_1, ..., Lctr^d_{Lnc}, respectively; thus, q_{L,i} ≤ m′ for i ∈ [Lnc] and Σ_{i=1}^{Lnc} q_{L,i} = q_L. Here, the temporal key values input into E in LR4-L, which are derived from Lctr^d_1, ..., Lctr^d_{Lnc}, are all distinct due to Bad1. Also, there is no collision between the temporal keys in LR4-L and the input keys of the IC due to Bad3. Thus, we obtain P5_id = ∏_{i=1}^{Lnc} ∏_{j=0}^{q_{L,i}−1} 1/(2^{n_bc} − j). In the real world, unlike the case of P3_id, we cannot divide the evaluation of P3_re into two evaluations about T_C and T_L, since they are not independent in the real world. However, we can discuss the evaluation of P3_re in almost the same manner as P5_id. We define CLnc as the number of distinct counters throughout the construction and leakage queries (i.e., CLnc ≤ Cnc + Lnc), and CLctr^d_1, ..., CLctr^d_{CLnc} as the distinct counters throughout Q_C and Q_L. Also, let q_{CL,1}, ..., q_{CL,CLnc} be the summed numbers of construction and leakage queries whose counters are CLctr^d_1, ..., CLctr^d_{CLnc}, respectively; thus, Σ_{i=1}^{CLnc} q_{CL,i} = q + q_L. As in the case of P5_id, all the temporal keys derived from CLctr^d_1, ..., CLctr^d_{CLnc} are distinct and have no collision with the input key of the IC due to Bad1 and Bad3. Thus, we obtain P3_re = ∏_{i=1}^{CLnc} ∏_{j=0}^{q_{CL,i}−1} 1/(2^{n_bc} − j).
Figure 4: Key diagram of LR4 when d = 2 and m = 3. Solid, dashed, and dash-dotted arrows mean that the key is derived from the key at the parent's node when ctr_i = 0, ctr_i = 1, and ctr_i = 2, respectively, for each i ∈ [d] = [2]. Temporal keys are generated and used from left to right.
Figure 5: The cache-based version of the temporal key derivation function R. LR4 with cache invokes R_C instead of R. The caches ch^{d+1} and kh^{d+1} are initially given as ch^{d+1} = (0, 0, ..., 0), kh_{i+1} = G_i(kh_i, 0) for each 1 ≤ i ≤ d, and kh_1 = k_mst. Here, kh_i should be called at Line 9 only when it is actually used.
Figure 6: The rekeying function of asakey. K is a master key, and p is an underlying permutation. N_1, ..., N_k are one-bit split nonces, where an input nonce N is written as N_1 ∥ ⋯ ∥ N_k. K* is a derived temporal key input to the sponge-based encryption part. ISAP has a similar rekeying function to this construction.
Figure 7: Example of the key reveal procedure in the proof. The leakage queries are for the leaves (3, 1), (3, 4), and (3, 5), and the relevant (intermediate) keys are circled in red. The construction oracle queries are for the leaves (3, 2), (3, 5), and (3, 7), and the relevant (intermediate) keys are circled in blue. First, the keys circled in red are revealed. Second, the keys circled in blue are revealed (following the tree in the real world and randomly sampled in the ideal world), except those already revealed (i.e., circled in red). Thus, the latter step only reveals k_{3,2}, k_{2,3}, and k_{3,7}.
Proposition 2 (Attacker with infinite trials almost surely succeeds in at least one full-key recovery). Let TR_{σ,m} be the probability of at least one success during σ SCA trials with m-bounded traces, defined as

TR_{σ,m} = Pr[ ⋃_{v=1}^{σ} { ⋂_{h=1}^{n_s} rank(k*_{v,h}, m) = 1 } ],

where rank(k*_{v,h}, m) denotes the correct key rank of the h-th partial key at the v-th trial with m traces. With the same assumption as Lemma 1, it holds that TR_{σ,m} → 1 as σ → ∞ if SR_m ≠ 0.

Proof. If SR_m ≠ 0, then Pr[⋂_{h=1}^{n_s} rank(k*_{v,h}, m) = 1] = (SR_m)^{n_s} > 0 for any v, according to the assumption on the SR. Therefore, as the SCA trials are mutually independent of each other, it holds that

Σ_{v=1}^{∞} Pr[⋂_{h=1}^{n_s} rank(k*_{v,h}, m) = 1] = ∞.   (11)

According to the Borel-Cantelli lemma [Fel91, pp. 201-202], Equation (11) implies Pr[lim sup_{v→∞} { ⋂_{h=1}^{n_s} rank(k*_{v,h}, m) = 1 }] = 1, which means that the events ⋂_{h=1}^{n_s} rank(k*_{v,h}, m) = 1 (i.e., successful full-key recoveries) occur infinitely often with probability one. This implies Proposition 2.

Corollary 2. Let AR_{d,m} be the success rate of attack on the d-th order m-bounded-trace LR4 defined in Definition 1. With the same assumption as Lemma 1, it holds that AR_{d,m} → 1 as d → ∞ or m → ∞.

Proof. It is proven by Proposition 2 as the case TR_{σ,m} = AR_{d,m} with σ = σ_{d,m}, where σ_{d,m} → ∞ as d → ∞ or m → ∞. Note that SR_m → 1 as m → ∞ usually holds.
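The closed form behind this proof, TR_{σ,m} = 1 − (1 − (SR_m)^{n_s})^σ for independent trials, can be checked numerically; the SR and n_s values below are illustrative.

```python
def tr(sigma: int, sr: float, n_s: int = 16) -> float:
    """Probability of at least one full-key recovery over `sigma`
    independent m-bounded SCA trials, each succeeding on all n_s
    partial keys with probability sr**n_s."""
    p_full = sr ** n_s
    return 1.0 - (1.0 - p_full) ** sigma

for sigma in (10**3, 10**6, 10**9):
    print(sigma, tr(sigma, sr=0.5))  # tends to 1 as sigma grows
```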
We next show that P3_re ≥ P4_id · P5_id holds. For any i ∈ [CLnc], the query of CLctr^d_i in Q_C and Q_L can be classified into one of the following three cases: (Case 1) CLctr^d_i is queried only in Q_C; (Case 2) CLctr^d_i is queried only in Q_L; (Case 3) CLctr^d_i is queried in both Q_C and Q_L.

Regarding adaptive leakage function(s), their impracticality and meaninglessness were discussed in [SPY+10, FPS12]. Meanwhile, some impossibility results (difficulty of LR cryptography with practical constructions under adaptive leakage) were shown in [SPY+10, FPS12]. Known practical remote power SCAs (e.g., [ZS18, LKO+21]) also utilize non-adaptive leakage. Thus, non-adaptive leakages are more common, practical, and significant than adaptive ones, and we focus on non-adaptive leakage in this paper.

Figure 2: Encryption of LR4, where k_1 = k_mst.

Figure 5 displays the cache-based version of the temporal key derivation function R, denoted by R_C. It caches the intermediate keys and counters for given m and m′, where kh_i and ch_i denote the caches for the i-th intermediate key and counter value, with kh^{d+1} = (kh_1, kh_2, ..., kh_d, kh_{d+1}) and ch^{d+1} = (ch_1, ch_2, ..., ch_d, ch_{d+1}), respectively.

The strengthened asakey must call at least one Keccak-p[1600, 12] for each encryption call, while LR4 calls at least one Keccak-p[1600, 24] (i.e., SHA-3) per m′ calls of E_K, where m′ is the trace bound for E_K. Note that E_K can be instantiated with an (LR-)AE, for example, a sponge-based encryption like ISAP or Ascon [DEM+20, DEMS21], which can encrypt a message of (practically) arbitrary bit length by only one E_K call (see also Section 6.2.1). On average per E_K call, LR4 requires fewer than 2/m′ Keccak-p[1600, 24] calls (as proven in Proposition 1), while asakey always requires at least one Keccak-p[1600, 12] call for nonce processing. As m′ is usually greater than 10 (see Section 6.1), cache-based LR4 has a far lower latency than strengthened asakey on average. Also, for the non-cached version, LR4 requires only d Keccak-p[1600, 24] calls, while strengthened asakey requires 128 Keccak-p[1600, 12] calls.
Table 1: Key lifetime lower-bound evaluation results of non-masked LR4 for AR_{d,m} = 0.01
Table 2: Key lifetime lower-bound evaluation results of masked LR4 for AR_{d,m} = 0.01
British Columbia’s Safer Opioid Supply Policy and Opioid Outcomes
This cohort study examines whether there were changes in opioid prescribing and opioid-related health outcomes after implementation of British Columbia’s Safer Opioid Supply policy.
Canada's opioid crisis has accelerated markedly in recent years. In 2022, 14 opioid-related poisoning hospitalizations occurred each day, while 20 people per day died of an overdose on average. 1 With potent synthetic opioids from the unregulated market fueling this crisis, there is growing interest in offering a safe supply of regulated, pharmaceutical-grade opioids to people who use drugs to help reduce the risk of overdose and poisonings. 2 In March 2020, British Columbia became the first jurisdiction globally to launch a provincewide Safer Opioid Supply policy that allows individuals at high risk of overdose to receive pharmaceutical-grade opioids free of charge prescribed by a physician or nurse practitioner. 3 This policy initially covered select opioids (hydromorphone and sustained-release oral morphine), 3 and in July 2021, it was made permanent and expanded to include additional drugs, including injectable fentanyl. 3,4 In June 2023, 4619 people were prescribed safer supply opioid medications. 5 Outside British Columbia, there have also been a limited number of federally funded pilot safer supply programs in cities in Ontario, Quebec, and New Brunswick since 2020. 6,7

While this harm-reduction policy is intended to reduce overdose or poisoning risks and remove barriers to care access for people who use drugs, 8 some people suggest that the limited range of lower-potency opioids available through safer supply programs may not meet the needs of people who use drugs who are accustomed to high-potency opioids. 9 Furthermore, there are concerns that providing a safer supply could discourage people who use drugs from receiving proven substance use treatment and encourage potential diversion of prescribed opioids. 8,10 In a recent debate in the House of Commons, opposition parties blamed the Safer Opioid Supply policy for continued opioid overdose deaths in British Columbia, arguing that opioids obtained from the program are being traded for fentanyl-laced opioids on the unregulated market. 11,12

There is little evidence on the impacts of safer supply programs to inform this debate. Some qualitative studies suggest that safer supply programs positively impacted the lives of participants 13 and were associated with reduced overdose risks, 14,15 although participants highlighted inadequate substitutability of hydromorphone for unregulated opioids. 16-19 Using a quasi-experimental difference-in-differences (DD) design and administrative data from 3 Canadian provinces, this study sought to overcome the limitations of prior studies and provide the first evidence, to our knowledge, on the association of British Columbia's Safer Opioid Supply policy with opioid prescribing and opioid-related health outcomes.
Study Design
In this cohort study, we used the DD method 20-24 to compare prepolicy to postpolicy changes in outcomes in British Columbia (where the Safer Opioid Supply policy was implemented) with similar changes in comparison provinces (Manitoba and Saskatchewan) that did not implement this policy. The key assumption underlying the DD analyses was that, in the absence of the policy, the trends in outcomes in British Columbia would be similar to the trends in the comparison provinces. Our study used deidentified, aggregate province-level data; hence, no ethics approval or informed consent was required, as per Newfoundland and Labrador's Health Research Ethics Board guidelines. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.
Outcomes, Data, and Study Period
Opioid prescribing outcomes included the number of opioid prescriptions dispensed, the number of people with at least 1 opioid prescription dispensed (hereafter, opioid claimants), and the number of opioid prescribers, measured per 100 000 population. All outcomes were specific to the types of opioids targeted by British Columbia's Safer Supply policy (ie, hydromorphone, morphine, fentanyl, and oxycodone). While these types of opioids were targeted by the policy, they were not exclusively safer supply products and had been prescribed for pain management and other clinical reasons before the policy was implemented. Opioid-related health outcomes included rates of opioid overdose poisoning hospitalizations (hereafter, hospitalizations) and deaths from apparent opioid toxicity (hereafter, deaths). All data were at the province quarter level.
Data on opioid prescriptions were obtained from the Canadian National Prescription Drug Utilization Information System Database. Prescription data included all claims (both public and private) dispensed in community pharmacies and were categorized at Anatomical Therapeutic Chemical level 5 (all forms of fentanyl, hydromorphone, oxycodone, and morphine). Hospitalizations were identified using International Statistical Classification of Diseases and Related Health Problems, Tenth Revision codes T40.0, T40.1, T40.2, T40.3, T40.4, and T40.6 and included both unintentional and intentional poisonings. 25,26 Deaths were defined as "a death caused by intoxication/toxicity (poisoning) resulting from substance use, where 1 or more of the substances is an opioid, regardless of how it was obtained (eg, illegally or through personal prescription)." 25 Data on hospitalizations and deaths were publicly available from the Public Health Agency of Canada. 25 Given the data availability, our study period spanned from quarter 1 of 2016 (January 1, 2016) to quarter 1 of 2022 (March 31, 2022).

Key Points

Question: Was there an association of British Columbia's Safer Opioid Supply policy with opioid prescribing and opioid-related health outcomes in the first 2 years of implementation?

Findings: In this cohort study using the difference-in-differences method, the Safer Opioid Supply policy was associated with a large increase in the number of opioid prescriptions dispensed, a moderate increase in the number of individuals with at least 1 opioid prescription dispensed, and a substantial increase in opioid-related poisoning hospitalizations.

Meaning: Two years after its launch, the Safer Opioid Supply policy was associated with greater prescribing of safer supply opioids but also with a significant increase in opioid-related poisoning hospitalizations.
Comparison Provinces
Our study used Manitoba and Saskatchewan as provinces for comparison with British Columbia. These 2 provinces did not implement the policy and, in addition to British Columbia, were the only provinces that had data for both opioid prescriptions (across all ages) and opioid-related health outcomes.
Statistical Analysis
We estimated DD regressions using province quarter-level data. The covariate of interest was an indicator for the Safer Opioid Supply policy, which was equal to 1 if the policy was in effect in a province (ie, in British Columbia after quarter 1 of 2020) and 0 otherwise. These analyses controlled for time-varying province-level covariates, including the proportion of individuals aged 0 to 17 years, the proportion of males, the Consumer Price Index, and the unemployment rate, as well as public health COVID-19 restrictions (using the COVID-19 stringency index developed by the Bank of Canada 27). The regressions also included province indicators to control for all time-invariant characteristics of provinces and quarter-year indicators to control for secular changes or shocks in outcomes that are common to British Columbia and the comparison provinces. Additionally, we included province-specific linear time trends to control for possible differences in trends across provinces.
We estimated the regressions by ordinary least squares and calculated heteroskedasticity-consistent HC3 SEs. All analyses were conducted using Stata, version 18 (StataCorp LLC). Tests were 2-sided, and a significance level of P < .05 was used.
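A sketch of this specification in Python's statsmodels rather than Stata; the data frame, file name, and column names are placeholders for the province quarter-level panel described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per province-quarter, with columns such as
#   outcome, policy (1 in BC after 2020 Q1, else 0), province,
#   quarter, prop_age0_17, prop_male, cpi, unemployment,
#   covid_stringency, trend (linear time index)
df = pd.read_csv("province_quarter_panel.csv")  # hypothetical file

model = smf.ols(
    "outcome ~ policy + prop_age0_17 + prop_male + cpi"
    " + unemployment + covid_stringency"
    " + C(province) + C(quarter)"          # two-way fixed effects
    " + C(province):trend",                # province-specific trends
    data=df,
).fit(cov_type="HC3")                      # heteroskedasticity-consistent SEs

print(model.summary().tables[1])
```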
We conducted several analyses to investigate the robustness of our results. As the policy's launch coincided with the onset of the COVID-19 pandemic, we reran the analysis excluding the COVID-19 washout period between quarter 2 of 2020 and quarter 1 of 2021 and then examined the policy effects separately during the first year (ie, the policy's launch) and the second year (ie, the policy's expansion) for a dose-response relationship.
We also examined the sensitivity of our results to the exclusion of the province-specific linear time trend, demographic controls, and the COVID-19 stringency index, and examined the sensitivity of results to the choice of comparison provinces by additionally including Alberta and Nova Scotia as comparison provinces. Next, we conducted an event study analysis to assess the validity of the parallel trends assumption. Finally, to examine the robustness of our results to potential violation of the parallel trends assumption, we used the synthetic DD method 28, which reweights control groups and time periods to ensure that outcome trends are parallel between control and treatment groups. This analysis included not only Manitoba, Saskatchewan, Alberta, and Nova Scotia but also Ontario and New Brunswick as comparison provinces. Further details of the statistical analysis are provided in the eMethods in Supplement 1.
We also examined how changes in prescription outcomes associated with the Safer Opioid Supply policy varied by age and sex of claimants. Subgroup analyses for health outcomes were not feasible, as subgroup data on these outcomes were not available at the province quarter level.
Descriptive Statistics
Figure 1 shows the outcome trends. In Manitoba and Saskatchewan, opioid prescriptions remained stable throughout the study period. In British Columbia, there was a gradual, small increase in opioid prescriptions (from 6227.1 per 100 000 population in quarter 3 of 2017 to 7750.4 per 100 000 population in quarter 1 of 2020) before implementation of the Safer Opioid Supply policy. This increase, however, accelerated sharply after the policy's implementation. The number of prescriptions increased by 52%, from 8598.6 per 100 000 population in quarter 2 of 2020 to 13 070.1 per 100 000 population in quarter 1 of 2022. The trends for the opioid claimant outcome declined for the comparison provinces throughout the study period. Meanwhile, in British Columbia, these trends declined before the policy but began to increase after policy implementation. For the number of prescribers, there was a declining trend in all 3 provinces before the policy. After the policy was implemented, the number of prescribers continued to decline in Manitoba, while there was an upward trend in Saskatchewan and British Columbia.
In all 3 provinces, hospitalizations fluctuated from 1 quarter to the next before the policy was implemented. However, the trend appeared to be similar between British Columbia and the comparison provinces. After policy implementation in March 2020, there was a sharp increase in hospitalizations in British Columbia that continued until quarter 4 of 2021. An increase in hospitalizations was observed in Saskatchewan after the policy, but this increase was followed by declines and periods of no change, a pattern more consistent with quarter-by-quarter fluctuations than an upward trend. The postpolicy trend for hospitalizations in Manitoba was similar to the prepolicy trend and was relatively flat. A spike in death rates was observed in British Columbia at the time of policy implementation. Similar spikes were seen in Saskatchewan and Manitoba, but unlike in British Columbia, these increases reversed in the subsequent quarters.
Regression Results
Table 1 presents the results from the main analyses. The Safer Opioid Supply policy was associated with an increase of 2619.6 prescriptions per 100 000 population (95% CI, 1322.1-3917.0 per 100 000 population; P < .001) in British Columbia compared with the corresponding increase in the comparison provinces. This represented a 34% relative increase compared with 7730 prescriptions per 100 000 population in British Columbia in quarter 4 of 2019 (before the policy was implemented). The policy was also associated with statistically significant increases in the number of claimants of opioids (176.4 per 100 000 population; 95% CI, 33.5-319.4 per 100 000 population; P = .02). There was no significant change in the number of prescribers of opioids (15.7 per 100 000 population; 95% CI, −0.2 to 31.6 per 100 000 population; P = .053) covered under the policy. Subgroup analyses by age and sex revealed that the large increases in opioid prescriptions and opioid claimants were driven by males and those aged 25 to 64 years (eTable in Supplement 1).
The hospitalization rate increased by 3.2 per 100 000 population (95% CI, 0.9-5.6 per 100 000 population; P = .01) after policy implementation in British Columbia compared with the corresponding increase in the comparison provinces. This represented a 63% relative increase compared with 5.7 hospitalizations per 100 000 population in quarter 4 of 2019. Meanwhile, there was no statistically significant change in deaths (1.6 per 100 000 population; 95% CI, −1.3 to 4.5 per 100 000 population; P = .26; a 36% relative increase compared with 4.5 per 100 000 population in quarter 4 of 2019).
Table 2 reports the sensitivity analyses. When we included separate indicators for policy introduction and expansion, the increases in both prescriptions and hospitalizations were larger during the policy expansion phase than during the introduction phase. The DD estimates were also larger when we excluded the COVID-19 washout period. Our results were also robust to the exclusion of control variables for demographic factors, the province-specific linear time trend, and the COVID-19 stringency index. When we expanded the control group to include 4 provinces (Manitoba, Saskatchewan, Alberta, and Nova Scotia) and then 6 provinces (Manitoba, Saskatchewan, Alberta, Nova Scotia, Ontario, and New Brunswick), we also obtained evidence of an increase in hospitalizations, both graphically (eFigures 1 and 2 in Supplement 1) and in the regression analyses. In these analyses, the increases in deaths were smaller and remained statistically insignificant. Finally, despite the slight difference in prepolicy trends in the number of prescriptions observed in Figure 1, the event study indicated no systematic differences in prepolicy trends between British Columbia and the comparison provinces for any outcome (Figure 2), suggesting that inclusion of province-specific linear time trends helped satisfy the parallel trends assumption.
Discussion
To our knowledge, this study provides the first evaluation of the association of British Columbia's Safer Opioid Supply policy with opioid prescribing and opioid-related health outcomes at the population level. We obtained both graphical and regression evidence that the policy was associated with increases in prescriptions and claimants of opioids targeted by the policy. However, the policy was also associated with an increase in hospitalizations, albeit with no statistically significant increases in deaths.
What could explain the higher hospitalization rate after the policy's implementation? One potential reason is that participants in British Columbia's Safer Opioid Supply policy program diverted safer opioid supply for various reasons, including to purchase unregulated fentanyl. 29,30 It is also possible that a higher supply of prescription opioids led to an increase in prescription opioid misuse, which in turn could increase hospitalization risks. Another possibility is that availability and/or toxicity of an unregulated drug supply increased more in British Columbia than in the comparison provinces, leading to more hospitalizations in British Columbia. While we do not have data on unregulated drug supply, our regression analyses controlled for this potential difference by including province-specific time trends. It may also be argued that more hospitalizations and deaths were due to increased toxicity of unregulated opioids and reduced access to harm-reduction services during the COVID-19 pandemic. However, our analyses showed that the observed increases in hospitalizations and deaths were even greater after excluding the COVID-19 pandemic washout period, and in particular, we found evidence of a positive dose-response relationship after the policy's expansion.
The increase in prescription rates without a significant increase in prescriber rates suggests that a small number of prescribers contributed to the increased prescriptions. While this might reflect hesitancy among physicians to participate in the Safer Opioid Supply policy program 9,31 and possible frequent prescriptions of small opioid amounts, it is important to ensure that safer supply opioids are prescribed to and used by people who use drugs and are targeted by the policy. In particular, given some reports of diversion of safer supply opioids, measures to address such diversion (eg, witnessing ingestion or injection of the drug by a health professional 32) are needed.
It is worth noting that British Columbia has a high incidence of overdose deaths and a long history of harm-reduction approaches (eg, provision of supervised consumption sites). While these differences were controlled for by the province-specific fixed effects and time trends in our DD analyses, they might explain why there was no significant increase in deaths after the policy. It will be interesting to see how death rates will play out in other contexts if safer supply is offered in the absence of supervised consumption sites and easy access to naloxone. Also, as British Columbia decriminalized possession of small amounts of illicit drugs in January 2023, it will be useful to assess the combined effects of these 2 policies on the present-day opioid crisis. The Safer Opioid Supply policy is a response to the crisis and aims to reduce opioid overdose by inducing opioid users to switch from illegal to legal opioids. Our finding of higher rates of hospitalization during the first 2 years of implementation of the Safer Opioid Supply Policy is potentially concerning and suggests a need to carefully monitor how safer supply approaches influence opioid use, addiction, and overdose in the long term.
Limitations
This study has several limitations. First, we used only Manitoba and Saskatchewan as comparison provinces, as data on prescription outcomes were available for only these 2 provinces. However, sensitivity analyses including other provinces and using the synthetic DD approach indicated that our results were robust. Second, as the drugs under consideration could also be used for other clinical conditions, we were unable to attribute the observed increase in prescriptions exclusively to the policy. However, we are aware of no other factors occurring around the time of policy implementation that may have led to increased demand for these drugs for pain management or other conditions. Third, there were prepolicy quarter-by-quarter fluctuations in hospitalizations and deaths, which may have been driven by a variety of changes in supply-side factors (eg, prescribing practices and illicit opioid supply) and demand-side factors (eg, patients' awareness of opioid harms and illicit opioid prices). Although the prepolicy trends were broadly similar between British Columbia and the comparison provinces, future work that uses longer-term data to discern meaningful trends would be helpful.
Another limitation is that we were unable to examine heterogeneity in the policy effects due to a lack of consistent aggregate-level data across demographic groups. It is possible that certain subgroups (eg, polysubstance users or those with higher opioid tolerance levels) continue to be at high risk of using unregulated opioids (and thus have a higher risk of hospitalization and death), while the risk of death is reduced for other subgroups that switch to safer supply. Future studies using individual-level data 33 can examine this heterogeneity. Last, with a small number of provinces, we were unable to account for within-province correlation in outcomes using clustering procedures suitable for a small number of clusters, which could result in a downward bias in SEs and a higher likelihood of finding statistically significant results. However, the large magnitude of the observed differences (especially increases in opioid prescriptions and hospitalizations) and the graphical evidence of changes after policy implementation suggest that the association between the policy and outcomes is likely real.
Conclusions
Two years after its launch, the Safer Opioid Supply Policy in British Columbia was associated with higher rates of prescribing of opioids but also with a significant increase in opioid-related hospitalizations. These findings may help inform ongoing debates about this policy not only in British Columbia but also in other jurisdictions that are contemplating it.
Figure 1. Unadjusted Trends in Opioid-Related Outcomes in British Columbia, Canada, and Comparison Provinces From Quarter (Q) 1 of 2016 to Q1 of 2022
Figure 2. Quarterly Differences in Outcomes Between British Columbia, Canada, and the Comparison Provinces Before and After Safer Opioid Supply Policy Implementation
Table 1. Outcome Changes Associated With the Safer Opioid Supply Policy
Pain detection by clinical questionnaire in patients referred for temporomandibular disorders in a Chilean hospital
Cite as: Maturana T, Ruiz P, Rosas C & Aravena P. Pain detection by clinical questionnaire in patients referred for temporomandibular disorders in a Chilean hospital. J Oral Res 2015; 4(5): 300-305. Abstract: Aim: To determine pain frequency by means of a clinical screening questionnaire in patients with temporomandibular disorders (TMD) referred to the general Hospital of Valdivia (HBV) between September and December 2014. Material and method: A descriptive study, which included patients referred to the TMD Unit of the dental service at HBV between September and December 2014, was carried out. A clinical screening questionnaire was applied by an examiner in order to detect painful Temporomandibular Joint Disorders. The variables age, sex, wait time, and presence of related TMD pain were measured. Results: 101 patients were surveyed; 88.17% (84 patients) were women. Average age was 33.5 (11-70) years; 66% of patients had mandibular pain or stiffness upon awakening; 80% informed pain related to painful TMD. Conclusion: Most surveyed patients were women. Pain was highly frequent in the surveyed population; its main location was in temporal areas.
INTRODUCTION.
Temporomandibular disorders (TMD) are a group of clinical problems that are clinically characterized by pain affecting the masticatory muscles, temporomandibular joints and associated structures 1 .
Studies reveal that 75% of the population will have signs or symptoms of TMD at some point in their lives 2 . For unknown reasons, TMD may present as localized pain or radiating pain 3 . TMD studies in Chile suggest a higher prevalence in women 4,5 , with a reported prevalence of myofascial pain of 80.99%.
TMDs are the second leading cause of pain in the orofacial region, after odontogenic pain 6 . Patients with orofacial pain are usually evaluated by different clinicians who, before the correct diagnosis is reached 7 , give different diagnoses and prescribe treatment options that do not necessarily address the source of the condition.
As TMDs have a multifactorial etiology 8 , clinicians require a tool to identify the interaction between the different physical and psychological dimensions of pain 9 . In 1992, Dworkin and LeResche proposed the Research Diagnostic Criteria for TMD (RDC/TMD), an instrument intended to be universally accepted and validated. However, it is a comprehensive instrument that requires specialized and calibrated clinicians to perform the diagnosis of TMD 9 . Gonzalez et al. 2011 10 developed a clinical questionnaire containing a series of 49 essential items for TMD symptoms. From this questionnaire, a short version including three questions and an extended version including six were proposed. Both versions were compared with the extended RDC/TMD protocol, finding good internal reliability of the extended version (alpha coefficient 0.93) with high sensitivity (99%) and specificity (97%) 10 .
As there are no other studies on the frequency of pain in patients referred for TMD by general dental practitioners in Chile, this study will be useful to establish the frequency of pain in patients referred for TMD, characterize it according to different activities and obtain epidemiological information of the target population.
The aim of this study is to determine the frequency of pain in TMD patients referred to the general Hospital of Valdivia, between September and December 2014.
Design and population:
This study was approved on October 20, 2014 by the Ethics Committee of the Health Service of Valdivia (ORD 403). Surveyed patients were informed and advised specifically on TMD.
A descriptive study, based on the application of a clinical questionnaire to detect painful TMD, was performed. The questionnaire was the extended six-question version proposed by Gonzalez et al. 2011 10 .
The study included patients referred by dentists from primary health care services for suspicion of TMD to the dental service at the general Hospital of Valdivia (HBV) between September and December 2014. Patients were asked to sign an informed consent form. Minors needed the authorization of their guardian or chaperone to participate in the study. Patients who refused to sign the informed consent and those unable to answer the questionnaire by themselves were excluded.
Sample size: Non-probability convenience sampling was performed. Sample size was calculated using Raosoft, 2014 (Raosoft, Inc.; Seattle, WA, USA), with a margin of error of 5%, a confidence level of 95%, and a prevalence of 38.6% for painful TMD 6 . The size obtained by the software was 107 patients from a total of 150 new patients treated at the dental service during the course of this study; a sketch of this calculation is given below.
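As a check on the figure above, the following minimal sketch reproduces the sample-size arithmetic documented for the Raosoft calculator (a proportion estimate with finite-population correction), using the inputs quoted in the text; the function name is ours, not the tool's.

```python
import math

def sample_size(population, prevalence, margin, z=1.96):
    """Sample size for estimating a proportion, with finite-population
    correction (the formula documented for the Raosoft calculator)."""
    x = z**2 * prevalence * (1 - prevalence)   # variance term
    n0 = x / margin**2                         # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# N = 150 new patients, 38.6% prevalence, 5% margin, 95% confidence
print(sample_size(150, 0.386, 0.05))  # -> 107
```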
Use of scale:
The extended questionnaire proposed by Gonzalez et al. 10 was used. The questionnaire contains 3 items and 6 questions about pain. The answer to each question gives a score; the questionnaire gave a maximum of seven points and a minimum of zero. The score assigned to each answer was: answer A, 0 points; B, 1 point; C, 2 points. The threshold for a positive diagnosis was a total score greater than or equal to 3 points; in such cases the pain was considered to be associated with the presence of painful TMD, with unspecified diagnosis. The scoring rule is sketched below.
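As a hedged illustration of the rule just described, the helper below maps A/B/C answers to 0/1/2 points and flags totals of 3 or more; the function names are hypothetical, not part of the original instrument.

```python
# Scoring rule described above: A = 0, B = 1, C = 2; a total score of 3
# or more is read as pain associated with painful TMD.
SCORES = {"A": 0, "B": 1, "C": 2}

def questionnaire_score(answers):
    """answers: sequence of 'A'/'B'/'C' responses to the questionnaire."""
    return sum(SCORES[a] for a in answers)

def positive_for_painful_tmd(answers, threshold=3):
    return questionnaire_score(answers) >= threshold

print(positive_for_painful_tmd(["B", "C", "A", "A", "B", "A"]))  # True (score 4)
```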
In the first stage, translation into Spanish and adaptation of the instrument for the Chilean population were performed. Two native English speakers, both professors at the school of English Teaching and Translation at University of Los Lagos, Osorno, Chile, performed a simultaneous translation of the instrument. The translation was applied to a pilot group of 10 patients, assessing face validity (evaluation of the syntax and semantics of the instrument) in order to establish its reliability. At this stage, 60% of respondents suggested changing the word "temple" (sien) to "temporal area" (área temporal) and increasing the font size of the instrument. Internal consistency (test-retest) was evaluated, and an average Kappa of 0.75 was obtained. These pilot questionnaires were not included in the next stage.
One researcher applied the questionnaire to referred patients in the waiting room of the hospital dental service. Patients were given an ink pen, a sheet with the questionnaire, and five minutes to respond. Incomplete questionnaires were replaced by new questionnaires to comply with the number of respondents established in the sample design. Data analysis: Sex, age and waiting time from referral to treatment at HBV were recorded. The score of each item and the total score were also recorded. To confirm the presence of pain associated with painful TMD, a score of 3 or more points was required.
Data were collected and tabulated by the examiner in a Google Docs spreadsheet, v1.20.8672 (Google, Mountain View, CA, USA) and analyzed using descriptive statistics, averages and standard deviations (±), and presented in tables. All analyses were performed using Microsoft Excel 2013 (Microsoft Corporation, Redmond, WA, USA).
RESULTS.
Of the 150 patients referred to HBV from September to December 2014, only 120 attended treatment; of these, 110 met the inclusion criteria. 101 patients (92%) completed the entire questionnaire, and 9 were discarded for incomplete answers.
From the total sample (101 patients), all were referred from primary or secondary health care centers of Región de los Ríos, Chile. Patients were referred for suspicion of TMD, with unspecified diagnosis. One of the confounding factors was the waiting time from the date of referral, considering that 60% of patients had been waiting for treatment for at least 3 years.
Average age was 33.5±15.87 years (range 11-70). 83.17% (84 patients) were women. Distribution by age and sex is shown in Table 1.
The average score of the questionnaire was 4.14±2. The instrument reported that in 80% of cases pain was associated with the presence of painful TMD. The response distribution of the questions in the questionnaire is shown in Table 2 and Figure 1.

Figure 1. Response distribution for the questionnaire items (response options: without pain, infrequent pain, constant pain). The items were: 1. On average over the past 30 days, how long did you have the pain in your jaw or at both sides of the temporal area? 2. Over the past 30 days, have you felt pain or stiffness in your jaw when you wake up? 3. Over the past 30 days, did the following activities cause any change in pain (that is, what eased or worsened your pain) in your jaw or at both sides of the temporal area? 3.1 Chewing hard or coarse food. 3.2 Opening your mouth and moving your jaw forward or sideways. 3.3 Mandibular habits such as keeping teeth together, tightening them, grinding teeth or chewing gum. 3.4 Other activities involving the jaw, such as talking, kissing, or yawning.

DISCUSSION.

Of the total respondents, 83.17% were female, with an average age of 33.51 years. 80% of patients had pain associated with the presence of painful TMD, obtaining a score equal to or greater than 3 in the questionnaire.

Regarding the sex of respondents, our study found a prevalence of women among the patients surveyed for TMD. In Chile, Iturriaga et al. 5 in a referral center for TMD, as well as Díaz-Guzmán et al. 4 in a general population, found similar results. In other studies conducted in Latin American countries such as Venezuela 11 , Peru 12 and Colombia 13 , the number of women surveyed exceeded that of men. Two Italian studies 8,14 found a higher prevalence of women among their participants. The prevalence of females is consistent with other studies, in which the entire sample was composed of women 15 . Average age was consistent with data found in studies conducted on similar populations in Chile by Iturriaga et al. 5 , in Brazil by Campos et al. 10 and in Italy by Manfredini et al. 12 , with averages of 27, 38, and 39 years respectively. In a systematic review, the average age ranged between 30.2 and 39.4 years 16 . The age group was similar in all the studies.

In a study conducted in Venezuela 11 , pain was the most common symptom (80% in men and 91% in women) in patients diagnosed with TMD; these results are consistent with the findings of this study. Other studies agree 1,8,11,12,17 that pain is more frequently associated with a diagnosis of TMD in women. This may be due to psychosocial, neurophysiological and hormonal factors that influence the perception and modulation of pain 11 . The most common pain symptoms were observed with mandibular habits such as tooth clenching, nail biting, grinding or chewing gum (70.3%). Another TMD study, conducted by telephone in a general population, reported pain in 8.3% of patients with tooth clenching 8 ; these results differ from the findings of this study. Mandibular dynamics (opening and laterality) was among the actions that made the painful symptoms vary; in these cases pain intensity varied in 59.4% of patients. This differs from another study, where patients with limited movement had pain in 49% of cases 8 . With regard to the question of whether other activities done with the jaw, such as talking, kissing or yawning, changed or worsened the pain, there was a similar distribution of responses (yes, 47.5%; no, 52.5%).

Pain in patients referred for TMD may be associated with myofascial pain, a muscular disease that may cause referred pain and requires differential diagnosis from pain in the temporomandibular joint 5 . Reports of pain during mouth opening or pain during chewing are common, and some individuals may even feel pain when speaking or singing 18 . It is also associated with limitation of mandibular joint movement or with joint sounds. That is why the verbal communication of a symptom such as pain poses a challenge, since different individuals may describe and characterize a similar pain in a number of different ways 6 .
The instrument used in this study meets these requirements by detecting whether the pain described by the patient can be associated with painful TMD, without specifying a diagnosis.
On the other hand, systematic reviews of the available literature suggest that only 15 articles in TMD populations and 6 in community samples were designed to collect data from clinical diagnoses using the RDC/TMD 15 , so the use of the latter in large epidemiological studies may not be possible depending on the interview technique and/or the time available for data collection 10 . Consequently, it is necessary to standardize and simplify the design of validated studies on painful TMD.
Among the referrals to the TMD dental service at HBV, problems caused by TMD were the most frequent; therefore they represented the largest number of consultations during the time of our study.
Among the limitations of this study is the waiting time for treatment, which could have been a factor altering the responses. Another limitation was the patients' lack of technical vocabulary. They probably did not have a full understanding of anatomical areas, and may have given confusing answers with respect to the area where the pain actually came from. However, despite these limitations, patients' responses are similar to those found in other studies conducted in Chile 2,3 .
The present study serves as a reference for future research that may use this questionnaire as a simple and rapid method to detect painful TMD, since no similar studies have been conducted in Chile. Although it is not the gold standard in international research, it is a relatively validated method and provides information about the manifestation of pain in these pathologies in the Chilean population. We suggest changing or simplifying some of the technical terms to make them more accessible to patients before applying the questionnaire in the general population.
CONCLUSION.
As in other studies on TMD, pain was highly frequent in the surveyed population; there was a higher frequency of women referred for TMD, with an average age of 33.5 years. Pain occurred mainly in temporal areas.
ACKNOWLEDGEMENTS.
This research is based in part on the requirements of Tomás Maturana Böhm to obtain his degree of dental surgeon at the School of Dentistry at Universidad Austral de Chile.
PR proposed the research. TM performed sampling, tabulation and data entry. TM and PR wrote the manuscript. PA and CR performed the analysis and reviewed the final draft. All authors approved the final version.
What is “moral distress” in nursing? How, can and should we respond to it?
Interest in moral distress (MD) as a research topic has soared in recent years. What is it about this concept that makes it so intriguing? Why does it create such debate amongst healthcare professionals? Why has there been so much conceptual confusion regarding the concept?
Interest in moral distress (MD) as a research topic has soared in recent years. What is it about this concept that makes it so intriguing? Why does it create such debate amongst healthcare professionals? Why has there been so much conceptual confusion regarding the concept? And why do nurses, in particular, seem to feel this concept so accurately captures their experiences? These are some of the questions that I have been thinking about over the course of my doctoral studies and I would like to consider here.
As it was originally conceived, MD was believed to arise "when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action" (Jameton, 1984, p. 6). On this understanding, all that is required for MD to occur is a moral judgement and the presence of an external constraint which prevents that judgement from being carried out. This notion that nurses are constrained by external forces and are "not free to be moral" (Yarling & McElmurry, 1986, p. 63) has, according to critics such as Paley (2004), perpetuated a favourite metanarrative of nurses suffering within nursing discourse. Others, such as Johnstone and Hutchinson (2015), argue that the entire concept ought to be abandoned because it undermines the process of moral deliberation by perpetuating the notion that nurses' moral judgements are correct and justified. They argue that MD, as it is currently understood (according to Jameton, 1984, 1993), risks nurses failing to nurture the skills required for ethical discussion and damages their integration into moral decision-making because of the "assumed rightness of [their] moral judgements" (Johnstone and Hutchinson, 2015). Indeed, Weinberg (2009) highlights how Jameton's conception of MD fails to acknowledge the possibility that there might not even be a "correct" course of action. So, does Jameton's (1984) conception of MD perpetuate epistemic arrogance, or worse perhaps, epistemic laziness, when it comes to decision-making? Should we, as Johnstone and Hutchinson (2015) argue, abandon the concept of MD altogether?
Despite criticism of the concept, the notion of MD continues to attract research attention from healthcare professionals and, in particular, nurses, suggesting that it appeals to individuals' lived experiences. I was first drawn to exploration of this phenomenon through my own experiences working as a nurse in the National Health Service (NHS). I was surprised to find that a distinct lack of empirical research had been conducted in the United Kingdom (UK). Since Jameton (1984), broader definitions of MD have been proposed. This much broader understanding of MD allows for other potentially relevant causes of MD to be captured within the "umbrella" term of MD (McCarthy & Deady, 2008), which can then be further subcategorised, as suggested by Fourie (2015), into, for example, "moral-constraint distress" or "moral-conflict distress." I hypothesize that broadening the definition of MD and subcategorising it into these constituent pieces will allow for more specific measurements and targeted interventions to help address MD.
Arguably, only once the conceptual fog has cleared can healthcare professionals, researchers and policy-makers begin to gain further clarity regarding what, if anything, can be done to mitigate MD. Indeed, it has been suggested that MD is simply a natural response to morally troubling experiences that should be welcomed, and that getting rid of MD is not only impossible but undesirable (personal communication with Tigard; hinted at in Tigard (2017) and Howe (2017)). MD may be a natural consequence of the messiness of moral life, but when one experiences MD every day due to their occupation, which seems to be the case for nurses and other healthcare professionals, then it may instead be regarded as an occupational hazard that employers have a responsibility to address. Marshall and Epstein (2016) argue that MD is inherently linked to the notion of "moral hazard" because of the power differentials between those making the decisions and those that must live with those decisions. Moral hazard is a term used to describe situations in which one party controls decisions about resources but another party bears the burden of those decisions (Brunnquell & Michaelson, 2016). Even if we perceive MD to be a natural response to morally challenging events, the negative consequences that MD appears to have upon the nursing profession (perpetuating the nursing shortage as increasing numbers of nurses leave the profession, psychological distress, and distancing oneself from patients) suggest we should consider ways to respond to it. MD seems to consist of two fundamental aspects: psychological distress and a moral event. We might, therefore, need different interventions to tackle each of these aspects. There are several mechanisms in place that can be further utilized and integrated into everyday clinical practice that may help address psychological distress, for example regular team debriefing sessions after patients' deaths or clinical incidents, and addressing the stigma around using staff support services such as employee assistance programmes that are available at most NHS Trusts. In a recent longitudinal study, Maben et al. (2018) explored the effectiveness of Schwartz Rounds®, which offer a safe space for staff to come together and reflect upon the experiences and challenges they face at work. They found that attending Schwartz Rounds® resulted in a statistically significant improvement in staff psychological well-being, increased empathy and compassion for patients and colleagues, and positive changes in practice (Maben et al., 2018). Elsewhere, a system-wide MD consultation service has been launched (2017). In the UK, Traynor (2017) has explored the concept of "critical resilience", not as a specific response to MD but rather as a way for nurses to look more critically at their environments. Traynor (2017) argues that by examining and revealing the power structures in which we work, nurses can come together in solidarity and resist the external forces that Morley and Jackson (2017) suggest are destroying the art of nursing. Importantly, there is much more work to do regarding ways to support clinical staff facing not only increasing external pressures but also the complex everyday clinical ethical issues, and this is the future of my own work. However, it is important that we try to transform the nursing narrative, away from the metanarrative of powerless, suffering victims (Paley, 2004), and embrace a new narrative for nursing.
In a recent study reported in Peden-McAlpine, Liaschenko, Traudt, and Gilmore-Szott (2015) and Traudt, Liaschenko, and Peden-McAlpine (2016), 19 experienced critical care nurses who self-identified as skilled and comfortable during end-of-life care were interviewed, and MD did not arise as a theme. This was surprising considering that many studies have found that end-of-life care and withdrawal of life-sustaining treatments provide the kind of moral events that often lead to experiences of MD. Instead of discussing their experiences of MD, the nurses in Traudt et al. (2016), who had an average of 17 years of critical care experience, reported feeling a strong sense of moral agency, felt accountable for their actions, possessed "moral imagination" (meaning they could empathise with and appreciate the values of others) and perceived a "moral community" in which they viewed themselves as an integral part of the decision-making process. The authors highlighted how the nurses in this study seemed to feel able to navigate ethically difficult scenarios.
Commenting on this study, Rushton and Carse (2016) applaud the changing MD narrative, away from the powerless nurse to one in which the nurse is able to thrive within a moral community, bolstered by ethical competency and authority and able to enact their moral agency. Whilst we continue to implement, measure and assess the effectiveness of interventions that might mitigate the negative consequences of MD, I call on nurses to continue the fight against this continued narrative of powerlessness and instead embrace the power they do have to engage in moral reflection and debate.
ACKNOWLEDGEMENTS
If you are interested in further information about MD and possible ways to respond to it, Morley, Bradbury-Jones, Maben, Traynor, Rushton and Jackson will be presenting their work at a public event held at the Wellcome Collection on 21 June. For more information, see https://www.eventbrite.co.uk/e/what-is-moral-distress-in-nur sing-how-can-and-should-we-respond-to-it-tickets-42197149811.
Durable and self-hydrating tungsten carbide-based composite polymer electrolyte membrane fuel cells
Proton conductivity of the polymer electrolyte membranes in fuel cells dictates their performance and requires sufficient water management. Here, we report a simple, scalable method to produce well-dispersed transition metal carbide nanoparticles. We demonstrate that these, when added as an additive to the proton exchange Nafion membrane, provide significant enhancement in power density and durability over 100 hours, surpassing both the baseline Nafion and platinum-containing recast Nafion membranes. Focused ion beam/scanning electron microscope tomography reveals the key membrane degradation mechanism. Density functional theory reveals that OH• and H• radicals adsorb more strongly from solution and that reactions producing OH• are significantly more endergonic on tungsten carbide than on platinum. Consequently, tungsten carbide may be a promising catalyst for self-hydrating crossover gases while retarding desorption of, and capturing, free radicals formed at the cathode, resulting in enhanced membrane durability.
Supplementary Note 1. Synthesis of early transition metal carbide nanoparticles supported on carbon spheres
The conventional synthesis of carbides or nitrides normally entails high-temperature carburization or nitridation of the metals, leading to low surface areas. One of the most facile routes to high-surface-area transition metal carbides is attributed to Lee et al., 1 who developed a temperature-programmed reduction-carburization (TPRC) method to form carbides from precursor oxides under a wide range of conditions. Unfortunately, the unstable mesoporous structure limits the application of this material, especially for high-temperature and high-pressure reactions.
Recently, an old technique 2 has gained renewed interest, whereby biomass is hydrothermally treated in water under relatively mild conditions to provide bulk, mesoporous, or nanostructured carbon materials. 3,4 Cui et al. found that the presence of metal ions effectively accelerates the hydrothermal carbonization (HTC) of starch, shortening reaction times and controlling the particle shape of the carbon materials. 5 Sun and Li applied hydrothermal reduction to encapsulate noble metal nanoparticles into the core of carbon spheres. 6 Inspired by these works, we believe that the formation of solid carbon by HTC could lead to carbide or nitride nanoparticles through subsequent reduction-carburization or nitridation processes, respectively. Here we present a preparation method for the production of carbide nanoparticles dispersed on carbon materials. The synthetic strategy is shown in Figure 1, and all experimental details are summarized in the methods section.
Supplementary Figure 1 shows a top view of the as-prepared nano-WC sample, which has a smooth spherical structure with a diameter of 3-5 µm containing well-dispersed WC nanoparticles on its surface. We used a focused ion beam to mill a selected region to reveal the cross-sectional morphology of individual carbon spheres. Although the W signal is found across the entire sphere by EDX mapping, nanoparticles of WC are only observed on the surface of the carbon spheres. Due to the low resolution of the SEM technique, bright spots on the carbon sphere surface are not necessarily individual particles of WC, since several nanoparticles closely packed on a support surface may also produce a bright spot in SEM images at low magnification. We have therefore further carried out TEM/STEM analysis to characterize the dispersion of WC on the carbon spheres (Supplementary Figure 2).

Supplementary Figure 1. SEM image and EDX mapping of WC nanoparticles supported on carbon spheres cut by a focused ion beam. The sample was milled using a Ga+ ion beam over a selected region (7 μm × 7 μm × 5 μm, length × width × depth) operated at an energy of 30 kV and a current of 600 pA.
Both HAADF-STEM and TEM investigations confirm that the WC nanoparticles are uniformly dispersed on the carbon sphere with a narrow size distribution. We have also conducted STEM tomography analysis of the as-prepared nano-WC sample. STEM images were recorded by tilting the sample from −65° to +55° in 1° increments. A representative 3D rendering of the reconstructed volume is shown in Figure 2c, which again shows homogeneously dispersed WC nanoparticles.
Supplementary Figure 2. Bright field TEM images of nano-WC located on a carbon sphere.
In order to confirm the crystalline structure of our as-prepared nano-WC nanoparticles, we have conducted powder X-ray diffraction (XRD) analysis. Ganesan and Lee reported a method of W2C preparation by heating a mixture of a resorcinol-formaldehyde polymer and ammonium metatungstate. 8 Yan and Shen also reported a similar method by heating a mixture of ion-exchange resin and W precursor. 9 We believe that such "graphitic coke" differs from our materials, because the WC nanoparticles of the prior work are not catalytically active in their applications (ORR and methanol oxidation). We have also annealed the samples collected after the HTC step in inert gas (He) at different temperatures (Supplementary Figure 5). None of the resulting patterns matches that of α-WC well; instead they are similar to the diffraction pattern of the tetragonal WO3 structure. This indicates that the reduction-carburization step, rather than inert annealing, is required to carburize the W precursor into the interstitial carbide structure.

Supplementary Figure 5. Powder X-ray diffraction characterization of various materials indicated in the legend. Samples collected after the HTC step were further annealed in flowing He gas at 500, 700 and 900 °C. Standard diffraction patterns of WC (JCPDS 00-002-1055) and C (JCPDS 00-026-1076) are included as references.
We have also conducted thermogravimetric analysis (TGA) of the nano-WC sample under flowing air (see Supplementary Figure 6). With the assumption that all W is oxidized to WO3 and the carbon sphere is combusted, we estimate the total loading of W in nano-WC to be around 60 wt.%; a sketch of this arithmetic follows. We have further calculated the surface elemental concentration of nano-WC from the XPS survey spectrum, which probes 2-5 nm in depth from the surface (Supplementary Table 1). The nano-WC contains about 52 wt.% of W in the top atomic layers of WC and the carbon sphere.
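A minimal sketch of the TGA arithmetic behind this estimate is given below. The residue fraction is an assumed illustrative value (the paper reports only the resulting ~60 wt.% loading); the only chemistry used is the stated assumption that the residue is pure WO3.

```python
# W loading from a TGA burn-off in air, assuming all W ends up as WO3
# and all carbon is combusted. The residue fraction is an assumed value
# chosen for illustration, not a number reported in the paper.
M_W, M_O = 183.84, 16.00
w_in_wo3 = M_W / (M_W + 3 * M_O)        # mass fraction of W in WO3, ~0.793

residue_fraction = 0.757                 # assumed TGA residue (WO3) weight fraction
w_loading = residue_fraction * w_in_wo3
print(f"Estimated W loading: {w_loading:.1%}")   # ~60%, as reported
```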
We estimate that about 86% of the W in nano-WC is carburized near the surface during synthesis. Considering the well-known similarities of carburization, nitridation and sulfidation, 10 it is possible that, by changing the gas precursors in the second step of our approach, nanoparticles of nitrides and sulfides can also be formed. Further work will focus on such materials. The relative humidity is defined as the ratio PH2O/P*H2O, i.e., the ratio of the partial pressure of water vapor in the mixture to the equilibrium vapor pressure of water at a given temperature.

Supplementary Figure 9. Initial fuel cell performance consisting of baseline recast Nafion membrane (black) and composite membranes incorporating nano-WC (red), commercial WC (pink), and Pt black catalysts (blue). The polarization I-V evaluation of the fuel cell was conducted and controlled by a fuel cell test station from Arbin Instruments. The H2 and O2 humidifiers were maintained at 70, 55, 41, and 14 °C while the fuel cell temperature was set to 70 °C, such that the relative humidity of the inlet gases was 100, 50, 25, and 5% (a numerical check of these settings is sketched below). Gas supply line temperatures were maintained 5 °C higher than the fuel cell temperature to prevent condensation of water vapor. Hydrogen fuel and oxygen were fed in co-flow to the fuel cell. H2 and O2 flow rates were 200 ml/min and 400 ml/min, respectively.

Supplementary Figure 9 shows the fuel cell performance of the recast Nafion, Pt/Nafion, nano-WC/Nafion and commercial WC/Nafion membranes. The Pt/Nafion membrane (blue) shows the least decrease in performance when the humidity drops from 100% RH to 5% RH. Our nano-WC/Nafion shows a similar improvement to the Pt/Nafion, although less pronounced due to the lower activity of the nano-WC catalyst compared to that of Pt black. However, the improvement is still significant considering the low cost of the nano-WC catalyst and the positive effect on membrane durability. Recast Nafion, without the self-hydrating function, shows the largest decrease in performance (from 1 W/cm 2 at 100% RH to 0.3 W/cm 2 at 5% RH). The degradation rates for regions i-iii are shown in Supplementary Figure 10c (Supplementary Table 2). This degradation is due to major defects formed during the test from higher gas crossover.

The accelerated durability tests were conducted according to the DOE protocol at 90 °C and 35% RH. 12 Fuel cells were first conditioned at 1 A/cm 2 for 8 hours at 100% RH and 70 °C. Then the fuel cell temperature was raised to 90 °C, and the relative humidity was reduced to 35%. When the fuel cell and humidifiers reached the desired temperatures, the fuel cell was switched to OCV, and the durability test started.
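As a quick, hedged check on the humidifier settings quoted in the caption above, the sketch below computes the inlet relative humidity as P*(T_humidifier)/P*(T_cell). The Arden Buck vapor-pressure correlation is our choice for convenience; the paper does not state which correlation was used.

```python
import math

def p_sat_kpa(t_c):
    """Saturation vapor pressure of water (kPa) from the Arden Buck
    equation, valid over liquid water; t_c is in degrees Celsius."""
    return 0.61121 * math.exp((18.678 - t_c / 234.5) * (t_c / (257.14 + t_c)))

t_cell = 70.0
for t_humidifier in (70, 55, 41, 14):
    rh = p_sat_kpa(t_humidifier) / p_sat_kpa(t_cell)
    print(f"humidifier {t_humidifier:>2} C -> RH {rh:5.1%}")
# ~100%, ~51%, ~25%, ~5%, matching the quoted settings
```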
The OCV was recorded for evaluation of durability. This test is designed to be much faster than the conventional one so that the lifespan of different membranes can be studied in laboratories, usually within 100 to 300 hours. Since the failure of the Pt/Nafion membrane happens within 100 hours, we conducted tests for 100 hours. The failure of Pt/Nafion is repeatable based on our tests on multiple samples (Supplementary Figure 11). All three samples showed similar trends and failed at ~70 hours. The slight variability in the OCV vs. time profiles is due to the necessarily random formation of pinholes through which reactant gas crosses over, leading to random drops in OCV and, eventually, failure. In light of such randomness, the three profiles shown are quite similar.
Supplementary Figure 11. Accelerated fuel cell durability tests of the Nafion composite membranes with 5 wt.% Pt NPs.
Supplementary Figure 12. Cross-sectional SEM images of Pt/Nafion and WC/Nafion membranes collected after 100 hours of accelerated durability testing.

Supplementary Figure 13. Gas crossover of recast Nafion measured (open symbols) and vacancy volume percentages estimated by tomography of recast Nafion membranes after 100 hours of durability tests (closed symbols). Gas crossover was tested by the linear sweep voltammetry (LSV) method from 0-0.7 V with a scan rate of 2 mV/s on an AMETEK VersaSTAT 3 station, using 100% RH nitrogen and hydrogen at the working and counter electrodes, respectively. The hydrogen electrode was also used as the reference electrode. Nitrogen and hydrogen flow rates were both set to 100 mL/min. H2 crossing over from the reference electrode side was oxidized as it reached the working electrode surface, with fresh H2 molecules continually coming into contact with the surface. The H2 oxidation current at 0.3 V was used to compare the H2 crossover of different membranes (a sketch converting such a current into a molar flux is given below). Pt/Nafion showed the fastest increase of gas crossover during the durability test due to the fastest degradation of the membrane. The nano-WC/Nafion membrane is the most stable one, with the least gas crossover.
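A small, hedged sketch of how such a crossover current translates into a molar H2 flux via Faraday's law (n = 2 electrons per H2 oxidized); the current density used is an assumed placeholder, not a value from the paper.

```python
# Convert an H2 crossover limiting current density (read at 0.3 V in the
# LSV described above) into a molar flux with Faraday's law, n = 2.
F = 96485.0                     # C/mol
n = 2                           # electrons per H2 oxidized

i_crossover = 1.5e-3            # A/cm^2, assumed illustrative value
flux = i_crossover / (n * F)    # mol s^-1 cm^-2
print(f"H2 crossover flux: {flux:.2e} mol s^-1 cm^-2")
```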
Due to the large scale of the composite membranes and strong beam damage to the Nafion structure, focused ion beam/scanning electron microscope tomography was used to characterize membrane degradation. The surface energy $\gamma$ of each slab was computed as

$$\gamma = \frac{E_{\mathrm{surf}} - n_{\mathrm{bulk}} E_{\mathrm{bulk}}}{2A},$$

where $E_{\mathrm{surf}}$ is the total energy from DFT, $n_{\mathrm{bulk}}$ is the number of bulk units, $E_{\mathrm{bulk}}$ is the energy of one bulk unit, and $A$ is the area of the surface.
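The sketch below simply evaluates this expression for a symmetric slab; all numbers are placeholders, not the paper's DFT energies, and the factor of 2 reflects the standard convention of two exposed slab surfaces.

```python
# Surface energy of a symmetric slab: gamma = (E_surf - n_bulk*E_bulk)/(2A).
# The factor of 2 accounts for the slab's two exposed surfaces.
def surface_energy(e_slab, n_bulk, e_bulk, area):
    return (e_slab - n_bulk * e_bulk) / (2.0 * area)

# Placeholder values (eV and Angstrom^2), not the paper's DFT results
gamma = surface_energy(e_slab=-152.3, n_bulk=8, e_bulk=-19.2, area=7.5)
print(f"gamma = {gamma:.3f} eV/A^2")
```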
The surface energy of each low-index surface of WC is provided in Supplementary Table 3. In contrast, the potential free energy diagram for WC(100) (Supplementary Figure 14b) indicates that the lowest energy pathway to produce OH• is a co-adsorbed H* and OOH* intermediate, whereby the reaction is strongly endergonic (+4.01 eV). While the thermodynamic barrier for desorption of OH• will decrease at high coverage due to lateral interactions, previous results indicate that these interaction energies are relatively mild, with a pairwise O-O interaction of 0.16 eV on a Pt(100) surface. 15 Therefore, the production of OH• through the mechanism described by Yu et al. should be thermodynamically unfavorable even at high coverages on WC(100).
Supplementary Note 4. Assessment of new materials as electrode catalyst
In order to further assess the reactivity of nano-WC, we have tested our catalyst as the electrode catalyst in a fuel cell. The results (Supplementary Figure 15) show similar performance using the nano-WC catalyst on the anode and on the cathode. The peak power density of the fuel cells with nano-WC is about 0.09 W/cm 2 . The results indicate catalytic activity of the nano-WC catalyst for H2 oxidation and O2 reduction.
Supplementary Figure 15. Polarization curves of fuel cells using nano-WC catalyst as the anode catalyst (black) and the cathode catalyst (red). The electrode with nano-WC catalyst was prepared by air-spraying a nano-WC, Nafion and isopropyl alcohol (IPA) mixture onto commercial gas diffusion media (carbon cloth with a microporous layer). The loading of nano-WC is 0.62 mg/cm 2 and the loading of Nafion is 25 wt%. The fuel cells were tested with (1) the home-made nano-WC electrode on the anode and a commercial Pt/C electrode (0.3 mg/cm 2 ) on the cathode and (2) the nano-WC electrode on the cathode and a commercial Pt/C electrode (0.3 mg/cm 2 ) on the anode. The testing temperature of the cell is 70 °C at 100% RH, with 200 ml/min H2 and 400 ml/min O2.
Novel Carbon/PEDOT/PSS-Based Screen-Printed Biosensors for Acetylcholine Neurotransmitter and Acetylcholinesterase Detection in Human Serum
New reliable and robust potentiometric ion-selective electrodes were fabricated using poly(3,4-ethylenedioxythiophene)/poly(styrenesulfonate) (PEDOT/PSS) as the solid contact between the sensing membrane and the electrical substrate for an acetylcholine (ACh) bioassay. A film of PEDOT/PSS was deposited on a solid carbon screen-printed platform made from a ceramic substrate. The selective materials used in the ion-selective electrode (ISE) sensor membranes were acetylcholinium tetraphenylborate (ACh/TPB/PEDOT/PSS-ISE) (sensor I) and triacetyl-β-cyclodextrin (β-CD/PEDOT/PSS-ISE) (sensor II). The sensors revealed a clear, enhanced Nernstian response with cationic slopes of 56.4 ± 0.6 and 55.3 ± 1.1 mV/decade toward ACh+ ions over the dynamic linear ranges 1.0 × 10−6–1.0 × 10−3 and 2.0 × 10−6–1.0 × 10−3 M at pH 5, with limits of detection of 2.0 × 10−7 and 3.2 × 10−7 M for sensors I and II, respectively. The selectivity behavior of both sensors was also tested, and the sensors showed significantly high selectivity toward ACh+ over different common organic and inorganic cations. The stability of the potential response for the solid-contact (SC)/ISEs was evaluated using a chronopotentiometric method and compared with that of electrodes prepared without the solid-contact material (PEDOT/PSS). Enhanced accuracy, excellent repeatability, good reproducibility, potential stability, and high selectivity and sensitivity were provided by these cost-effective sensors. The sensors were also used to measure the activity of acetylcholinesterase (AChE); the plot of the initial rate of hydrolysis of the ACh+ substrate versus enzyme activity was linear over 5.0 × 10−3–5.2 IU L−1 of AChE. Application to acetylcholine determination in human serum was carried out, and the results were compared with those of a standard colorimetric method.
Introduction
Neurotransmitters are important endogenous chemical messengers. These messengers transmit and amplify specific signals between neurons and other cells. They have an important role in behavior and cognitive functions in the brain, and they play an important role in muscle tone and heart rate adjustment. They are also responsible for the regulation of learning, sleeping, memory, consciousness, mood, and appetite in the human body [1,2]. Acetylcholine (ACh) can be considered one of the oldest neurotransmitters in the animal kingdom, acting in both the peripheral and central nervous systems [2]. It binds to its specific receptors to regulate muscle contraction in the peripheral nervous system. On the other hand, it plays an essential role in the central nervous system in processes related to behavioral activities. In neurons, ACh is prepared from choline using choline acetyltransferase (ChAT) and acetyl-coenzyme A [1-3]. The level of ACh in human blood is ~0.52 µM in males and ~0.47 µM in females [4]. ACh also has therapeutic utility as an intraocular irrigating fluid [5]. On the other hand, a lack of ACh can cause disturbance in the transmission of nerve impulses, paralysis, and death [6].
From all of the above, the level of ACh in human blood is very critical, and it is very important to find an analytical tool to trace its level in the human body. The development of reliable and robust analytical devices for fast and sensitive assessment of ACh has been a great challenge for many years. ACh is not ultraviolet (UV)-absorbing, has no fluorescence signal, and is not derivatizable; thus, tracing of ACh is a challenging analytical problem. Therefore, only tedious procedures such as bioassays [7], methods based on radiochemical analysis [8], enzymatic-based liquid chromatography (LC) [9-12], and mass spectrometric detection-based LC [13-16] are often used. In addition, these reported methods have several disadvantages, such as long analysis times, high analysis costs, and the requirement for highly skilled personnel and specialized laboratory facilities.
Electroanalytical techniques provide many advantages, such as simple equipment, ease of use, and a short analysis time. Uni-, bi-, and tri-enzyme/mediator biosensors, including chemiluminometric [17], amperometric [18-20], conductometric [21], and voltammetric [22] methods, have been used for ACh monitoring. On the other hand, all these methods suffer from long analysis times and the need for specialized personnel with laboratory facilities.
In comparison with different techniques, potentiometric ion-selective electrodes (ISEs) have some unique performance characteristics, such as ease of miniaturization, ease of handling, portability, and low cost [23-26]. Recently, solid-state potentiometric ion-selective sensors have shown a tremendous ability to analyze biological, biomedical, and environmental samples [27-29]. The configuration of all-solid-state ISEs is suitable for the fabrication of miniaturized sensors with good analytical characteristics for detecting different ions in different matrices.
As cited in the literature, few potentiometric ion-selective electrodes (ISEs) have been developed for acetylcholine [30-36]. Some of them include ion-pair complexes of acetylcholine as electro-active materials; these revealed limited linear ranges and long response times [30-32]. The others include macrocyclic carriers such as β-cyclodextrin and molecularly imprinted polymers [33,35,36]. It is interesting to compare the selectivity and working characteristics of the proposed ACh electrode with those reported before (Table 1). Traditional potentiometric ISE configurations containing an inner filling solution have presented good performance characteristics for benchtop ion sensing. However, this configuration fails with small-volume samples, which need to be addressed along with the requirements of miniaturization. Considering in vivo biomedical analysis, the use of liquid-contact ion-selective electrodes (ISEs) has many drawbacks. For example, the inner filling solution is subject to evaporation, as well as to changes in the temperature and pressure of the sample. In addition, osmotic pressure can arise due to differences in the ionic strength of the sample and the inner filling solution. This osmotic pressure can produce a net liquid transport from/to the inner filling solution and lead to volume changes, which can provoke delamination of the ion-selective membrane (ISM) [37]. As an example of solid-contact electrodes, screen-printed electrodes have been shown to be a good technique for sensor miniaturization, as they are simple, cheap, and easy to fabricate for mass production. These features make such electrodes promising for the detection of organic species. Herein, we present novel, robust, reliable potentiometric solid-contact ISEs for the trace determination of acetylcholine. A poly(3,4-ethylenedioxythiophene)/poly(styrenesulfonate) (PEDOT/PSS) film was used as the solid contact. The sensors were introduced for the determination of ACh under static and hydrodynamic modes of operation. The proposed sensors were also applied for simple, sensitive, and rapid monitoring of acetylcholinesterase (AChE) enzyme activities. The method for enzyme monitoring was based on the reaction of the enzyme with the ACh + substrate, while monitoring the decrease of ACh + concentration using the proposed ACh sensor.
Sensor Performance Characteristics
Acetylcholine reacts with sodium tetraphenylborate to form an acetylcholinium tetraphenylborate (ACh/TPB) ion-association complex in a stoichiometric ratio of 1:1. The resulting precipitate was purified, isolated, and dried. Two polymeric potentiometric acetylcholine sensors were based on the use of ACh/TPB (ACh/TPB/PEDOT/PSS-ISE) and triacetyl-β-cyclodextrin (β-CD/PEDOT/PSS-ISE) in a plasticized poly(vinyl chloride) (PVC) matrix. The polymeric membrane composition of the two sensors was 63.1 wt.% plasticizer, 32.4 wt.% PVC, 1.5 wt.% tetradodecylammonium tetrakis(4-chlorophenyl)borate (ETH 500), and 3.0 wt.% ionophore. All calibration plots for the proposed electrodes are shown in Figure 1. The results obtained from triplicate studies revealed near-Nernstian slopes of 56.4 ± 0.6 (R 2 = 0.999) and 55.3 ± 1.1 (R 2 = 0.998) mV/decade, with detection limits of 2.0 × 10 −7 and 3.2 × 10 −7 M, for sensors I and II, respectively; a sketch of reading concentrations back from such a calibration is given below. The performance response characteristics are tabulated in Table 2. The influence of pH on the proposed sensors was also investigated. Two different concentrations of ACh + (10 −4 and 10 −3 M) were chosen for this test over a pH range of 2 to 10. Sensors I and II revealed stable potentials, unaffected by pH changes, over the working pH ranges 3-10 and 4.5-10, respectively. At pH > 10, the potentials sharply decreased due to interference from hydroxide ions. At pH < 3, the sensor responses were severely influenced by H 3 O + . A 10 −2 M concentration of phosphate-buffered saline (PBS) at pH 7 was chosen for all subsequent measurements.
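A minimal sketch of that workflow, assuming the calibration follows E = E0 + S log10(C); the EMF values are synthetic, with the slope set near the 56.4 mV/decade reported for sensor I.

```python
import numpy as np

# Fit a Nernstian calibration, E = E0 + S*log10(C), then invert it to
# read an unknown concentration from a measured EMF. Calibration points
# are synthetic placeholders, not data from the paper.
conc = np.array([1e-6, 1e-5, 1e-4, 1e-3])          # mol/L
emf  = np.array([61.0, 117.5, 173.9, 230.2])       # mV (synthetic)

S, E0 = np.polyfit(np.log10(conc), emf, 1)          # slope, intercept
print(f"slope = {S:.1f} mV/decade, E0 = {E0:.1f} mV")

def concentration(e_mv):
    return 10 ** ((e_mv - E0) / S)

print(f"unknown at 150 mV -> {concentration(150.0):.2e} M")
```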
The closeness of agreement between mutually independent repetitive test results, obtained with a 10 µg/mL internal quality control ACh sample measured using the proposed sensors and the same reagents during short intervals of time within one working day (within-day repeatability), showed a significantly small variation (±0.5%) in the final mV readings. Reproducibility of the results (day-to-day response variation) was also tested by measuring a 10 µg/L internal quality control ACh sample on five consecutive days using different batches of the reagents and daily recalibration. A small variation of the results compared to those obtained in the repeatability experiments was observed. The relative standard deviation was calculated to be 1.7% and 2.2% for ACh/TPB/PEDOT/PSS-ISE and β-CD/PEDOT/PSS-ISE, respectively. These data indicate the good response stability of the proposed solid-contact (SC)/ISEs. The long-term potential stability was tested by repeating the calibration of the sensors for at least 12 weeks. After six weeks of daily use, the detection limit increased to 7.0 × 10 −5 M for both sensors and the sensitivity was found to decline. Considering the disposable nature of these types of sensors, the decrease in sensitivity after six weeks was not regarded as a major limitation.
Sensor Selectivity
Selectivity behavior of the proposed sensors toward acetylcholine was tested against different common ions. The selectivity coefficients (K^pot_ACh,J) were evaluated using the separate solution method (SSM) modified by Bakker [38]; the arithmetic is sketched after Table 3. Table 3 presents the potentiometric selectivity coefficient values for both sensors. From the results shown in Table 3, the selectivity of the sensor based on β-CD (β-CD/PEDOT/PSS-ISE) toward ACh + ions over choline, methylamine, urea, dimethylamine, Na + , Ca 2+ , and Mg 2+ ions was much better than that of the sensor based on ACh/TPB (ACh/TPB/PEDOT/PSS-ISE). This can be explained on the basis of the high affinity of triacetyl-β-CD toward complexation of acetylcholine as compared to other nitrogenous compounds. On the other hand, the sensor based on ACh/TPB/PEDOT/PSS-ISE (sensor I) revealed better selectivity toward ACh + over K + , histidine, ethylene diamine, hexamine, hydroxylamine, ephedrine, codeine, morphine, and alanine than β-CD/PEDOT/PSS-ISE.

Table 3. Selectivity values (log K^pot_ACh,J) for acetylcholine solid-contact sensors toward the tested interferents.
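A hedged sketch of the SSM computation for the monovalent-ion case follows; the potentials are assumed values, not measurements from this work, and the function name is ours.

```python
import math

# Separate solution method (SSM): log K = z_I*F*(E_J - E_I)/(2.303*R*T)
# + (1 - z_I/z_J)*log10(a), where E_I and E_J are the potentials of the
# primary ion and interferent measured at the same activity a.
R, F, T = 8.314, 96485.0, 298.15

def log_k_ssm(e_i_mv, e_j_mv, z_i=1, z_j=1, activity=1e-3):
    nernst = 2.303 * R * T / (z_i * F) * 1000.0   # mV per decade, ~59.2
    return (e_j_mv - e_i_mv) / nernst + (1 - z_i / z_j) * math.log10(activity)

# e.g. primary ion ACh+ reads 210 mV, an interferent reads 45 mV
print(f"log K = {log_k_ssm(210.0, 45.0):+.2f}")   # strongly ACh+-selective
```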
Effect of the PEDOT/PSS Solid-Contact Layer
Short-term potential stability was studied using chronopotentiometry [39]. As shown in Figure 2, typical chronopotentiograms of ACh/TPB/PEDOT/PSS-ISE and β-CD/PEDOT/PSS-ISE are presented, together with those of ACh/TPB-ISE and β-CD-ISE for comparison. The slope (∆E/∆t) of the E-t curve at longer times gives a direct measure of the potential stability of the ACh + -ISEs. The potential drifts were 6.5 µV/s and 52.0 µV/s for ACh/TPB/PEDOT/PSS-ISE and β-CD/PEDOT/PSS-ISE, respectively, which were much lower than those of ACh/TPB-ISE and β-CD-ISE (117.0 µV/s and 90.2 µV/s, respectively). From the data presented above, it was found that the potential stability of the proposed sensors was greatly improved by the introduction of PEDOT/PSS directly into the polymeric membrane. The capacitances were estimated to be 153.1 µF and 19.1 µF for ACh/TPB/PEDOT/PSS-ISE and β-CD/PEDOT/PSS-ISE, respectively; the capacitances of ACh/TPB-ISE and β-CD-ISE were 8.54 µF and 11.0 µF, respectively (this arithmetic is sketched below). These values confirm the relationship between the potential stability (∆E/∆t), or the capacitance (C), of the ISEs and the presence of PEDOT/PSS as a solid-contact material between the membrane and the electrical substrate.
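Those capacitances follow directly from C = i/(∆E/∆t), with the ±1 nA applied current described in the Apparatus section; the check below simply re-runs that arithmetic on the drift values quoted above.

```python
# Low-frequency capacitance from the chronopotentiometric drift:
# C = i / (dE/dt), with i = 1 nA as applied in this work.
i_applied = 1e-9  # A

for name, drift_uv_per_s in [
    ("ACh/TPB/PEDOT/PSS-ISE", 6.5),
    ("beta-CD/PEDOT/PSS-ISE", 52.0),
    ("ACh/TPB-ISE", 117.0),
    ("beta-CD-ISE", 90.2),
]:
    c_uF = i_applied / (drift_uv_per_s * 1e-6) * 1e6
    print(f"{name:25s} C = {c_uF:6.1f} uF")
# -> ~153.8, 19.2, 8.5, 11.1 uF, in line with the values quoted above
```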
Water-Layer Effect
A water film formed at the interface between the ion-sensing membrane and the electron conductor can act as localized microscopic pools of water [40]. It has a great influence on the potential stability and lifetime of the ISE. Thus, a test for water-layer absence was carried out for ACh/TPB/PEDOT/PSS-ISE. As shown in Figure 3, the proposed sensor was first conditioned in 10 −3 M CaCl 2 solution and the potential was recorded for 1.0 h. After that, the CaCl 2 solution was replaced with 1.0 × 10 −4 and 5 × 10 −6 M ACh + solutions, and the potential was recorded for another 1.0 h. After replacing the CaCl 2 solution with ACh + ion solution, the stable potential response for nearly 1.0 h showed complete absence of the undesirable water layer. This can be explained on the basis of the hydrophobic nature of PEDOT/PSS in the polymeric membrane. A similar ACh + -selective membrane composition without PEDOT/PSS was also tested for comparison. A large potential drift was noticed in the absence of the PEDOT/PSS layer as a solid-contact material. This confirms the hydrophobic nature of this material and the absence of water-layer formation.
Hydrodynamic Assessment of ACh +
The flow injection manifold of the system is shown in Figure 4. The flow cell was designed to accommodate a small sensor size, to avoid sample dispersion and to obtain a short recovery time for the potential response. Planar-type detectors (i.e., screen-printed electrodes) containing ACh/TPB- and β-CD-based membrane sensors were prepared and characterized by measuring ACh + ions under flow-through operation. The recorded potential signals of the sensors are presented in Figure 5. The sensors revealed Nernstian response slopes of 60.1 ± 1.1 and 52.7 ± 0.8 mV/decade over the linear ranges 1.0 × 10 −5 to 1.0 × 10 −3 M and 2.0 × 10 −5 to 1.0 × 10 −3 M, with detection limits of 2.5 × 10 −6 and 3.9 × 10 −6 M, for ACh/TPB/PEDOT/PSS-ISE and β-CD/PEDOT/PSS-ISE, respectively. A flow rate of 4 mL/min was adopted as optimal for the measurements. All potentiometric characteristics are summarized in Table 4. The relative standard deviations of the transient flow signals were ±2.1% and ±1.7% over the concentration range 1.0 × 10 −6 to 1.0 × 10 −2 M ACh + ions for the two sensors, respectively.
Acetylcholine Assay in Human Serum
The usefulness of the proposed method was tested by determining ACh in collected human serum samples. For comparison with the present potentiometric procedure, the samples were also analyzed using a standard commercial spectrophotometric kit (No. ab65345, Abcam, Boston, MA, USA) at 25 °C. In this assay protocol, free choline is oxidized by choline oxidase to betaine and H 2 O 2 . The hydrogen peroxide is then detected with a highly specific colorimetric probe, with horseradish peroxidase used as a catalyst for this reaction. The reaction products generate a color measured at λ = 540 to 570 nm. The results obtained from the standard and proposed methods are presented in Table 5. An F-test showed that there was no significant difference between the means and variances of the spectrophotometric and potentiometric sets of results. In addition, four different sensor assemblies with two different instruments on different days were used for repetitive determination of different sample sizes of ACh. Repeatability (within-day) and reproducibility (between-day) measurements showed potential variations in the range of 2-3 mV. These results revealed that the influence of these parameters was within the specified tolerance and that the variations are within the method's robustness range.
Kinetic Monitoring of Acetylcholinesterase Activity
Termination of the transmitted pulses in cholinergic synapses occurs via the fast hydrolysis of acetylcholine (ACh) into choline (Ch) and acetic acid, as shown in Scheme 1 [41].
Scheme 1. Hydrolysis of AChCl by acetylcholinesterase (AChE).
In the enzymatic reaction, the reaction rate when the enzyme is saturated with substrate can be considered the maximum rate of reaction (V max ), and K m is the substrate concentration at which the enzyme achieves half of V max . The values of K m and V max of the enzymatic reaction were estimated using the proposed ACh sensor.
The potential change was recorded for different concentrations of ACh + (1.0 × 10 −6 to 1.0 × 10 −3 M) using a fixed enzyme activity (0.5 IU L −1 ). At 1.0 × 10 −6 M, the measured initial rate showed no significant increase, because of the low sensitivity of the sensor at low concentration levels of ACh + ions. At concentrations ≥5.0 × 10 −5 M, the measured initial rate increased significantly. The 1.0 × 10 −4 M ACh + solution was chosen for all subsequent AChE measurements because it revealed a good measurable change in the reaction rate at low enzyme activity. As shown in Figure 6, the analysis provided values of 8.9 × 10 −5 M and 59 mV min −1 for K m and V max , respectively; a sketch of such a fit is given below. This value of K m is close to the magnitude obtained previously [35,42].
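A minimal sketch of extracting K m and V max from initial-rate data with a direct Michaelis-Menten fit; the (S, v) pairs below are synthetic, generated to resemble the reported values, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Direct Michaelis-Menten fit: v = Vmax * S / (Km + S)
def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

S = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4, 5e-4, 1e-3])      # M
v = np.array([0.66, 3.1, 6.0, 21.4, 31.2, 50.1, 54.2])        # mV/min (synthetic)

(vmax, km), _ = curve_fit(michaelis_menten, S, v, p0=(50.0, 1e-4))
print(f"Vmax = {vmax:.1f} mV/min, Km = {km:.2e} M")            # ~59, ~8.9e-5
```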
Apparatus
All EMF measurements were carried out at ambient temperature using a pH/mV meter (Orion, Cambridge, MA, USA, model SA 720) connected to a data logger (Pico Technology Limited, model ADC-16). The flow injection (FI) manifold consisted of an Ismatech peristaltic pump (Ms-REGLO model) with polyethylene tubing (0.71 mm internal diameter) for carrying solutions and an Omnifit injection valve (Omnifit, Cambridge, UK) with a 100-µL sample loop.
For potential stability and capacitance estimation of the solid-contact material used, chronopotentiometry tests were carried out in 10 −3 M ACh + using a conventional three-electrode cell. A double-junction Ag/AgCl electrode was used as the reference electrode, and a platinum plate was used as the auxiliary electrode. The constant current applied to the sensors was ±1 nA for 60 s.
ISE Membranes and Electrodes Measurements
A 10-µL aliquot of PEDOT/PSS solution was drop-cast onto the conductive carbon layer in the orifice of the screen-printed platform (Figure 8). After drying, a layer with a thickness close to 0.25 µm was formed and used as the solid-contact layer. The sensing membrane was prepared by dissolving 100 mg of the membrane components (ionophore, 3.0 wt.%; PVC, 32.4 wt.%; ETH 500, 1.5 wt.%; o-NPOE, 63.1 wt.%) in 1.0 mL of THF. Then, 20 µL of the membrane cocktail was drop-cast onto the orifice in the screen-printed platform and allowed to dry for 6 h.
Acetylcholine Assay in Human Serum
Human blood samples were collected from different patients and analyzed within 3 h of extraction. The samples were collected in tubes and then mixed with 9 mL of absolute ethyl alcohol; they were left for 10 min before being centrifuged at 4000 rpm. The supernatant liquid was separated from the particulate matter, transferred into a 10-mL measuring flask, and made up to the mark with 10 −2 M phosphate buffer solution of pH 7.2. The cell electrodes were immersed in the buffer solution, and the EMF readings were recorded after electrode potential stabilization. The amount of ACh was calculated using the constructed calibration plot of potential readings versus log [ACh + ].
Bioassay of Acetylcholinesterase Enzyme (AChE)
A volume of 30.0 mL of the pH 7.0 PBS was transferred to a thermostated vessel, and the cell electrodes were immersed in the solution. After obtaining a stable potential reading for the electrochemical system, 1.0 mL of 10 −2 M ACh + working solution was added. When the potential stabilized again, 100-µL aliquots containing 1.0 × 10 −3 -6.0 IU L −1 of AChE enzyme were added. From the potential kinetic curve, the potential change with time (∆E/∆t) at t = 0, expressed as the initial rate, was plotted versus enzyme activity; a sketch of this rate extraction is given below. The obtained linear calibration curve was then used for all unknown enzyme activity monitoring. A blank experiment was also carried out under the same conditions but in the absence of the enzyme.
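A hedged sketch of extracting that initial rate from a recorded E-t trace by a linear fit near t = 0; the trace below is synthetic, for illustration only.

```python
import numpy as np

# Estimate the initial rate (dE/dt at t ~ 0) from a potential-time trace
# by a linear fit over an early window. The E-t data are synthetic.
t = np.arange(0, 60, 5, dtype=float)              # s
E = 210.0 - 0.25 * t + 0.0012 * t**2              # mV, synthetic decay

early = t <= 20                                    # fit window near t = 0
rate_mv_per_s, _ = np.polyfit(t[early], E[early], 1)
print(f"initial rate ~ {rate_mv_per_s * 60:.1f} mV/min")
```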
Conclusions
Herein, we focused on demonstrating the value of miniaturized screen-printed solid-contact ISEs when facing a complex and relevant determination as an analytical challenge. The work deals with the preparation and characterization of potentiometric acetylcholine-selective membrane sensors using a carbon-based screen-printed ceramic substrate. PEDOT/PSS showed excellent conductivity when used as the ion-to-electron transducer. The sensors developed were based on the use of TPB ion exchangers, a triacetyl-β-CD ionophore, o-nitrophenyl octyl ether (o-NPOE) as a plasticizer, and PVC as a polymeric matrix. Improved accuracy and precision, good reproducibility, potential stability, rapid response, acceptable selectivity, and high sensitivity were obtained with these sensors. These simple and cost-effective potentiometric biosensors also offer the possibility of interfacing with automated systems. A high sample throughput of ~25-30 samples/h and excellent response characteristics were obtained with the flow-through system. The activity of acetylcholinesterase (AChE) was also determined using the proposed sensors; enzyme activities in the range 5.0 × 10−3-5.2 IU/L could be measured.
Epileptic brain network analysis based on Kendall’s improved synchronization algorithm
In this paper, we propose the IRC algorithm, an improvement of the Kendall rank correlation algorithm. Research on complex networks has gradually extended into many areas of science, and the study of brain networks has become a hot topic in the study of brain function. Wavelet filtering is used to extract the required α-band (8-16 Hz) from the EEG data. Using the improved IRC algorithm, brain functional networks are constructed from the EEG data, and the characteristics of the resulting networks are analyzed. The experimental results show that the method can distinguish the network degree indicators of epileptic patients from those of normal subjects, further deepening the study of the neurodynamic behavior of the brain.
Introduction
Epilepsy is a chronic neurological disease with a high incidence rate, second only to cerebrovascular disease. Seizures are characterized by excessive discharge of neurons in the brain; clinical symptoms include convulsions, autonomic disorders, and transient loss of consciousness. Long-term recurrent epilepsy not only brings physical pain to the patient but can also lead to mental and psychological disorders, causing harm to both the patient and society [1,2]. At present, epilepsy is diagnosed clinically mainly through electroencephalogram (EEG) pathological wave examination. As a non-invasive detection method, EEG can effectively reflect the physiological and pathological condition of the brain and has become an indispensable method for the clinical detection and diagnosis of various brain dysfunction diseases. The frequency of α-waves is 8-13 Hz (10 Hz on average) and their amplitude is 20-100 μV; they form the basic rhythm of normal human brain waves. In the absence of additional stimuli, the frequency is fairly constant, and the rhythm is most noticeable when people are awake, quiet, and have their eyes closed. Therefore, analysis of α-waves is used here to distinguish the brain dynamics of epileptic patients from those of normal people. The brain is a complex physiological system: interactions between different brain regions form a comprehensive complex network, and the brain itself is a network of nerve cells connected by axons. Using complex network theory to analyze brain functional associations can abstractly describe the physiological structure of the brain and deepen the understanding of the mechanisms of brain dysfunction. Neural network research on epilepsy has found that the functional connections of brain neural networks have small-world characteristics, and that the interaction between neural networks in different brain regions is the main factor that induces, spreads, and maintains epilepsy. The use of complex network research methods [3-5] to analyze brain functional coupling [6-8] and the mechanism of seizure occurrence is a current research hotspot.
The Kendall rank correlation coefficient is a common non-parametric measure. It uses the ranks of the variable observations to calculate the correlation coefficient between variables and can effectively measure nonlinear relationships between them. However, Kendall's measure of the correlation between variables is insufficient in some respects. We enhance the measurement of nonlinear correlation between variables by improving the counting of concordant pairs in Kendall's rank correlation algorithm and the handling of knots (ties) in the binary random variables. The modified IRC algorithm is used to construct brain functional networks in the 8-16 Hz band for epileptic patients and normal subjects. By studying the topological structure of the brain networks of epilepsy patients and the interactions between different brain regions, the understanding of seizures can be further deepened.
IRC algorithm
2.1. Kendall rank correlation coefficient
Let $x_1, x_2, \cdots, x_n$ and $y_1, y_2, \cdots, y_n$ be samples from $X$ and $Y$, respectively. Together they form a two-dimensional random sample $(x_i, y_i)$, $i = 1, 2, \cdots, n$, of size $n$. If the product $(x_i - x_j)(y_i - y_j) > 0$, the pairs $(x_i, y_i)$ and $(x_j, y_j)$ are concordant; on the contrary, if the product is less than zero, we call them discordant. To test whether the binary variables $X$ and $Y$ are related, record $P$ for the number of concordant pairs and $Q$ for the number of discordant pairs. When there is no knot (tie) in the two sets of variables, that is, there is no case where $(x_i - x_j)(y_i - y_j) = 0$, the Kendall rank correlation coefficient (Kendall's $\tau$) is calculated as
$$\tau = \frac{2(P - Q)}{n(n - 1)}.$$
The range of Kendall's correlation coefficient is $[-1, 1]$. A positive value indicates a positive correlation and a negative value indicates a negative correlation; the greater the absolute value, the stronger the correlation. When the orderings of sample $X$ and sample $Y$ are completely the same, $\tau = 1$; when the orderings are completely opposite, $\tau = -1$.
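As a concrete reference, the following sketch implements the standard no-ties Kendall's $\tau$ exactly as defined above; it is the baseline that the IRC modification starts from.

```python
import itertools
import numpy as np

def kendall_tau(x, y):
    # tau = 2(P - Q) / (n(n - 1)), assuming no tied values
    n = len(x)
    p = q = 0
    for i, j in itertools.combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            p += 1   # concordant pair
        elif s < 0:
            q += 1   # discordant pair
    return 2.0 * (p - q) / (n * (n - 1))

x = np.array([1.0, 2.5, 0.3, 4.2, 3.1])
y = np.array([0.9, 2.0, 0.5, 3.8, 3.5])
print(kendall_tau(x, y))  # 1.0: the two samples have identical orderings
```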
IRC principle
Our algorithm differs from Kendall's in the counting of concordant pairs and in the division of knots (ties) in the binary random variables. First, take the sample sequences $x_1, x_2, \cdots, x_n$ and $y_1, y_2, \cdots, y_n$. We sort the sequence $x_i$, $i = 1, 2, \cdots, n$, in non-decreasing order to obtain the new sequence $x^*$, and record the length of each knot (run of tied values) in $x^*$. The other time series $y_i$, $i = 1, 2, \cdots, n$, is rearranged in correspondence with $x^*$ to give $y^*$, with the knot lengths in $y^*$ treated in the same way. The count of concordant pairs is then redefined on the rearranged sequences, which yields a directed measure $\tau_{X \to Y}$ of the relevance of variable $X$ to variable $Y$. Obviously, if the sequence $y_i$, $i = 1, 2, \cdots, n$, is arranged non-decreasingly and $x$ is rearranged correspondingly, the new concordant-pair count gives the correlation $\tau_{Y \to X}$ of variable $Y$ to variable $X$.
Network topology features
Average degree is a simple and important metric used to describe the attributes of a single node. The clustering coefficient, described in the language of graph theory, is the average probability that two nodes connected to the same node in the network are also connected to each other. This coefficient is usually used to reflect the local structural characteristics (clustering) of the network. A short illustration of both metrics follows.
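The sketch below computes both metrics with networkx on a random 16-node graph standing in for the electrode network; the graph itself is a placeholder.

```python
import networkx as nx

# Placeholder 16-node network standing in for the electrode graph
g = nx.erdos_renyi_graph(16, 0.3, seed=1)

avg_degree = sum(dict(g.degree()).values()) / g.number_of_nodes()
avg_clustering = nx.average_clustering(g)
print(f"average degree = {avg_degree:.2f}, clustering coefficient = {avg_clustering:.3f}")
```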
Data processing
The experimental data came from clinical samples of the General Hospital of Nanjing Military Region. Signals from the patient group and the normal group were used to compare the correlations between EEG and ECG. There were 22 subjects in each group. The epilepsy data were taken from the interictal period, and most patients had frontal lobe or temporal lobe epilepsy. The data for each subject included a 16-lead EEG signal and one ECG lead, with a record length of more than one minute and a sampling rate of 512 Hz.
Because of its ability to characterize local signal features in the time-frequency domain and its multi-resolution analysis capability, the wavelet transform is widely used for artifact removal from non-stationary random signals such as EEG. This paper uses the db5 wavelet basis function. The EEG signal is decomposed into 5 levels, and the d5 coefficients are used for wavelet reconstruction to obtain the filtered signal [9].
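A minimal sketch of this band extraction with PyWavelets, on a synthetic 512 Hz signal: a 5-level db5 decomposition whose level-5 detail (d5) spans roughly 512/2^6 to 512/2^5 = 8-16 Hz, matching the target α-band.

```python
import numpy as np
import pywt

fs = 512                                  # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # synthetic EEG

# 5-level db5 decomposition: coeffs = [a5, d5, d4, d3, d2, d1]
coeffs = pywt.wavedec(eeg, "db5", level=5)

# Zero everything except d5, whose band is ~fs/2**6 to fs/2**5 = 8-16 Hz
kept = [np.zeros_like(c) for c in coeffs]
kept[1] = coeffs[1]
alpha_band = pywt.waverec(kept, "db5")[: eeg.size]
```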
When constructing the brain network, the network nodes are defined first; in this paper, they are the 16 array electrodes. The IRC coefficients of all pairs among the 16 electrodes are then calculated to generate an association matrix, and an appropriate threshold is selected to generate a binary adjacency matrix that defines the connections between the nodes. For the connectivity of EEG signals [10], this paper selects the mean value of the coefficient matrix multiplied by a coefficient of 0.8 as the threshold, determined by several experiments. Since our algorithm is asymmetric, the directions represented by the elements of the upper and lower triangular matrices are opposite. We calculated the average of the non-zero elements of the matrix as the threshold. Each element of the matrix is compared with the threshold: elements greater than the threshold indicate that the two nodes are linked, while elements less than the threshold are considered unrelated.
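A minimal sketch of this thresholding step, on a placeholder association matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
irc = rng.random((16, 16))      # placeholder 16x16 IRC association matrix
np.fill_diagonal(irc, 0.0)      # no self-connections

# Threshold: 0.8 times the mean of the non-zero coefficients
threshold = 0.8 * irc[irc != 0].mean()

# Binary, directed adjacency matrix (irc[i, j] generally != irc[j, i])
adjacency = (irc > threshold).astype(int)
```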
Result analysis
In this experiment, we used wavelet filtering to filter the EEG data and obtain the required band. Each lead of the EEG acquisition device is regarded as a node of the brain network, and the IRC coefficients calculated between every pair of nodes form a 16×16 coefficient matrix. The average degree is calculated for the upper and lower triangular parts of this matrix (LTM denotes the brain functional network constructed from the lower triangular IRC coefficient matrix; UTM denotes the network constructed from the upper triangular IRC coefficient matrix), and a t-test was performed on this set of data. The results are shown in Table 1. For a more intuitive comparison of the IRC network and the Kendall network, the data in the table are plotted as a graph (Nor represents the normal population and Abn represents epilepsy patients). Figure 1 shows intuitively that the brain network constructed from Kendall's correlation coefficient cannot clearly distinguish the brain networks of the normal population from those of epilepsy patients, whereas the brain network constructed with the improved IRC algorithm can. A t-test was performed, giving Table 2: if P > 0.05 we cannot distinguish normal subjects from epileptic patients, whereas if P < 0.05 we can. The analysis of the average degree for the upper triangular matrix gives a P value of 0.005991, and the analysis of the clustering coefficient gives a P value of 0.076322. Analyzing the upper and lower triangular parts of the band data matrix leads to the following conclusions: there is a significant difference in average degree between epileptic and normal EEG, but no significant difference in the clustering coefficient. That is, the 8-16 Hz band obtained by wavelet filtering can be used to construct the brain functional network from EEG signals, and the average degree of the network can distinguish normal people from epilepsy patients.
Conclusions
In this paper, the wavelet filtering method is used to filter the EEG data and obtain the required α-band; the experimental data have a frequency range of about 8-16 Hz. We then use both the Kendall coefficient and the IRC algorithm (our improvement of the Kendall algorithm) to construct the brain functional network, and calculate the network's average degree and clustering coefficient. The experimental results show that the improved IRC algorithm can clearly distinguish the average degree of the brain network between normal people and epilepsy patients, and that the average degree of epilepsy patients in the α-band brain network is greater than that of the normal population. However, the brain functional network constructed with the Kendall coefficient cannot effectively distinguish epilepsy patients from normal people. Analyzing the average degree of the IRC network will aid the clinical diagnosis of epilepsy and is of great significance for the prediction and assessment of epileptic disease.
Acupuncture With deqi Modulates the Hemodynamic Response and Functional Connectivity of the Prefrontal-Motor Cortical Network
As a world intangible cultural heritage, acupuncture is considered an essential modality of complementary and alternative therapy to Western medicine. Despite acupuncture's long history and public acceptance, how the cortical network is modulated by acupuncture remains largely unclear. Moreover, how the cortical network is modulated during acupuncture at the Hegu acupoint, the basic acupuncture unit for regulating the central nervous system, is mostly unknown. Here, multi-channel functional near-infrared spectroscopy (fNIRS) data were recorded from twenty healthy subjects for acupuncture manipulation, pre- and post-manipulation tactile controls, and pre- and post-acupuncture rest controls. Results showed that: (1) acupuncture manipulation caused significantly increased acupuncture behavioral deqi performance compared with tactile controls. (2) The bilateral prefrontal cortex (PFC) and motor cortex were significantly inhibited during acupuncture manipulation compared with controls, as evidenced by the decreased power of oxygenated hemoglobin (HbO) concentration. (3) The bilateral PFC's hemodynamic responses showed a positive correlation trend with acupuncture behavioral performance. (4) The network connections with the bilateral PFC as nodes showed significantly increased functional connectivity during acupuncture manipulation compared with controls. (5) Meanwhile, the network's efficiency was improved by acupuncture manipulation, as evidenced by the increased global efficiency and decreased shortest path length. Taken together, these results reveal that a cooperative PFC-Motor functional network can be modulated by acupuncture manipulation at the Hegu acupoint. This study provides neuroimaging evidence explaining acupuncture's neuromodulation effects on the cortical network.
INTRODUCTION
External neurostimulation and its modulatory effect on the cortical network have been the subject of extensive research by neuroscientists. As an external neuromodulation method that originated in ancient China, acupuncture balances sympathetic and parasympathetic nervous activity (Takahashi, 2011) and modulates the human brain (Yu et al., 2018). As a world intangible cultural heritage, acupuncture is now considered an essential mode of complementary and alternative therapy to Western medicine (Yoo et al., 2004). Studies have demonstrated that acupuncture has therapeutic effects on various neuropsychiatric disorders such as neuropathic pain (Miranda et al., 2015), Alzheimer's disease, and Parkinson's disease (Jiang et al., 2018). Besides acupuncture's clinical therapeutic effects, it has been shown that acupuncture can enhance human cognition and memory (Zheng et al., 2018) and regulate emotion processing in healthy people (Hui et al., 2005). Despite acupuncture's long history and public acceptance, the underlying neural mechanism of acupuncture's neuromodulation effects on the cerebral cortex remains unclear.
The needling sensation that subjects experience during acupuncture is called deqi, which refers to the excitation of vital energy, known as qi in Chinese (Kong et al., 2007). Importantly, deqi is an indispensable requirement for achieving acupuncture efficacy according to Traditional Chinese Medicine (TCM) (Sun et al., 2020). Previous studies showed that acupuncture with deqi sensation could produce better therapeutic efficacy and reduce the severity of symptoms compared with no deqi sensation (Fang et al., 2009; Lee et al., 2017). Therefore, deqi was employed as a valuable behavioral index to evaluate the acupuncture effect (Yang et al., 2013). However, the current behavioral evaluation of deqi is subjective and insufficient, and there is a lack of an objective neural biomarker for quantifying acupuncture's deqi sensation.
The human brain is an integrative and complex network system (Bassett and Sporns, 2017;Cai et al., 2018), in which regions are considered as nodes, and functional or structural connectivity between regions is considered as edges (van den Heuvel and Pol, 2010). According to the network control theory in network neuroscience, the brain's functional network could be modulated by various neuromodulation methods, such as acupuncture (Takahashi, 2011), transcranial magnetic stimulation (TMS) (Curtin et al., 2019b), transcranial electrical stimulation (tES) , and transcranial focused ultrasound (tFUS) (Legon et al., 2014).
During the acupuncture stimulation process, the afferent neural activities from the peripheral nervous system are first carried up to the brainstem and then transferred to subcortical regions of the central nervous system, such as the thalamus, cerebellum, and amygdala. Finally, the high-level cerebral cortex of the human brain, such as the prefrontal and sensorimotor network, is modulated by the afferent signals from subcortical regions (Lund and Lundeberg, 2016). By modulating the cortical network during acupuncture stimulation, the human brain's cognitive functions and behavioral outcomes are ultimately regulated (Zheng et al., 2018; Ghafoor et al., 2019). However, how the brain's functional cortical network changes and which specific cortical regions are involved during acupuncture with deqi remain mostly unclear.
To reveal how the human brain's functional network changes during acupuncture with deqi, different neuroimaging methods, such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), have been used in previous studies. The high-spatial-resolution fMRI studies mainly revealed the involvement of several subcortical brain regions during acupuncture (Hui et al., 2005; Dhond et al., 2008; Fang et al., 2009; Asghar et al., 2010; Hui et al., 2010; Zhong et al., 2012). For example, it was found that the limbic-paralimbic-neocortical network, including the subgenual anterior cingulate cortex, posterior cingulate cortex, amygdala, and hippocampus, was deactivated during acupuncture (Dhond et al., 2008; Fang et al., 2009; Zhong et al., 2012), whereas the insula, thalamus, somatosensory cortex, and anterior cingulate cortex were activated by acupuncture (Dhond et al., 2008; Zhong et al., 2012). Besides, fMRI studies also revealed that the sub-cortical regions' functional connectivity could be modulated during acupuncture (Dhond et al., 2008; Zhong et al., 2012). Thus, previous fMRI studies have found changes in the sub-cortical regions' functional connectivity during acupuncture, but only a few studies have focused on cortical network changes, and how the cortical network is modulated during acupuncture with deqi remains unclear.
Meanwhile, scalp EEG was also used to explore the modulation effect of acupuncture; it was found that the EEG power in the delta and theta frequency bands (Yu et al., 2017, 2018) and functional connectivity were increased by acupuncture stimulation (Chen et al., 2006; Yu et al., 2017, 2019). However, due to the volume conduction effect (Hipp et al., 2012) and the low spatial resolution (Tabar and Halici, 2017), EEG failed to reveal the specific cortical regions involved during acupuncture. In summary, how the key nodes and functional connectivity of the cortical network are modulated by acupuncture with deqi remains mostly unclear.
The functional near-infrared spectroscopy (fNIRS), a relatively novel non-invasive neuroimaging technique (Quaresima and Ferrari, 2019), is good at detecting the cerebral cortex's hemodynamic responses, such as oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR) changes, which could reflect the underlying cortical neural activities according to the neurovascular coupling effect (Keles et al., 2016;Soltanlou et al., 2018). Compared with EEG, fNIRS has a much higher spatial-resolution and is especially good for detecting cortical neural activities (Wang et al., 2020). Moreover, compared to fMRI, fNIRS is both portable and less sensitive to motion artifacts. As a useful neuroimaging tool, fNIRS is widely used to investigate the high-level cerebral cortex's hemodynamic activities and the related functional network in various neuromodulation studies, such as TMS (Park et al., 2017;Curtin et al., 2019b), tES (Patel et al., 2020), and acupuncture (Takamoto et al., 2010;Ghafoor et al., 2019;Rojas et al., 2019). Previous acupuncture studies also showed the prospect of applying fNIRS to study acupuncture's neural mechanism (Takamoto et al., 2010;Ghafoor et al., 2019;Rojas et al., 2019). However, to our knowledge, few studies used fNIRS to simultaneously investigate the cortical response and cortico-cortical functional connectivity during acupuncture with deqi.
The Hegu acupoint (LI-4) is considered the basic neural acupuncture unit for regulating the central nervous system according to TCM. For example, acupuncture at the Hegu acupoint with deqi could relieve pain and improve the human brain's cognition (Zheng et al., 2018) by balancing the brain activity as a whole. In addition, previous studies have shown that acupuncture at the Hegu acupoint could regulate the activity of the brain, activating or inhibiting brain areas to achieve an anesthetic effect (Hui et al., 2010; Liang et al., 2014). However, how the cortical network is modulated during acupuncture at the Hegu acupoint is largely unclear. Considering that the human brain is an integrative and complex network system (Bassett and Sporns, 2017; Cai et al., 2018), we hypothesized that acupuncture at the Hegu acupoint with deqi could achieve its effects by modulating the large-scale cortical network. The functional cortical network revealed by fNIRS could thus be used as an objective biomarker to quantify the behavioral deqi effect of acupuncture.
To test our hypothesis, multi-channel fNIRS data were recorded from twenty healthy subjects for acupuncture manipulation, pre- and post-manipulation tactile controls, and pre- and post-acupuncture rest controls. Meanwhile, each subject's acupuncture deqi behavior performance was collected. As the basic acupuncture unit for regulating the central nervous system according to TCM, the Hegu acupoint was employed in this study to explore the neuromodulation effects of acupuncture. To reveal the areas responsive to acupuncture with deqi, the hemodynamic responses and the power of HbO concentration were computed and compared between conditions. To investigate the correlation between hemodynamic response and behavior performance for these responsive areas, Pearson's correlation was analyzed. To reveal how the cortical network is modulated by acupuncture with deqi, the functional connectivity and graph theory parameters of the network were calculated and compared between acupuncture manipulation and controls.
Participants
Twenty healthy right-handed volunteers (23.9 ± 1.5 years old (mean ± std), aged 22-28 years; twelve males and eight females) were recruited for the acupuncture experiment (Table 1). All subjects had normal intelligence, no history of mental illness, and no brain damage caused by long-term medication. Most importantly, all subjects were naive to acupuncture, having never experienced acupuncture treatment. After confirming that each subject met all the inclusion criteria, subjects gave written informed consent prior to participating; the protocol was reviewed by the Institutional Review Board and Ethics Committee of Tianjin University.
Acupuncture Experiment Paradigm
The acupuncture manipulation was performed by an experienced acupuncturist. The acupuncture site was the Hegu acupoint, also known as "Large intestine 4" (LI4), at the midpoint on the radial side of the second metacarpal (Kong et al., 2015; Figure 1). Throughout the experiment, all subjects sat in a comfortable chair in a quiet room (Figure 2A). During the preparation, each subject was asked to familiarize themself with the deqi sensation behavior questionnaire, containing six individual visual analog scales (VAS) for evaluating the multiple acupuncture deqi sensations (Kou et al., 2007), which include suan (aching or soreness), ma (numbness or tingling), zhang (fullness/distention or pressure), tong (dull pain), zouchuan (spreading, a feeling of extension from the acupuncture position to the arm), and zhong (heaviness) according to TCM (Asghar et al., 2010; Hui et al., 2011; Yu et al., 2012). The intensity of each needling sensation was rated on a scale from 0 (none) to 10 (strongest) to quantify each of the six sensations for each subject.
Considering that a quasi-experimental design can involve real-world acupuncture intervention instead of artificial experimental settings, with high external validity (Bärnighausen et al., 2017), a quasi-experimental design was employed in this study (Figure 1), as in previous acupuncture studies (Yu et al., 2017, 2018). The whole acupuncture procedure included the following five conditions (Figure 1). (I) Pre-acupuncture rest control: each subject was asked to stay awake and rest for 5 min without movement before the needle was inserted by the acupuncturist. (II) Pre-manipulation tactile control: a single-use sterile acupuncture needle was inserted by the acupuncturist at the Hegu acupoint to a certain depth without twisting. To avoid bias of the acupuncture site, acupuncture was performed at the Hegu acupoint of the right or left hand at random. During this condition, each subject was asked to stay awake and rest for 5 min before the acupuncture twisting. (III) Acupuncture manipulation: the subjects were acupunctured using the twirling method at the Hegu acupoint for 2 min. The twisting was manipulated mainly by the thumb pushing forward with force, within a range of 90°-180° (Yu et al., 2017) at a frequency of 2 Hz. (IV) Post-manipulation tactile control: after the acupuncture manipulation period, each subject was asked to stay awake in a resting state with the needle inserted for 5 min; the needle was kept at the Hegu acupoint without twisting. (V) Post-acupuncture rest control: after the acupuncturist removed the needle, each subject was asked to stay awake and rest for 5 min without movement. Immediately after the acupuncture experiment, the behavior questionnaire was given to each subject to evaluate his/her deqi sensation for the following three conditions: pre-manipulation tactile control, acupuncture manipulation, and post-manipulation tactile control.

FIGURE 1 | Acupuncture experiment paradigm. During the preparation, subjects were asked to become familiar with the deqi behavioral questionnaire evaluated by the visual analog scale (VAS). There are five conditions in the experiment: pre-acupuncture rest control (5 min), pre-manipulation tactile control (5 min), acupuncture manipulation (2 min), post-manipulation tactile control (5 min), and post-acupuncture rest control (5 min). After the experiment, subjects were asked to evaluate the needling sensations for the pre-manipulation tactile control, acupuncture manipulation, and post-manipulation tactile control. The twisting frequency in acupuncture manipulation is 2 Hz.

FIGURE 2 | Acupuncture experiment, fNIRS channel configuration, and anatomical location. (A) Acupuncture experiment: an experienced acupuncturist conducted acupuncture on the subject wearing a 48-channel fNIRS measuring cap; the black arrow indicates the location of the Hegu acupoint (LI-4). (B) fNIRS channel configuration: red dots represent light sources and blue dots represent detectors; each fNIRS channel's relative cortical sensitivity is visualized in pseudo-color. (C) Anatomical location: each channel's Brodmann label is indicated by a different color. All channels were divided into three regions of interest (ROIs): the prefrontal cortex (PFC, colored with the green series), the motor-related area (motor and premotor cortex, colored with the blue series), and the remaining Brodmann area 8 (BA 8, colored red; BA, Brodmann area).
fNIRS Data Acquisition
The fNIRS measurements were conducted by a multichannel fNIRS system (NirScan, Danyang Huichuang Medical Equipment Co., Ltd.) at a sampling rate of 9 Hz. Near-infrared light of three different wavelengths (740, 808, and 850 nm) was used to detect the concentration signals of HbO and HbR. For the channel configuration, 15 light sources and 16 detectors were plugged into a holder and resulted in 48 measurement channels, which were positioned by referring to the standard international 10-20 system of electrode placement ( Figure 2B). The distance between the light source and the detectors was 3 cm.
fNIRS Channel Location and Cortical Sensitivity
To obtain the Montreal Neurological Institute (MNI) coordinates of each fNIRS channel, the spatial coordinates of the light sources, detectors, and anchor points (located at Nz, Cz, Al, Ar, and Iz, referring to the standard international 10-20 system of electrode placement) were first measured using an electromagnetic 3D digitizer system (FASTRAK, Polhemus, VT, United States). Then, the spatial coordinates of the light sources and detectors were registered to the standard MNI space using the spatial registration approach (Tsuzuki et al., 2007). Thirdly, the MNI coordinate of the midpoint between a light source and detector pair was determined, as each fNIRS channel consisted of a light source and detector pair. Fourthly, the midpoint was spatially projected onto the cortical surface of the MNI "Colin27" brain template, and the fNIRS channel's coordinate was defined as the MNI coordinate of the cortical projected point. Finally, to reduce measurement error, the coordinates of left/right hemispheric symmetric channels were averaged and marked with positive or negative hemispheric information (Table 2). To reveal the probability of the transmitted photon path in the cortex, the Monte Carlo photon transport software tMCimg was used to obtain the cortical sensitivity. The cortical sensitivity of each channel was then displayed on the MNI "Colin27" brain template using AtlasViewer (Aasted et al., 2015; Figure 2B). The cortical sensitivity information of each channel was then used to visualize different indexes, such as statistical significance, hemoglobin concentration, and channel label.
Brodmann Atlas (BA) Label and Regions of Interest (ROI)
To obtain the BA label of each channel, the MNI-to-BA Tool (Okamoto et al., 2009) was used. Firstly, a 10 mm radius sphere with each fNIRS channel's MNI coordinate as the center was determined. Secondly, all possible BA labels within the sphere were determined. Then, the percentage of each BA label's voxel number within the sphere over the entire sphere's voxel number was calculated. Thirdly, the BA label with the highest percentage was identified as the BA label of a given fNIRS channel. Finally, each fNIRS channel and the corresponding BA label were visualized on the "Colin27" brain template by using BrainNet Viewer (Xia et al., 2013; Figure 2C). According to each fNIRS channel's BA label and the anatomical location information from the "Colin27" brain template, all fNIRS channels were divided into the following three ROIs, including the motor area, the prefrontal cortex (PFC) area, and the remaining Brodmann area 8 ( Figure 2C).
The Acupuncture's Behavioral deqi Index Calculation and Statistical Analysis
To quantify the acupuncture's deqi behavioral effect, the Massachusetts General Hospital acupuncture sensation scale (MASS) was administered as the deqi index to quantify the needling sensation in this study (Kong et al., 2007; Yu et al., 2012). According to the MASS index, the VAS scores for the different needling sensations were sorted from highest to lowest for each subject, and the deqi index was calculated from the resulting ranking, where R_i indicates the rank (highest to lowest) of the i-th VAS score and n represents the number of different deqi sensations on the behavior questionnaire. The deqi index was calculated for each subject (Supplementary Table 1).
To evaluate whether there was a significant deqi sensation difference between conditions, paired t-tests were applied to all subjects' deqi data for tactile controls (averaging the two tactile controls) vs. acupuncture manipulation (Figure 2A), and for pre-manipulation tactile control vs. post-manipulation tactile control (Figure 3), respectively. The p-values were corrected with false discovery rate (FDR) correction for multiple comparisons to minimize the risk of type I errors, as sketched below. To verify the validity of the statistical results, the statistical power was also calculated using the free software G*Power.
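For reference, a minimal sketch of a Benjamini-Hochberg FDR correction on placeholder p-values, using statsmodels:

```python
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.021, 0.040, 0.180, 0.520]  # placeholder uncorrected p-values
reject, p_fdr, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(list(zip(p_fdr.round(3), reject)))
```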
fNIRS Data Pre-processing
Because the raw fNIRS data contained various noises, they were pre-processed with MATLAB 17.0 (MathWorks, United States) using the following procedure to improve the signal-to-noise ratio (SNR) (Supplementary Figure 1). Firstly, noisy channels were rejected if any of the following conditions was met: (I) the optical intensity (OI) fell outside the range 0.5-1,000; (II) the source-detector separation fell outside the range 0 to 45; or (III) the ratio of the mean to the standard deviation of the optical intensity [mean(OI)/std(OI)] was less than 2 (Huppert et al., 2009; Peters et al., 2020). Secondly, a subject was rejected for poor signal quality if more than 50% of the subject's channels were rejected, in line with a previous fNIRS study (Hakuno et al., 2020). In total, three subjects (S14, S18, S20) were excluded from the subsequent analysis (Table 1). Thirdly, after excluding the noisy channels and subjects, the optical intensity raw data were converted into optical density (OD) (Supplementary Figure 1A).
To further improve the SNR of HbO and HbR, the following four pre-processing steps were applied (Supplementary Figure 1). In step 1, motion artifacts contained in the OD signals were corrected using principal component analysis (PCA) (Supplementary Figure 1B; Behrendt et al., 2018; Di Lorenzo et al., 2019). In step 2, to remove various physiological noises such as cardiac (∼1 Hz), respiration (∼0.3 Hz), and low-frequency drifts (<0.09 Hz), the OD data were filtered by a 0.01-0.2 Hz Butterworth bandpass filter (Scholkmann et al., 2014; Chen and Glover, 2015; Pinti et al., 2019), in line with previous fNIRS studies (Zuo et al., 2010; Scholkmann et al., 2014; Chen and Glover, 2015). The Mayer wave (∼0.1 Hz) was removed using a 0.1 Hz notch filter (Supplementary Figure 1C; Yucel et al., 2016). In step 3, the OD data were converted into HbO and HbR concentrations by applying the modified Beer-Lambert law (Supplementary Figure 1D; Scholkmann et al., 2014). In step 4, to further extract the functional brain activity, the hemodynamic modality separation (HMS) algorithm was used, and the neural activity related HbO and HbR were obtained (Supplementary Figure 1E; Yamada et al., 2012).
HbO Power Calculation of Each Epoch
After removing the motion artifacts and systemic noise by the pre-processing procedure, the HbO and HbR concentration with high SNR were obtained. Considering that HbO concentration proved to be more reliable and sensitive than HbR in the previous studies (Strangman et al., 2002), HbO signals were used for further analysis. Considering that the root-mean-square (RMS) amplitude of the time domain signal represented the integrated amplitude of major frequencies (Chuang et al., 2008), the RMS of HbO concentration was calculated as HbO power to quantify the cortical response intensity during acupuncture by the following steps. Firstly, the sliding window technique with the moving window length of 5 s without overlap was used to divide the HbO into epochs (Baczkowski et al., 2017). Secondly, to remove the outlier epoch and improve the SNR, the noisy outlier epoch was rejected if its absolute Z-score of HbO concentration was larger than 3. Finally, the HbO power of each epoch was calculated by using the RMS method (Supplementary Figure 1F).
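A minimal sketch of this epoching and RMS power computation, on a synthetic HbO trace (the interpretation of the z-score rejection as per-sample within each epoch is an assumption):

```python
import numpy as np

fs = 9                              # fNIRS sampling rate, Hz
hbo = np.random.randn(fs * 300)     # synthetic 5-min HbO trace
win = 5 * fs                        # 5-s non-overlapping windows

epochs = hbo[: len(hbo) // win * win].reshape(-1, win)

# Reject epochs containing any sample with |z-score| > 3 (assumed criterion)
z = (epochs - hbo.mean()) / hbo.std()
clean = epochs[(np.abs(z) <= 3).all(axis=1)]

power = np.sqrt((clean ** 2).mean(axis=1))   # RMS power per epoch
```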
Hemodynamic Responses Analysis
To compare the differences between conditions, the averaged mean time series of HbO concentration and HbO power were computed across all subjects and visualized for a typical channel (Figures 4A,B,D). To further assess whether there was a significant difference between conditions, a one-way analysis of variance (ANOVA) was conducted on the HbO power data with condition (pre-acupuncture rest control, pre-manipulation tactile control, acupuncture manipulation, post-manipulation tactile control, and post-acupuncture rest control) as a factor (Figure 4C), in which the data samples were each condition's HbO power data of all epochs across all subjects. Post-hoc t-tests were conducted when the main effects or interactions in the ANOVA were significant. To minimize the risk of type I errors, the p-values were corrected with FDR correction for multiple comparisons. To verify the validity of the statistical results, the statistical power was also calculated using the free software G*Power. The above statistical analysis was computed using MATLAB 17.0 (MathWorks, United States).
Statistical Mapping for Acupuncture Responsive Area
To investigate the significant acupuncture responsive areas at the group level, the HbO power data of all epochs within different conditions across all subjects were taken as samples for statistical analysis. Considering that noisy outlier epochs were rejected previously and that the duration of different conditions varied, the data sample size of different conditions was unbalanced. Therefore, two-sample t-tests were conducted on the HbO power data between acupuncture manipulation and all controls for each channel. To minimize the risk of type I errors, the p-values were corrected with FDR correction for multiple comparisons and were visualized on the standard brain template (Figure 5A). The set of significant acupuncture responsive channels (p < 0.05) was defined as the acupuncture responsive areas (Supplementary Figure 2A), which could be divided into the PFC area and motor area according to the label of each channel. As a comparison, two-sample t-tests were also conducted between tactile controls and rest controls (Figure 5B). In addition, to map the strength of the hemodynamic responses during acupuncture, the difference in the group-averaged HbO power between acupuncture manipulation and all controls was calculated as the HbO power change. To verify the validity of the statistical results, the statistical power was also calculated using the free software G*Power. The HbO power change was then visualized for each channel on the standard brain template (Supplementary Figure 2B).
Correlation Analysis Between Hemodynamic Responses and Behavioral Performances
To reveal the relationship between hemodynamic responses and acupuncture behavior performances for the acupuncture responsive areas, Pearson's correlation and linear regression were performed between the deqi behavior data and HbO power change for all acupuncture significant responsive channels from the PFC and motor areas, respectively. To verify the validity of the correlation results, the statistical power was also calculated by using the free software G * power. The previously rejected noisy channels were not included during the correlation analysis. For the linear regression, the 95% confidence intervals for the mean of the polynomial evaluation were computed and visualized (Supplementary Figure 3).
Functional Connectivity and Statistical Analysis
To investigate the information flows in significant acupuncture responsive areas, functional connectivity analysis was conducted using the following steps.
Step 1, to obtain the subject level channel-to-channel functional connectivity, the significant acupuncture responsive channels were paired off to calculate the Pearson's correlation coefficients for each subject (Zhang et al., 2011). The whole time series' HbO concentration of acupuncture manipulation and all controls were used in the functional connectivity analysis, respectively. The previously rejected noisy channels were not included during this analysis.
Step 2, to further determine the significant channel-to-channel functional connectivity for each subject, a non-parametric permutation test was used as follows (van den Heuvel et al., 2019). Each channel's HbO concentration time series was divided into 2-s epochs for each subject's acupuncture manipulation and all controls. Then, a 1,000-times permutation resampling method (in which the channel pairs' corresponding epochs were shuffled randomly) was used to determine the significant functional connectivity (p < 0.001).
Step 3, each subject's significant functional connectivity data were then subjected to Fisher's r-to-z transformation to yield variates from the normal distribution (Ghafoor et al., 2019). To obtain the group-level channel-to-channel functional connectivity matrices, the z-values were averaged across all subjects, and the group-level z-values were converted back into r-values via Fisher's z-to-r inverse transformation, in line with previous studies (Zhu et al., 2017; Cai et al., 2018; Supplementary Figures 4A,B).
Step 4, to obtain the group level region-to-region functional connectivity, subject level region-to-region functional connectivity data were firstly computed by averaging r-values of pair-wise channels within the same region by the previous Fisher's r-to-z and z-to-r transformation method. Then, the group level region-to-region functional connectivity data were calculated by averaging the region-to-region functional connectivity data of all subjects, which were finally visualized on the standard brain template (Figures 6A,B and Supplementary Figures 4C,D).
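As a rough illustration of the Fisher-transform averaging used in steps 3 and 4, the sketch below (with placeholder per-subject correlations for one channel pair) converts r to z, averages, and converts back:

```python
import numpy as np

# Placeholder per-subject correlations for one channel pair
subject_r = np.array([0.42, 0.55, 0.61, 0.38, 0.50])

group_z = np.arctanh(subject_r).mean()   # Fisher r-to-z, then average
group_r = np.tanh(group_z)               # z-to-r inverse transform
print(f"group-level r = {group_r:.3f}")
```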
Step 5, to reveal the significant group level region-to-region functional connectivity between acupuncture manipulation and all controls, paired t-tests were conducted on subject level channel-to-channel functional connectivity data within the same region across all subjects. To verify the validity of the statistical results, the statistical power was also calculated by using the free software G * power. Significant increased region-to-region functional connectivity was visualized as a red line (Figure 6B).
Step 6, to further compare the functional connectivity changes of different region pairs between acupuncture manipulation and all controls (Supplementary Figure 5), paired t-tests were conducted on subject level channel-to-channel functional connectivity data within different region pairs (PFC-Motor, PFC-PFC, and Motor-Motor) across all subjects.
Network Topological Property Analysis
After calculating each subject's correlation coefficients for the pair-wise channels from the acupuncture responsive areas, the correlation coefficients matrix was obtained for each subject. Based on the calculated correlation coefficients matrix, the functional brain network could be reconstructed between acupuncture manipulation and all controls for each subject, in which channels served as nodes and functional connectivity between channels served as edges. The previously rejected noisy channels were not included during this analysis.
To further reveal the acupuncture's modulation effect on the functional brain network, the graph theory was utilized to describe the functional network's topological structure changes between different conditions (Yu et al., 2018(Yu et al., , 2019. The shortest path length and global efficiency were calculated and used as network metrics for each subject, in which the correlation coefficients matrix was thresholded over a range of sparsity (from 0.1 to 0.9 with step-size of 0.05) to investigate the relationship between network metrics and sparsity (Figure 7; Niu et al., 2013;Geng et al., 2017). In addition, other network metrics, including the local efficiency, clustering coefficient, and small-world, were also calculated, presented in Supplementary Figure 6. These network metrics were computed by a freely available MATLAB TM toolbox, GRaph thEoreTical Network Analysis (GRETNA) (Wang et al., 2015).
For a network with $N$ nodes and $K$ edges, the shortest path length of the network (Figure 7A) was defined as the average of the shortest path lengths between all pairs of nodes in the network (Latora and Marchiori, 2003):
$$L = \frac{1}{N(N-1)} \sum_{i \neq j} d_{ij},$$
where $d_{ij}$ is the shortest path length between node $i$ and node $j$, i.e., the minimum number of edges included in the path that connects these two nodes.
The global efficiency (Figure 7B) was defined as (Latora and Marchiori, 2001):
$$E_{glob} = \frac{1}{N(N-1)} \sum_{i \neq j} \frac{1}{d_{ij}},$$
where $d_{ij}$ is the shortest path length between node $i$ and node $j$.
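For illustration, both metrics can be computed with networkx on a placeholder connected graph standing in for a thresholded connectivity network:

```python
import networkx as nx

# Placeholder connected network standing in for a thresholded connectivity graph
g = nx.connected_watts_strogatz_graph(16, 4, 0.3, seed=2)

L = nx.average_shortest_path_length(g)   # mean of d_ij over node pairs
E_glob = nx.global_efficiency(g)         # mean of 1/d_ij over node pairs
print(f"shortest path length = {L:.3f}, global efficiency = {E_glob:.3f}")
```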
To reveal the significant difference between acupuncture manipulation and all controls, paired t-tests were performed on all subjects' global network metric data for different conditions (Figure 7). To verify the validity of the statistical results, the statistical power was also calculated by using the free software G * power.
Behavior deqi Performances Under Different Conditions
To verify the behavioral effectiveness of acupuncture manipulation, paired t-tests with FDR correction were used to compare the deqi index for different conditions (Figure 3 and Supplementary Table 1). The subjects' deqi index of acupuncture manipulation was significantly larger than that of the pre-manipulation tactile control and the post-manipulation tactile control (**p < 0.01 and statistical power > 0.95) (Figure 3). In comparison, there was no significant difference between the pre-manipulation tactile control and the post-manipulation tactile control (n.s.) (Figure 3). These results indicated that acupuncture manipulation could cause significant deqi behavioral effects compared to tactile controls.
Decreased HbO Responses by Acupuncture Manipulation
To reveal the deqi modulation effects of acupuncture on the cerebral cortex, the HbO concentration was first calculated after pre-processing of the fNIRS data. Subject-averaged group-level results showed a reduced amplitude of fluctuation for HbO concentration during acupuncture manipulation, as shown by a typical channel from the PFC (Figure 4A). Furthermore, the RMS of HbO concentration was computed as the HbO power index to quantify the HbO fluctuation energy. The typical channel's subject-averaged results also showed reduced HbO power during acupuncture manipulation (Figure 4B). Furthermore, a one-way ANOVA on the HbO power showed a significant difference between the five conditions (Figure 4C). The post-hoc t-tests with FDR correction showed that acupuncture manipulation's HbO power was significantly smaller than that of the other four controls (pre-acupuncture rest control, **p < 0.01, statistical power > 0.95; pre-manipulation tactile control, **p < 0.01, statistical power > 0.95; post-manipulation tactile control, p < 0.05, statistical power > 0.8; post-acupuncture rest control, **p < 0.01, statistical power > 0.95) (Figure 4C). This typical channel's HbO power change between acupuncture manipulation and all controls was computed and visualized on the standard brain template (Figure 4D).
Acupuncture Responsive Area
To reveal the areas responsive to acupuncture with deqi, the HbO power response patterns were examined across all channels from the seventeen subjects (Figure 5A and Supplementary Figure 2). Channels from the bilateral PFC and motor cortex showed the strongest decrease in HbO power (Supplementary Figure 2B). Among them, the acupuncture responsive areas were identified as those channels with significantly lower HbO power for acupuncture manipulation versus all controls (acupuncture responsive channels, n = 8; p < 0.05; two-sample t-tests with FDR correction). The channels responsive to acupuncture with deqi were distributed over the PFC and the motor cortex bilaterally, as shown on the standard brain template (Figure 5A) and indicated with channel labels (Supplementary Figure 2A). For the two different controls, no channel showed significantly changed HbO power when comparing tactile controls with rest controls (Figure 5B).
The PFC's Hemodynamic Response Had a Positive Correlation Trend With Behavior Score
To investigate whether there is a relationship between hemodynamic response and behavior performance for acupuncture with deqi responsive areas, Pearson's correlation was performed across all subjects for the bilateral PFC and the bilateral motor cortex, respectively. Interestingly, the HbO power change (acupuncture manipulation -all controls) of the acupuncture with deqi responsive channels from the PFC had a positive correlation trend with acupuncture behavior deqi index (r = 0.234, p < 0.05 and statistical power >0.50) (Supplementary Figure 3A). In contrast, there was no significant correlation for the acupuncture responsive channels from the motor cortex (r = −0.073, n.s.) (Supplementary Figure 3B). These results showed that the PFC's HbO responses could predict the subject's behavior performance during acupuncture manipulation.
The PFC Related Connection Showed Significantly Increased Functional Connectivity
To reveal the information flows among key nodes in the network, functional connectivity analysis was used to explore the neural interactions between the acupuncture with deqi responsive channel pairs. The significant functional connectivity edges (permutation test, p < 0.001) were calculated and visualized for all controls (Figure 6A and Supplementary Figures 4A,C) and acupuncture manipulation (Figure 6B and Supplementary Figures 4B,D). It was shown that there was overall increased functional connectivity strength for acupuncture manipulation, compared with all controls (Figures 6A,B). Importantly, channels over the PFC were found to have significantly enhanced connections with the motor and the PFC under acupuncture manipulation compared with all controls (paired t-tests, alpha = 0.05 and statistical power >0.95) (Figure 6B), which was also verified by subsequent results.
Compared with all controls, PFC-Motor and PFC-PFC showed significantly higher functional connectivity strength under acupuncture manipulation (paired t-tests, **p < 0.01 and statistical power > 0.95) (Supplementary Figures 5A,B). In contrast, the Motor-Motor connection showed no significant difference between acupuncture manipulation and all controls (paired t-tests, n.s.) (Supplementary Figure 5C).
Topological Properties of the Cortical Functional Network
To further reveal acupuncture with deqi's modulation effects on the functional brain network, graph theory analysis was performed on the functional connectivity network with the acupuncture with deqi responsive channels as nodes. The topological properties of the network were revealed by different global network metrics with different sparsity thresholds (Figure 7). To reveal the information transmission ability of the network, the shortest path length and global efficiency were showed. Compared with all controls, the shortest path length parameter showed a lower tendency. It was significantly lower within 0.4-0.5 sparsity range for acupuncture manipulation compared with all controls (paired t-tests, alpha = 0.05 and statistical power >0.95) (Figure 7A). On the contrary, the global efficiency parameter showed a higher pattern and was significantly higher within the 0.35-0.55 sparsity range for acupuncture manipulation (paired t-tests, alpha = 0.05 and statistical power >0.95) (Figure 7B). In addition, other network metrics, including the local efficiency, clustering coefficient, and small-world, did not have a significant difference between acupuncture manipulation and all controls (Supplementary Figure 6).
A Cooperative PFC-Motor Functional Network for Acupuncture
As the basic acupuncture unit for regulating the central nervous system, the Hegu acupoint was employed in this study to explore the neuromodulation effects of acupuncture. In contrast to most previous studies, which examined the involvement of localized subcortical brain areas during acupuncture (Chen et al., 2006; Dhond et al., 2008; Yu et al., 2017), our results revealed a distributed and cooperative cortical acupuncture-with-deqi network with the PFC and the motor cortex as nodes.
Firstly, acupuncture manipulation caused significant behavioral deqi effects compared to tactile controls (Figure 3). Secondly, by investigating the effects of acupuncture at the Hegu acupoint with deqi on cerebral hemodynamic responses, we found the decreased HbO responses could be a potential biomarker for acupuncture (Figure 4 and Supplementary Figure 2). Thirdly, the bilateral PFC and the motor cortex were found to be acupuncture responsive nodes (Figure 5), and the PFC's hemodynamic responses were significantly correlated with the behavior deqi index (Supplementary Figure 3A). Interestingly, the PFC-related connections in the acupuncture network showed significantly increased functional connectivity during acupuncture manipulation (Figure 6 and Supplementary Figure 5). Finally, the topological properties of the acupuncture network showed that the network efficiency was improved by acupuncture with deqi (Figure 7). Taken together, these results indicate that there is a cooperative PFC-Motor acupuncture functional network, which could provide more neuroimaging evidence for explaining acupuncture with deqi's neuromodulation effects on the human brain.
Decreased HbO Response by Acupuncture Manipulation
By analyzing the hemodynamic responses recorded by fNIRS (Figure 4 and Supplementary Figure 1), the bilateral PFC and the motor cortex were found to show decreased HbO responses during acupuncture manipulation (Figure 5 and Supplementary Figure 2). Previous fMRI-fNIRS simultaneous recording studies have proved that the HbO signal is positively correlated with the blood-oxygen-level-dependent (BOLD) fMRI signal (Strangman et al., 2002). In addition, the deactivation of the BOLD signal recorded by fMRI was further proved to reflect the suppression of neuronal activity by previous studies (Shmuel et al., 2002;Stefanovic et al., 2004). These findings suggest that the observed decrease of HbO response from the bilateral PFC and motor cortex could reflect the suppression of underlying neural activities. Moreover, the bilateral PFC and motor cortex could be network nodes during acupuncture manipulation.
Acupuncture Modulates the PFC and Regulates the Cognition Function
In this study, the bilateral PFC showed significantly decreased HbO responses during acupuncture manipulation (Figure 5 and Supplementary Figure 2), which indicates that acupuncture could modulate the hemodynamic responses of the bilateral PFC. The PFC is an important region for high-level cognition processing, which can provide top-down regulation of attention, emotion, and cognitive control (Arnsten and Rubia, 2012). It has been postulated that acupuncture manipulation could regulate cognitive functions by modulating the bilateral PFC responses, as evidenced by previous studies. Firstly, previous studies have shown that high-level cognitive functions could be modulated using neuromodulation methods such as TMS (Curtin et al., 2019a), tDCS (Di Rosa et al., 2019), and acupuncture stimulation (Zheng et al., 2018). For example, previous fNIRS-TMS and fNIRS-tDCS studies found that neuromodulation could improve cognitive performance during working memory tasks (Di Rosa et al., 2019) and speed of processing (SOP) cognitive tasks (Curtin et al., 2019a), both evidenced by the reduced amplitude of HbO concentration in the PFC (Curtin et al., 2019a; Di Rosa et al., 2019). Secondly, the dorsolateral prefrontal cortex (DLPFC) is a key node of the central executive network (CEN), which is responsible for high-level cognitive and attentional control (Menon and Uddin, 2010). Therefore, it could be further speculated that the CEN could be modulated by acupuncture manipulation, which helps to improve the brain's cognitive functions. Finally, previous fMRI studies found that acupuncture modulated BOLD responses and could improve cognitive performance for stroke patients (Zhang et al., 2014), mild cognitive impairment patients, and Alzheimer's disease (AD) patients (Zheng et al., 2018), which supports our fNIRS findings. In addition, although acupuncture responsive channels were distributed over the PFC and the motor cortex bilaterally, the PFC showed a left-hemisphere-dominant response tendency. As all subjects were right-handed in this study, we speculate that this might be related to right-handedness, which needs to be explored in future studies. By modulating bilateral PFC responses, acupuncture manipulation could help regulate cognitive functions.
The PFC's HbO Response as an Objective Biomarker for Acupuncture
The needling deqi sensation that subjects experience is considered a valuable behavioral index for evaluating the effect of acupuncture in TCM (Yang et al., 2013). However, the current behavioral evaluation of deqi is subjective and insufficient, and lacks an objective neural biomarker. In this fNIRS study, we found that the PFC showed a positive correlation trend between the acupuncture deqi behavioral index and the HbO power change (acupuncture manipulation minus all controls) (Supplementary Figure 3), suggesting that the PFC's HbO responses could predict a subject's behavioral performance during acupuncture manipulation. This result indicates that the PFC's HbO response could potentially be used as an objective neural biomarker to evaluate the effects of acupuncture at the Hegu acupoint.
Modulation of the Motor Cortex and the Analgesic Effect of Acupuncture
In this study, the bilateral motor cortex showed significantly decreased HbO responses during acupuncture manipulation (Figure 5 and Supplementary Figure 2), indicating that acupuncture can modulate the hemodynamic responses of the bilateral motor cortex. Considering that the motor cortex is an important region for pain processing and control (Volz et al., 2015; Lopes et al., 2019), we argue that by modulating the bilateral motor cortex's HbO responses, acupuncture manipulation could produce analgesic effects. Firstly, it is well known that the motor cortex plays a role in pain control (Ostergard et al., 2014), and electrical stimulation of the motor cortex can treat intractable pain syndromes effectively (Arle and Shils, 2008; Henssen et al., 2019). Secondly, previous fNIRS studies have also shown that analgesic effects can be delivered during acupuncture manipulation, during which decreased hemodynamic responses of the motor cortex were observed (Takamoto et al., 2010; Rojas et al., 2019). The decreased motor activity shown by these fNIRS studies is in line with our findings. Taken together, acupuncture manipulation could suppress excessive motor cortex activity, which helps explain the analgesic effects of acupuncture.
The Bilateral PFC as Network Hub During Acupuncture
Here, the bilateral PFC was found to have significantly increased functional connections with the motor cortex (PFC-Motor) and within the PFC (PFC-PFC) during acupuncture manipulation compared with all controls (Figure 6 and Supplementary Figure 5). We argue that the bilateral PFC works as a network hub in the cooperative PFC-Motor functional network during acupuncture manipulation. We further postulate that the increased PFC-related functional connectivity could not only improve human cognitive ability but also produce an analgesic effect through acupuncture manipulation.
Functional connectivity has been widely used to study cognitive functions (Zeng et al., 2012; Cohen, 2018). Moreover, previous studies showed that human cognitive ability could be improved during acupuncture with deqi, as evidenced by increased functional connectivity (Dhond et al., 2008; Bai et al., 2009; Yu et al., 2017). In this study, the observed increase in functional connectivity with the PFC as a node indicated that long-distance information transmission through the PFC network hub was enhanced. Therefore, acupuncture with deqi could strengthen communication between remote brain areas within the large-scale network and improve cognitive function.
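Since the functional connectivity used here is the Pearson correlation between channel time courses (as noted in the limitations below), a minimal sketch of that computation may help make the analysis concrete. The channel count, the simulated data, and the Fisher z-transform step are illustrative assumptions rather than parameters of this study.

```python
import numpy as np

def functional_connectivity(hbo, fisher_z=True):
    """Channel-wise functional connectivity from fNIRS HbO time series.

    hbo : array of shape (n_channels, n_samples), preprocessed HbO signals.
    Returns an (n_channels, n_channels) matrix of Pearson r values,
    optionally Fisher z-transformed for group-level statistics.
    """
    fc = np.corrcoef(hbo)                 # pairwise Pearson r between channels
    np.fill_diagonal(fc, 0.0)             # ignore self-connections
    if fisher_z:
        fc = np.arctanh(np.clip(fc, -0.999999, 0.999999))  # variance-stabilize
    return fc

# Hypothetical example: 24 channels, 60 s of data sampled at 10 Hz
rng = np.random.default_rng(0)
hbo = rng.standard_normal((24, 600))
fc = functional_connectivity(hbo)
print(fc.shape)  # (24, 24)
```

Group contrasts (e.g., acupuncture manipulation versus controls) would then be computed on the Fisher z value of each connection.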
A previous study showed that the PFC-Motor functional connection plays a role in pain control (Peng et al., 2019). In our study, acupuncture manipulation significantly strengthened the PFC-Motor connections. Therefore, we postulate that the motor cortex might receive increased top-down modulation from the PFC, which could produce an analgesic effect through recruitment of the descending pain inhibition system (Peng et al., 2019). The bilateral PFC is thus a network hub in the cooperative PFC-Motor functional network for acupuncture, and the increased functional connections of the bilateral PFC during acupuncture with deqi might play a role in improving human cognitive ability and producing analgesic effects.
Acupuncture Improves the Global Efficiency of the PFC-Motor Functional Network
Compared with all controls, the topological properties of the network showed a lower shortest path length (Figure 7A) and higher global efficiency (Figure 7B) during acupuncture manipulation (Figure 7). We argue that acupuncture manipulation could improve information transmission in the PFC-Motor functional network. Firstly, global efficiency is a global parameter describing a network's capacity for parallel information exchange between nodes via multiple edges (Rubinov and Sporns, 2010). A higher global efficiency shows that the PFC and motor cortex are well integrated and that information transfer over the PFC-Motor network is made more efficient by acupuncture manipulation (Fleischer et al., 2019). Secondly, the path length is a measure of the processing steps along the path of information transfer between nodes (Rubinov and Sporns, 2010). Since fewer processing steps denote more rapid and accurate information communication, a lower shortest path length indicates a higher level of communication efficiency in the PFC-Motor network under acupuncture (Kaiser and Hilgetag, 2006).
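For readers unfamiliar with these graph measures, the sketch below computes the characteristic (shortest) path length and global efficiency from a weighted connectivity matrix. The proportional threshold and the 1/weight edge-length convention are common choices for correlation-based networks, assumed here for illustration rather than taken from this study's pipeline.

```python
import numpy as np

def path_length_and_efficiency(fc, threshold=0.2):
    """Characteristic path length and global efficiency of a weighted network.

    fc : symmetric (n, n) connectivity matrix (e.g., Pearson r between nodes).
    Edges below `threshold` are removed; surviving edge lengths are 1/weight.
    """
    n = fc.shape[0]
    w = np.where(fc >= threshold, fc, 0.0)
    dist = np.full((n, n), np.inf)
    dist[w > 0] = 1.0 / w[w > 0]              # edge length = inverse weight
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                         # Floyd-Warshall shortest paths
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    d = dist[~np.eye(n, dtype=bool)]           # off-diagonal distances
    char_path = d[np.isfinite(d)].mean()       # characteristic path length
    glob_eff = np.where(np.isfinite(d), 1.0 / d, 0.0).mean()  # global efficiency
    return char_path, glob_eff

# Hypothetical usage on a random symmetric connectivity matrix
rng = np.random.default_rng(1)
a = rng.random((24, 24))
print(path_length_and_efficiency((a + a.T) / 2))
```

A manipulation that raises edge weights shortens path lengths and raises global efficiency, which is the pattern reported in Figure 7.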
We further postulate that acupuncture manipulation could improve cognitive ability by improving the global efficiency of the network. Firstly, the functional brain network attained a smaller path length and a larger global efficiency under acupuncture manipulation, indicating enhanced processing efficiency in the brain network (Yu et al., 2017). Secondly, a previous study found that acupuncture could improve the overall processing efficiency of the brain network among MCI patients, reflecting the improvement of cognitive function by acupuncture manipulation (Ghafoor et al., 2019). Acupuncture manipulation therefore improves information transmission in the PFC-Motor functional network and could enhance human cognitive ability, which could also explain the increased functional connections of the PFC-Motor network (Figure 6 and Supplementary Figure 5).
Limitations and Future Work
There are several potential limitations to this study. Firstly, more participants should be recruited in future studies to reach higher statistical power and provide more evidence for exploring the relationship between hemodynamic responses and acupuncture behavioral performance. For example, although the bilateral PFC's hemodynamic response showed a positive correlation trend with acupuncture behavioral performance (Supplementary Figure 3), this finding should be further validated by recruiting more participants in a follow-up study. Secondly, although a quasi-experimental design with pre/post tactile and rest controls was employed in this study, a more rigorous parallel-control experimental design should be conducted in the future, which would make the conclusions more convincing. Thirdly, this study explored the effects of deqi on cortical networks; future studies should be conducted with patients to add more neuroimaging evidence for explaining the clinical therapeutic effects of acupuncture with deqi. Fourthly, due to the limited channel number of the current fNIRS device, the channel layout could not completely cover the whole brain. In the future, a whole-brain-coverage fNIRS device should be used to explore the neural mechanism of acupuncture further. Finally, our study used Pearson's correlation coefficients as functional connectivity, which cannot reveal causal relationships between brain regions. In future studies, Granger causality analysis could be used to quantify the strength of directed effective connectivity between brain regions.
CONCLUSION
Despite acupuncture's long history and public acceptance, the neural mechanisms underlying acupuncture's neuromodulation effects on the human brain remain largely unclear. Using multi-channel fNIRS recordings of the human brain, our study found that acupuncture modulated a distributed, cooperative PFC-Motor cortical network with the bilateral PFC and the motor cortex as key nodes. Taking the Hegu acupoint, a basic neural acupuncture unit, as an example, acupuncture manipulation not only modulated the hemodynamic responses of the bilateral PFC and motor cortex but also regulated the functional connectivity and efficiency of the PFC-Motor cortical network. Our study contributes objective neuroimaging evidence for explaining acupuncture's neuromodulation effects on the human brain, which also contributes to Traditional Chinese Medicine.
DATA AVAILABILITY STATEMENT
The data and codes that support the findings of this study are available upon request.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board and Ethics Committee of Tianjin University. The patients/participants provided their written informed consent to participate in this study.
Myocarditis and cardiomyopathy after arbovirus infections (dengue and chikungunya fever).
Ivor Obeyesekere and Yvette Hermon, from the Cardiology Unit, General Hospital, Colombo, and the Virus Laboratory, Medical Research Institute, Colombo, Ceylon
The varying clinical features and sequelae in 10 patients suffering from myocarditis after dengue and chikungunya fever caused by arboviruses are described. These patients had significantly raised antibody levels in the serum and gave a recent history of 'dengue-like' fever. A few patients had a favourable outcome with disappearance of symptoms, improvement in the electrocardiogram, and no residual cardiomegaly. Others had persistent symptoms, electrocardiographic changes, and cardiomegaly which suggested a transition to cardiomyopathy, a chronic cardiac disorder. It seems likely that arbovirus infections play a significant role in the aetiology of the cardiomyopathies which are common in Ceylon.
Myocarditis or inflammation of the myocardium refers to myocardial lesions and symptoms associated with infectious diseases and specific infections. Among the reported causes are bacterial, viral, mycotic, protozoal, and Rickettsial infections, parasitic infestations, and specific bacterial toxins. Of the viruses, Coxsackie B, and less frequently viruses causing infectious mononucleosis, infectious hepatitis, mumps, measles, poliomyelitis, aseptic meningitis, and encephalomyocarditis, have been reported to be pathogenic to man. While dengue and chikungunya fever are well recognized, there are very few published reports concerning their cardiac complications (Boon and Tan, 1967; Hyman, 1943). Myocarditis, which was thought to be benign, is now known to lead to a more chronic disorder, cardiomyopathy (Bengtsson, 1968). This paper describes the varying clinical features and sequelae of patients suffering from myocarditis in association with dengue and chikungunya fever caused by arboviruses, referred to as 'arbovirus myocarditis'.
Subjects and methods
For the purpose of this study 10 patients were selected in whom a diagnosis of arbovirus myocarditis was made. The following diagnostic criteria were used: (1) clinical evidence of myocarditis; (2) the presence of electrocardiographic evidence of myocarditis, ST segment and T wave changes, and disturbances in conduction and rhythm; (3) a recent history of 'dengue-like' fever; (4) serological evidence of past dengue or chikungunya infection as revealed by the presence of antibody in high titre.
The study included the history, physical examination, serial electrocardiograms, serial X-rays of the chest, and cardiac catheterization in two patients. The laboratory tests included routine examination of urine and blood, estimation of transaminases, erythrocyte sedimentation rates, complement-fixation tests for toxoplasmosis and filariasis, antistreptolysin titres, and tests for systemic lupus erythematosus.
Serological tests for arbovirus antibody were carried out by means of the haemagglutination-inhibition test to determine evidence of past arbovirus infection. Simultaneously, the same test was carried out on 3 control groups of patients also admitted to the Cardiology Unit: (1) those with rheumatic heart disease, (2) those suffering from coronary heart disease, with characteristic electrocardiographic changes and transient rises in activity of serum enzymes (SGOT), and (3) those with congenital heart disease. The complement-fixation test was done on 10 patients suffering from myocarditis who showed high haemagglutination-inhibition antibody levels. The antigens used and the serological techniques are described in the Appendix.
Results
The results of haemagglutination-inhibition and complement-fixation tests in 10 patients with myocarditis are given in Table 1. All these (except Case 9) had haemagglutination-inhibition antibody levels of 1:640 or greater to either dengue (Type I or II) or chikungunya, and all (except Case 1) had complement-fixation antibody levels of 1:64 or greater. Ten other patients with myocarditis had haemagglutination-inhibition titres of 1:320 and above to dengue or chikungunya. Table 2 gives the haemagglutination-inhibition and complement-fixation test results in the 3 control groups. There was a striking and highly significant difference, with only one patient with a haemagglutination-inhibition titre of 1:320 and two with titres of 1:160. A serological diagnosis of arbovirus infection necessarily depends on the examination of paired sera, the first sample being collected in the acute phase and the second convalescent sample 2 or 3 weeks later. The criteria for serological diagnosis have been described (Nimmannitya et al., 1969). Primary and secondary or recurrent infection are recognized according to the haemagglutination-inhibition and complement-fixation antibody responses. In the 10 patients described in this paper, a diagnosis of acute arbovirus infection was not initially sought. By the time these patients presented themselves at the Cardiology Unit they already had established cardiac disorders which followed recent 'dengue-like' illnesses. Five of these patients (Cases 1, 5, 6, 7, and 8) also gave definite histories of similar attacks of fever occurring months or years earlier. Only evidence of past arbovirus infection was, therefore, sought and a single sample of blood was sent for detection of arbovirus antibody. The results revealed that all these patients had high dengue and chikungunya haemagglutination-inhibition and complement-fixation antibody which confirmed a past arbovirus infection. It is postulated that dengue and chikungunya fevers, with their prominent myalgic manifestations, involved the myocardium and triggered off the chain of symptoms resulting in acute myocarditis followed, in some cases, by cardiomyopathy.
All the 10 patients with acute myocarditis gave a history of 'dengue-like' illness in the recent or distant past. It was not possible, by determining the haemagglutination-inhibition antibody levels of single sera, to determine whether the infections were primary or secondary dengue. However, the complement-fixation test results confirmed that all these were secondary or recurrent infections as there was antibody to both Types I and II dengue. Two patients (Cases 1 and 6) showed high haemagglutination-inhibition and complement-fixation titres to chikungunya as well as dengue haemagglutination-inhibition and complement-fixation antibody, which indicated infections with both viruses in the past. The main clinical features and sequelae of the 10 patients with arbovirus myocarditis are illustrated by the case histories.
Case histories
Case 1 A 43-year-old housewife developed sharp pain over the praecordium, difficulty in taking a deep breath, profuse sweating, and dizziness.
A few days previously and also three months before she had had fever with recurrent headaches, backache, and pains in the joints.
On examination, temperature was 37.4°C, pulse rate 78 a minute with frequent ectopics, and blood pressure 130/70 mmHg. Both heart sounds were heard. Crepitations were heard at the base of the left lung. She ran a low temperature for five days and complained of extreme fatigue and breathlessness. The ventricular ectopics persisted for four weeks. Two months later, she was symptom free; there were no abnormal physical signs in the heart.
Comments
This woman presented with severe chest pain simulating coronary heart disease. She had extensive electrocardiographic changes, a raised erythrocyte sedimentation rate, normal serum enzymes, and positive serology for arbovirus infection. The pain was probably pericardial in origin. She made a satisfactory recovery within two months with no symptoms, return of the electrocardiogram to normal, and no residual cardiomegaly.
Case 2 An unmarried woman aged 24 years presented with fever of 10 days' duration, generalized aches and pains, painful joints, palpitations, and sharp pain over the praecordium made worse by deep breathing.
On examination, she looked ill and pale, respirations were rapid, temperature was 40°C, and the pulse was 124 a minute and irregular. Blood pressure was 110/80 mmHg. On auscultation there was a gallop rhythm with a prominent third heart sound, and a soft systolic murmur over the mitral area.
Six months later she had remained well apart from a relapse of fever and tachycardia. She complained of slight breathlessness on effort and occasional palpitations. The pulse rate was 124 a minute and blood pressure 110/60 mmHg. On auscultation there was a gallop rhythm with a prominent third heart sound.
Comments
This woman presented with high fever, chest pain, difficulty in breathing, an irregular pulse due to AV dissociation, and atrial ectopics and gallop rhythm. Six months later she continued to have mild symptoms and the electrocardiogram showed sinus tachycardia; the heart size was normal and the gallop rhythm persisted.
Case 3 A married woman aged 33 years complained of increasing fatigue, headache, dyspnoea on exertion, and dizziness of six weeks' duration after a low fever with generalized aches and pains.
On auscultation both heart sounds were heard; there were no added sounds. While in hospital she had a recurrence of fever.
Six months later she complained of dyspnoea and undue fatigue on effort. Pulse rate was 72 a minute, blood pressure 130/80 mmHg, and heart sounds were normal.
Comments
This patient, who gave a history of six weeks' disability following low-grade fever, had occasional ventricular ectopics, an abnormal electrocardiogram, radiographic evidence of cardiomegaly, and a very high dengue antibody titre. The persisting electrocardiographic changes and cardiomegaly suggest transition from acute myocarditis to cardiomyopathy.
Case 4 A married woman aged 48 years was first seen in March 1971 with a history of chest pain accompanied by difficulty in breathing, sweating, and fainting. In November 1970 she had dengue fever, 3 months after which she had recurrent substernal chest pain, palpitations, and breathlessness on exertion, which progressively increased.
On examination she looked anxious and pale, the pulse was 96 a minute, blood pressure 110/80 mmHg, and there was a gallop rhythm with a prominent third heart sound. Six months later she continued to have breathlessness, chest pain, and fatigue on undue exertion.
Comments
This 48-year-old woman gave a history of dengue fever. She continued to have persistent mild symptoms, electrocardiographic changes, and cardiomegaly suggesting cardiomyopathy. Her 5-year-old daughter (Case 10), who gave a similar history, had a high titre of antibodies for dengue and changes in the electrocardiogram.
Case 5 A married housewife aged 32 years was first seen in February 1971 complaining of recurrent chest pain, breathlessness, and fatigue on mild exertion.
In early December 1970, she had high fever of sudden onset accompanied by headache, backache, painful joints, and sharp substernal chest pain. A month later she had a relapse of fever and more severe chest pain, during which she blacked out, sweated, vomited, and felt dizzy. She was treated for 'benign pericarditis'. She continued to have substernal discomfort on exertion, palpitations, mild breathlessness, and undue fatigue.
On examination, pulse rate was 88 a minute with occasional ectopics, and blood pressure was 90/70 mmHg. There was a distinct third heart sound.
Six months later she continued to have symptoms; the cardiac enlargement and electrocardiographic changes persisted.
Comments
This 32-year-old woman with a well-documented history of 'benign pericarditis' and a positive serology for dengue continued to have symptoms, persistent cardiomegaly, and electrocardiographic changes suggesting cardiomyopathy.
Case 6 A woman aged 44 years, mother of 10 children, was admitted to hospital in February 1971, with a history of increasing breathlessness on effort, nocturnal dyspnoea, dizziness, and undue fatigue of 6 weeks' duration, after 'dengue-like' fever.
On examination, she was mildly orthopnoeic, pulse rate was 92 a minute and irregular, jugular venous pressure was raised 4 cm, and blood pressure was 160/100 mmHg. On auscultation there was a gallop rhythm with a prominent third heart sound. There were scattered rhonchi and crepitations heard over both lung bases. The liver was palpable 4 cm.
She had a relapse of fever in hospital. Treated with strict bed rest for four weeks, she became free of symptoms; the pulse rate was between 68 and 80 a minute, blood pressure was normal, and the adventitious sounds in the lungs cleared. She had a relapse of fever three months later with worsening of symptoms. Six months later she had signs of early congestive failure, sinus tachycardia (rate 120 a minute), raised jugular venous pressure, a blood pressure of 150/90 mmHg, gallop rhythm with a prominent third heart sound, hepatomegaly, and ankle oedema.
Comments
This multiparous female presented with impaired effort tolerance and nocturnal dyspnoea after chikungunya fever. She had sinus tachycardia, frequent ventricular ectopics, gallop rhythm, cardiomegaly, and an abnormal electrocardiogram. Six months later she had signs of congestive cardiac failure. She provides an example of congestive cardiomyopathy after chikungunya fever.
Case 7 A male medical colleague was aged 32 years when first seen in 1953 with recurrent chest pain and palpitations after a 'dengue-like' fever. He had an irregular pulse due to frequent ventricular ectopics. Virus serology was not done. The electrocardiogram showed non-specific T wave inversion over leads II, III, aVF, and V5-V6. He suffered from palpitations for many years after this but continued to lead an active life. The ventricular ectopics and the electrocardiographic changes persisted, with a deep S wave in V2 (30 mm) and a prominent R wave in V5 (25 mm).
Teleradiogram confirmed the presence of cardiomegaly.
In January 1971, after an attack of dengue fever, he noticed increasing breathlessness on exertion, ankle oedema, effort syncope, dizzy spells, and easy fatigue. On examination the jugular venous pressure was raised, the pulse 40 a minute, and blood pressure 160/80 mmHg. There was a loud late systolic murmur over the mitral area. There were crepitations at both lung bases; hepatomegaly and ankle oedema were present.
Seen six months later, he continued to have symptoms and manifested evidence of early congestive heart failure, persisting cardiomegaly, and an abnormal electrocardiogram.
Comments
A 32-year-old medical colleague developed cardiomegaly and an abnormal electrocardiogram (cardiomyopathy) after a 'dengue-like' fever in 1953. The abnormal electrocardiogram and cardiomegaly remained unchanged. He continued to lead an active but sedentary life for 18 years. After an attack of dengue fever in 1971 he developed cardiac failure with evidence of congestive cardiomyopathy.
Case 8 A 16-year-old girl was first seen in 1968 with a history of increasing breathlessness, chest pain, palpitations, and oedema of the vulva and both lower limbs.
In 1962 she had a 'dengue-like' fever, with painful swelling of the knee and ankle joints, after which she developed mild exertional dyspnoea. She had recurrent attacks of fever between 1962 and 1968.
She had no cardiac murmurs and the ASO titre was repeatedly negative.
On examination she was underweight and mildly orthopnoeic, pulse rate 84 a minute, pulsus paradoxus was present, the jugular venous pressure was raised 14 cm, and blood pressure was 100/80 mmHg. The cardiac impulse was not palpable. There was wide splitting of the pulmonary second sound and a gallop rhythm with a loud third heart sound. There was a right-sided pleural effusion, a palpable liver (8 cm), and generalized anasarca, with oedema of the face and ascites.
She was treated with digitalis and diuretics.
With time she became progressively breathless, and her generalized oedema became more difficult to control necessitating larger doses of diuretics.
Comments
This young girl, who gave a history of recurrent dengue, developed evidence of cardiac constriction confirmed by cardiac catheterization.
Case 9 A 46-year-old bus driver was referred for investigation because of an irregular pulse. Three months previously he had had high fever and then noticed breathlessness after severe exertion. He smoked 20 cigarettes daily and had consumed 1 bottle of arrack daily for 5 years. On examination he was underweight, the pulse was irregularly irregular at a rate of 88 a minute, and blood pressure was 110/70 mmHg. There was a gallop rhythm with a prominent third heart sound best heard over the mitral area.
Comments
Employees of the Ceylon Transport Board are examined periodically. This man was quite normal until he developed fever 3 months previously, after which he developed atrial fibrillation and cardiomegaly. The positive viral serology indicated recent dengue and chikungunya fever.
The persisting cardiomegaly and arrhythmia with no evidence of heart failure suggests transition from acute myocarditis to 'precongestive' cardiomyopathy. His heavy intake of alcohol may have contributed.
Case 10 This 5-year-old daughter of Case 4 developed a 'dengue-like' fever with her mother in November 1970, complaining of severe chest pain and difficulty in breathing. She continued to be listless, easily tired, lethargic, and disinclined to play for six months. On examination in May 1971 she looked pale and underweight. Pulse was 100 a minute and blood pressure 90/70 mmHg; heart sounds were normal.
Comments
This 5-year-old daughter of Case 4 developed dengue fever and symptoms of acute myocarditis. She had persistent electrocardiographic changes and serology positive for dengue fever.
Laboratory investigations The erythrocyte sedimentation rate done on 9 patients was raised, with readings in the first hour between 20 and 40 mm in 5, between 40 and 90 mm in 2, and above 90 mm in 2. The haemoglobin levels ranged between 9.6 and 13.7 g/100 ml. Serum aspartate aminotransferase (SGOT) and antistreptolysin titres were within normal limits. Blood tests for antinuclear factor and microfilariae, and the haemagglutination test for toxoplasmosis, were negative. Blood picture studies were normal.
Electrocardiograms The electrocardiograms (Table 3) showed rhythm disturbances including sinus tachycardia, sinus bradycardia, atrioventricular conduction disturbances, and atrial fibrillation. Atrial and ventricular ectopic beats were common in the acute phase and during convalescence. T wave abnormalities and ST segment changes were commonly encountered indicating focal myocardial damage.
Radiology of the heart The cardiothoracic ratios determined from serial radiographs of the chest are described in Table 3. Cardiomegaly was common and often persisted long after the acute illness was over.
The findings at right and left heart catheterization and cineangiography on Case 8 in October 1968 are shown in Table 4.
Cineangiocardiography with injection of contrast into the left ventricular cavity showed a mild degree of mitral incompetence with thickening of the left ventricular wall. The left ventricle emptied satisfactorily.
Discussion
Dengue fever, caused by a group B arbovirus, has been endemic in Ceylon for many years (Mendis, 1967), and dengue virus (Type I) has been isolated (Hermon, Anandarajah, and Pavri, 1970). Chikungunya fever, which is caused by a group A arbovirus, appeared in epidemic proportions in 1965 and was confirmed virologically (Hermon, 1967). A haemagglutination-inhibition serological island-wide survey carried out in 1966-67 revealed that group B arbovirus infections were endemic throughout the island, while chikungunya infections were less prevalent (Vesenjak-Hirjan, Hermon, and Vitarana, 1969). It is important to recognize that the heart can be affected, since these fevers are extremely common in certain areas, with an increasing incidence. An appreciable number of patients have persisting symptoms, cardiomegaly, and electrocardiographic changes long after the initial illness giving rise to the cardiomyopathy has been forgotten.
Viruses may invade the myocardium and directly damage the muscle fibres or give rise to a hypersensitivity or autoimmune reaction causing myocardial damage. Moreover, this altered state of the myocardium may persist long after the initial virus infection is over and make it prone to recurrent damage from other agents.
The clinical features in myocarditis are often so vague and non-specific that unless one is aware of its existence in relation to a particular infection, it can often be missed. The signs may be minimal and limited to an irregularity in heart rhythm or minor changes in the electrocardiogram. It may be mistaken for another disease. Two patients complained of severe chest pain which simulated coronary heart disease (Cases 1 and 4). Another had progressive symptoms which led to congestive cardiac failure with cardiomegaly, gallop rhythm with slight rise of blood pressure, and was mistaken for hypertensive cardiac failure (Case 6).
Early in the disease, the erythrocyte sedimentation rate was raised. The serum enzymes were normal and, being negative, helped to exclude acute myocardial infarction. Often the earliest clue was provided by abnormalities in the electrocardiogram: multiple ectopics, and ST segment and T wave changes. These non-specific changes had to be differentiated from similar changes caused by coronary heart disease, digitalis intoxication, abnormalities after electrolyte disturbances, and hyperventilation due to anxiety. Serial changes in the electrocardiogram were often the best guide to prognosis.
Cases 1, 2, and 10 had a favourable outcome, with gradual disappearance of symptoms, improvement in the electrocardiogram, and no residual cardiomegaly. Cases 3, 4, and 5 became symptom free but had permanent changes in the electrocardiogram and persisting cardiomegaly, developing 'precongestive' cardiomyopathy. These patients are liable to relapses and subsequent deterioration. In Case 7 this occurred 18 years after the original attack, which, too, was personally treated by one of us (I.O.). Case 6 had persistent symptoms, probably as a result of recurrent subclinical infections, and developed cardiomegaly with heart failure and congestive cardiomyopathy. Case 8 developed signs of right heart failure and evidence of cardiac constriction.
Arbovirus myocarditis was thus seen to lead to cardiomyopathy with eventual impairment of cardiac function. Cardiomyopathy is a common disorder in Ceylon (Obeyesekere, 1968). It seems likely that arbovirus infections play a significant role in its aetiology.
A bipartite signaling mechanism involved in DnaJ-mediated activation of the Escherichia coli DnaK protein.
The DnaK and DnaJ heat shock proteins function as the primary Hsp70 and Hsp40 homologues, respectively, of Escherichia coli. Intensive studies of various Hsp70 and DnaJ-like proteins over the past decade have led to the suggestion that interactions between specific pairs of these two types of proteins permit them to serve as molecular chaperones in a diverse array of protein metabolic events, including protein folding, protein trafficking, and assembly and disassembly of multisubunit protein complexes. To further our understanding of the nature of Hsp70-DnaJ interactions, we have sought to define the minimal sequence elements of DnaJ required for stimulation of the intrinsic ATPase activity of DnaK. As judged by proteolysis sensitivity, DnaJ is composed of three separate regions, a 9-kDa NH2-terminal domain, a 30-kDa COOH-terminal domain, and a protease-sensitive glycine- and phenylalanine-rich (G/F-rich) segment of 30 amino acids that serves as a flexible linker between the two domains. The stable 9-kDa proteolytic fragment was identified as the highly conserved J-region found in all DnaJ homologues. Using this structural information as a guide, we constructed, expressed, purified, and characterized several mutant DnaJ proteins that contained either NH2-terminal or COOH-terminal deletions. At variance with current models of DnaJ action, DnaJ1-75, a polypeptide containing an intact J-region, was found to be incapable of stimulating ATP hydrolysis by DnaK protein. We found, instead, that two sequence elements of DnaJ, the J-region and the G/F-rich linker segment, are each required for activation of DnaK-mediated ATP hydrolysis and for minimal DnaJ function in the initiation of bacteriophage lambda DNA replication. Further analysis indicated that maximal activation of ATP hydrolysis by DnaK requires two independent but simultaneous protein-protein interactions: (i) interaction of DnaK with the J-region of DnaJ and (ii) binding of a peptide or polypeptide to the polypeptide-binding site associated with the COOH-terminal domain of DnaK. This dual signaling process required for activation of DnaK function has mechanistic implications for those protein metabolic events, such as polypeptide translocation into the endoplasmic reticulum in eukaryotic cells, that are dependent on interactions between Hsp70-like and DnaJ-like proteins.
The DnaJ, DnaK, and GrpE proteins of Escherichia coli were first identified via genetic studies of E. coli mutants that are incapable of supporting the replication of bacteriophage λ DNA (1-3). Later, it was found that DnaJ, DnaK, and GrpE are all prominent bacterial heat shock proteins, which comprise a set of about 30 proteins whose expression is transiently induced when cells are grown at elevated temperatures (reviewed in Refs. 4 and 5). In recent years it has become apparent that eukaryotic cells contain families of proteins that are homologous to DnaJ and DnaK (6, 7). Intensive investigations in numerous laboratories have demonstrated that these universally conserved proteins participate in a wide variety of protein metabolic events in both normal and stressed cells, including protein folding (8, 9), protein trafficking across intracellular membranes (10, 11), proteolysis, protein assembly, as well as disassembly of protein aggregates and multiprotein structures (reviewed in Refs. 12 and 13). Because several of these DnaJ and DnaK family members have the capacity to modulate polypeptide folding and unfolding, they have been classified as molecular chaperones (14).
The available evidence indicates that DnaJ, DnaK, and GrpE of E. coli often cooperate as a chaperone team to carry out their physiological roles. Each of these three proteins functions in (i) regulation of the bacterial heat shock response (15, 16); (ii) general intracellular proteolysis (17); (iii) folding of nascent polypeptide chains, maintaining proteins destined for secretion in a translocation-competent state, and disassembly and refolding of aggregated proteins (reviewed in Ref. 18); (iv) flagellum synthesis (19); and (v) replication of coliphages λ and P1 and replication of the F episome (20). DnaK, the primary Hsp70 homologue of E. coli (21), is believed to play a central role in these processes. Like other members of the Hsp70 family, DnaK possesses a weak ATPase activity (22). The DnaK ATPase activity is stimulated by the DnaJ and GrpE heat shock proteins (23, 24), as well as by many small peptides that are at least 6 amino acids in length (24, 25). Peptide interactions with DnaK may be representative of the binding of Hsp70 proteins to unfolded or partially folded polypeptides.
In contrast to the situation for DnaK and Hsp70 proteins, relatively few investigations have focused on DnaJ or other members of the Hsp40 family. It is known from in vitro studies of λ DNA replication that DnaJ participates along with DnaK in the assembly and disassembly of nucleoprotein structures that form at the viral replication origin (26-30). DnaJ may play a dual role in this and other Hsp-mediated processes. In addition to activating ATP hydrolysis by DnaK, DnaJ, by binding first to multiprotein assemblies or nascent polypeptides, may also assist DnaK by facilitating its interaction with polypeptide substrates (26, 30-34).
Comparisons of the amino acid sequences of DnaJ family members have led to the identification of three conserved sequence domains in E. coli DnaJ (35). These sequence domains, proceeding from the amino terminus, are: 1) a highly conserved 70-amino acid region, termed the J-region, that is found in all DnaJ homologues; 2) a 30-amino acid sequence that is unusually rich in glycine and phenylalanine residues; and 3) a cysteine-rich region that contains four copies of the sequence Cys-X-X-Cys-X-Gly-X-Gly, where X generally represents a charged or polar amino acid residue. The COOH-terminal portion of E. coli DnaJ, comprising residues 210-376, is not well conserved.
To investigate the functional roles of the conserved sequence domains of DnaJ, we constructed a series of recombinant plasmids that express truncated DnaJ proteins, each of which carries a deletion of one or more of the conserved sequence elements. Following purification, each deletion mutant protein was examined for its capacity to activate ATP hydrolysis by DnaK. Our results indicate that the J-region and the Gly/Phe-rich segment of DnaJ must both be present in cis in the DnaJ deletion mutant protein to achieve stimulation of DnaK-mediated ATP hydrolysis. We have, however, discovered that the J-region alone is capable of activating ATP hydrolysis by DnaK if it is supplemented in trans with small peptides that have high affinity for the polypeptide-binding site on DnaK. We discuss the relevance of these findings for protein metabolic events mediated in part by cooperative action of Hsp70 homologues and DnaJ-like proteins.
Bacteriophage λ and E. coli Replication Proteins-Bacteriophage λ and E. coli replication proteins, except DnaJ and DnaK, were prepared as described elsewhere (36). The purification schemes for DnaJ protein and the DnaJ deletion mutant proteins are described in this article. The purification of DnaK protein has been described previously (24). DnaK protein elutes in three distinct peaks following chromatography on a Mono-Q resin. All ATPase reactions described in this report were carried out with peak I material, i.e. DnaK protein in the earliest-eluting peak.
Determination of Protein Concentration-The protein concentrations of samples containing partially purified proteins were determined by the method of Bradford (38), using bovine γ-globulin as a standard. The concentrations of purified DnaJ and DnaJ deletion mutant proteins were determined in denaturation buffer, using their individual molar extinction coefficients (εM) as determined by the method of Gill and von Hippel (39). The concentration of GrpE was determined by a modification of the method of Lowry et al. (40) using bovine serum albumin as a standard. The concentration of DnaK was determined using the calculated molar extinction coefficient of the native protein, 15,800 M⁻¹ cm⁻¹ (24).
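As a worked illustration of the extinction-coefficient approach, ε at 280 nm can be estimated from the numbers of Trp, Tyr, and cystine residues, and the concentration then follows from the Beer-Lambert law. The per-residue coefficients below are the values commonly tabulated for the Gill-von Hippel method and its refinements (the exact constants differ slightly between references), and the example composition is hypothetical.

```python
def molar_extinction_280(n_trp: int, n_tyr: int, n_cystine: int) -> float:
    """Approximate molar extinction coefficient at 280 nm (M^-1 cm^-1).

    Commonly tabulated per-residue values; exact constants vary slightly
    between references for the Gill-von Hippel approach.
    """
    return 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine

def protein_concentration(a280: float, epsilon: float, path_cm: float = 1.0) -> float:
    """Beer-Lambert law: molar concentration from absorbance at 280 nm."""
    return a280 / (epsilon * path_cm)

# Hypothetical protein with 2 Trp, 8 Tyr, and no disulfides
eps = molar_extinction_280(2, 8, 0)       # 22,920 M^-1 cm^-1
c = protein_concentration(0.45, eps)      # ~1.96e-5 M, i.e. ~19.6 uM
print(f"epsilon = {eps} M^-1 cm^-1, c = {c * 1e6:.1f} uM")
```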
Strains and Plasmids-Two E. coli strains were used for the expression of DnaJ and DnaJ deletion mutant proteins: RLM569 (C600, recA, hsdR, tonA, lac⁻, pro⁻, leu⁻, thr⁺, dnaJ⁺) and PK102 (ΔdnaJ15), which carries a deletion of the primary portion of the dnaJ coding sequence (41). Plasmid pRLM76 was used as the expression vector for DnaJ and DnaJ deletion mutant proteins. Plasmid pRLM76 is a derivative of plasmid pHE6 (42) that is deleted for the DNA sequence that encodes the amino-terminal portion of the λ N gene. Plasmid pRLM76 contains a polycloning linker downstream from a phage λ pL promoter. Thermosensitive cI857 repressor protein, which represses transcription from pL at 30°C, is constitutively expressed from a mutant cI gene present in pRLM76. Expression of genes cloned into the polycloning linker of pRLM76 can be greatly induced by shifting the growth temperature of cells harboring the plasmid to 42°C. Incubation at this temperature results in a rapid inactivation of the cI857 repressor protein and leads to an enormous increase in transcription from the strong pL promoter. Plasmid pRLM76 was constructed as follows: plasmid pHE6 DNA was digested to completion with HincII and the 375-bp fragment carrying both the pL promoter and a portion of the N gene was isolated. This fragment was further digested with HaeIII to produce two fragments of 148 and 227 bp. The 148-bp fragment carrying the pL promoter was isolated and ligated to a 3573-bp fragment isolated from pHE6 DNA that had been digested with SmaI and partially digested with HincII. This ligation mixture was transformed into RLM569 and ampicillin-resistant clones that carried a 3.7-kilobase plasmid were identified. A plasmid having the pL promoter in the desired orientation (i.e. directing transcription across the polylinker sequence) was identified and named pRLM76 (3721 bp).
Plasmids carrying the wild type dnaJ gene or a dnaJ deletion mutant gene were constructed by cloning a DNA fragment produced by polymerase chain reaction (PCR)-mediated amplification of E. coli genomic DNA with the aid of synthetic oligonucleotide primers. The sequences of the forward primers used were: oligonucleotide A (5′-CCACCGGATCCAGGAGGTAAAAATTAATGGCTAAGCAAGATTATTAC-3′), oligonucleotide B (5′-CCACCGGATCCAGGAGGTAAAAATTAATGGCTGCGTTTGAGCAAGGT-3′), and oligonucleotide C (5′-CCACCGGATCCAGGAGGTAAAAATTAATGCGTGGTCGTCAACGTGCG-3′). Each forward primer contained a BamHI recognition site, a consensus ribosome binding site, and an ATG initiation codon juxtaposed to dnaJ coding sequence (underlined in the primer sequences listed above). These coding sequences correspond to dnaJ nucleotides 1-21 for primer A, dnaJ nucleotides 217-234 for oligonucleotide B, and dnaJ nucleotides 318-333 for primer C. The sequences of the reverse primers were: oligonucleotide D (5′-CCACCTCTAGACTGCAGGTCGACATCTTAGCGGGTCAGGTCGTC-3′), oligonucleotide E (5′-CCACCTCTAGACTGCAGGTCGACATCTTACTCAAACGCAGCATG-3′), and oligonucleotide F (5′-CCACCTCTAGACTGCAGGTCGACATCTTAACGTCCGCCGCCAAA-3′). Each reverse primer contains a PstI recognition site, the complement of two tandem translation stop codons, and the complement of dnaJ coding sequence (underlined). The sequences complementary to dnaJ coding sequence were: oligonucleotide D, complement of dnaJ nucleotides 1131-1114; primer E, complement of dnaJ nucleotides 225-211; and oligonucleotide F, complement of dnaJ nucleotides 318-304. PCR amplification was performed in a reaction mixture (100 µl) containing 120 ng of high molecular weight E. coli DNA, 100 pmol each of one forward and one reverse primer, 50 µM each of the four dNTPs, 10 mM Tris-HCl, pH 8.3, 50 mM KCl, 1.5 mM MgCl2, 0.01% gelatin, and 2.5 units of AmpliTaq DNA polymerase.
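The primer architecture just described (clamp + BamHI site + ribosome binding site + ATG + gene-specific bases for the forward primers; clamp/tail with restriction sites + stop codon + reverse complement of coding sequence for the reverse primers) can be sketched as below. The helper names are ours, and the stop-codon handling is simplified to a single TAA; with the gene segments shown, the helpers reproduce oligonucleotides A and D from the text.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def forward_primer(gene_start: str) -> str:
    """Clamp + BamHI site + consensus ribosome-binding region + initiator ATG
    + gene-specific bases, as in oligonucleotides A-C."""
    return "CCACC" + "GGATCC" + "AGGAGGTAAAAATTA" + "ATG" + gene_start

def reverse_primer(coding_end: str, stop: str = "TAA") -> str:
    """Clamp/tail carrying XbaI, PstI, and SalI sites, followed by the reverse
    complement of the final coding bases plus a stop codon (simplified here
    to a single TAA), as in oligonucleotides D-F."""
    return "CCACCTCTAGACTGCAGGTCGACATC" + revcomp(coding_end + stop)

# With these gene segments, the helpers reproduce oligonucleotides A and D:
assert forward_primer("GCTAAGCAAGATTATTAC") == (
    "CCACCGGATCCAGGAGGTAAAAATTAATGGCTAAGCAAGATTATTAC")
assert reverse_primer("GACGACCTGACCCGC") == (
    "CCACCTCTAGACTGCAGGTCGACATCTTAGCGGGTCAGGTCGTC")
```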
Plasmid pRLM232 was constructed by PCR amplification of the entire E. coli dnaJ coding sequence, using primers A and D. Following amplification, the PCR fragment was digested with BamHI and PstI and ligated to pRLM76 DNA which had been similarly digested at the unique BamHI and PstI sites present in the polylinker carried by this vector. The DNA in the ligation mixture was transformed into E. coli strains RLM569 and PK102. Ampicillin-resistant clones were selected at 30°C and screened for their capacity to overproduce a polypeptide of the size of full-length DnaJ protein (i.e. 41 kDa) when grown at 42°C. Plasmid pRLM233 (a pRLM76 derivative that expresses DnaJ1-75) was constructed and identified as above, except that primers A and E were used for the initial PCR amplification and that the ampicillin-resistant transformants were screened for their capacity to thermally induce overproduction of a protein of 9 kDa. Plasmids pRLM234, pRLM238, and pRLM239, i.e. pRLM76 derivatives for expression of DnaJ1-106, DnaJ73-376, and DnaJ106-376, respectively, were constructed and identified by similar procedures. Several plasmids of each type were selected for DNA sequence analysis.
Footnote 1: The abbreviations used are: DTT, dithiothreitol; bp, base pair(s); J1-75, DnaJ1-75; J1-106, DnaJ1-106; J73-376, DnaJ73-376; J106-376, DnaJ106-376; MES, 2-(N-morpholino)ethanesulfonic acid; PCR, polymerase chain reaction; PAGE, polyacrylamide gel electrophoresis; MALDI, matrix-assisted laser desorption/ionization; ER, endoplasmic reticulum.
DNA Sequencing-The dnaJ gene region in plasmid clones containing the dnaJ gene or dnaJ deletion mutants was sequenced on both strands using the dideoxynucleotide chain termination method with modified T7 DNA polymerase (Sequenase) as described by the manufacturer (U. S. Biochemical). Plasmids free of nucleotide substitutions were selected for further analysis.
Expression and Purification of DnaJ, DnaJ73-376, and DnaJ106-376-E. coli RLM569 cells carrying a pRLM76 derivative that expresses DnaJ or a DnaJ deletion mutant protein were grown aerobically in a Fernbach flask at 30°C in 700 ml of Terrific Broth to an optical density of 3.0 at 600 nm. The cultures were induced by the addition, with rapid mixing, of 300 ml of Terrific Broth that had been prewarmed to 70°C. The cultures were aerated at 42°C for 2-3 h and subsequently the cells were collected by centrifugation. The cell pellets were resuspended in 25 ml of lysis buffer, quick-frozen in liquid nitrogen, and stored at −80°C.
Frozen cell suspensions (58 ml, equivalent to about 8 g of cell paste) were thawed and cell lysis was induced by three cycles of quick-freezing in liquid nitrogen and thawing in water at 4°C. Egg white lysozyme was added to the cell lysate to a final concentration of 0.1 mg/ml, and the lysate was incubated at 4°C for an additional 30 min. The lysate was supplemented with 20 ml of lysis buffer and particulate material was removed by centrifugation for 60 min at 40,000 rpm in a Beckman 45Ti rotor (120,000 × g). All subsequent purification steps for DnaJ and each of the DnaJ deletion mutant proteins were carried out at 0-4°C. To the supernatant (Fraction I, 70 ml), ammonium sulfate was slowly added to 40% saturation (0.226 g of ammonium sulfate/ml of supernatant), and the suspension was stirred for 30 min at 4°C. The precipitate that formed was removed by centrifugation at 30,000 × g for 1 h. Solid ammonium sulfate was added to the supernatant to 55% saturation (0.089 g of ammonium sulfate/ml of supernatant), and, after stirring for 30 min, the precipitate was collected by centrifugation at 30,000 × g for 1 h. The pelleted precipitate was resuspended in 50 ml of buffer B and dialyzed for 16 h against 4 liters of buffer B. The dialyzed protein (Fraction II, 130 mg, 60 ml) was applied to a Bio-Rex 70 column (5 × 10 cm) that had been equilibrated with buffer B. The column was washed with 600 ml of buffer B and bound proteins were subsequently eluted with an 800-ml linear gradient of 0.15-1.0 M NaCl in buffer B at a flow rate of 3 column volumes per h. The peak DnaJ-containing fractions were pooled (150 ml; ~0.42 M NaCl) and concentrated to 40 ml using an Amicon stirred cell concentrator fitted with a PM-10 membrane. The concentrated protein sample (Fraction III, 50 mg, 40 ml) was dialyzed for 16 h against 4 liters of buffer C and applied to a P-11 phosphocellulose column (2.4 × 11 cm) that had been equilibrated with buffer C. The column was subsequently washed with 150 ml of buffer C and bound proteins were eluted with a 250-ml linear gradient of 0.15-1.0 M NaCl in buffer C at a flow rate of 1.4 column volumes per h. DnaJ eluted at approximately 0.35 M NaCl. The primary DnaJ-containing fractions were pooled (80 ml) and concentrated to 40 ml with an Amicon concentrator as described above. This sample was dialyzed against 2 liters of buffer C to produce Fraction IV (35 mg, 40 ml). Fraction IV protein was applied to a hydroxyapatite column (2.4 × 11 cm) equilibrated with buffer C. The column was washed with 150 ml of buffer C and bound protein was eluted with a 250-ml linear gradient of 120-500 mM potassium phosphate in buffer C at a flow rate of 1.4 column volumes/h. DnaJ eluted at approximately 0.4 M potassium phosphate. Fractions containing the predominant portion of DnaJ protein were pooled (40 ml), concentrated to 10 ml in an Amicon apparatus, and dialyzed extensively against buffer A. The dialyzed sample (Fraction V, 20 mg, 10 ml) was diluted with an equal volume of buffer E, containing 2 M ammonium sulfate, to produce a conductivity equivalent to that of buffer D. This sample was applied to a Pharmacia-Biotech Butyl-Sepharose 4B column (2.4 × 11 cm) equilibrated with buffer D. DnaJ was eluted with a 200-ml linear gradient of 100% buffer D to 100% buffer E, followed by 50 ml of buffer E, at a flow rate of 1 column volume/h. DnaJ eluted at approximately 95-100% buffer E.
Fractions containing DnaJ at greater than 90% purity, as analyzed by SDS-PAGE, were pooled (30 ml) and concentrated to 10 ml using an Amicon apparatus (Fraction VI, 10 mg, 10 ml). This protocol routinely produces approximately 1.2 mg of DnaJ at greater than 90% purity per gram of cell paste.
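The ammonium sulfate additions quoted in this protocol (e.g., 0.226 g/ml for a 0 to 40% saturation cut) come from standard cold-room tables. As a rough cross-check, a commonly used room-temperature formula estimates the required addition; the formula and its constants are taken from general biochemistry references rather than from this paper, and values at 0-4°C differ slightly.

```python
def ammonium_sulfate_to_add(s1: float, s2: float, volume_ml: float) -> float:
    """Grams of solid ammonium sulfate needed to raise a solution from s1%
    to s2% saturation, using a commonly cited ~25 C approximation:
        grams per liter = 533 * (s2 - s1) / (100 - 0.3 * s2)
    Cold-room (0-4 C) nomograms, as used in the protocol above, give
    slightly different values.
    """
    grams_per_liter = 533.0 * (s2 - s1) / (100.0 - 0.3 * s2)
    return grams_per_liter * volume_ml / 1000.0

# e.g., a 0 -> 40% saturation cut of a 70-ml lysate supernatant
print(f"{ammonium_sulfate_to_add(0, 40, 70):.1f} g")   # ~17 g (~0.24 g/ml)
```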
The physical properties of DnaJ73-376 and DnaJ106-376 are similar to those of wild type DnaJ. Consequently, these DnaJ deletion mutant proteins could be purified by using a slightly modified version of the purification protocol used for DnaJ. Frozen cells were resuspended in lysis buffer and lysed as described above, except that the lysis buffer also contained 0.1% octyl-β-D-glucopyranoside and 1 mM phenylmethylsulfonyl fluoride and that the cell lysate was mixed gently for 12 h on a shaker at 4°C. The lysate was centrifuged at 120,000 × g for 1 h and the supernatant was supplemented with ammonium sulfate to 40% saturation (0.226 g/ml of supernatant). The precipitated protein was collected by centrifugation at 30,000 × g for 1 h. The protein pellet was dissolved in 50 ml of buffer A and dialyzed extensively against buffer B. All of the remaining purification steps were identical to those used for purifying wild type DnaJ. The final preparations of the DnaJ73-376 and DnaJ106-376 deletion mutant proteins were estimated to be greater than 90% pure. They were quick-frozen in liquid nitrogen and stored at −80°C.
Expression and Purification of DnaJ1-75-RLM1340 (RLM569/pRLM233) cells were grown, thermally induced, harvested, and lysed as described above for wild type DnaJ. A cell lysate from 8 g of cells was centrifuged at 120,000 × g for 1 h. The supernatant (Fraction I, 70 ml) was supplemented with ammonium sulfate to 75% saturation (0.476 g of ammonium sulfate per ml of supernatant), stirred at 4°C for 30 min, and centrifuged at 30,000 × g for 1 h. The supernatant was concentrated to 25 ml, using an Amicon stirred cell concentrator fitted with a YM3 membrane, and dialyzed against 2 liters of buffer F (Fraction II, 80 mg, 30 ml). Fraction II was applied to a Bio-Rex 70 column (3.4 × 11 cm) equilibrated in buffer F and the column was subsequently washed with 300 ml of buffer F. Bound proteins were eluted with a 400-ml linear gradient of 0.025-0.7 M NaCl in buffer F at a flow rate of 3 column volumes per hour. DnaJ1-75 eluted at approximately 0.12 M NaCl. The fractions containing the primary portion of the DnaJ1-75 polypeptide were pooled (80 ml) and concentrated to 30 ml in an Amicon apparatus (Fraction III, 30 mg). Fraction III protein was applied to a Bio-Rad hydroxyapatite column (2.4 × 11 cm) that had been equilibrated with buffer C. The column was washed with 150 ml of buffer C and eluted with a 250-ml linear gradient of 0.05-0.5 M potassium phosphate at a flow rate of 1.4 column volumes per h. DnaJ1-75 eluted at approximately 0.09 M potassium phosphate. Fractions containing DnaJ1-75 at greater than 95% purity were pooled (25 ml) and concentrated to 15 ml in an Amicon apparatus. This sample (Fraction IV, 20 mg, 15 ml) was quick-frozen in liquid nitrogen and stored at −80°C.
Expression and Purification of DnaJ1-106-RLM1341 (RLM569/pRLM234) cells were grown, thermally induced, harvested, and lysed as described for wild type DnaJ. A cell lysate from 8 g of cells was centrifuged at 120,000 × g for 1 h. The supernatant (Fraction I, 70 ml) was supplemented with ammonium sulfate to 55% saturation (0.236 g of ammonium sulfate/ml of supernatant), stirred for 30 min, and centrifuged at 30,000 × g for 1 h. The supernatant, which contained the vast majority of the J1-106 polypeptide, was brought to 70% saturation with ammonium sulfate (0.093 g of ammonium sulfate/ml of supernatant) and was stirred for 30 min. The protein precipitate was collected by centrifugation at 30,000 × g for 1 h. The pellet was resuspended in 50 ml of buffer F and dialyzed against 2 liters of buffer F (Fraction II, 75 mg, 60 ml). Fraction II protein was applied to a Bio-Rex 70 column (3.4 × 11 cm) equilibrated with buffer E. Subsequently, the column was washed with 300 ml of buffer E and bound proteins were eluted with a 400-ml linear gradient of 0.025-1.0 M NaCl in buffer E at a flow rate of 3 column volumes/h. DnaJ1-106 eluted at approximately 0.15 M NaCl. The fractions containing the highest concentration of DnaJ1-106 were pooled (90 ml) and concentrated to 40 ml in an Amicon stirred cell concentrator fitted with a YM-3 membrane (Fraction III, 35 mg, 40 ml). Fraction III was dialyzed against 4 liters of buffer C and applied to a hydroxyapatite column (2.4 × 11 cm) equilibrated with buffer C. The column was subsequently washed with 150 ml of buffer C and bound proteins were eluted with a 250-ml linear gradient of 50-500 mM potassium phosphate in buffer C at 1.4 column volumes/h. DnaJ1-106 eluted at approximately 0.12 M potassium phosphate. Fractions containing DnaJ1-106 at greater than 95% purity were pooled (40 ml) and concentrated to 10 ml in an Amicon apparatus (Fraction IV, 25 mg, 10 ml). The preparation of DnaJ1-106 was quick-frozen and stored at −80°C.
Single Turnover ATPase Assay-Prior to the addition of ATP as the final component, all reaction mixtures were preincubated at 25°C for 2 min. The reaction was initiated by the addition of ATP and incubated at 25°C. At each time point 15-µl portions were removed to tubes containing 2 µl of 1 N HCl. This treatment lowered the pH to between 3 and 4 and quenched the ATPase reaction (control experiments indicated that little or no additional hydrolysis of ATP occurred subsequent to the addition of HCl). Portions (4 µl) from each quenched reaction mixture were applied to polyethyleneimine-cellulose thin layer chromatography plates that had been prespotted with 1 µl of a mixture containing ATP and ADP (each at 20 mM). The plates were developed in 1 M formic acid and 0.5 M LiCl. The migration positions of ATP and ADP were visualized by short wave UV irradiation, and the level of each in the reaction mixture was determined by scintillation counting. The kinetic data obtained from the single turnover ATPase reactions were fit to a first-order rate equation using the nonlinear regression program "Enzfitter" (Biosoft, Cambridge, UK). K_A values, for activation of the ATP hydrolysis step in the DnaK ATPase reaction cycle by DnaJ and DnaJ deletion mutant proteins under single turnover conditions, were obtained from a replot of k_hyd values versus the concentration of DnaJ or DnaJ deletion mutant protein. For each activator protein, at least five different activator concentrations (over a 100-fold range of concentration) were examined to generate the kinetic data used for determination of the individual K_A values.
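The two-stage analysis described above (fitting each single-turnover time course to a first-order rate equation to obtain k_hyd, then replotting k_hyd against activator concentration to extract K_A) was performed with Enzfitter; an equivalent sketch in Python is shown below. All numerical values are hypothetical, and the hyperbolic replot model with a basal rate and a saturating k_max is an assumed functional form for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, k_hyd, a0):
    """Fraction of ATP remaining in a single-turnover reaction."""
    return a0 * np.exp(-k_hyd * t)

def activation(c, k_max, K_A, k_basal):
    """Observed k_hyd as a hyperbolic function of activator concentration."""
    return k_basal + k_max * c / (K_A + c)

# Step 1: k_hyd from one quenched-time-point series (hypothetical data)
t = np.array([0, 2, 5, 10, 20, 40], dtype=float)             # minutes
atp_fraction = np.array([1.0, 0.78, 0.55, 0.31, 0.10, 0.01])
(k_hyd, a0), _ = curve_fit(first_order, t, atp_fraction, p0=[0.1, 1.0])

# Step 2: K_A from the replot of k_hyd versus [activator] (hypothetical data,
# spanning a 100-fold concentration range as in the text)
conc = np.array([0.02, 0.05, 0.2, 0.5, 2.0])                  # uM
k_obs = np.array([0.02, 0.04, 0.09, 0.13, 0.17])              # min^-1
(k_max, K_A, k_basal), _ = curve_fit(activation, conc, k_obs, p0=[0.2, 0.3, 0.01])

print(f"k_hyd = {k_hyd:.3f} min^-1, K_A = {K_A:.2f} uM")
```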
DNA Replication Assay - The in vitro assays for DNA replication were performed essentially as described (36). DnaJ or DnaJ deletion mutant proteins were added to the replication assay as indicated. Following assembly of the replication reaction mixture, it was incubated for 40 min at 30 °C. The amount of DNA synthesis was determined by measuring the level of [3H]dTMP that had been incorporated into acid insoluble material, which was collected on a glass fiber filter (Whatman AH) and counted in a liquid scintillation counter.
Papain Digestion - Papain (50 µg/ml) was activated by incubation for 15 min at 37 °C in a buffer containing 50 mM MES, pH 6.5, 1 mM DTT, 5 mM cysteine-HCl, and 0.1 mM β-mercaptoethanol. DnaJ protein and DnaJ deletion mutant proteins were treated with activated papain, at 1% (w/w) papain:DnaJ protein, at 30 °C for varying times as indicated. Proteolytic digestion was stopped by the addition of an excess of E64, a papain inhibitor (43). Samples (30 µl) were mixed with 70 µl of SDS-PAGE sample buffer, boiled, and analyzed by electrophoresis in a SDS-polyacrylamide gel as described below. Undigested proteins and proteolytic polypeptide fragments were visualized by staining the gel with Coomassie Brilliant Blue R-250.
Gel Electrophoresis and Amino Acid Sequence Analysis - Protein samples were mixed with an equal volume of SDS-PAGE sample buffer, boiled for 5 min, and subjected to electrophoresis in a 10-20% gradient SDS-polyacrylamide gel as described by Laemmli (44). Acid/Triton X-100/urea gel electrophoresis (45) was performed in 12% polyacrylamide gels. For determination of amino acid sequences, proteins were blotted from SDS-polyacrylamide gels onto Immobilon-P transfer membrane filters (polyvinylidene difluoride membrane filters, Millipore). The protein bands were visualized by staining the filter with Coomassie Brilliant Blue R-250. The protein bands of interest were excised, and each polypeptide was subjected, while still bound to the filter, to NH2-terminal amino acid sequence analysis using a modified Edman protocol. Amino acid sequence analysis was performed by the Johns Hopkins University Peptide Synthesis Facility.
Matrix-assisted Laser Desorption/Ionization Mass Spectral Analysis (MALDI-MS) of Proteins and Polypeptides - MALDI-MS analysis of DnaJ was performed by the Middle Atlantic Mass Spectroscopy Facility (Johns Hopkins University School of Medicine) with a Kratos Kompact III linear time-of-flight mass spectrometer equipped with a nitrogen laser (337 nm). Protein samples (20 µg), consisting of DnaJ or papain-resistant fragments of DnaJ, were prepared for mass spectral analysis using Bond-Elute disposable, solid phase extraction, C8 columns, according to the manufacturer's specifications (Varian Inc.). Briefly, 100 µl (20 µg) of protein sample was loaded onto a 0.5-ml Bond Elute C8 column equilibrated in buffer I (0.1% (v/v) trifluoroacetic acid in deionized H2O). The column was washed with 2.0 ml of buffer I, and the protein sample was eluted with 500 µl of buffer I containing 95% aqueous acetonitrile. The eluted protein sample was dried in a Speed-Vac centrifugal concentrator and redissolved in 20 µl of buffer I containing 20% aqueous acetonitrile. The sample, or analyte (0.3 µl), was deposited on a sample site of a 20-site stainless steel slide that contained 0.3 µl of a saturated solution of the matrix (3,5-dimethoxy-4-hydroxycinnamic acid; 207.8 Da) in 50:50 (v/v) ethanol:water. The analyte-matrix solution was air-dried, and the slide was subsequently inserted into the mass spectrometer. The spectra acquired represent the accumulation of data collected from 45 laser shots.
RESULTS
Partial Proteolysis of DnaJ - Analysis of the amino acid sequences of the DnaJ heat shock protein family indicates that there are two large regions that are conserved in multiple members of this family. The most highly conserved region is the 70 amino acid "J-region," which is found in all members of the DnaJ family. The second region, which is present in several, but not all, DnaJ homologues, contains multiple Cys-rich motifs. We wished to determine if these conserved regions represent stable structural domains of DnaJ. A nonspecific protease, such as papain, can be useful for delimiting structural domains in proteins, since its enzymatic activity is generally ineffectual on stable secondary and tertiary structures in substrate proteins. DnaJ was digested with papain and the resulting polypeptide products were subjected to analysis by SDS-PAGE. Proteolysis of DnaJ with papain produced two stable fragments of approximately Mr = 9,000 and 30,000 (Fig. 1, J1-376). Edman analysis of the NH2-terminal amino acid sequences of these fragments yielded the amino acid sequences AKQDYY for the 9-kDa fragment and GGRGRQ for the 30-kDa fragment, which correspond, respectively, to amino acids 2-7 and 104-109 of DnaJ. This demonstrates that the 9-kDa polypeptide encompasses the highly conserved "J-region," whereas the 30-kDa fragment includes the cysteine-rich motifs of full-length DnaJ but does not contain most of the Gly/Phe-rich segment of the native molecular chaperone.
To obtain a more refined estimate of the positions of the COOH termini in each papain-resistant fragment of DnaJ, we digested DnaJ with papain again, purified each polypeptide by reverse-phase chromatography, and subjected each fragment to MALDI-MS. This analysis revealed that the small NH2-terminal J-region fragment was actually a series of fragments ranging in mass from approximately 8770 to about 9900 Da. Comparison to the known sequence of DnaJ indicates that the smaller protease-resistant fragments consist of polypeptides containing DnaJ amino acid residues 2-75 through 2-89. More prolonged digestion of a related polypeptide (DnaJ2-106, see below) with papain yielded polypeptides whose masses were approximately equivalent to DnaJ2-75 and DnaJ2-78. Thus, extensive papain treatment of DnaJ results in nearly complete digestion of the Gly/Phe-rich segment (amino acids 77-107). We conclude that the papain-resistant structural domain encoded by the J-region corresponds to DnaJ2-75 (Fig. 2). The removal of the NH2-terminal methionine of DnaJ, however, is not the result of papain action, but rather seems to be due to post-translational processing in vivo, since our sequence analysis indicated that alanine 2 is the NH2-terminal amino acid of purified DnaJ.
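The fragment assignments above rest on matching observed MALDI masses against masses computed from the known DnaJ sequence. A minimal sketch of that bookkeeping follows; DNAJ_SEQ is a placeholder for the authentic E. coli DnaJ amino acid sequence (not reproduced here), and the 2 Da tolerance is an assumption.

```python
# Match an observed fragment mass to candidate C-terminal endpoints of
# an NH2-terminal DnaJ fragment (numbering starts at residue 2, since
# the initiator methionine is removed in vivo, as noted above).
AVG_RESIDUE_MASS = {  # average residue masses (Da)
    'G': 57.05, 'A': 71.08, 'S': 87.08, 'P': 97.12, 'V': 99.13,
    'T': 101.10, 'C': 103.14, 'L': 113.16, 'I': 113.16, 'N': 114.10,
    'D': 115.09, 'Q': 128.13, 'K': 128.17, 'E': 129.12, 'M': 131.19,
    'H': 137.14, 'F': 147.18, 'R': 156.19, 'Y': 163.18, 'W': 186.21,
}
WATER = 18.02

def peptide_mass(seq):
    # Average mass of a linear polypeptide: residue masses plus one water.
    return sum(AVG_RESIDUE_MASS[aa] for aa in seq) + WATER

def match_fragment(observed_mass, full_seq, start=2, tol=2.0):
    # Return (end_residue, computed_mass) pairs whose fragment mass lies
    # within tol Da of the observed mass; residue numbering is 1-based.
    hits = []
    for end in range(start, len(full_seq) + 1):
        mass = peptide_mass(full_seq[start - 1:end])
        if abs(mass - observed_mass) <= tol:
            hits.append((end, round(mass, 1)))
    return hits

print(round(peptide_mass("AKQDYY"), 1))  # NH2-terminal hexapeptide quoted above
# With the authentic sequence, match_fragment(8720.0, DNAJ_SEQ) should
# report residue 75 as the C terminus of the smallest stable fragment.
```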
The larger COOH-terminal polypeptide produced by the repeat papain digestion of DnaJ was found by MALDI analysis to have a mass centered about 30,115 Da (data not shown). If it is assumed that the COOH terminus of DnaJ is resistant to proteolytic cleavage by papain, then this preparation of the COOH-terminal papain fragment seemingly includes DnaJ residues 99-376 (Mr = 30,130). More prolonged digestion of DnaJ with papain results in a COOH-terminal polypeptide of approximately 29 kDa that has DnaJ residue 112 at its amino terminus, as revealed by NH2-terminal sequence analysis. We conclude that the initial papain cleavages of DnaJ occur between amino acid residues 80 and 100 and that the remainder of the Gly/Phe-rich segment of this heat shock protein is excised following more extensive papain treatment (Fig. 2).
Expression and Purification of DnaJ Deletion Mutant Proteins - Based on the identification of stable protease-resistant domains in DnaJ and on the location of sequence motifs that are conserved in multiple DnaJ homologues, we designed a series of DnaJ deletion mutant proteins to be used in structure-function studies. These mutant proteins, depicted schematically in Fig. 3, consist of the NH2-terminal J-region (DnaJ1-75) and the COOH-terminal, 30-kDa papain-resistant domain (DnaJ106-376), as well as derivatives of each that also contain the Gly/Phe-rich segment (DnaJ1-106 and DnaJ73-376, respectively).
DnaJ1-75 was designed for investigations of the functional significance of the highly conserved J-region. PCR was used to amplify the sequence for dnaJ codons 1-75. In this amplification, as well as in other amplifications of segments of the dnaJ gene by PCR, one of the primers included a "hang-off" sequence encoding a consensus E. coli ribosome binding site and an ATG initiator codon. Similarly, the second primer used in the PCR amplification included a hang-off sequence encoding the complement of two tandem stop codons. These "3′" primers were designed such that tandem stop codons were juxtaposed, in the proper reading frame, to the 3′ terminus of dnaJ coding sequences in the amplified DNA. The PCR products were inserted into the polylinker site on pRLM76, an expression vector that provides thermoinducible expression of genes cloned downstream from a λ pL promoter present on the plasmid. The resulting plasmid, pRLM233, was transformed into strain PK102, a dnaJ deletion mutant of E. coli (41). Induction of expression of the cloned gene fragment by aeration at 42 °C of cells harboring pRLM233 resulted in production of DnaJ1-75 (J1-75) to amounts greater than 10% of the total cellular protein (data not shown).
The DnaJ1-106 (J1-106) deletion mutant protein was designed for investigations of the functional significance of the Gly/Phe-rich sequence distal to the J-region. As for J1-75, a pRLM76 derivative that expresses J1-106 was constructed (pRLM234). Overexpressed J1-106 protein, like J1-75, was highly soluble and constituted approximately 10% of the total cellular protein following induction. Both J1-75 and J1-106 were purified to greater than 95% homogeneity as described under "Experimental Procedures" (Fig. 1). Although J1-75 and J1-106 have the same relative electrophoretic mobility in a 10-20% gradient SDS-polyacrylamide gel (Fig. 1), these two polypeptides can be readily resolved by electrophoresis in a 12% acid-urea polyacrylamide gel (Fig. 4).

FIG. 2. Schematic representation of the partial proteolysis of DnaJ protein with papain. The linear map of DnaJ protein is depicted at the top. The positions of the major conserved sequence elements are indicated, including the J-region, the glycine- and phenylalanine-rich segment (G/F), and the cysteine-rich motifs (Cys-rich). Papain treatment of wild type DnaJ produces 9- and 30-kDa stable proteolytic fragments that retain the sequence elements depicted. The NH2-terminal amino acid sequences, in the standard one-letter code, of each papain-resistant DnaJ fragment are shown. The sizes of the proteolytic fragments were determined by laser desorption mass spectrometry (see "Experimental Procedures" and the text for details). N, amino terminus; C, carboxyl terminus.
Two additional DnaJ deletion mutant proteins, J73-376 and J106-376, were designed for investigations of the functional role of the COOH-terminal end of DnaJ. Both polypeptides contain the cysteine-rich motifs and the COOH-terminal end of DnaJ. Although neither mutant protein contains the J-region, J73-376 does contain the Gly/Phe-rich segment that links the two papain-resistant structural domains found in wild type DnaJ (Fig. 3). DNA sequences that encode J73-376 and J106-376, as well as appropriate translation signals, were inserted into the expression site on pRLM76 to produce plasmids pRLM238 (J73-376) and pRLM239 (J106-376). E. coli transformants harboring these plasmids were thermally induced, and the overexpressed J73-376 and J106-376 proteins were purified to greater than 90% homogeneity as described under "Experimental Procedures." We found that both of these mutant proteins had to be purified from a dnaJ deletion mutant (PK102), since purification of J73-376 and J106-376 from a dnaJ+ strain (RLM569) resulted in the copurification of a protein that we deduce is wild type DnaJ protein, based on its electrophoretic mobility in SDS-PAGE and its reactivity with polyclonal antibodies elicited against purified DnaJ protein (data not shown).
It is possible that one or more of the deletion mutant proteins fails to fold into a stable tertiary structure. Therefore, we probed the structural integrity of each DnaJ deletion mutant protein by examining its sensitivity to partial proteolysis with papain. J1-75 is apparently a compact, well-folded protein: it was not noticeably affected by papain digestion (Fig. 4). In contrast, the J1-106 polypeptide was converted by papain to a fragment that comigrates with J1-75 during electrophoresis in a 12% acid-urea polyacrylamide gel (Fig. 4). Mass spectrometry revealed that papain-mediated proteolysis of the J1-106 deletion mutant protein produced two primary fragments, one with a molecular mass of 8960 Da and the other of mass 8713 Da. These probably correspond to J-region fragments containing amino acid residues 2-78 (Mr = 8962) and 2-75 (Mr = 8720), respectively.
We obtained evidence that the J73-376 and J106-376 deletion mutant proteins had also folded properly. Partial proteolysis of these mutant proteins with papain resulted in the production of polypeptide fragments that comigrate with the 30-kDa proteolytic fragment of wild type DnaJ during electrophoresis under denaturing conditions (Fig. 1). Furthermore, both J73-376 and J106-376, like wild type DnaJ, bind Zn2+ and display extensive secondary structure as revealed by atomic absorption and circular dichroism spectroscopy, respectively.

Capacity of DnaJ Deletion Mutant Proteins to Stimulate the ATPase Activity of DnaK - We wished to determine if any of the DnaJ deletion mutant proteins retained any of the functional activities characteristic of wild type DnaJ, for example, its capacity to stimulate the weak intrinsic ATPase activity of DnaK (23,24). Because DnaJ stimulates the DnaK ATPase specifically at the hydrolytic step in the ATPase reaction cycle, the DnaK ATPase activity is especially sensitive to the presence of DnaJ when the ATPase assay is performed under single turnover conditions (i.e., when the concentration of DnaK greatly exceeds that of ATP). Under these conditions, DnaJ strongly stimulates ATP hydrolysis by DnaK (Fig. 5). The rate constant for ATP hydrolysis by DnaK is increased at least 200-fold at saturating levels of DnaJ, from 0.04 min^-1 to more than 8.5 min^-1. These data yielded an apparent K_A of 0.2-0.3 µM DnaJ for activation of the DnaK ATPase (Fig. 5 and Table I).
It has been suggested (48) that the highly conserved J-region, which we have shown is essentially equivalent to the NH2-terminal structural domain of DnaJ, interacts with DnaK. We used the single turnover ATPase assay to determine if the J-region is both necessary and sufficient for stimulation of DnaK's ATPase activity. Purified J1-75 failed, even at very high concentrations, to produce any detectable activation of the DnaK ATPase activity (Fig. 5). We next examined whether J1-106, which contains both the J-region and the Gly/Phe-rich segment, had any capacity to stimulate ATP hydrolysis by DnaK. In striking contrast to J1-75, J1-106 is capable of stimulating DnaK's intrinsic ATPase activity (Fig. 5). However, the interaction of DnaJ1-106 with DnaK is clearly deficient in some respects. At saturation, J1-106 yielded a significantly slower ATPase rate constant than did wild type DnaJ. Moreover, the apparent K_A for J1-106 was determined to be approximately 4 µM (Table I). This concentration is about 20-fold higher than the concentration of wild type DnaJ required for half-maximal stimulation of DnaK's ATPase activity.

FIG. 5 (legend). ATPase assays were performed as described under "Experimental Procedures." These data were used to determine the first-order single turnover rate constant for ATP hydrolysis by DnaK at each concentration of DnaJ, DnaJ1-75, or DnaJ1-106. The single turnover rate constant for DnaK alone was determined to be approximately 0.04 min^-1.
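The K_A values quoted here come from replots of k_hyd against activator concentration. The sketch below assumes simple hyperbolic (saturable) activation; the data points are hypothetical, chosen only to mimic the wild type case (basal rate near 0.04 min^-1, saturating rate above 8.5 min^-1, K_A of 0.2-0.3 µM), so this illustrates the replot rather than reproducing the authors' analysis.

```python
# Hypothetical replot of single turnover rate constants versus [DnaJ],
# fit to a hyperbolic activation model to extract the apparent K_A.
import numpy as np
from scipy.optimize import curve_fit

def activation(conc_uM, k0, dk, K_A):
    # k_hyd as a saturable function of activator concentration.
    return k0 + dk * conc_uM / (K_A + conc_uM)

conc_uM = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0])   # [DnaJ] (uM)
k_hyd = np.array([1.5, 2.5, 4.3, 5.7, 6.8, 7.8, 8.1])       # min^-1

(k0, dk, K_A), _ = curve_fit(activation, conc_uM, k_hyd, p0=[0.04, 8.5, 0.3])
fold = (k0 + dk) / 0.04                                      # vs. DnaK alone
print(f"apparent K_A = {K_A:.2f} uM, fold stimulation ~ {fold:.0f}")
```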
These results suggested to us that two conserved DnaJ motifs, i.e., the J-region and the Gly/Phe-rich segment, participate jointly in the activation of the ATPase activity of DnaK. However, it was still possible that the Gly/Phe-rich segment alone is responsible for DnaJ's capacity to activate the DnaK ATPase. It is known in this regard that peptide C binds to DnaK in an extended conformation (49) and that many short peptides of 7 to 9 amino acids or longer are capable of stimulating the intrinsic ATPase activity of DnaK, as well as the intrinsic ATPase activities of its eukaryotic counterparts in the Hsp70 family (24,25,46,47). For DnaK, the level of stimulation depends on the peptide sequence and can be as much as 30-fold (24). To examine the potential role of the Gly/Phe-rich segment in the DnaJ-mediated activation of the DnaK ATPase, we synthesized a set of five overlapping peptides, each 15 amino acids in length, that span the entirety of the Gly/Phe-rich segment and tested each for its capacity to serve as an effector of the DnaK ATPase. Under single turnover conditions, all of these peptides failed to produce a significant activation of the DnaK ATPase; no more than a 2- or 3-fold stimulation of the ATPase rate was observed, even at millimolar concentrations of peptide (data not shown). These results lend additional support to the hypothesis that the J-region and the Gly/Phe-rich segment collectively contribute to the DnaJ-mediated activation of DnaK.
Two additional DnaJ deletion mutant proteins, J73-376 and J106-376 (Fig. 3), were studied to examine whether the cysteine-rich motifs and the COOH-terminal portion of DnaJ play any independent role in activation of DnaK's ATPase activity. Neither mutant protein contains the J-region, but J73-376 does contain the flexible Gly/Phe-rich segment in addition to the COOH-terminal structural domain of DnaJ. Our results indicate that neither J73-376 nor J106-376 (in the range between 0.1 and 20 µM) was capable of providing detectable stimulation of the intrinsic ATPase of DnaK (Table I and data not shown). The inability of J73-376 to activate the DnaK ATPase provides additional evidence that the Gly/Phe-rich segment and the J-region must be simultaneously present in the DnaJ polypeptide in order to achieve significant stimulation of ATP hydrolysis by the DnaK heat shock protein.
Activation of the DnaK ATPase by a Combination of DnaJ1-75 and Peptide - Polypeptides, such as full-length DnaJ and J1-106, that have a covalent linkage between the J-region and the Gly/Phe-rich segment have the capacity to stimulate ATP hydrolysis by DnaK. The apparently unstructured nature of the Gly/Phe-rich segment, as judged by its sensitivity to proteolysis by papain, suggested the possibility that it interacts with the peptide-binding site on DnaK during DnaJ-mediated activation of the DnaK ATPase activity. We therefore sought to determine if a free peptide could replace the Gly/Phe-rich segment and complement the J-region for activation of DnaK. Incubation of DnaK with both J1-75 and any of the five synthetic peptides derived from the Gly/Phe-rich region produced no detectable stimulation of DnaK's ATPase activity under single turnover conditions (data not shown). However, when we used peptides, such as peptide C (24) or peptide NR (47), that are capable of stimulating DnaK's ATPase activity, a significant further increase in the rate constant for ATP hydrolysis was observed in the presence of both J1-75 and peptide (Fig. 6). At saturating levels of J1-75 and peptide, the rate of ATP hydrolysis by DnaK was roughly equivalent to the enhanced DnaK ATPase rate elicited by the presence of wild type DnaJ. The maximal rate constant obtained in the presence of peptide and J1-75 was more than 200-fold greater than that for DnaK alone under similar conditions and more than 15-fold higher than that obtained when DnaK was supplemented with just peptide C or peptide NR. The concentration of J1-75 that produced half-maximal stimulation of DnaK's ATPase activity (i.e., apparent K_A) in the presence of added peptide was determined to be approximately 1.3 µM (Table I). Under similar conditions, those DnaJ deletion mutant proteins that lack the 70-amino acid J-region, J73-376 and J106-376, were unable to activate hydrolysis of ATP by DnaK beyond that yielded by peptide alone (Fig. 6 and data not shown). Based on these data, we conclude that the DnaJ-mediated stimulation of ATP hydrolysis by DnaK occurs as the result of two separate interactions between these two molecular chaperones. ATP hydrolysis by DnaK apparently is maximally activated when it simultaneously interacts both with the amino-terminal domain of DnaJ and with a flexible peptide that can adopt an extended conformation. Wild type DnaJ protein provides both signals in the form of the J-region and the Gly/Phe-rich sequence, respectively.

Table I (footnotes). a Single turnover ATPase assays were performed (with [32P]ATP) as described under "Experimental Procedures." DnaJ and each DnaJ deletion mutant protein was included in the assay mixture at a range of concentrations, and the single-turnover rate constant for ATP hydrolysis by DnaK was determined at each concentration. Where indicated, peptide was present in the reaction mixture at a concentration of 500 µM. The apparent K_A for activation of ATP hydrolysis by DnaJ and by each DnaJ deletion mutant protein was determined as described under "Experimental Procedures." A listing of "None" indicates that the specified DnaJ deletion mutant protein failed to activate ATP hydrolysis by DnaK at the highest protein concentrations tested (≥14 µM). b The relative molar specific activities of DnaJ protein or DnaJ deletion mutant proteins in the in vitro DNA replication system are listed. DNA replication assays were performed as described under "Experimental Procedures," except that, where indicated, a DnaJ deletion mutant protein was substituted for DnaJ. 100% activity represents 24 pmol of deoxyribonucleotide incorporation per min per pmol of DnaJ in the standard DNA replication assay. c ND, not done. Free peptide, at the level (500 µM) required for maximal stimulation of ATP hydrolysis by DnaK, acts as a potent inhibitor of DNA replication in vitro (25).

FIG. 6. DnaJ1-75 and peptide cooperate to activate ATP hydrolysis by DnaK. ATPase assays were conducted under single-turnover conditions as described under "Experimental Procedures" and the legend to Fig. 5, except that each reaction mixture also contained peptide C (500 µM) and either DnaJ1-75 or DnaJ73-376 as indicated. The reaction progress curves at each concentration of DnaJ1-75 or DnaJ73-376 were used to determine the first-order single turnover rate constants. The single turnover rate constant for ATP hydrolysis by DnaK in the presence of 500 µM peptide C alone was determined to be approximately 0.5 min^-1. Neither DnaJ1-75 nor DnaJ73-376 alone was capable of stimulating ATP hydrolysis by DnaK under single turnover conditions (Table I).
Replicative Potential of DnaJ Deletion Mutant Proteins - We have previously demonstrated that the DnaJ and DnaK molecular chaperones are absolutely required for the initiation of phage λ DNA replication in a system that is reconstituted with 10 highly purified λ and E. coli proteins (26,36). We examined various DnaJ deletion mutant proteins for their capacity to support DNA replication in the reconstituted multiprotein system. We wished to determine if there was a direct correlation between the capacity of a particular mutant protein to stimulate ATP hydrolysis by DnaK and its ability to support the initiation of bacteriophage λ DNA replication. Only a few nanograms of wild type DnaJ are sufficient to support maximal DNA replication in vitro (Fig. 7). In contrast, the DnaJ1-75 deletion mutant protein was inactive in this replication assay, even at very high protein concentrations (Fig. 7). DnaJ1-106, which contains both the J-region and the Gly/Phe-rich segment, supported limited DNA synthesis in the replication assay (Fig. 7). This response was extremely weak, however, requiring on a molar basis approximately 1000-fold more J1-106 than full-length DnaJ to attain a similar level of replication. In related studies, we found that both J73-376 and J106-376 were inactive in the replication assay (Fig. 8). These data indicate that linkage of the J-region to the Gly/Phe-rich segment produces the minimal combination of DnaJ sequence elements that is capable of both activating ATP hydrolysis by DnaK and supporting initiation of DNA replication in vitro.
DISCUSSION
Our investigation of the capacities of various DnaJ deletion mutant proteins to stimulate ATP hydrolysis by the E. coli DnaK molecular chaperone has identified two regions of DnaJ that mediate this activation. The first region consists of the highly conserved J-region, located at the amino terminus of DnaJ. This 70-amino acid region, which is the signature sequence of each member of the ubiquitous DnaJ (Hsp40) family of molecular chaperones, forms a stable structural domain in DnaJ (50,51). The Gly/Phe-rich region of DnaJ, a polypeptide segment that links the NH2- and COOH-terminal domains of DnaJ, also plays a central role in the activation of ATP hydrolysis by DnaK. The hypersensitivity of this segment to proteolysis, as well as the preponderance of glycine residues in this region, suggests that this polypeptide linker segment is both relatively unstructured and highly flexible. This conclusion is consistent with the results of a recent NMR structure determination of an amino-terminal DnaJ fragment (DnaJ2-108), which found that the Gly/Phe-rich region was flexibly disordered in solution (50). Mutant DnaJ proteins that contain either the J-region or the linker region alone are incapable of stimulating ATP hydrolysis by DnaK (Table I). Furthermore, in preliminary experiments, we have not observed any capacity of these two regions of DnaJ to complement one another and stimulate DnaK under conditions where the required regions are located in trans on separate polypeptides (e.g., when DnaJ1-75 and DnaJ73-376 (Fig. 3) are mixed together with DnaK). A truncated DnaJ polypeptide produced detectable activation of DnaK's ATPase activity only when both the J-region and the Gly/Phe-rich linker region were present in cis on the same DnaJ deletion mutant polypeptide, as with, for example, DnaJ1-106.
We sought to localize more definitively the amino acid sequence or sequences present in the Gly/Phe-rich linker region of DnaJ that contribute to the activation of ATP hydrolysis by DnaK. However, none of a series of overlapping 15-amino acid synthetic peptides corresponding to subsections of this linker region were found to provide significant activation of DnaK, whether or not the J-region domain (i.e., DnaJ1-75) was also present in the incubation mixture. This result was interesting, especially in view of the fact that previous studies have established that random peptides with as few as 6-9 amino acid residues are capable of stimulating the intrinsic ATPase activity of DnaK (24,46,47). We have not determined if the synthetic DnaJ linker peptides simply fail to bind stably to DnaK or if, on the other hand, they bind to DnaK but fail to provoke a necessary response, e.g., a conformational change in DnaK needed to potentiate ATP hydrolysis.
Further exploration of the factors that influence ATP hydrolysis by DnaK led us to the conclusion that DnaK must simultaneously undergo two separate interactions to acquire optimal activation. One such interaction is with the J-region of its partner chaperone, DnaJ. But, as discussed above, the presence of the J domain alone has no discernible impact on ATP hydrolysis by DnaK. Thus, our results are not in agreement with a previous study that concluded that the J-region of DnaJ is both necessary and sufficient to stimulate ATP hydrolysis by DnaK (52). We have shown that a second stimulatory interaction is required, one that involves binding of a peptide or protein substrate to the polypeptide-binding site on DnaK. Proteins, such as wild type E. coli DnaJ or the deletion mutant DnaJ1-106, that carry both the J-region and the Gly/Phe-rich linker segment can individually furnish in cis both interactions needed for activation of the DnaK ATPase (52). We have demonstrated, however, that the requisite stimulatory interactions with DnaK can also occur in trans. A combination of the J domain (DnaJ1-75) and any short peptide that has high affinity for the polypeptide binding site of DnaK produced an activation of the DnaK ATPase comparable to wild type DnaJ alone (e.g., compare the data in Figs. 5 and 6). While peptide C alone can stimulate ATP hydrolysis by DnaK as much as 20-fold (24), addition of the J-domain to a reaction mixture containing peptide C and DnaK resulted in a 15- to 20-fold further enhancement of the rate constant for ATP hydrolysis.
Considerable genetic and biochemical evidence has been accumulated in support of the idea that proteins of the DnaJ family functionally cooperate with specific Hsp70 proteins in all organisms to mediate protein folding, protein assembly and disassembly events, and translocation of polypeptides across intracellular membranes. Although direct physical evidence for a stable protein-protein interaction between these two ubiquitous chaperone types has thus far been observed only in a thermophilic bacterium (53), it was recently demonstrated that the primary E. coli Hsp70 protein, DnaK, does bind to DnaJ when ATP is present (54).
Genetic suppression studies in the yeast Saccharomyces cerevisiae (55), as well as subsequent biochemical and cell biological analysis (56,57), provide additional support for the occurrence of both functional and direct physical interactions in the endoplasmic reticulum (ER) between chaperone-like proteins of the Hsp70 and Hsp40 families, i.e., between Kar2p, a DnaK and BiP homologue, and Sec63p, a member of the DnaJ family, respectively. These two proteins are thought to play a central role in the translocation of polypeptides from the yeast cytosol into the ER (10,11,58,59). While the precise molecular role of ER-localized Hsp70 proteins in protein translocation remains to be defined, it is reasonable to assume that the translocation process takes advantage of the capacity of such molecular chaperones to couple ATP binding and hydrolysis to polypeptide binding and release. Although Sec63p is an integral membrane protein associated with the polypeptide translocation complex in the ER, it does contain a 70-amino acid J-domain that faces the ER lumen (60). Interestingly, Sec63p does not contain a segment that is homologous to the Gly/Phe-rich linker polypeptide of DnaJ. Thus, it is highly probable that Sec63p itself, like the J-domain (DnaJ1-75), is capable of contributing only one of the two signals required for activation of ATP hydrolysis by the Kar2 Hsp70 protein. In reaching this conclusion, we make the presumption that the dual signal requirement for maximal ATP hydrolysis we identified for DnaK has been conserved during evolution in all primary Hsp70 family members.
If Sec63p indeed only contributes a J-domain to the activation process for Kar2p, then what is the source of the second signal? Since the missing signal involves interaction of a peptide or polypeptide with the polypeptide-binding site on the COOH-terminal domain of Kar2p (BiP), we suggest that it is the translocating polypeptide itself that supplies the other required signal for activation of the Kar2p ATPase. This proposal is consistent with the polypeptide binding specificity of the Hsp70 COOH-terminal domain as well as the presumed structure of translocating polypeptide chains. The available evidence indicates that Hsp70 proteins prefer to bind to extended polypeptide chains containing substantial hydrophobic character (25,47,49,61-63). Accordingly, translocating polypeptides associated with Sec63p and the ER translocation apparatus would be expected to be in an unfolded or partially folded state as they emerge from the lipid bilayer of the ER. In our proposal, simultaneous interaction of a molecule of the ATP-bound form of BiP (Kar2p) with both the translocating polypeptide and Sec63p would activate ATP hydrolysis by BiP. Recent findings suggest that such ATP hydrolysis by BiP would effectively lock the polypeptide substrate onto a BiP·ADP enzyme complex (34,64). Moreover, this stable Hsp70-polypeptide interaction, mediated in part by the J-domain of Sec63p, may render polypeptide translocation into the ER irreversible, a role that has also been suggested for Hsp70-polypeptide interactions that occur during protein translocation into mitochondria (65,66). The translocating polypeptide, presumably still in an unfolded or partially folded conformation, would be anticipated to remain firmly bound to BiP until the ADP present on the enzyme is exchanged for ATP (64). Since no GrpE homologue in the ER lumen has yet been identified, this nucleotide exchange step may be slow (23,64).
A number of instances have been described where it is DnaJ, rather than DnaK, that first binds to a protein substrate of this chaperone system. This situation was initially found for binding of DnaJ to a nucleoprotein preinitiation complex formed at the bacteriophage λ replication origin (26,27,30) and for binding of DnaJ to the P1 phage-encoded RepA replication initiator protein (67). DnaJ also has high affinity for the E. coli σ32 heat shock transcription factor (31,68), and there is experimental support for the idea that DnaJ may bind to nascent polypeptides as an early step in protein folding in vivo (33,69,70). In each of these cases, it seems likely that DnaJ may play roles both in recruiting one or more molecules of DnaK to the locale of the protein substrate and in subsequently facilitating the action of DnaK on the substrate.
While our data and that of others (52,59,71) provide clear biochemical evidence that the J-domain is critical to the process of Hsp70 recruitment and activation, the potential involvement of the Gly/Phe-rich segment of DnaJ in Hsp70 recruitment, suggested by the findings in this report, draws attention to a possible mechanistic problem. For example, we have concluded here that the Gly/Phe-rich segment of DnaJ provides one of the signals for DnaK activation by binding to the polypeptide binding site of DnaK. Thus, DnaK recruited to close spatial proximity of a protein substrate via interactions with DnaJ apparently would first have to release the Gly/Phe-rich segment of DnaJ before it could bind to its protein substrate. Our biochemical studies are consistent with this pathway for DnaK action. The inability of any of the synthetic peptides derived from the DnaJ Gly/Phe-rich region to stimulate ATP hydrolysis by DnaK to a significant extent suggests that the interaction of the DnaJ Gly/Phe-rich region with DnaK must be both weak and transient. Perhaps the interaction between DnaK and the J-domain present in the DnaJ-substrate complex is sufficiently strong to keep DnaK from dissociating completely from DnaJ until DnaK has had the opportunity to bind to the protein substrate. Moreover, a prediction of this model is that binding of DnaK to the protein substrate would be expedited by the high effective concentration of the substrate which would arise because both DnaK and the substrate are tethered to the same molecule of DnaJ. If it is presumed that DnaK is bound to the NH2-terminal J-domain and that the protein substrate interacts with the COOH-terminal domain of DnaJ, it is possible that the conformational flexibility of the Gly/Phe-rich segment linking these two structural domains of DnaJ is an important factor contributing to the optimization of DnaK-substrate interactions. Wall et al. (68) have recently suggested a similar model, as well as other possible scenarios, to explain the properties of a DnaJ deletion mutant protein that is missing the Gly/Phe-rich linker region.
We have provided evidence that a fragment of DnaJ consisting of the amino-terminal 105 amino acid residues is capable of activating ATP hydrolysis by DnaK. Nevertheless, our results indicate that the COOH-terminal domain of wild type DnaJ must play some role in the activation process. For example, the maximal rate constant of ATP hydrolysis by DnaK elicited by DnaJ1-106 saturates at a level more than 5-fold lower than that produced by wild type DnaJ (Fig. 5). Furthermore, the K_A for DnaJ1-106 in this process (4 µM) is approximately 20-fold higher than the K_A for wild type DnaJ (Table I and Fig. 5). This suggests that DnaK interacts more effectively with the full-length DnaJ polypeptide than with DnaJ1-106. However, the beneficial effect of the COOH-terminal domain of DnaJ on the interaction with DnaK may be indirect. It is conceivable that the COOH-terminal domain of DnaJ simply serves to lock or position the two required elements, i.e., the J-domain and the Gly/Phe-rich region, in a configuration that is optimal for interaction with DnaK. On the other hand, we have not rigorously excluded the possibility that the COOH-terminal domain of DnaJ directly enhances interactions with DnaK by providing other sequence elements that bind to the polypeptide binding site on DnaK. Our findings suggest, however, that if such elements exist, they must have relatively low affinity for the DnaK polypeptide-binding site. This conclusion is based on our finding that the DnaJ106-376 and DnaJ73-376 deletion mutant proteins, which each contain an intact COOH-terminal domain, fail to complement DnaJ1-75 for activation of ATP hydrolysis by DnaK.
It is interesting that the DnaJ1-106 mutant protein can support initiation of DNA replication in vitro, albeit at a much reduced level (52) (Fig. 7). The apparent specific activity of DnaJ1-106 in this process is approximately 1000-fold lower than that of wild type DnaJ (Table I). None of the other deletion mutant proteins described here could support detectable levels of DNA replication at the highest concentrations tested (Table I). Thus, the minimal sequence elements of DnaJ required for initiation of DNA replication include both the J-domain and the Gly/Phe-rich linker segment, which apparently must both be present in cis. Since these are the same two DnaJ sequence elements required for activation of ATP hydrolysis by DnaK, it is conceivable that DnaJ1-106 aids DNA replication simply by converting DnaK into a more active ATPase. Activated DnaK, presumably composed of a complex of DnaJ1-106 and DnaK, may have an improved capacity, because of its heightened ATPase activity, to bind directly to nucleoprotein preinitiation structures formed at oriλ. This nonspecific route would, in effect, by-pass the normal initiation pathway whereby DnaK is apparently recruited to bind at specific sites, i.e., at those locations where wild type DnaJ is already bound to the preinitiation complex assembled at the replication origin (26,27,30). The greatly lowered specific activity of DnaJ1-106 in initiation of DNA replication may well reflect the inability of this truncated DnaJ mutant to bind specifically to preinitiation nucleoprotein structures; as a consequence, DnaJ1-106, unlike wild type DnaJ, is not capable of directing DnaK to act at precise sites on specific substrate molecules. A recent characterization of the properties of a similarly truncated DnaJ polypeptide lends additional support to this interpretation (71). An amino-terminal fragment of DnaJ, DnaJ12 (equivalent to DnaJ2-108), was found to be capable of activating DnaK to bind to one of its physiological substrates, the σ32 heat shock transcription factor. Furthermore, in contrast to the behavior of wild type DnaJ, the DnaJ12 mutant protein was reported to be capable of activating DnaK to bind the σ32 polypeptide in the absence of any prior interaction of the DnaJ12 protein itself with this heat shock factor.
There are two discrepancies between the findings reported here and previously published data that merit further discussion. First, in contradiction to this report, it was previously concluded that the DnaJ J-domain alone was both necessary and sufficient to activate ATP hydrolysis by DnaK (52). This inference was based on the properties of a DnaJ deletion mutant protein, DnaJ12, composed of the first 108 amino acids of DnaJ. The properties of the DnaJ12 mutant protein appear to be nearly identical to those of the DnaJ1-106 protein described here. It is evident that the mutant DnaJ protein used to reach the earlier conclusion in fact contained not only the J-domain, but also the essential Gly/Phe-rich region as well. Second, Wall et al. (68) have recently described a DnaJ deletion mutant protein, DnaJΔ77-107, that is missing 31 amino acids covering the entire Gly/Phe-rich region. These authors demonstrated that this mutant protein, nevertheless, is still capable of activating the ATPase activity of DnaK. One possible explanation for the difference between our findings is that, as alluded to earlier, DnaJ may contain multiple sequence elements capable of interacting with the polypeptide-binding site on DnaK. In addition to the element reported here in the Gly/Phe-rich segment, other potential DnaK interaction sites could reside in the COOH-terminal structural domain of DnaJ. A second possibility is that the random six amino acid linker, HMGSHM, that replaced the Gly/Phe-rich segment as a consequence of the construction of the DnaJΔ77-107 deletion mutant protein (52), can itself serve as a polypeptide binding element for DnaK. Perhaps almost any unstructured and flexible polypeptide chain of sufficient length (i.e., greater than 5 amino acids (24)) covalently linked to the J-domain will support productive interactions between DnaJ and DnaK.
Which treatment to prevent an imminent fracture?
Purpose: To provide a summarized state of the art of the relative efficacy and rapidity of action of pharmacological treatments to prevent imminent osteoporotic fractures.
Methods: We reviewed meta-analyses (MA) and network meta-analyses (NMA) published during the last 10 years concerning the pharmacological treatment of osteoporosis. We compared the anti-fracture efficacy and the rapidity of action of various agents versus placebo and versus risedronate.
Results: All bisphosphonates decrease the incidence of vertebral fractures compared with placebo. Ibandronate is the only one without demonstrated efficacy against non-vertebral and hip fractures. Zoledronate, denosumab and anabolic therapy are associated with a greater fracture risk reduction than oral bisphosphonates. Compared with risedronate, which significantly reduces the rate of hip fractures, zoledronate, denosumab, teriparatide, abaloparatide and romosozumab are more efficient for vertebral fracture reduction but not for non-vertebral or hip fracture reduction. No studies have compared bone anabolic treatments with zoledronate or denosumab. Oral bisphosphonates significantly reduce fracture risk only after more than one year of therapy. A faster reduction of fracture risk is observed with zoledronate and denosumab, or with anabolic agents. For denosumab and anabolic agents, sequential treatment is required to maintain the gains after treatment withdrawal.
Conclusions: In patients at high risk of imminent fracture, starting therapy with potent antiresorptive agents or with an anabolic agent seems most appropriate to promptly reduce the fracture risk. Available NMA/MA suggest that, compared with zoledronate and denosumab, anabolic agents have a higher efficacy for vertebral fractures, but head-to-head studies are lacking.
Introduction
Osteoporotic fractures are a major and increasing cause of mortality, morbidity, loss of independence and altered quality of life worldwide (Alarkawi et al., 2020). Because of population aging, osteoporosis is among the most important health crises for industrialized countries, with a high cost of incident fragility fractures, estimated at €37 billion in the European Union and predicted to increase by 25% from now to 2025. The costs of treatment and long-term care of patients with fractures are considerably higher than those of pharmacological prevention, which remains largely underused (Hernlund et al., 2013).
Guidelines for the assessment and treatment of osteoporosis imperatively recommend work-up and treatment for patients after a first fragility fracture, with secondary fracture prevention as an obvious first step in the development of a systematic approach (Hernlund et al., 2013). The risk of recurrent fractures is maximal during the first two years after a fragility fracture (the "imminent fracture" period) and decreases gradually afterward (Kanis et al., 2020a). This concept of imminent fracture is therefore central to the categorization of very high risk and has implications for the choice of therapy: patients at high risk of imminent fractures are most in need of immediate treatment with agents that reduce fracture risk most efficiently and as promptly as possible. Hence the need to identify such agents.
Several anti-osteoporotic agents have high anti-fracture efficacy, proven in many randomized controlled trials (RCT): anti-resorptive drugs such as oral bisphosphonates, denosumab and zoledronate, and anabolic agents, either first-generation (teriparatide) or newer ones, namely abaloparatide and romosozumab. However, they differ in their potency and in the lag time before a significant fracture risk reduction is observed.
The aim of the present paper is to provide the reader with a summarized view of the relative potency and rapidity of action of the pharmacological treatments available to prevent osteoporotic fractures. For this purpose, we did not perform another meta-analysis but rather synthesized available meta-analyses (MA) and network meta-analyses (NMA) published in the last 10 years. We analysed the anti-fracture efficacy of active treatments versus placebo. To better appreciate differences in efficacy, the potency of the more recent agents (parenteral antiresorptives and anabolics) was also systematically compared with that of risedronate, chosen as representative of oral bisphosphonate activity.
Three explicit questions were defined: 1. Which treatment would be the most powerful to prevent fracture? 2. What are the fastest anti-osteoporotic agents to promptly reduce fracture risk? 3. How to maintain the early benefits of treatment?
Methods
A search of Scopus was performed to find NMAs and MAs published in the last 10 years. The language was limited to English for pragmatic reasons. Furthermore, the reference lists of studies selected for inclusion in the present review were screened for additional relevant reports.
Studies were eligible for this review if they met the following criteria: (a) MA/NMA included RCTs for which only postmenopausal women with primary osteoporosis or osteopenia were included; (b) one or more active agents were compared to placebo or to each other; and (c) the outcomes of interest (vertebral, hip, and nonvertebral fragility fractures) were reported as a primary or secondary outcome.
Three different types of pharmacological treatments were studied: 1) oral and parenteral bisphosphonates (alendronate, ibandronate, risedronate, and zoledronate), 2) denosumab, and 3) anabolic therapy (teriparatide, abaloparatide, and romosozumab). We present the efficacy versus placebo, the efficacy versus risedronate, and the head-to-head comparisons. Risedronate was chosen as a representative oral anti-resorptive treatment that has been shown in a placebo-controlled trial to reduce the rate of hip fractures (Barrionuevo et al., 2019; Murad et al., 2012).
Studies on the following topics were excluded: acute fracture care, high-energy fractures, fracture healing, secondary osteoporosis (including osteoporosis induced by glucocorticoid therapy or by cancer therapy), male osteoporosis, premenopausal osteoporosis, and studies in languages other than English.
We summarize the results of NMAs/MAs published during the last 10 years with available information on timing of action and efficacy of available osteoporosis treatments in relation to fracture risk reduction. No specific statistical analysis was performed.
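For orientation, the sketch below shows the core calculation that underlies the pooled risk ratios summarized in the following sections: per-trial log risk ratios combined by fixed-effect inverse-variance weighting. The trial counts are hypothetical placeholders, and the code is not taken from any of the cited MAs/NMAs, which use more elaborate (often Bayesian network) models.

```python
# Fixed-effect inverse-variance pooling of risk ratios across trials.
import math

# (events_treated, n_treated, events_placebo, n_placebo) per trial;
# the counts below are hypothetical.
trials = [(35, 1000, 70, 1000), (20, 800, 45, 790), (12, 500, 30, 510)]

num = den = 0.0
for a, n1, c, n0 in trials:
    log_rr = math.log((a / n1) / (c / n0))   # per-trial log risk ratio
    var = 1 / a - 1 / n1 + 1 / c - 1 / n0    # approximate variance of log RR
    weight = 1.0 / var                        # inverse-variance weight
    num += weight * log_rr
    den += weight

pooled_log_rr = num / den
se = math.sqrt(1.0 / den)
lo, hi = (math.exp(pooled_log_rr - 1.96 * se),
          math.exp(pooled_log_rr + 1.96 * se))
print(f"pooled RR = {math.exp(pooled_log_rr):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```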
Which treatment would be the most powerful to prevent fracture?
For vertebral fractures, all bisphosphonates showed efficacy in preventing fractures compared with placebo (see Fig. 1 and Supplementary Table 1). Zoledronate was associated with a greater reduction (RR = 0.28-0.42) than the three oral bisphosphonates: alendronate (RR = 0.45-0.65), risedronate (RR = 0.46-0.60) and ibandronate (RR = 0.46-0.67). The efficacy of denosumab was similar to that of zoledronate (RR = 0.30-0.32). With respect to anabolics, the reduction of fracture risk following treatment with teriparatide and abaloparatide was substantially greater than that after anti-resorptives, even zoledronate or denosumab, with RRs of 0.23-0.31 and 0.13-0.15 versus placebo, respectively. In a head-to-head comparative trial (Kendler et al., 2017), teriparatide was more efficient than risedronate for prevention of vertebral fractures (RR = 0.44, p < 0.0001) (see Table 1). Moreover, the VERO study showed that the antifracture efficacy was superior in a subgroup of patients with imminent fracture risk (those with a prior clinical vertebral fracture (VFx) in the year before entering the study), with a reduction of new VFx, new and worsened VFx, and clinical fractures by 65%, 68%, and 62%, respectively, in patients treated with teriparatide as compared with risedronate (Geusens et al., 2018).
No difference in efficacy was apparent between risedronate and the other oral bisphosphonates. Zoledronate, however, was more efficient, at the same level as denosumab. The three anabolic agents were slightly more potent than the parenteral antiresorptives, particularly abaloparatide (for which only few data are available). The NMA of Ding et al. (2020) is the only one to assess the comparative anti-fracture effectiveness of the various drugs according to the proportion of prevalent vertebral fractures (PVF): in the subgroup where more than 50% of the patients had a PVF, the greatest risk reduction was obtained with romosozumab (RR = 0.28); in the subgroup where PVF was <50%, abaloparatide was the most potent (RR = 0.16).
For non-vertebral fractures, the bisphosphonates associated with a significant reduction in fractures were alendronate (RR = 0.53-0.83), risedronate (RR = 0.55-0.81) and zoledronate (RR = 0.30-0.76) (see Fig. 2 and Supplementary Table 1). Ibandronate did not reduce non-vertebral fractures in the majority of NMA/MA. Comparable efficacy was obtained for teriparatide, denosumab and abaloparatide (RR = 0.31-0.81, 0.26-0.63 and 0.13-0.54, respectively). The few head-to-head comparisons showed no significant differences between the evaluated drugs (see Table 2). However, in the study of Saag et al. (2017), the risk of non-vertebral fractures was 19% lower in the romosozumab-to-alendronate group than in the alendronate-to-alendronate group (P = 0.04). Cosman et al. (2016) did not observe a significant reduction of fracture risk with romosozumab within 12 months (p = 0.10) or 24 months (p = 0.06). In a post hoc analysis of the role of regional background fracture risk (Cosman et al., 2018), risk reductions were observed in the "rest-of-world" region (p = 0.012), with no treatment effect observed in Latin America. For hip fractures (Fig. 3), romosozumab followed by alendronate reduced the risk to a greater extent than alendronate alone (P = 0.02).
None of the parenteral drugs were apparently more potent than risedronate for non-vertebral and hip fracture prevention (see Supplementary Table 2 and Fig. 4).
What are the fastest anti-osteoporotic agents to reduce fracture risk?
For vertebral fractures, a significant reduction of fracture risk was only demonstrated after more than one year of treatment with oral bisphosphonates (Black et al., 2000; Chesnut et al., 2004; Harris et al., 1999; Liberman et al., 1995) (Table 3): after the first year for risedronate (p < 0.001) (Harris et al., 1999) and alendronate (Cosman et al., 2018), and during the second year for ibandronate (p < 0.001) (Chesnut et al., 2004). With zoledronate (Dennis et al., 2007) and denosumab (Steven et al., 2007), a significant risk reduction was already observed after 6 months (p < 0.001). The protective effect of teriparatide became evident after 9 to 12 months (Neer et al., 2001). With romosozumab, a significant reduction of the risk of vertebral fracture was obtained within 12 months (P < 0.001) (Cosman et al., 2016). Abaloparatide had a similar efficacy (p < 0.001), but no data are available on the rapidity of action (Cosman et al., 2017; Miller et al., 2016a). However, only a small number of fracture events occurred across treatment groups, with the event rate in the placebo group being smaller than anticipated. Moreover, the result could be influenced by the fact that 63% of participants had a prior fracture.

Table 1. Vertebral fracture data reported in head-to-head studies.
In the few available head-to-head comparisons, both teriparatide and romosozumab appear more efficient than an oral bisphosphonate already during the first year (Saag et al., 2017; Body et al., 2020). Body et al. (2020) compared teriparatide with risedronate; the largest difference in incidence rates of clinical fractures occurred during the 6- to 12-month period (p = 0.03). With regard to romosozumab, in the study of Saag et al. (2017), a significantly lower risk was observed in the romosozumab-to-alendronate group than in the alendronate-to-alendronate group (P < 0.001), but the difference was significant only after the first year of treatment (p = 0.003). Comparing the efficacy of abaloparatide and teriparatide, Miller et al. (2016a) found a significant reduction in the risk of new vertebral fractures in the abaloparatide versus placebo comparison (RR 0.14, P < 0.001) but provided no data on the rapidity of the effect.
For non-vertebral fractures, the effect of oral bisphosphonates became significant after the first year only (Black et al., 2000; Chesnut et al., 2004; Harris et al., 1999; Liberman et al., 1995) (Table 3). For zoledronate (Dennis et al., 2007) and denosumab (Steven et al., 2007), a significant reduction of the risk of fractures was already observed after 6 months (p < 0.001). Regarding the effect of romosozumab, in the study of Saag et al. (2017), a significantly lower risk (19%) was observed in the romosozumab-to-alendronate group than in the alendronate-to-alendronate group for new non-vertebral fractures during the second year of treatment (p = 0.037). With abaloparatide, Miller et al. (2016a, 2016b) obtained an early significant reduction (RR = 0.57, p = 0.049). Kaplan-Meier curves for time to first nonvertebral fracture showed early separation (before 12 months) between the abaloparatide group and both the placebo and teriparatide groups. The curve of the abaloparatide group continued to diverge from the placebo group and maintained consistent separation from the teriparatide group over the full course of the 18-month trial (Cosman et al., 2017; Miller et al., 2016a, 2016b). For hip fractures, the same delay as for vertebral and non-vertebral fractures was observed with bisphosphonates and denosumab. For teriparatide, the effect became significant after 6 months (p < 0.05) (Eriksen et al., 2014). No significant effect was observed for romosozumab at 1 and 2 years by Cosman et al. (p = 0.18 and 0.06, respectively) (Cosman et al., 2016). However, in the study of Saag et al. (2017), hip fractures occurred in 2.0% of patients in the romosozumab-to-alendronate group as compared with 3.2% in the alendronate-to-alendronate group at the time of the primary analysis, representing a 38% lower risk with romosozumab (hazard ratio, 0.62; 95% CI, 0.42 to 0.92; P = 0.02) during the second year.
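The time-to-first-fracture curves discussed above are Kaplan-Meier estimates. A minimal sketch of the estimator follows, with hypothetical event times and censoring flags standing in for trial data.

```python
# Kaplan-Meier estimate of the probability of remaining fracture-free.
def kaplan_meier(times, events):
    # events[i] is 1 if a first fracture occurred at times[i] (months),
    # 0 if the subject was censored at that time.
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n = 0
        while i < len(order) and times[order[i]] == t:   # handle ties
            d += events[order[i]]
            n += 1
            i += 1
        if d:
            survival *= 1.0 - d / at_risk
            curve.append((t, survival))
        at_risk -= n
    return curve

times = [3, 5, 6, 6, 9, 12, 12, 15, 18, 18]    # hypothetical follow-up (months)
events = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]        # 1 = fracture, 0 = censored
print(kaplan_meier(times, events))
```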
How to maintain the early benefits of treatment?
Because they are stored in bone, the anti-fracture effect of bisphosphonates (oral or parenteral) persists for several months or years after they are stopped. This is not the case with denosumab or anabolics. When denosumab is withdrawn, there is a rebound of bone turnover and bone loss, and several cases of multiple vertebral fractures have been described (3.4% of patients in the post hoc analysis of the FREEDOM trial) (Anastasilakis et al., 2017; Cummings et al., 2018; Popp et al., 2016). Cummings et al. reported that the odds of developing multiple vertebral fractures after stopping denosumab were 3.9 times higher in those with prior vertebral fractures, sustained before or during treatment, than in those without, and 1.6 times higher with each additional year of off-treatment follow-up. Thus, denosumab should be given lifelong (Hansen et al., 2020) or, if stopped, replaced with another potent antiresorptive. Results of studies are still limited and controversial: a single infusion of zoledronate did not prevent bone loss after discontinuing treatment in the studies of Sølling and Horne (Horne et al., 2018; Sølling et al., 2020); conversely, Anastasilakis et al. (2017) showed that a single intravenous infusion of zoledronate given 6 months after the last denosumab injection prevented bone loss for at least 2 years, independently of the rate of bone turnover.
The optimal timing to start a bisphosphonate treatment after denosumab is unknown, as are the dosage and the duration of treatment. A recent review (Tsourdi et al., 2021) concluded that the duration of denosumab treatment is an important determinant of the extent of the rebound phenomenon. A short duration of denosumab treatment (i.e. up to 2.5 years) in patients with otherwise low fracture risk could justify treatment with an oral bisphosphonate for 1-2 years. Patients who have been treated with denosumab for a longer period (i.e., more than 2.5 years) or who are at persistently high risk for fracture should receive zoledronate.
(Table 2: Non-vertebral fracture data reported in head-to-head studies.)
Because the use of anabolic drugs for postmenopausal osteoporosis is limited to 12 to 24 months and the beneficial anti-fracture effect of anabolic therapy decreases rapidly when the treatment is stopped, a sequential treatment with antiresorptive therapy is required (Eastell et al., 2019; McClung et al., 2018). In postmenopausal women who have completed a course of teriparatide or abaloparatide, guidelines recommend treatment with antiresorptive therapy to maintain bone density gains (V; Shoback et al., 2020). In the case of abaloparatide, efficacy was maintained with a subsequent 24-month treatment with alendronate; 18 months of abaloparatide followed by 24 months of alendronate reduced the risk of vertebral, non-vertebral, clinical, and major osteoporotic fractures (Bone et al., 2017). For romosozumab, 12 months of alendronate after 12 months of romosozumab showed superior efficacy on fracture outcomes compared with alendronate alone (Saag et al., 2017). Treatment effects of romosozumab are reversible upon discontinuation and are further augmented by denosumab (McClung et al., 2018): women receiving romosozumab who transitioned to denosumab continued to accrue BMD, with additional mean gains of 2.6% at the lumbar spine, 1.9% at the total hip, and 1.4% at the femoral neck, whereas BMD returned toward pretreatment levels with placebo.
Discussion
There is a substantial body of evidence that the risk of a subsequent osteoporotic fracture is highest immediately after the index fracture and wanes progressively with time (Kanis et al., 2020b). Therefore, the first 2 years after the index event constitute a period of "imminent risk", which requires that a pharmacological treatment be given as soon as possible. Moreover, the chosen treatment should be the most efficient at reducing the risk and should act promptly. The available pharmacologic treatments can be classified as anti-resorptive drugs (oral and parenteral bisphosphonates or inhibitors of RANK-ligand) and anabolic agents (activators of the PTH receptor and sclerostin inhibitors). These treatments differ in their mechanism of action and do not have the same power to reduce fracture risk. The lag time before a significant risk reduction is observed is also variable. A number of RCTs, MAs and NMAs have been published on their relative efficiency and timing of action. The aim of the present narrative review was to summarize their results in order to help choose the best therapeutic approach for the prevention of imminent fractures.
The NMAs and MAs showed that all pharmacological treatments significantly reduce fracture risk. All bisphosphonates decrease the incidence of vertebral fractures compared with placebo. In contrast to the other oral bisphosphonates, ibandronate has no significant efficacy against non-vertebral and hip fractures in the majority of NMAs/MAs. Zoledronate and denosumab are associated with a greater fracture risk reduction than the oral bisphosphonates. Anabolic therapies (romosozumab, abaloparatide or teriparatide) are more efficient for fracture risk reduction than an oral bisphosphonate. Compared with risedronate, chosen as representative of oral anti-resorptive treatments because of its proven efficacy in reducing fracture risk, zoledronate, denosumab, teriparatide, abaloparatide and romosozumab are more efficient for the reduction of vertebral fractures but not of non-vertebral and hip fractures. Therefore, given their greater anti-fracture efficacy on vertebral fractures, zoledronate, denosumab or an anabolic treatment would be a better option than oral bisphosphonates for patients at high and imminent risk of such fractures.
Regarding the rapidity of action, a significant reduction of fracture risk was demonstrated only after more than one year of treatment with oral bisphosphonates. A faster reduction of fracture risk is observed with more potent antiresorptive agents, intravenous zoledronate and denosumab, or with anabolic agents.
The rapidity of action of these parenteral antiresorptives is probably due to their much faster inhibition of bone remodeling (within a week), compared with the 3-6 months it takes to achieve remodeling inhibition with oral agents. These drugs have a protective effect already during the first year, especially for non-vertebral fractures, and should be recommended for patients at very high risk of imminent fracture, even if they are more costly. Indeed, Davis et al. (2020) showed in a systematic review and economic evaluation that the incremental cost-effectiveness ratios for the newer treatments are generally greater than the commonly applied threshold of £20,000-30,000 (23,000-34,000 €) per quality-adjusted life-year. However, the incremental cost-effectiveness ratio for denosumab may fall below £30,000 (34,000 €) per quality-adjusted life-year at very high levels of risk or for high-risk patients with specific characteristics. Nevertheless, a major problem arises from the fact that the beneficial anti-fracture effect of anabolic therapy and denosumab is reversible and quickly disappears when therapy is stopped (Eastell et al., 2019; McClung et al., 2018). Thus, when these treatments are discontinued, a bisphosphonate should be given to avoid a rebound of fracture risk after denosumab (Hansen et al., 2020), and an anti-resorptive to maintain the gains after an anabolic agent (Shoback et al., 2020). The optimal regimen to prevent rebound after denosumab, particularly if it has been given for long periods, has yet to be investigated.
Unfortunately, despite the availability of effective treatments, the prescription of and adherence to osteoporosis therapy after a sentinel fracture are low (around 20% of eligible patients) and declining (Iconaru et al., 2020). The estimated probability of osteoporosis medication use in the year after hip fracture decreased significantly from 40% to 21% over a 10-year study period (Kanis et al., 2017; Solomon et al., 2014). This highlights the urgent need for additional education of the medical profession and patients regarding the risk-benefit balance of treatment (Iconaru et al., 2020).
A limitation of our review is that we could not find enough specific studies in which the efficacy of treatment was investigated in patients with a recent fracture. However, studies of efficacy based only on patients with a recent index fracture would be quite impractical. We hypothesize that published data in patients with osteoporosis can be applied to those with an imminent fracture risk. Another limitation is the small number of studies and subjects with the newer agents, romosozumab and abaloparatide, which could explain the homogeneity of results concerning these treatments in the analysed MAs/NMAs. On the other hand, there are only a limited number of head-to-head studies that would allow a better comparison of the drugs' efficiency and rapidity of action. Additionally, no studies have compared bone anabolic treatments with zoledronate or denosumab, so it was not possible to analyse the benefit-risk ratios of the anabolics compared with these drugs. Moreover, these analyses were done in different populations, and there may be differences in many characteristics of the trials, other than the therapy, accounting for differences in fracture incidence.
(Table 3: Minimal duration of treatment before obtaining a significant risk reduction for (a) vertebral fractures, (b) hip fractures and (c) non-vertebral fractures according to the included studies; *p < 0.05, **p < 0.01, ***p < 0.001, NS: not significant, NA: not analysed.)
In conclusion, in patients at high risk of imminent fracture, starting therapy with a potent antiresorptive agent (intravenous zoledronate or denosumab) or an anabolic agent seems most appropriate to promptly reduce the fracture risk, because of their higher potency and faster effect on fracture risk reduction. In the absence of head-to-head studies comparing anabolic treatments with zoledronate and denosumab, the synthesis of NMAs/MAs suggests a higher efficacy of anabolics for the prevention of vertebral fractures, a moderate advantage for non-vertebral fractures, and insufficient data for hip fractures. For denosumab and anabolics, a sequential treatment is required to maintain the gains after treatment withdrawal, but the optimal regimen of these treatments remains to be defined with certainty. As these treatments are much more costly, a rigorous selection of patients is needed, underlining the need to develop a model for predicting imminent fractures.
Transparency document
The Transparency document associated with this article can be found in the online version.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
2021-08-13T05:19:52.549Z
|
2021-07-09T00:00:00.000
|
{
"year": 2021,
"sha1": "7faf0336476243f43f2066855d9814865bcc97d3",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.bonr.2021.101105",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7faf0336476243f43f2066855d9814865bcc97d3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119076130
|
pes2o/s2orc
|
v3-fos-license
|
Computer-automated tuning of semiconductor double quantum dots into the single-electron regime
We report the computer-automated tuning of gate-defined semiconductor double quantum dots in GaAs heterostructures. We benchmark the algorithm by creating three double quantum dots inside a linear array of four quantum dots. The algorithm sets the correct gate voltages for all the gates to tune the double quantum dots into the single-electron regime. The algorithm only requires (1) prior knowledge of the gate design and (2) the pinch-off value of the single gate $T$ that is shared by all the quantum dots. This work significantly alleviates the user effort required to tune multiple quantum dot devices.
Electrostatically defined semiconductor quantum dots have been the focus of intense research for the application of solid-state quantum computing [1][2][3]. In this architecture, quantum bits (qubits) can be defined by the spin state of an electron. Recently, several experiments have shown coherent manipulation of such spins for the purpose of spin-based quantum computation [4][5][6][7][8]. Enabled by advances in device technology, the number of quantum dots that can be accessed is quickly increasing from very few to many [9,10]. To date, all these quantum dots have been tuned by 'hand'. This is a slow process whereby gate voltages are tweaked carefully, first to reach a regime with one electron in each of the dots, and then to adjust the strength of all the tunnel barriers. Defects and variations in the local composition of the heterostructure lead to a disordered background potential landscape, which must be compensated for by the gate voltages. On top of that, the cross-capacitance of each gate to neighboring dots increases the tuning complexity as the number of dots increases. The ability to tune these dots automatically with computer algorithms, including tuning of many dots in parallel, is an important ingredient towards the scalability of this approach to create a large-scale quantum computer.
In this Letter, we demonstrate the computer-automated tuning of double quantum dot (DQD) devices. We have created an algorithm that only requires as input: (1) prior knowledge of the gate design, which is reasonable for future large-scale quantum dot circuits, and (2) the measured pinch-off value of the single gate T shared by all the quantum dots. We describe the algorithm used and verify its robustness by creating three independent DQDs inside a quadruple dot array. The algorithm finds the correct gate voltages to tune all DQDs into the single-electron regime, and the computer recognizes that this goal has been achieved within an overnight measurement.
A scanning electron microscopy (SEM) image of a device nominally identical to the one used is shown in Fig. 1(a). Gate electrodes fabricated on the surface of a GaAs/AlGaAs heterostructure are biased with appropriate voltages to selectively deplete regions of the twodimensional electron gas (2DEG) 90 nm below the surface and define the quantum dots. The main function of each gate is as follows: gates L and R set the tunnel coupling with the left and right reservoir, respectively. D1 − D3 control the three inter-dot tunnel couplings and P 1 − P 4 are used to set the electron number in each dot. However, each gate influences the other parameters as well. Changing L for example, will also change the electron number in dot 1 and influence the inter-dot tunnel barrier between dot 1 and 2. This needs to be taken into account by the algorithm. Two other nearby quantum dots on top of the qubit array, sensing dot 1 and 2 (SD1 and SD2), are created in a similar way and function as a capacitively coupled charge sensor of the dot array. When positioned on the flank of a Coulomb peak, the conductance through the sensing dot is very sensitive to the number of charges in each of the dots in the array. Changes in conductance are measured using radiofrequency (RF) reflectometry [11]. High-frequency lines are connected via bias-tees to gates P 1, P 3 and P 4. The device was cooled inside a dilution refrigerator to a base temperature of ∼15 mK. All measurements were taken at zero magnetic field.
Before running the algorithm the user is required to input a range of T -values for which the algorithm should try to find DQDs. This range is currently determined by measuring the pinch-off value of T manually, and then choosing a set of gate voltages more negative than this pinch-off value. The pinch-off value can for example be determined by setting all other gates to 0 mV and next measuring the current from O1 to O4 (other ohmics open) whilst sweeping T . This step could be automated in future work.
The algorithm consists of 3 steps: (1) to determine the starting values for the gate voltages, we first measure the pinch-off characteristic between each individual gate and the shared T-gate. Based on those results we (2) create single quantum dots, and (3) combine these into double quantum dots and check whether the single-electron regime has been reached. To measure the pinch-off characteristic we apply a small voltage bias (∼ 500 µV) to O4 and measure the current I_array through the quadruple dot array. Variations in the local composition of the heterostructure underneath each gate will be reflected in the required voltage to create quantum point contacts (QPCs). We term this voltage the transition value, V^tr_gate,i, which is defined as the gate voltage for which I_array is at ∼ 30% of its maximum value (see Supplementary Information II A). This procedure is repeated for a range of T-values. Figs. 1(b-d) show an example for T = −400 mV and the gates controlling the leftmost dot (L, P1 and D1). In practice, it is best to continue with the most positive T-value that still allows pinch-off for all gates. In our experience this tends to create better quantum dots for this gate design.
We start by creating single quantum dots, as they already include much of the cross-talk between gates, dots and barriers that is present in double dots. To create single quantum dots we apply a fixed voltage to the plunger gate (usually -80 mV), which we know is appropriate for this device design, and use the transition values of the barrier gates as input for a 2D coarse scan. A suitable scan range is [V^tr_gate,i - 10 mV, V^tr_gate,i + 400 mV]. We again monitor I_array. The structure of these scans is always similar: for negative gate voltages the channel is closed, so there is no current; for more positive voltages the channel is open, so there is a large current. We fit a tetragon to the area corresponding to large current, see Fig. 2(a) for an example of the leftmost dot (details can be found in Supplementary Information II B). We next take a finer scan of the area closest to the tetragon corner with the most negative gate voltages, see Figs. 2(b-e). In the experiments we have performed, this point always shows the start of quantum dot formation through the appearance of a Coulomb peak. We use this point as the starting point in gate-space for creating DQDs. The exact location of the Coulomb peak is determined using a Gabor filter and is shown as black dots in Figs. 2(b-e). When going to double dots, transport measurements are not suitable, as current levels through few-electron double dots are impractically low for this device design. Therefore, once the single dots have been formed, we tune the SDs in a similar way. They can then be used for non-invasive charge sensing, which does allow one to distinguish single-electron transitions in the dot array through RF reflectometry. To achieve a high sensitivity it is important that the SD is tuned to the flank of one of its Coulomb peaks. After finding a Coulomb peak for the SD in a similar way as described for the qubit dots, we make a 1D scan of the plunger gates, see Fig. 2(f). Each detected Coulomb peak is given a score based on its height and slope that allows the algorithm to continue with the most sensitive operating point for the corresponding plunger gate (see Supplementary Information II C).
With the SD tuned we create a double dot in the following way: first we set the voltages of the gates for the double dot to the values found for the individual single dots (black dots in Figs. 2(b-e)). For the single gate shared by the two individual dots (e.g. gate D1 for the leftmost double dot) the average of the two values is used. Next, we record a charge stability diagram of the double dot structure by varying the two plunger gate voltages involved. We use a heuristic formula to determine the correct scan range that takes into account the capacitive coupling of the gates to the dots (see Supplementary Information II D). Typical results for such scans are shown in Fig. 3(a-c). Scans involving two plungers are measured by applying a triangular voltage ramp to the plungers on the horizontal axis using an arbitrary waveform generator, and by stepping the other plunger gate using DACs [12]. Whilst stepping the latter gate we also adjust the sensing dot plunger gate to compensate for cross-capacitive coupling and thereby improve the operating range of the SD.
To verify that the double dot has reached the singleelectron regime, the algorithm first detects how well specific parts of the charge stability diagrams match the shape of a reference cross (see inset of Fig. 3). Each match should ideally correspond to the crossing of a charging line from each dot. The shape of the reference cross is derived from the various capacitive couplings, which follow from the gate design and are known approximately from the start. Instead of detecting crosses, one could also try to detect the individual charge-transition lines. This turned out to be more sensitive to errors for two reasons: (1) Extra features in the charge stability diagrams that do not correspond to charging lines are wrongfully interpreted as dot features. (2) Not all charging lines are straight across the entire dataset; this makes it harder to interpret which line belongs to which dot. The cross-matching algorithm is robust against such anomalies because of the local, instead of global, search across the dataset. In future work it could actually be useful to still detect these extra and/or curved lines. They could give information about e.g. unwanted additional dots and aid in determining the electron numbers in regions with higher tunnel couplings. For the current goal of finding the single-electron regime this extra information is not required.
Next, the algorithm checks whether, within a region slightly larger than 70×70 mV^2, it finds other charge transitions at more negative gate voltages with respect to the most bottom-left detected cross (see Supplementary Information II D). These regions are depicted by the green tetragons in Fig. 3. If no extra transitions are detected, the single-electron regime has been found and the result is given a score of 1 for that specific measurement outcome. If extra transitions are found, the algorithm outputs the score 0. In both cases this is where the algorithm stops. At the end of the run the user can see the measurement results for the various initial choices of T and select the best one.
All combined, the running of this complete algorithm (for a single value of the T -gate) takes ∼ 200 minutes. Per device typically 5 T -values are tested. In practice we have observed that for some cooldowns of the sample the algorithm could not attain the single-electron regime. A thermal cycle combined with different bias cooling [13] can significantly influence the tuning and solve this issue; just as for tuning done by hand. The key difference is that with the computer-aided tuning hardly any user effort is required to explore tuning of double dots to the few-electron regime. In future work the time required for automated tuning (as well as for tuning by hand) can be further reduced by also connecting the tunnel barrier gates of each single dot to a high-frequency line which would allow much faster scans for Figs. 1-2 [14]. These scans currently form the bottleneck in the overall tuning process. Future experiments will also address the automated tuning of more than two dots and the tuning of the tunnel couplings in between dots and their reservoirs, which are key parameters for operating dots as qubit devices.
In summary, we have demonstrated computer-automated tuning of double quantum dot devices into the single-electron regime. This work will simplify the tuning of dots in the future and forms the first step towards automated tuning of large arrays of quantum dots.
I. METHODS AND MATERIALS
The experiment was performed on a GaAs/Al_0.307Ga_0.693As heterostructure grown by molecular-beam epitaxy, with a 90-nm-deep 2DEG with an electron density of 2.2 · 10^11 cm^-2 and a mobility of 3.4 · 10^6 cm^2 V^-1 s^-1 (measured at 1.3 K). The metallic (Ti-Au) surface gates were fabricated using electron-beam lithography. The device was cooled inside an Oxford Triton 400 dilution refrigerator to a base temperature of 15 mK. To reduce charge noise the sample was cooled while applying a positive voltage on all gates (ranging between 100 and 400 mV) [13]. Gates P1, P3 and P4 were connected to homebuilt bias-tees (RC = 470 ms), enabling application of d.c. voltage bias as well as high-frequency voltage excitation to these gates. Frequency multiplexing combined with RF reflectometry of the SDs was performed using two LC circuits matching a carrier wave of frequency 107.1 MHz for SD1 and 86.4 MHz for SD2. The inductors are formed by microfabricated NbTiN superconducting spiral inductors with an inductance of 3.2 µH (SD1) and 4.6 µH (SD2). The power of the carrier wave arriving at the sample was estimated to be -93 dBm. The reflected signal was amplified using a cryogenic Weinreb CITLF2 amplifier and subsequently demodulated using homebuilt electronics. Data acquisition was performed using an FPGA (DE0-Nano Terasic) and digital multimeters (Keithley). Voltage pulses to the gates were applied using a Tektronix AWG5014.
II. SOFTWARE AND ALGORITHMS
The software was developed using Python [1] with SciPy [2].
The image processing is performed in pixel coordinates. We specify the parameters of algorithms in physical units such as mV. The corresponding parameter in pixel units is then determined by translating the value using the scan parameters. By specifying the parameters in physical units the algorithms remain valid if we make scans with a different resolution. Of course making scans with a different resolution can lead to differences in rounding of numbers leading to slightly different results.
A. Determination of the transition values
To determine the transition values we perform the following steps: • Determine the low value (L) and the high value (H) of the scan by taking a robust minimum and maximum. For L this is done by taking the 1st percentile of the values. H is determined by first taking the 90th percentile of the scan data, H_0, and then the 90th percentile of all the values larger than (L + H_0)/2. This two-stage process to determine H also works well when the pinch-off occurs at very positive gate voltages. Simply taking, for example, the 99th percentile of the scan data could then result in a too low estimate.
• Smoothen the signal and find the first element in the scan larger than 0.7L + 0.3H. The position of that value is selected as the transition value.
• Perform several additional checks. The above two steps will always result in a transition value, even when the channel is completely open or closed. The checks include, amongst others: -If the transition value is near the left border of the scan we conclude that the transition has not been reached. We then set the transition value to the lowest value of the gate that has been scanned. In principle the algorithm could continue to search for a transition value at more negative gate voltages. However, making gate voltages too negative may induce charge noise in the sample, so we do not want to apply very negative voltages. Choosing the most negative voltage of the scan range then turns out to be a good choice. In the next steps of the algorithm, this transition voltage is just a starting value and the gate voltage will still be varied. Due to cross-talk, the neighboring gates in follow-up steps will, together with the gate that did not yet close, typically still ensure the formation of single dots.
-The difference between the mean of the measured values left of the transition value and the mean of the values right of the transition value should be large enough. Large enough means more than 0.3 times the standard deviation of all the values in the scan. If it is not large enough, we set the transition value to the lowest value of the gate that has been scanned, following a similar reasoning as for the previous check. In this scenario we assume that the scan range started at a voltage around 0 mV, and thus that no significant change in the measured current corresponds to a channel that was always open.
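For illustration, the procedure above can be condensed into a short Python/NumPy routine. The percentile choices, the 0.7L + 0.3H threshold and the 0.3-standard-deviation check follow the description above; the function name, the smoothing window and the "near the left border" margin are illustrative choices, not the exact ones used in our implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def find_transition_value(gate_voltages, current, smooth_pts=5):
    """Estimate the transition (pinch-off) voltage of a 1D gate scan.

    `gate_voltages` is assumed to be sorted from most negative to most
    positive; `current` is the measured I_array at each voltage.
    """
    current = np.asarray(current, dtype=float)
    # Robust low value: 1st percentile of all samples.
    L = np.percentile(current, 1)
    # Two-stage robust high value: 90th percentile, then the 90th percentile
    # of the samples above the midpoint, so a late pinch-off does not bias H.
    H0 = np.percentile(current, 90)
    upper = current[current > (L + H0) / 2]
    H = np.percentile(upper, 90) if upper.size else H0
    # Smooth and find the first sample above the 0.7*L + 0.3*H threshold.
    smoothed = uniform_filter1d(current, size=smooth_pts)
    above = np.nonzero(smoothed > 0.7 * L + 0.3 * H)[0]
    if above.size == 0 or above[0] < 3:
        # Transition near the left border or never reached: fall back to the
        # most negative scanned gate voltage, as described above.
        return gate_voltages[0]
    # The open and closed regions must differ by enough signal.
    left, right = smoothed[:above[0]], smoothed[above[0]:]
    if abs(right.mean() - left.mean()) < 0.3 * current.std():
        return gate_voltages[0]
    return gate_voltages[above[0]]
```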
B. Analysis of single dots
As described in the main text, the initial analysis of a 2D scan is performed by fitting a tetragon to the image. In this scan we search for the Coulomb peaks using a Gabor filter [3,4]. A Gabor filter is a sinusoidal wave multiplied by a Gaussian function. We define G(x, y, λ, θ, ψ, σ, γ) = exp(−(x′² + γ²y′²)/(2σ²)) cos(2πx′/λ + ψ), with x′ = cos(θ)x + sin(θ)y and y′ = −sin(θ)x + cos(θ)y. We create a Gabor filter with the following parameters: the orientation is set to θ = π/4, the standard deviation of the Gaussian is σ = 12.5 mV, γ = 1, λ = 10 mV, ψ = 0. A rectangular image patch of size 40 × 40 mV^2 is created using this Gabor function (see inset of Fig. 5). The response of the Gabor filter to the 2D scan of Fig. 5 is shown in Fig. 6 (we use the OpenCV function matchTemplate with method TM_CCORR_NORMED). From the response figure it is clear that there is a peak (red color) at the location of the Coulomb peak. To extract the precise location of the peaks we threshold the image with a value automatically determined from the response image and determine the connected components, i.e. the pixels that together constitute a relevant feature. The center of the component
furthest into the direction of the closed region is selected as the best Coulomb peak (green point). The concept of connected components for a binary image is standard in the computer vision community. We find these using OpenCV, see http://docs.opencv.org/3.0.0/d0/d7a/classcv_1_1SimpleBlobDetector.html, but any implementation will return the same results. If more than one Coulomb peak is present there are two possibilities: 1. The peaks are converted into separate connected components. Then, the blob furthest into the direction of the closed region of the scan is selected.
2. One of the two peaks might be much stronger than the other. In that case the weaker peak falls below the threshold selected by the algorithm and will not be visible.
In our experiments we observed that for the single dots at most a couple of Coulomb peaks are visible. Scans of the region of Fig. 5 with a charge sensor instead of the current through the array confirm that the last Coulomb peak visible in the image is indeed the Coulomb peak corresponding to the zero-to-one electron transition. This behavior is typical for this specific gate design and influences the choice for the plunger gate values for the DQD scans such as shown in Fig. 3 to find the single-electron regime, see also section II D.
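The Gabor-filter step can be sketched as follows. The matchTemplate call with TM_CCORR_NORMED follows the note above; building the patch with cv2.getGaborKernel, the conversion from mV to pixels and the mean + 2σ threshold are illustrative assumptions rather than the exact implementation used here.

```python
import cv2
import numpy as np

def locate_coulomb_peak(scan, mv_per_px, patch_mv=40.0):
    """Locate the Coulomb peak closest to the closed region of a 2D scan.

    `scan` is the 2D current map; `mv_per_px` converts the Gabor parameters,
    which are given in mV in the text, to pixel units.  Assumes the most
    negative gate voltages correspond to the lowest pixel indices.
    """
    ksize = int(round(patch_mv / mv_per_px)) | 1            # odd patch size
    kernel = cv2.getGaborKernel((ksize, ksize),
                                sigma=12.5 / mv_per_px,
                                theta=np.pi / 4,
                                lambd=10.0 / mv_per_px,
                                gamma=1.0, psi=0.0).astype(np.float32)
    response = cv2.matchTemplate(scan.astype(np.float32), kernel,
                                 cv2.TM_CCORR_NORMED)
    # Threshold the response and extract connected components (blobs).
    thr = response.mean() + 2.0 * response.std()             # illustrative
    binary = (response > thr).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    if n <= 1:
        return None                                           # no peak found
    # Pick the blob centre furthest towards the closed (most negative) corner.
    centres = centroids[1:]                                   # drop background
    best = centres[np.argmin(centres.sum(axis=1))]
    return best + ksize // 2          # offset for the matchTemplate cropping
```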
C. Selection of Coulomb peaks
In this section we describe how the selection of Coulomb peaks for the SD is performed. We start with a scan of the plunger gate SDxb (x is 1 or 2) in a configuration for which Coulomb peaks can be expected. The peaks in the data are detected by selecting the local maxima in the plot. All peaks below a minimum threshold are discarded. For each of the peaks the position of the peak half-height on the left and right side is determined. Also the bottom of the peak is determined (see details at the end of this section). From these values we can determine the peak properties such as the height and the half-width of the peak. Finally the peaks are filtered based on the peak height and overlapping peaks are removed (see details at the end of this section), leading to the detected peaks shown in Fig. 8. After this filtering step the peaks are ordered according to a score. For various applications we can define different scores. In this work the SD peaks are primarily selected for proper charge sensing. For good charge sensitivity we need a large peak with a steep slope. We then tune the SD to the position at half-height on the left of the highest-scoring peak. The scoring function we used is score = height^2 / (1 + hw/hw_0). The value of hw_0 is a scaling parameter determining the typical half-width of a Coulomb peak. In our experiments we used hw_0 = 10 mV. In our experience, this scoring represents a reasonable trade-off between the height and the slope of a peak. The result is shown in Fig. 9. Details on how to detect the bottom left of a peak: the x-coordinate of a peak, x_peak, has already been determined by selecting the local maxima in the data. To find the x-coordinate of the bottom on the left side of the peak, l, the following steps are performed after smoothing the data: 1. Search for the x-coordinate, x_bottom_low, of the minimum value (bottom low) in the range [x_peak − 3 × thw, x_peak]. The variable thw is a measure for the typical half-width of a peak and is set to ∼ 10 mV.
2. Starting from x_bottom_low, scan from left to right and select the first data point that fulfills the following two conditions: (1) the slope is positive, and (2) the y-value is larger than 'bottom low + 10% of the peak height'.
This method does not require a specific fitting model, and also works well for asymmetric Coulomb peaks.
Details of the filter to remove overlapping peaks: for each peak we have the position of the bottom on the left (l) and the top of the peak (p). For two peaks the overlap is defined using the intersection of the intervals [l_1, p_1] and [l_2, p_2]. The length of an interval L is denoted as ||L||. The overlap ratio is then equal to ||∩([l_1, p_1], [l_2, p_2])|| / (||[l_1, p_1]|| ||[l_2, p_2]||). To make the overlap a bit more robust we use a smoothed version of this formula using Laplace smoothing. When the overlap s between two peaks is larger than a threshold (0.6), the peak with the lowest score is removed.
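A compact Python version of the peak selection could look as follows. The scoring formula score = height^2 / (1 + hw/hw_0) with hw_0 = 10 mV is taken from the text; peak and width extraction here rely on SciPy's find_peaks/peak_widths instead of the custom bottom-left search, and the overlap test is a simplified stand-in for the Laplace-smoothed interval overlap, so the numbers are only indicative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks, peak_widths

def score_coulomb_peaks(v_gate, signal, hw0=10.0, overlap=0.6):
    """Detect Coulomb peaks in a 1D plunger scan and rank them for charge
    sensing; v_gate is assumed to be in mV and monotonically increasing."""
    y = gaussian_filter1d(np.asarray(signal, float), sigma=2)
    min_height = y.min() + 0.1 * (y.max() - y.min())          # illustrative
    peaks, props = find_peaks(y, height=min_height)
    widths, _, left_ips, _ = peak_widths(y, peaks, rel_height=0.5)
    dv = float(np.mean(np.diff(v_gate)))                      # mV per sample
    scored = []
    for p, h, w, l in zip(peaks, props["peak_heights"], widths, left_ips):
        hw = 0.5 * w * dv                                     # half width, mV
        scored.append({
            "v_peak": v_gate[p],
            "v_half_left": np.interp(l, np.arange(len(v_gate)), v_gate),
            "height": h,
            "half_width": hw,
            "score": h ** 2 / (1.0 + hw / hw0),
        })
    # Keep the highest-scoring peak of any strongly overlapping pair.
    scored.sort(key=lambda d: d["score"], reverse=True)
    kept = []
    for cand in scored:
        if all(abs(cand["v_peak"] - k["v_peak"]) >
               overlap * (cand["half_width"] + k["half_width"]) for k in kept):
            kept.append(cand)
    return kept   # kept[0]["v_half_left"] is the SD operating point
```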
D. Tuning and analysis of a double dot
The main text describes how we set the gate values for the tunnel barriers of each double dot using the information from the single dot scans. For the plunger gates of the double dot an extra compensation factor is added. When each single dot is formed, the dot-barrier gate of its neighbor is kept at zero volt. When next making a double dot, these dot-barrier gates are activated and shift the electrochemical potential of their neighbor, for which we compensate with the corresponding plunger voltage. This compensation factor is determined heuristically. For a double dot with gates L − P1 − M − P2 − R, the compensation values for P1 and P2 are (−φR, −φL), with φ = 0.1. See Table I for an example. In future experiments we plan to use the capacitive-coupling information from the single dot scans in order to create more precise compensation values for the tunnel barrier gates. The exact values of the plunger gates are not very important, since we will make a scan of the double dot using the plunger gates. A good initial guess does reduce the measurement time.
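As a minimal sketch of this heuristic (the gate naming and φ = 0.1 follow the text; the function and the example voltages are illustrative):

```python
def plunger_compensation(v_L, v_R, phi=0.1):
    """Plunger corrections when forming a double dot with gates L-P1-M-P2-R:
    activating the neighbouring barrier gates shifts each dot's potential,
    compensated by delta_P1 = -phi * v_R and delta_P2 = -phi * v_L."""
    return -phi * v_R, -phi * v_L

# Hypothetical example: with L = -300 mV and R = -250 mV, P1 is shifted by
# +25 mV and P2 by +30 mV relative to the single-dot values.
delta_p1, delta_p2 = plunger_compensation(-300.0, -250.0)
```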
The important structures in the scan of a double dot are the charging lines and the crossings of two charging lines. To determine the locations of the crossings in the image we create a template for such a crossing. We then search for crossings using template matching. The response to the template is thresholded and local maxima are calculated. The template of the crossing consists of 4 lines at angles π/8, 3π/8, 9π/8 and 11π/8 (radians), separated by a distance of 1.5 mV at 45 degrees that represents the interdot capacitive coupling (see inset of Fig. 3 of the main text). The width of these lines ensures that experimentally measured crossings still overlap with the template despite unavoidable small variations in the interdot capacitive coupling and the lever arms between gate voltage and electrochemical potential, which affect the slope of the transitions. The final step consists of checking whether extra charging lines are visible in a region of ∼70×70 mV^2 to the side of more negative gate voltages. The size of this region should be larger than the charging energy of each dot in mV. The top-right corner of the 70×70 mV^2 area is located -10 mV southwest of the most bottom-left cross. We slightly extend this region on top and on the right to reduce the probability that a charging line is missed. If part of this region falls outside the scan range of the data, the algorithm reduces the size of the region accordingly (alternatively, one could take data over a larger gate voltage range); with a reduced region, the algorithm could however draw the wrong conclusion. When the region that results from clipping at the border of the scan range is smaller than 40×40 mV^2, the algorithm will stop and output that it cannot properly determine whether the single-electron regime has been attained. In the typical case that the region is large enough, we first smoothen the data within this region. We subtract the smoothed data from the original data and check whether the resulting pixel values fall above a certain threshold that is proportional to the standard deviation of the smoothed dataset. If at most one pixel value is larger than the threshold, the algorithm classifies the dataset as 'single-electron regime'.
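The final check can be summarised by a short routine along these lines; the smoothing width and the proportionality factor k of the threshold are illustrative placeholders for the values used in practice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def is_single_electron_regime(region, sigma_px=5, k=4.0, max_outliers=1):
    """Score a ~70x70 mV^2 region below/left of the last detected cross.

    The region is smoothed, the smoothed background is subtracted, and the
    residual is compared with a threshold proportional to the standard
    deviation of the smoothed data.  With at most `max_outliers` pixels above
    threshold, the dataset is classified as 'single-electron regime' (score 1).
    """
    region = np.asarray(region, dtype=float)
    smooth = gaussian_filter(region, sigma=sigma_px)
    residual = region - smooth
    threshold = k * smooth.std()
    return int(np.sum(residual > threshold) <= max_outliers)
```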
|
2016-03-07T21:00:03.000Z
|
2016-03-07T00:00:00.000
|
{
"year": 2016,
"sha1": "ea9942289b82619a5aa2bb0c6307657a3bd71888",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1063/1.4952624",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "fbaff34128a48ae18d379f88115d1a7321a479db",
"s2fieldsofstudy": [
"Physics",
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
229374455
|
pes2o/s2orc
|
v3-fos-license
|
An Efficient Profiling Attack to Real Codes of PIC16F690 and ARM Cortex-M3
This article presents a new and efficient method based on power analysis, hierarchical recognition of instructions, and machine learning for reverse engineering of the instructions of PIC16F690 as an 8-bit microcontroller and LPC1768, which includes an ARM Cortex-M3 core as a 32-bit platform. Both dynamic and static power consumption were considered and analyzed. The instructions were classified in different Hamming weight groups using ensemble classification algorithms along with the Kullback-Leibler feature selection method to improve the recognition rate of opcodes and operands of real instructions. Results demonstrated 99.5% and 93.3% average success rate in recovering test instructions and real codes of PIC16F690, respectively. This work also presents promising results in reverse engineering of the instructions of LPC1768 with an overall recognition rate of 98% for test codes and 80.2% for real codes. To the best of our knowledge, this is the first serious report about profiling attack to a 32-bit platform without the need for any sophisticated laboratory tools.
IoT: Internet of Things
TA: Template attack
IR: Instruction register
TN: True Negative
I. INTRODUCTION
Side channel attacks are hardware cryptanalytic attacks that can pose an efficient and serious threat to various systems, from small security tokens to industrial control systems and critical infrastructures [1]-[3]. The notorious Meltdown and Spectre attacks, which can affect CPUs even of eighth-generation core platforms, are clear examples. However, these attacks involve only software and mainly rely on speculative execution, which helps speed up the execution. The so-called profiling attacks are a subcategory of power analysis attacks that are used for code breaking and reverse engineering purposes. They enable attackers not only to reveal the secret keys of a particular device but also to retrieve its secret program code or algorithm without having any prior knowledge about the algorithm running on the target device. These attacks are one of the most powerful and dangerous kinds of side-channel attacks and consist of two major phases. First, the adversary procures a copy of the target platform and tries to find any meaningful dependency between data obtained from the target device and the device operation. Then, he mounts a code or key-recovery attack on the victim device. Profiling attacks mainly include template attacks and stochastic cryptanalysis. Using such attacks, an attacker can extract the cipher key and use it to make adverse configuration changes, cause the system to malfunction or have it send false data back to the master system, which can have serious consequences.
There are many papers about template and profiling attacks and instruction disassembling. Most of the existing papers deal with 8-bit processors with relatively simple architectures such as Microchip PICs or AVRs. This is mainly because these target platforms are easy to access and to use for related experiments. In addition, they mostly evaluate their methods on test codes, while evaluating the efficiency of the attack in breaking actual codes running in a real program is much more important. Although template attacks have shown their effectiveness in recent years, they still suffer from numerical problems and need some assumptions regarding high data dimensionality and noise distribution, which limit their range of applications. Usually, they rely on a large number of accurate and high-quality profiling traces, which makes them almost impractical in many cases, specifically when there are only a limited number of power traces with a low signal-to-noise ratio. Machine learning (ML) based methods can prevent these problems, and are found to perform better than previously known techniques when faced with complexity in numerical and statistical calculations or with some of the restrictive assumptions on noise distribution [4]-[15]. Power analysis attacks compromise the security of cryptographic devices by analyzing their power consumption. Profiling attacks use a more effective statistical modelling of power consumption, in the sense that they estimate the conditional probability density function of the time series for each possible key-dependent value through a Gaussian parametric model. ML-based techniques do not require any parametric or normality assumption. Based on these facts, this work presents a method different from what has been presented in the literature, and achieves far better results compared with other published works. By combining machine-learning-based methods with an efficient hierarchical trace classification approach and instruction grouping, we have built a powerful side-channel-based disassembler that is able to reveal the instructions of PIC16F690 and ARM Cortex-M3 with 99.5% and 98% average success rate, respectively. We have chosen PIC16F690 to allow a fair comparison with other existing works [8], [10], [14], and ARM Cortex-M3 as a 32-bit, low-power and relatively high-performance target with a more complex architecture. To the best of our knowledge, this is the first serious report about a profiling attack on 32-bit platforms. We used the Advanced Encryption Standard (AES) as the underlying algorithm for evaluating the efficiency of the attack on real codes on both target devices.
In this work, we demonstrate how to overcome some previously identified shortcomings of power-analysis-based profiling attacks using ML techniques, which not only improves the solutions but also improves the accuracy of the templates and computations. We demonstrate how to use ML techniques as a powerful alternative to standard side-channel evaluation methods. In addition, unlike other papers that focus only on dynamic power analysis, we have analyzed both dynamic and static power in order to make our method more powerful than previous works. Notably, unlike other works, we are able to recover actual codes of both processors, not only their test codes. To the best of our knowledge, this is the first serious report about a successful profiling attack, specifically an ML-based profiling attack, on real codes of a 32-bit target platform. Therefore, the main contributions of this work can be summarized as follows: -Performing both dynamic and static power analyses to extract the maximum possible information leakage from the power consumption of the target devices, in such a way that dynamic power analysis is used to reveal the Hamming weight (HW) of each instruction's operand, while static power analysis is used to recover the total HW of the opcode and operand of the next instruction.
-Adopting instruction grouping and hierarchical instruction classification methods to reduce the number of required traces and computational costs for mounting the attack.
-Employing the t-distributed Stochastic Neighbor Embedding (t-SNE) machine learning algorithm for instruction grouping, and Kullback-Leibler (KL) feature selection and Principal Component Analysis (PCA) for dimension reduction.
-Using an ensemble-of-classifiers method to improve the classification rate of instructions.
-Achieving 99.5% average success rate for test codes and 93.3% for real codes in reverse engineering of the most commonly used instructions of 8-bit PIC16F690 microcontroller.
-Achieving 98% average success rate for test codes and 80.2% for real codes of LPC1768, which includes a 32-bit Cortex-M3 microcontroller for embedded applications.
This article is organized as follows: in section II some previous works are reviewed. Section III presents a brief overview of PIC16F690 and LPC1768. Hierarchical instruction recognition is discussed in section IV. Power consumption of PIC16F690 and pipelined architectures is discussed in section V. Instruction grouping and classification using static and dynamic power traces are briefly illustrated in section VI. Section VII presents machine-learning-based instruction recognition. In section VIII, the results of the implementation of the attack against the target processors are presented. The scalability and computational cost of the proposed approach are discussed in section IX. This work is compared with other works in section X. Finally, in the conclusions, we summarize our results and discussions.
II. PREVIOUS WORKS
In [4], Choudary and Kuhn present a so-called portable template attack that is able to recover key bytes from a hardware implementation of the Advanced Encryption Standard on an ATMEL AVR XMEGA 8-bit microcontroller with an 85% success rate. They present several methods to reduce the numerical problems that occur during the statistical calculations related to a practical template attack. The results are valuable and interesting; however, the authors do not present comprehensive results for reverse engineering of instructions. Instead, they focus on a single 'LOAD' instruction and use it to break the cipher while the cipher key is transferred on the data bus. Their method is not based on intelligent or ML algorithms. In addition, they do not provide any results about 16- or 32-bit target platforms. Eisenbarth et al. in [8] mounted a power-analysis-based template attack on PIC16F687 and achieved a 70% recognition rate for test instructions and 58% for real instructions by exploiting prior knowledge about its program code through a hidden Markov model and the Viterbi algorithm. Msgna et al. [9] classified power traces of instructions of the ATMega163 using PCA and k-Nearest Neighbors (k-NN) algorithms. They achieved a 100% recognition rate for test codes, but they were not able to reproduce the results for real codes. In their article, Fisher's Linear Discriminant Analysis (LDA) in combination with k-NN achieves a 48.74% recognition rate. While PCA with k-NN achieves a slightly better result of 56.88%, plain PCA with k-NN unexpectedly improves the recognition rate up to 100%. Strobel et al. in [10] raised the recognition rate to 88% for real codes running on a PIC16F687 via EM analysis with multiple antennas. Nevertheless, their method needs decapsulation facilities and several extra processing steps, which increases the complexity and cost of the attack. D-Tsague and Twala in [11] measured EM emanations of an ATMega163 on a smart card and achieved a 78.3% classification success rate by combining PCA and k-NN algorithms. Park et al. in [12] presented a side-channel-based disassembler using QDA and SVM for data transfer instructions in the ATmega328p AVR microcontroller that is able to identify test instructions with almost 99.03% accuracy. However, they did not evaluate their attack on real codes and left it for future work. They published a comprehensive version of their work in [13] with almost the same results. Cristiani et al. in [14] proposed a new approach for a side-channel-based disassembler that directly focuses on the bit encoding of an instruction using local EM leakage. They employed a high-precision motorized XYZ stage for the EM probe and PIC16F15376 as the target device to build a bit-level classifier, and achieved a 99.41% recognition rate at the bit level and 95% for full 14-bit instructions. However, like most of the other papers, they left the evaluation of their approach on pipelined and more complex processors to future work. Medwed and Oswald in [16] present a practical template attack against the implementation of the elliptic curve digital signature algorithm (ECDSA) on an ARM-7 processor, but they do not provide any detail about the efficiency of the attack, the success rate or the number of required traces.
McCann et al. [17] used instruction-level power models for the ARM Cortex-M0 to emulate leakages and to detect even subtle leakages in implementations. The results are interesting, but their work differs from the traditional power side-channel models that estimate secret data, as they present a circuit-level simulator based on data-dependent switching effects for ARM. Also, they do not present an actual disassembler for code breaking or reverse engineering.
III. PIC16F690 AND LPC1768
In the PIC architecture, all instructions are executed within a single instruction cycle, unless a conditional test or branch occurs, or the program counter is changed as a result of the execution of an instruction. In such cases, the execution takes two instruction cycles. This behavior is mainly caused by the two-stage pipeline, which means that while one instruction is decoded and executed, the next cycle is executed as a NOP, where NOP stands for no operation; it has zero HW and the lowest effect on the power consumption of other instructions. Each instruction cycle consists of four oscillator periods referred to as Q1 to Q4. For an oscillator frequency of 4 MHz, this gives a normal instruction execution time of 1 µs.
The LPC1768 includes an ARM Cortex-M3 core and operates at CPU frequencies of up to 100 MHz. The Cortex-M3 has a three-stage pipeline (fetch, decode and execute) and uses a Harvard architecture with separate local instruction and data buses, along with a third bus for peripherals. It also contains an internal pre-fetch unit that supports speculative branching. The Cortex-M3 has sixteen 32-bit registers, three of which are reserved for the stack pointer, program counter, and link register. ARM processors have two different instruction sets: ARM instructions and Thumb instructions. ARM instructions are 32-bit instructions, while Thumb instructions are 16-bit instructions extended with Thumb-2 32-bit instructions. The Cortex-M3 does not support ARM instructions and implements the ARMv7-M Thumb instruction set. It supports a variable-length instruction set that provides both 32-bit and 16-bit instructions for improved code density. On execution, 16-bit Thumb instructions are decompressed and decoded to full 32-bit ARM instructions, without performance loss [18]. In this work, we mainly focus on a selected number of 16-bit Thumb instructions that are mostly used in the implementation of symmetric cryptographic operations. The selected Thumb instructions are those mainly employed for data processing, shifting, and data transfer: 'LSL', 'AND', 'TST', 'ADC', 'EOR', 'ASR', 'MVNS', 'SUBS', 'STR', 'LDR', 'ARD', 'ADDS', 'MOVS' and 'CMP'. Many of the other instructions in the Thumb instruction set, such as loading or storing a half-word, also fall into similar or the same categories.
IV. HIERARCHICAL INSTRUCTION RECOGNITION
In ML-based instruction disassembling, power consumption features of target instructions are collected to form the feature vector for ML prediction. A hierarchical approach maps the ML computations to a corresponding hierarchical order to reduce the computational cost and increase the accuracy. Hierarchical classification intrinsically requires executing O(N log2 C) cascaded binary classifiers for an N-dimensional, C-class classification problem, which is lower than for other well-known binary classification schemes such as one-vs-all and one-vs-one, as shown in Table 1 [19]. The process of instruction disassembling for a typical microcontroller is usually performed hierarchically in four different steps, as demonstrated in Fig. 1. At first, feature selection and dimension reduction are used as preprocessing steps for mapping high-dimensional data to a lower-dimensional space. Measured power traces are then clustered into instruction groups based on their operation in the second stage. In the third step, instructions in each selected group are classified based on their HWs, and finally each instruction is recognized by identifying the HW of its opcode and operand.
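For illustration, the four-step hierarchy can be organised as a cascade of per-group and per-HW-class models. The sketch below uses scikit-learn with PCA and random forests as stand-ins for the feature selection, dimension reduction and ensemble classifiers used in this work; all names and hyper-parameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

class HierarchicalInstructionClassifier:
    """Cascade: reduce dimensions, predict the instruction group, then the
    Hamming-weight class inside the group, then the instruction itself."""

    def __init__(self, n_components=20):
        self.pca = PCA(n_components=n_components)
        self.group_clf = RandomForestClassifier(n_estimators=100)
        self.hw_clfs = {}    # one classifier per instruction group
        self.ins_clfs = {}   # one classifier per (group, HW class)

    def fit(self, traces, groups, hw_classes, instructions):
        X = self.pca.fit_transform(traces)
        self.group_clf.fit(X, groups)
        for g in np.unique(groups):
            m = groups == g
            self.hw_clfs[g] = RandomForestClassifier(n_estimators=100)
            self.hw_clfs[g].fit(X[m], hw_classes[m])
            for h in np.unique(hw_classes[m]):
                mm = m & (hw_classes == h)
                clf = RandomForestClassifier(n_estimators=100)
                self.ins_clfs[(g, h)] = clf.fit(X[mm], instructions[mm])
        return self

    def predict(self, trace):
        x = self.pca.transform(np.asarray(trace).reshape(1, -1))
        g = self.group_clf.predict(x)[0]
        h = self.hw_clfs[g].predict(x)[0]
        return self.ins_clfs[(g, h)].predict(x)[0]
```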
V. INSTRUCTION POWER ANALYSIS IN PIC AND ARM
In PIC processors, the rate of the clock significantly affects the shape of the power consumption traces, mainly due to the charging and discharging of internal capacitances. Moreover, the fetching process influences the power consumption of Q2 to Q4 proportionally to the HW of the fetched instructions. In the first oscillator cycle, Q1, the fetched instruction is stored in the instruction register (IR) so that, in the case of an arithmetic or logical instruction, the operand is sent to the ALU through the ALU multiplexer. In the case of a file-register operation, the operand is sent from the SRAM memory to the data bus, which is connected to the ALU through the same multiplexer. After that, the previous data stored on the bus is replaced by the operand of the instruction. This leads to a power consumption that is proportional to the Hamming distance (HD) of the two values. In Q2, the ALU reads the operand on the data bus. Hence, the HW of the opcode is the most important factor that affects the power consumption, specifically in the case of literal or file-register operations, whose addresses are stored in the opcode. The ALU processes the instruction in Q3 and the result is transferred to the data bus. Since the actual value on the data bus is the operand loaded in Q1, the power consumption directly depends on the HD of these values. Finally, in the last oscillator cycle, Q4, the result on the data bus is stored in memory or in any external device connected to the data bus. It should also be mentioned that the next instruction is latched into the instruction register in the Q4 cycle, causing the power consumption of the device to be directly proportional to the HD of the executed and the latched instructions [20].
In a pipelined architecture, the power consumed in one clock cycle is the sum of the power consumed in all pipeline stages at the same clock cycle. This means that the power consumption of each instruction includes the power consumption of the instruction itself and inter-instruction effects due to the existence of multiple instructions inside the pipeline. Thus, we need to consider the changes caused by instructions in the different pipeline stages, i.e., for each instruction both its own power consumption and the inter-instruction effects caused by the other instructions present in the pipeline [21]-[23]. Therefore, the power consumed in clock cycle n can be described by Eq. (1):
P_n = Σ_{s=1..M} [ P_b(I_s) + P_{I_{s-1},I_s} ]   (1)
In Eq. (1), M is the number of pipeline stages and P_b(I_s) is the base power consumption of an instruction at pipeline stage s; it is usually obtained by putting each target instruction between two NOPs (NOP, target instruction, NOP). The term P_{I_{s-1},I_s} is a complex term, as it is affected by the other instructions in the pipeline. These effects are proportional to h_d(I_s, I_{s-1}), the HD of two consecutive instructions, and h_w(I_s), the HW of different parameters such as the instruction fetch address, register number, opcode encoding, memory address, register value, and immediate operand of each instruction. This relationship can be described by Eq. (2).
In Eq. (2), P_x(I_s) is the part of the power consumption of instruction I_s that is caused by the variation in parameter x in the different pipeline stages, and β is the variation coefficient that depends on the variation of the HD or HW. Parameters with a higher β value have a larger impact on the inter-instruction power consumption than other parameters with the same HD/HW changes.
Depending on the instruction type, each instruction may include several power-sensitive factors, and the effect of all the factors involved must be considered. Therefore, P_i, the power consumed during the execution of instruction i, can be calculated as
P_i = P_b(I_i) + Σ_j β_{i,j} · N_{i,j}   (3)
where P_b(I_i) is the base power consumption of instruction i, and β_{i,j} and N_{i,j} are the coefficient and the HW/HD of the j-th power-sensitive factor of instruction i, respectively. The processor not only dissipates power when executing instructions but also when stalling occurs. Stalling happens due to dependencies between multi-cycle instructions that need more than two execute-stage cycles, or due to stall cycles caused by resource conflicts and data or control hazards. Therefore, the power consumed by instructions running in a real program can be written as Eq. (4),
where ε is the power consumption of a pipeline stall [21]-[23]. In order to find the effect of the power-sensitive parameters on the inter-instruction power consumption during the profiling phase, we need to create a table that shows the power consumption for each parameter at each stage of the pipeline, corresponding to the minimum, average and maximum HD and HW. Also, we need to sample power traces during the execution of each instruction in such a way that one parameter changes while all other parameters are kept constant.
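As a simple illustration of this instruction-level model (the additive form mirrors the description of Eqs. (3)-(4) above; the coefficients, factors and stall cost would come from the profiling tables and are not given here):

```python
def instruction_power(base_power, factors):
    """Per-instruction estimate: base cost plus the weighted power-sensitive
    factors.  `factors` is a list of (beta_j, N_j) pairs, where N_j is the
    HW or HD of the j-th parameter (operand, fetch address, register, ...)."""
    return base_power + sum(beta * n for beta, n in factors)

def program_power(instruction_estimates, n_stalls, stall_cost):
    """Estimate for a running code sequence: the per-instruction
    contributions plus the cost of the pipeline stalls (epsilon per stall)."""
    return sum(instruction_estimates) + n_stalls * stall_cost
```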
For the case of inter-instruction effects, where the Hamming distance (HD) between two consecutive instructions is a basic factor, we create a table that shows the power consumption for each instruction at each pipeline stage. Then, we make a triple set of such tables, corresponding to the minimum, average and maximum HD, as illustrated in Fig. 2. For example, in the case of instructions with 8-bit operands, the proper ranges of power consumption can be obtained by changing the values of the data operands from the minimum to the maximum value (e.g., 0x00 to 0xFF), which in turn changes the Hamming distance between two consecutive operands. Inter-instruction power consumption for varying HD values of different parameters in different pipeline stages is determined while ignoring the parameters with the least influence and contribution during the profiling phase, which in turn reduces the computational cost of the matching phase.
VI. INSTRUCTION GROUPING AND CLASSIFICATION USING STATIC AND DYNAMIC POWER ANALYSIS
The instruction set of a typical microcontroller is often categorized based on the operation of each instruction (arithmetic, logic, data, and control), its operands or its register type. Since the HW model is effective in side-channel analysis, the instructions of a microcontroller can also be categorized based on their HWs. Each PIC16F690 instruction is a 14-bit word, including a 6-bit opcode and an 8-bit operand, which specify the instruction type and its operation, as shown in Fig. 3 [20]. The grouping based on the Hamming weight of the instruction opcodes for this processor is presented in Table 2. In this work, we used a combination of static and dynamic power analyses; many drawbacks reported in previous works [21] can be resolved by this combination. Notably, the HW of a byte or word is defined as the number of '1' symbols in that byte or word. Static power analysis is related to the Hamming weight, or the number of '1's present on the bus, and is used to recover the total HW of the target instruction, i.e., the HW of the opcode plus the HW of the operand. The HD between two bytes or words is the number of positions at which they differ, and is dynamic in nature. Hence, dynamic power analysis is employed to reveal the HW of the opcodes, while static power analysis is applied to recover the total Hamming weight of the operand and opcode. The combination of these methods can therefore be used to recover the instruction types as well as the Hamming weights of their operands.
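As an illustration of this grouping, the following sketch buckets 14-bit words by opcode HW. This is our own example: the specific encodings shown are assumptions based on the PIC16 mid-range instruction set and should be checked against the datasheet.

```python
from collections import defaultdict

def opcode_hw(word14: int) -> int:
    """HW of the 6-bit opcode field (bits 13..8) of a 14-bit PIC word."""
    return bin((word14 >> 8) & 0x3F).count("1")

# Hypothetical examples: mnemonic -> 14-bit encoding (verify against datasheet)
examples = {
    "ADDWF f,d": 0b00011100000000,   # opcode 000111, assumed
    "ANDWF f,d": 0b00010100000000,   # opcode 000101, assumed
    "MOVF  f,d": 0b00100000000000,   # opcode 001000, assumed
}

groups = defaultdict(list)
for mnemonic, word in examples.items():
    groups[opcode_hw(word)].append(mnemonic)

for hw_class in sorted(groups):
    print(f"HW{hw_class}: {groups[hw_class]}")
```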
In order to analyze the dynamic power of each instruction, we need to accurately record the shape of sudden and rapid changes in the power consumption patterns, while the other parts of the power consumption are used for static power analysis. Static power analysis requires fewer samples and a lower dimension, imposes less computational burden, and can be performed at a higher speed compared with dynamic power analysis. In order to reduce the effect of noise in the experiments, measurements were averaged over ten repetitions. Looking at the power trace of the 'ADDWF' instruction in the PIC16F690 (Fig. 4), a sharp change occurs during the rising edge of the clock, where dynamic power is the dominant component. Due to capacitance discharging, this effect diminishes after the falling edge of the clock, after which static power becomes the dominant component. Fig. 5 shows a part of the power consumption of the PIC16F690 while the AES algorithm is running on the processor. As seen in this figure, the combination of static and dynamic power analyses enables us to recover the instructions as well as their operands. HW0 and HW8, with only one member each, are the smallest classes, and HW4, with 70 members, is the largest class. Instructions with known HWs are classified by the ensemble of classifiers within their corresponding HW classes using the dynamic power consumption. Ensemble learning can significantly improve the performance of pattern classification [24], [25].
In order to classify the Cortex-M3 16-bit Thumb1 instructions based on their HWs, we need to consider 17 HW classes, and 33 HW classes for the 32-bit Thumb2 instructions.
The general encoding formats for the 16-bit and 32-bit instructions of the ARM Cortex-M3 are depicted in Fig. 7 [18]. As mentioned, the execution of instructions in the Cortex-M3 takes 2 to 12 clock pulses. Some of the most commonly used instructions, such as 'ADD', 'AND', 'EOR', 'MOV' and 'MUL' with 32-bit operands, are performed in 1 clock cycle; other instructions, such as 'Load' and 'Store', require 2 clock cycles per instruction. 'MUL' with a 64-bit result requires 3-5 clock cycles, while the timing of 'DIV' depends on its dividend and divisor, and its execution takes 2-12 clock cycles. Table 3 categorizes the instructions of the Cortex-M3 based on their required cycles as well as their corresponding bit lengths. Therefore, we need to distinguish the cycles per instruction in the ARM Cortex-M3. The multi-cycle instructions in the Cortex-M3 create pipeline stalls and lead to different power consumptions, which is useful for our purpose. As we know, pipelined processors frequently insert NOP instructions into the pipeline to remove hazards, avoid data interference, and generate delays for the proper execution of the instructions [21]-[23]. In such cases, only the EX stage of the pipeline is active; the other stages are stalled and consume a constant amount of power. In order to measure these effects on the power consumption, small program loops that activate the corresponding conditions can be implemented and executed. This is shown in Table 4 for the 'STR' instruction as an example, in which stall stages occur at cycle n + 3. To distinguish the instructions of the Cortex-M3, we need to build power templates for pipeline stalls during the instruction profiling phase. Then, in the matching phase, the power consumption of the victim device in every cycle is compared with the created templates. The pipeline stalls for the target instruction are the same in both the profiling phase and the matching phase on the victim device, so the stall states can be identified by classification algorithms. The stall cycles for the instruction are thereby revealed and, accordingly, the cycles per instruction are determined. Once the number of clock cycles required for each instruction on the victim device is determined, it is compared with the pre-prepared templates to find the group to which that instruction belongs.
VII. MACHINE LEARNING BASED INSTRUCTION RECOGNITION
The aim of ML is to allow machines to learn from data so that they can produce more accurate outputs.
In ML-based instruction disassembling, feature selection, dimension reduction, instruction clustering and classification are implemented through ML algorithms. Fig. 8 provides a flowchart of the entire process including the profiling and matching phases.
Feature selection is one of the fundamental concepts in machine learning and significantly impacts the performance of the selected model. The term 'model' describes the output of the algorithm that is trained with data. Feature selection is the process by which one manually or automatically selects those features that contribute most to the prediction variable or output of interest. Having irrelevant features in the data can decrease the accuracy of the models and make the selected model learn based on irrelevant or inconclusive features [25], [26].
In instruction disassembling, feature selection determines how useful and important each sample point in the power traces is before the ML algorithm is fed with it. It is performed as a pre-processing step for faster training, reduced complexity, and improved accuracy. In this work, KL-divergence-based feature selection algorithms were employed to highlight differences between the power traces of different instructions [27]. The continuous wavelet transform (CWT) was utilized to map the data from the time domain to the frequency domain, to align the power traces, and to remove noise created during sampling [28]. Wavelet coefficient indices were used for calculating the same-class (instruction) KL divergence (KLD_SC) and the different-class (instruction) KL divergence (KLD_DC). Points with the highest KLD_DC and with KLD_SC lower than a specific threshold are selected as stationary feature points, as illustrated in Algorithm I [29]. Peaks of the KL divergence exhibit distinct differences between two traces at specific points. In real non-stationary environments, several runs of a specific program with the same instructions may lead to different values at the same sampling points/times; such points cannot be selected as feature points. The selected points were then passed to PCA for further reduction, as PCA eliminates low-importance indices without significant loss of information. The implemented feature selection and stationary points for the 'ADDWF' and 'XORWF' instructions are shown in Fig. 9 and Fig. 10, respectively.
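A minimal sketch of this selection rule follows. It is our illustration, not the paper's Algorithm I: the Gaussian KL estimate and the "above the mean" criterion for high KLD_DC are simplifying assumptions, and the threshold value mirrors the optimum reported later.

```python
import numpy as np

def gaussian_kl(mu1, var1, mu2, var2):
    """KL divergence between univariate Gaussians, the assumed model
    for the per-point distribution of wavelet coefficients."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def select_stationary_points(same_a, same_b, diff, threshold=0.5e-7):
    """same_a/same_b: (n_traces, n_points) arrays from repeated runs of the
    SAME instruction; diff: runs of a DIFFERENT instruction. Keep points
    where classes separate (high KLD_DC) but repeated runs agree (low KLD_SC)."""
    eps = 1e-12  # guard against zero variance
    kld_sc = gaussian_kl(same_a.mean(0), same_a.var(0) + eps,
                         same_b.mean(0), same_b.var(0) + eps)
    kld_dc = gaussian_kl(same_a.mean(0), same_a.var(0) + eps,
                         diff.mean(0),   diff.var(0) + eps)
    return np.where((kld_sc < threshold) & (kld_dc > kld_dc.mean()))[0]
```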
Using the same procedure for the Cortex-M3 reduced the number of sampling points, as demonstrated in Fig. 11 for the 'EOR' instruction. In order to perform instruction clustering, Kullback-Leibler divergences were used as a statistical metric to distinguish the distributions of the static power traces of the victim device and the templates. The t-SNE machine learning algorithm uses the KL divergence for visualization and clustering. This unsupervised nonlinear method maps high-dimensional data to a low-dimensional space (e.g., two-dimensional) and is able to provide well-separated clusters for instruction grouping. It minimizes the Kullback-Leibler divergence between a Gaussian distribution over the high-dimensional space and a Student t-distribution over the corresponding points in the low-dimensional space [30]. In this work, ensemble classification was performed for each classifier, and the results were calculated based on the confusion matrix, as shown in Table 5.
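As a sketch of how such clustering might be reproduced (our illustration using scikit-learn; the trace matrix and the perplexity value are placeholders, not the paper's settings):

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical input: one row of selected static-power features per trace
traces = np.random.rand(200, 50)            # placeholder data
embedding = TSNE(n_components=2, perplexity=30,
                 init="pca", random_state=0).fit_transform(traces)
# 'embedding' is 200 x 2; traces from the same instruction class should
# form well-separated clusters suitable for HW grouping.
print(embedding.shape)
```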
VIII. ATTACK ON REAL SYSTEMS
To investigate the efficiency of the proposed approach, two dedicated boards were designed for power analysis. A 10 Ω resistor was used to connect the ground pin of each microcontroller to the ground of the module. The power traces were recorded by an Infiniium Keysight DSO90604A 20 GS/s digital oscilloscope with BW = 6 GHz (Agilent serial no. MY 6596/4PEA), and a current probe was used to measure the boards' supply currents. In order to evaluate the performance of the proposed approach, the AES algorithm was implemented on both microcontrollers. Fig. 12 shows the experimental setup for mounting the attack on the PIC16F690. Fig. 13 shows a photograph of the designed LPC1768 module. In order to improve the signal-to-noise ratio and to remove trace misalignment in the time domain (or time-shift), the samples were mapped to the time-scale domain using the CWT transformation. The clock signal of the microcontroller was set to 8 MHz. Sampling at 1 GS/s from a power signal that changes with an 8 MHz clock frequency generates 500 sampling points per instruction, which were expanded to 25,000 points per instruction in the transformed domain with scale = 50. After applying Kullback-Leibler feature selection to the 16 instructions of group I with the optimum threshold 0.5 × 10⁻⁷, 1126 stationary points were selected, corresponding to a 95.4% reduction in the sampling points. The tests were performed using 5-fold cross-validation: for each instruction, 500 traces were measured and divided into five parts, where 80% of the data were used for training and 20% as test data. The results showed that the classification of each instruction with the ensemble of classifiers was 99% successful on average.
In order to find the effect of the number of measured traces on the instruction recognition rate, the recognition rate was first calculated with the total number of traces in each class; the calculations were then repeated with fewer traces or samples. Fig. 14 shows the successful recognition rate versus the number of traces for the 'ADDWF' instruction for both the hierarchical and non-hierarchical methods. Evidently, decreasing the number of traces reduces the recognition rate. However, the results showed that reducing the number of traces by half in the hierarchical approach leads to only a 2% reduction in the recognition rate, while the same reduction in the non-hierarchical approach leads to an 11% reduction. Table 6 shows parts of the real codes running on the PIC16F690, along with their 14-bit opcodes and their corresponding HWs. In order to attack the real codes, static power analysis was applied, and the instructions with known HWs were classified using the ensemble of classifiers within their corresponding HW classes through the dynamic power consumption. Table 7 displays some of the HW4 instructions used for classification of the HW4 class. Ensemble machine learning with Random Forest, J48, REP Tree, JRip and OneR classifiers was utilized for classifying instructions within their corresponding HW classes. Ensemble classification was performed for each classifier using the WEKA tool [31], and the results were calculated based on the confusion matrix. The matrix corresponding to the HW4 instructions is presented in Table 8. Fig. 15 depicts a 2-D graph of the averaged power consumption of some instructions in the HW4 class, in which the trace for the 'INCF' instruction is plotted in blue, 'RLF' in red, and so on. As expected, all traces have an almost identical shape, since they belong to instructions in the same HW class with the same operand. In order to achieve the best possible results, we adopted the most widely known ensemble methods, including voting, bagging and adaptive boosting. Fig. 16 plots the recognition rates for the PIC16F690 'ADDWF' and 'RLF' instructions. It clearly shows that the recognition rate is considerably improved via hierarchical ensemble classification along with adaptive boosting, bagging and weighted voting. As evident in Fig. 16, without hierarchical and ensemble classification, the highest recognition rates for the 'ADDWF'/'RLF' instructions in the HW4 class are 47% and 44.6%, respectively, obtained with the Random Forest and REP Tree algorithms. By applying the hierarchical approach, the highest recognition rate improves by 25.4% and 24.2% using the J48 and Random Forest algorithms, respectively. With bagging, the recognition rate rises further, with the highest improvements of 14.2% and 13.6% belonging to the bagged JRip and OneR algorithms, respectively. With boosting, the recognition rate increases again, the highest improvements of 20.6% and 18.8% belonging to the boosted Random Forest and J48 algorithms. Finally, by applying weighted majority voting, it reaches 91.8% and 90.6% for the same instructions. This process was carried out for the other instructions in the other HW classes. Table 9 presents the instruction recognition rates for some other instructions with other HWs, ranging from 88.4% to 98%. The results show that a considerable improvement has been obtained in recognizing the instructions of a real program, with a 93.3% success rate on average.
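The hierarchical step (first the HW class, then the instruction within the class) combined with weighted voting can be sketched as follows. This is our scikit-learn illustration; the WEKA classifiers J48, REP Tree, JRip and OneR have no exact scikit-learn equivalents, so tree-based stand-ins and arbitrary weights are used.

```python
from sklearn.ensemble import (RandomForestClassifier, VotingClassifier,
                              BaggingClassifier, AdaBoostClassifier)
from sklearn.tree import DecisionTreeClassifier

def build_class_voter():
    """Weighted soft-voting ensemble for one HW class (stand-in members)."""
    members = [
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("tree", DecisionTreeClassifier(random_state=0)),          # J48-like
        ("bag", BaggingClassifier(DecisionTreeClassifier(),
                                  n_estimators=25, random_state=0)),
        ("boost", AdaBoostClassifier(n_estimators=50, random_state=0)),
    ]
    return VotingClassifier(members, voting="soft", weights=[3, 1, 2, 2])

# Hierarchical use: one voter per HW class, selected by the statically
# recovered HW, then trained and applied on dynamic-power features.
voters = {hw: build_class_voter() for hw in range(9)}  # HW0..HW8
```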
It should also be mentioned that, since the KL-divergence-based feature selection algorithms are employed to select the stationary feature points, we should find the optimum value of the KL threshold to achieve the best recognition rate, the minimum number of selected points, the maximum feature reduction, and the minimum computational cost. In Fig. 17, the recognition rate and trace reduction versus the KL threshold value are illustrated. As seen in Fig. 17, the intersection of the recognition rate and trace reduction curves gives the optimum KL value, which is equal to 0.5 × 10⁻⁷.
Applying the same procedure as was used for the PIC, 946 stationary points were obtained for the Cortex-M3, corresponding to a 96.2% reduction in the number of sampling points. The stationary points were applied as input variables to the PCA algorithm for further reduction, resulting in a 98.2% recognition rate for the test codes using 50 components in 5-fold cross-validation.
The results of the attack on some of the selected 16-bit Thumb instructions of the LPC1768 ARM Cortex-M3, with immediate offsets and register operands, in a real program are shown in Table 10. The results showed a 25.5% improvement in recognition rate on average after applying the proposed hierarchical approach. Fig. 18 shows the successful recognition rate versus the number of traces for both the hierarchical and non-hierarchical methods. Similar to the PIC, decreasing the number of traces reduces the recognition rate; however, according to the results, even reducing the number of traces by half in the hierarchical method leads to only a 4% reduction in the recognition rate, whereas the same reduction in the non-hierarchical approach leads to a 15% reduction.
IX. SCALABILITY AND COMPUTATIONAL COST
Scalability is an essential component of any state-of-the-art design and implementation. It leads to a better user experience, lower maintenance costs, and higher agility. Our design is scalable in terms of both hardware and software. With respect to the general scenario of the proposed approach, it consists of four main components: a target processor (in this work, the PIC16F690 as an 8-bit and the ARM Cortex-M3 as a 32-bit platform), hierarchical grouping of the instructions based on their Hamming weights, building power templates for each instruction's power analysis, and template matching using machine learning approaches.
The software architecture of our customized simulation and attack environment can accommodate various scenarios depending on the processor type or model. It should be noted that, beyond the two processors studied in this work, the proposed methodology is applicable to 8-bit cores as well as 16- and 32-bit cores of the same families or even of different families. It is also applicable to various security algorithms, specifically symmetric-key cryptographic algorithms. The sampling process and the template-building phase are performed offline. Hence, changing the target processor does not impose any additional computational burden on the work; in this case, we may only need to change the sampling frequency of the digital oscilloscope. After the sampling process, the input data and their associated power traces are paired, stored and exported to the analysis and preprocessing program written in MATLAB, and also to the WEKA tools for machine-learning-based analysis. This process is also offline and quite flexible, in such a way that it can be changed and updated based on the attack scenario.
On the other hand, our proposed attack framework is much more efficient in terms of computational cost compared with the other methods published in the literature, for the following reasons: a) The most important factor in the amount of computation is the number of required power samples, as it directly affects the time consumption and storage volume required for the profiling phase. The large profiling base required to successfully construct an accurate template is still an open problem in existing profiling attacks. This problem is exacerbated by the requirement that the target and profiling devices be in similar operating, or even aging, conditions, which may increase the required number of traces even further. Normally, a successful profiling attack requires several thousands of power traces. Using the hierarchical approach along with instruction grouping, the number of required sampling traces is reduced. As shown in Fig. 14 and Fig. 18, the proposed method is able to recover the real instructions of the PIC16F690 with almost 100 power traces and of the ARM Cortex-M3 with almost 250 traces, while other methods require a far larger number of traces to reach the same success rate for test codes on 8-bit platforms (not real codes and not 32-bit platforms).
b) The proposed method does not include statistical analysis of large volumes of input data. Instead, it uses machine learning algorithms as pre-processing steps for a limited number of power samples, which does not impose a heavy computational burden on the user. It should also be noted that, compared with an individual classifier, ensemble classifiers obtain better performance, but as the number of base classifiers increases, the time and cost of the profiling phase may slightly increase. However, this does not cause serious problems or harm the practicality of the method. In order to have a better estimate, as shown in Table 11, 5 base tests were performed on an Intel 2.5 GHz Core i5-3210M PC with 4 GB of RAM. As mentioned, the profiling phase is usually performed offline, so the ensemble of classifiers is independent of the matching phase. Table 11 provides a comparison of the execution times of different classification algorithms on the same platform. Table 12 presents a comparison between some of the existing side-channel-based disassemblers and our work. As mentioned, most of the existing works have not evaluated their approaches on real codes. Some of them use EM analysis, which requires decapsulation facilities with several extra processes, as well as a motorized probe and a precise experimental setup that are available only in special laboratories. There are a few references on reverse engineering of ARM processors using power analysis attacks, none of which presents an actual disassembler to recover real instructions from non-invasive measurements.
X. CONCLUSION AND FUTURE WORKS
In this work, we demonstrated how to use hierarchical and ensemble classification along with dynamic and static power analyses to increase the average extraction rate for test codes running on the PIC16F690, together with the Hamming weights of their corresponding operands, up to 99.5%. We also performed the same procedure for the ARM-based LPC1768 and achieved a 98% extraction rate. In real experiments on real codes running on these devices, a 93% success rate was obtained on average for the PIC16F690 and 80% for the LPC1768. This means that the proposed method works for both a mid-range core and a relatively high-end core, and is portable between different chips of the same family or even of different families. In the field of hardware and cyber security, any improvement in the accuracy of machine learning algorithms can be of great value. ML-based template attacks are less prone to some of the weaknesses found in statistics-based attacks and outperform them in scenarios where complicating factors, such as ambient conditions or countermeasure mechanisms, are present, specifically when a sufficient profiling base is available. These methods are also successful with smaller sets of profiling traces. The results of this work can be applied to malware detection, where an adversary may insert a malicious code with similar functionality as the original code into the device. It should also be noted that, as the dimensions of CMOS transistors decrease to a few nanometers, the role of static power consumption is becoming increasingly important in modern devices and low-power applications. This is a central issue from the security point of view, since static power is becoming a new target for hardware and side-channel attackers. An advanced, organized security attack could take over the control of power grids, energy providers, financial services, or other critical infrastructures, resulting in catastrophic consequences. Enhancing hardware and cyber security is perhaps the fundamental component in protecting such infrastructures. As a suggestion for future work, other processors in more advanced technology generations need to be scrutinized. In addition, deep-learning-based approaches may provide better results than commonly used ML techniques [32].
A wavefront orientation method for precise numerical determination of tsunami travel time
We present a highly accurate and computationally efficient method (herein, the “wavefront orientation method”) for determining the travel time of oceanic tsunamis. Based on Huygens’ Principle, the method uses an eight-point grid-point pattern and the most recent information on the orientation of the advancing wavefront to determine the time for a tsunami to travel to a specific oceanic location. The method is shown to provide improved accuracy and reduced anisotropy compared with the conventional multiple grid-point method presently in widespread use.
Introduction
Determining tsunami travel time is one of the fundamental roles of tsunami warning centres and other agencies tasked with estimating the possible impact of approaching tsunami waves on different coastal regions. In contrast to most other natural hazards, tsunami travel-time calculations can be performed well in advance. Because the travel time is the same when the source and the target are inverted, the inverse travel times for populated coastal sites have been computed and the corresponding databases of inverse travel-time maps created. When there is a tsunami event, the travel time to a specific coastal site is available immediately once information has been received about the initial parameters of the underwater earthquake and its location. Any initial travel-time estimation will be preliminary (it is not based on the actual extent of the tsunami source) and needs to be refined as new information regarding the tsunami source becomes available.
In addition to the need for accurate real-time travel-time estimates for tsunami alerts and warnings, highly reliable estimates of tsunami travel time (TTT) are required for optimizing the location of offshore tsunami warning stations used for refining information on the parameters of approaching tsunami waves (Poplavski et al., 1988). These stations require accurate advance knowledge of the difference in wave travel time between specific coastal locations and the warning station (Titov et al., 2005). The greater the distance between the tsunami station and the coast, the greater the advance forecast time for distant remotely generated tsunamis but the shorter the forecast time for locally generated tsunamis. The optimal warning system design should, therefore, be based on extensive determination of wave propagation times, which, in turn, requires an efficient and accurate method for calculating the tsunami travel time. Accurate estimation of observed wave arrival times is also needed for research delineating the tsunami source region. In particular, numerical modelling of the travel times for near-source locations can provide important information concerning the size and shape of the source region (e.g. Abe, 1973; Fine et al., 2005; Hayashi et al., 2012). Travel-time discrepancies of only a few minutes can lead to source-positioning errors of several tens of kilometres. Lastly, any discrepancy between the computed and observed tsunami travel times may indicate that the geometry of the source region is incorrect or that there may be physical effects, such as wave dispersion, nonlinearity, and coupling with elastic earthquake modes, which should be taken into account. These effects often require better wave travel-time accuracy than is needed for tsunami warning purposes.
There are presently two primary approaches for calculating tsunami travel time from gridded bathymetry: (1) kinematic wavefront propagation calculations based on Huygens' Principle (Shokin et al., 1987); and (2) solving the dynamical equations of motion, typically using the finite-difference method (cf. Kowalik et al., 2005). The first method is the one most commonly used in modern tsunami travel-time calculations and is favoured by commercial software such as GEOWAVE (GEOWARE, 2013). Following Huygens' Principle, each of the points along a wavefront is a source for the tsunami waves. These points serve as start locations for travel-time computations to the next N neighbouring points. This set of points is referred to as "the pattern of neighbouring points". In GEOWAVE, N can have the values 8, 16, 32, 48 or 64, and the TTT between points is computed using the shallow-water propagation speed formula, c = √(gh), where g is the gravitational acceleration and h is the water depth. Small meridional changes in gravity are incorporated in the methodology. Water depths are obtained from the gridded bathymetry provided by ETOPO (Amante and Eakins, 2009). If the arrival time at the nearest point is not yet determined, or if the previously computed arrival time is greater than the currently computed arrival time, the arrival time at the latter point is replaced with the newer value. At the end of the computational step, a new source point is specified as the frontal point with the minimum travel (and arrival) time. This frontal point is converted to a permanent point to be used as the source point for the next time step. A critical step in the calculation is defining the spatial pattern of the N neighbouring points. The larger the value of N, the greater the isotropy of the wave field, and the better the accuracy of the timing calculation for open-ocean waves. On the other hand, a larger pattern also creates a broader region of uncertainty along the frontal zone, which, in addition to the computational overhead, creates problems in zones where the water depth changes quickly. Moreover, a larger grid pattern can cause the tsunami arrival estimations along certain angles to "jump" over shallow-water areas or over islands, leading to the need to integrate the travel time over the line connecting the two points, which significantly increases the computational complexity. In the case of a variable-depth ocean, this integration introduces an error arising from the fact that the actual path is no longer a straight line.
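A minimal sketch of this kinematic scheme follows (our illustration, not GEOWAVE's implementation; it uses a simple 8-point pattern, a flat Cartesian grid, and Dijkstra-style frontal-point selection):

```python
import heapq
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def travel_times(depth, dx, src):
    """Huygens-style travel-time grid for an N = 8 neighbour pattern.
    depth: 2-D array of water depths (m); dx: grid step (m); src: (i, j)."""
    ny, nx = depth.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t0, (i, j) = heapq.heappop(heap)
        if t0 > t[i, j]:
            continue  # stale frontal entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx and depth[ni, nj] > 0:
                    dist = dx * np.hypot(di, dj)
                    # mean-depth celerity between the two points
                    c = np.sqrt(G * 0.5 * (depth[i, j] + depth[ni, nj]))
                    if t0 + dist / c < t[ni, nj]:
                        t[ni, nj] = t0 + dist / c
                        heapq.heappush(heap, (t[ni, nj], (ni, nj)))
    return t

# Example: uniform 4000 m ocean on a 1 km grid, source at the centre
tt = travel_times(np.full((101, 101), 4000.0), 1000.0, (50, 50))
```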
The wavefront orientation method proposed in this study uses a less extensive spatial grid pattern by taking advantage of the most recently gained information on the tsunami propagation at the neighbouring grid points. In particular, the method uses knowledge of the local direction of wavefront propagation obtained from previous calculations and, as a result, yields a more accurate travel-time calculation without the need to apply a large pattern of grid points.
The conventional method
The conventional method calculates tsunami travel time for each time step by dividing the spatial grid into three categories: (A) grid points for which the travel time has already been computed; (B) intermediate, or frontal, grid points where the travel time is to be evaluated; and (C) grid points that have yet to be reached by the advancing waves. At each time step, GEOWAVE starts the calculations from specific grid points in category A and then calculates arrival times for N neighbouring points in categories B and C (Fig. 1). New values replace the previously computed values if the new one is smaller. Figure 1 shows an example of a single step for the N = 32 pattern. In this example, the program computes the travel time for points 1 to 3 and points 20 to 32. Because points 4 to 19 belong to category A, they are not used in the current step calculation.
For any given grid pattern, there are errors in the conventional method due to the fact that not all propagation directions from the different source regions to a new grid point are covered by the pattern. Moreover, because of the structure of the rectangular pattern, the directional coverage is not uniformly distributed. As indicated by Fig. 1, the biggest gaps are in the directions along the grid axes, corresponding to those directions in line with the grid points (i.e. between the direction from centre O to point B1 and to point B2, or between the directions from point O to point B24 and to point B25, and so on); consequently, the main arrival errors are in these directional sectors. Note that we are restricting our analysis to the case of uniformly propagating wave velocity, where the relative errors in travel time are equivalent to the relative path-length errors. Directional errors for the conventional method can be estimated analytically. Each pattern has a fixed number (or list) of directions of propagation, α_i (i = 1, 2, ...), corresponding to the grid pattern number N. If the directional angle β from the source to a given point is not in the list, the fastest way to reach the point in the model is along one of the listed directions, α_k, and then in the adjacent direction α_{k+1}, such that

$$\alpha_k < \beta < \alpha_{k+1}. \qquad (1)$$

The resulting accumulated distance for the two-direction path will be greater than the distance between the source O and the given point A (Fig. 2). The relative increase in path length, δ (corresponding to an increase in the tsunami travel time), is

$$\delta = \frac{\sin(\alpha_{k+1}-\beta) + \sin(\beta-\alpha_k)}{\sin(\alpha_{k+1}-\alpha_k)}. \qquad (2)$$

The relative change in path length, δ, is a maximum when the direction of propagation, β, is equal to the mean of the bounding list directions, α_k and α_{k+1}; specifically,

$$\beta_{\max} = \tfrac{1}{2}\,(\alpha_k + \alpha_{k+1}), \qquad (3)$$

whereby

$$\delta_{\max} = \frac{1}{\cos\!\big(\tfrac{1}{2}(\alpha_{k+1}-\alpha_k)\big)}. \qquad (4)$$

The value of δ_max is highest when either α_k or α_{k+1} coincides with one of the coordinate axes.
The angle β_max and statistics for the relative error, err = δ − 1, are listed in Table 1. Results for the conventional method are listed as P8, P16, P32, P48 and P64 according to the number of points N used in the travel-time computations. Errors for the conventional method are always positive or zero, and are zero only in those directions lying along the direct interconnection between spatial grid points. Based on Table 1, the directional errors are typically small and decrease with an increase in the number of points used in the computational grid pattern. However, because the relative error remains uniform with distance, the absolute error can become significant over long distances, such as in the case of trans-Pacific tsunami propagation.
The wavefront orientation method
We propose a new methodology for calculating tsunami travel time which, in addition to having a small spatial grid pattern, makes use of the most recently derived information regarding the tsunami travel time at the point closest to the current source position. Let O be the current source point and let A5 be the nearest point for which the travel time has been determined (Fig. 3). Because the time it has taken the tsunami to reach the current source position is, by definition, a maximum, the difference in travel (or arrival) time between points O and A5, t_OA5, is known.

Table 2. Relative errors for the proposed wavefront orientation method and a plane square grid. Distance from the source location at point O (0, 0) is measured in terms of the specific point number on the grid. The larger the grid number, the greater the distance from the source region. RMS is the root mean square of the error, and STD is the standard deviation of the errors.
If we assume that the tsunami wavefront is a straight line in the vicinity of the source point, and that the tsunami arrival time, t, in the triangular region O-A5-B7 in Fig. 3 is a linear function of the local spatial coordinates x and y, then Eq. (5) follows, where c⁻¹ is the wave slowness (the inverse of the wave speed, c), and the orientation angle, φ, of the tsunami wavefront can be found from the known time difference, t_OA5, between points O and A5 using Eq. (6), where Δx is the incremental grid step in the x direction. The travel time from point O to point B7 is then given by Eq. (7), where Δy is the grid step along the y direction. Equation (7) allows for more accurate computation of the travel time to point B7 than the conventional method. However, because it is based on previously computed travel times at two points, our method requires some initialization, which can be provided by the conventional method using the simple 8-point algorithm. Unlike the conventional method, this new approach has no fixed directional errors. As a consequence, relative errors decrease with distance from the start location (origin) of the tsunami event. This provides an improvement over the conventional method, whose level of error depends on the initialization pattern.
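Assuming that φ measures the wave propagation direction relative to the x axis, and that A5 and B7 lie at (Δx, 0) and (Δx, Δy) relative to O (the precise sine/cosine assignment depends on the geometry of Fig. 3, which is not reproduced here), Eqs. (5)-(7) take forms such as

$$t(x, y) = t_O + c^{-1}\,(x\cos\varphi + y\sin\varphi), \qquad (5)$$

$$\cos\varphi = \frac{c\, t_{OA5}}{\Delta x}, \qquad (6)$$

$$t_{OB7} = c^{-1}\,(\Delta x\cos\varphi + \Delta y\sin\varphi) = t_{OA5} + c^{-1}\,\Delta y\,\sin\varphi. \qquad (7)$$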
Table 3. Errors in tsunami travel time for a constant-depth (4000 m) ocean with bathymetry gridded at 1-arcminute steps for both the conventional and proposed methods. Values are compared to exact values obtained from an analytical solution. Results for the conventional method are denoted with a "P"; those for the waveform method with an "F". RMS is the root mean square of the error, and STD is the standard deviation of the errors.

Method comparison
Constant-depth ocean on a spherical geographical grid
One of the primary concerns in tsunami research is determining the travel times for trans-oceanic (mostly trans-Pacific) tsunamis. It is, therefore, of interest to determine how our newly proposed method performs relative to the conventional method for a spherical, constant-arc-step grid for the Pacific Ocean. We first examine the method's performance for an ocean of uniform depth and then compare the results to those for an exact analytical solution. Table 3 shows the absolute errors in tsunami travel time obtained for an ocean of 4000 m depth for both methods.
According to Table 3, the maximum timing error decreases with grid pattern number but remains measurable even for the conventional P64 algorithm. The errors are significant for the P32 and, especially, the P16 algorithms. In addition, the conventional method has a positive bias, which needs to be corrected. The GEOWARE software decreases this bias by introducing a "correction coefficient". In contrast, the proposed wavefront orientation method is almost free from errors; the maximum differences between the numerical results and the theoretical (analytical solution) values do not exceed 12 s over the entire propagation distance of 15 000 km.
Figure 4 shows the distribution of errors for the different methods. As for the plane wave case discussed in Sect. 2, the accumulated timing errors for the conventional method generally increase with distance, although the distribution of errors differs from that for the plane wave case. In particular, the errors concentrate on both sides of the meridians passing through the source region but are almost absent near the parallels passing through the source region. This is because, for high latitudes, the wave propagation beams are not straight lines in the spherical coordinate system (except in the meridional direction). Specifically, the beams cannot lie along parallels but must follow great circles. In the near-meridional direction, the relative error is similar to that for the plane wave case, i.e. about 0.5 %, 1.3 % and 2.7 % for the conventional grid cases P64, P32 and P16. For the new wavefront model, the error distribution has little directional bias, mainly because it is an almost error-free calculation to begin with.
Pacific Ocean with realistic seafloor bathymetry
For estimations of tsunami travel time under realistic ocean conditions, accurate measurements of water depth are of major importance. In addition to its direct effect on the actual travel time, a change in water depth can affect the directional error as the wave beams undergo convergence and divergence. As shown by Satake (1988), these effects completely change the beam pattern of tsunami energy flux in the Pacific Ocean compared to that for an ocean of uniform depth. The difficulty with estimating the reliability of the tsunami travel-time calculation over a variable-depth ocean is that it cannot be compared to an analytical solution. Moreover, because of measurement uncertainties, the observed travel times to tsunami recording sites such as tide gauges and bottom pressure recorders are not sufficiently accurate to judge the numerically derived results. We are, therefore, limited to comparing differences between the different numerical methods, taking into account the hierarchy of the conventional method, denoted as models P8 to P64, for which the accuracy increases with increasing spatial pattern number.
It is important to understand how the conventional and wavefront methods compute the travel distance between two points over a variable-depth ocean. As noted earlier, GEOWARE computes the travel times over straight lines connecting pairs of points. For each line, the travel time is computed as the sum of the travel times for each individual grid cell that the line crosses. Inside each grid cell, the algorithm interpolates the inverse celerity so that the computed time between neighbouring points corresponds to the travel time based on the inverse celerity for individual grid-cell regions of uniform depth. Thus, the uniform depth value used is always closest to that for the shallower point.
The wavefront method uses the same type of interpolation. However, because of the short spatial template (distance between neighbouring points), each inner-cell computation is limited to just one cell. We have run computations for the conventional and wavefront methods for a realistic ocean using the identical source position as for the uniform-depth ocean (i.e. 175° W, 50° N) and covering the same spatial domain (lower left corner: 95° E, 72° S; upper right corner: 80° W, 65° N). We have also used the same ETOPO-1 dataset for both the conventional and the wavefront method. To avoid complications arising from different interpolation schemes in coastal areas, we have restricted our comparisons to open-ocean points only, for which water depths are greater than 3000 m. Results of these comparisons are presented in Table 4 and Fig. 5.
Because the conventional method always overestimates the tsunami travel time regardless of the number of grid points used, the travel-time differences between conventional model P16 and model P64 (written P16-P64), or between model P32 and P64 (P32-P64), are on the low side relative to the actual analytically derived timing errors. For this reason, the differences between conventional model P16 and our wavefront method F8 (P16-F8) provide a better measure of the true errors.

Table 4. Statistical parameters for differences in computed trans-Pacific tsunami travel times using the ETOPO-1 dataset for the conventional and the proposed wavefront orientation methods. F8 denotes runs using the new waveform orientation method based on the 8-point GEOWAVE algorithm. RMS is the root mean square of the error, and STD is the standard deviation of the errors.
As shown by the bottom three rows of Table 4, the difference in travel time P16-F8 is close to the difference P16-P64, while the P32-F8 differences are close to the P32-P64 differences. The last row in Table 4 shows differences between the best conventional method results and the wavefront orientation method results. These values are clearly quite low, with maximum and minimum differences of only 1.6 min and −1.7 min, respectively. On the other hand, the differences P16-F8, P32-F8 and P64-F8 are consistent with the first three rows of Table 3 (which compares P16, P32 and P64 with the exact solutions for a uniform-depth ocean). This indicates that F8 (which combines the P8 algorithm with the wavefront orientation algorithm) outperforms all of the conventional runs, including P64, even for the variable-depth case.
Figure 5 shows the difference distributions, presented in the same order as in Table 4. Generally, the distributions for the cases P16-F8, P32-F8 and P64-F8 (panels a2, b2 and c2) are similar to the error distributions shown in Fig. 4 (panels a, b, and c, which represent the differences between the conventional method and the exact solution for the uniform-depth ocean case). This again confirms that the new method provides more accurate time-of-arrival values than the conventional method.
Discussion and conclusion
The previous results demonstrate that our proposed "wavefront orientation method" provides a more effective and accurate methodology for calculating the travel times of trans-oceanic tsunamis than the conventional method. Moreover, our method works especially well for large bathymetric grid regions; the larger the gridded array, the more advantageous the method.
It is important to note that the accuracy of our newly proposed method should not be confused with the general accuracy of tsunami travel-time calculations for specific events. Actual tsunami travel times depend on many factors that are independent of the particular algorithm being used (Wessel, 2009). With respect to the two methods discussed in this study, both assume small depth changes inside each grid cell. In practice, the actual bathymetry often varies more rapidly than is assumed by the numerical methods. This raises the question of whether any method accurately represents real situations. If the water depth (or, more to the point, the wave celerity) varies between neighbouring points, the travel-time error can be of the order of the computed values themselves (which in turn strongly depends on the type of interpolation used between points). In the deep ocean, the travel time between points can be small, and, accordingly, the error will be small. Unfortunately, the biggest changes in water depth usually occur in shallow areas, where the travel time between points is long and, accordingly, the errors are especially large. The only way to avoid such errors is to increase the grid resolution. Actual travel-time accuracy also depends on the accuracy of the bathymetric dataset. In the open ocean, present gridded global datasets are sufficient for most practical applications. However, this is not the case for coastal areas. Moreover, local, high-resolution bathymetric data are not fully incorporated in the ETOPO datasets provided with the GEOWARE software. This can lead to significant errors in the travel-time estimates. In addition, global datasets do not resolve narrow passes in coastal areas, necessitating the use of locally nested bathymetric data. The limitations of the bathymetric data are compounded by the fact that the tsunami generation region is typically poorly defined. In a scientific twist, observed tsunami travel times are often needed to better define the source region that generated the waves in the first place.
There are other physical factors which can cause the actual speeds of tsunami waves to depart from wave speed estimates obtained from simple linear shallow water theory. One of the most important factors is frequency dispersion related to non-hydrostatic effects. Non-hydrostatic effects are more important during far-field wave propagation than during near-field tsunami wave propagation, or when the tsunami source area is relatively small (shorter waves) and/or located in relatively deep water (González and Kulikov, 1993). Although non-hydrostatic effects modify tsunami wave propagation in a depth-variable ocean, the travel times for non-hydrostatic waves can be computed in the same way as for shallow water theory by examining each wave frequency separately. This is because, for a given frequency, the arrival time is determined by the group velocity of the dispersive waves, which depends on the depth only.

Fig. 5. Maps of differences in tsunami travel time (minutes) using numerical solutions for the ETOPO-1 grid for the Pacific Ocean. Left-hand panels: inner-method differences in travel time for the conventional solutions only; (a1) between P16 and P64; and (b1) between P32 and P64. Right-hand panels: between-model differences in travel time derived using the conventional and waveform orientation methods; (a2) between conventional model P16 and the waveform method; (b2) between conventional model P32 and the waveform method; and (c2) between conventional model P64 and the waveform method.
In shallow-water regions, frequency dispersion becomes less important. However, non-linear effects arising from wave interaction with the tsunami wave field, and from interaction between the tsunami waves and tidal currents, can modify the travel times. In estuarine areas, tidal motions and river flow may also alter the travel times.
As shown by Watada (2013), ocean stratification, seawater compressibility and the elasticity of the solid earth also affect the gravity waves. All of these factors can generate noticeable changes to tsunami travel times.
Regardless of the factors affecting tsunami wave propagation, it is clear that accurate estimation of tsunami travel time is of considerable importance. Because Huygens' Principle does not depend on the wave speed formulation, the above factors (excluding perhaps the non-linear effects) can be included in our travel-time algorithm. Thus, in addition to providing improved computational efficiency and better accuracy compared to the conventional methodology, our proposed method can be used to enhance general tsunami research.
Fig. 1. Schematic showing one time step in the calculation of tsunami travel time for an N = 32 spatial pattern of neighbouring grid points for the conventional method. Grid points are divided into three categories: (A) points for which the travel time has been calculated (dark grey area); (B) points in the frontal zone (light grey area) for which the time is being calculated; and (C) points where no calculation has yet been made (unfilled area).
Fig. 2. Schematic showing the source of the directional error arising from a calculation based on wave propagation along a path with two separate line segments (solid lines). The dashed line denotes the path for which there is no directional error but for which there is no intermediate point, B.
Fig. 3. Schematic showing the N = 8 pattern of neighbouring spatial points for the proposed wavefront orientation method. The shading is the same as for the conventional method presented in Fig. 1. The group of three small arrows shows the direction of wave propagation.
Fig. 4. Spatial distributions of tsunami travel-time errors (minutes) for a uniform-depth (4000 m) ocean using: (a, b, c) the conventional method for N = 16, 32, and 64 neighbouring point patterns, respectively; and (d) the wavefront orientation method based on an N = 8 neighbouring point algorithm.
Table 1. Directional error parameters for the conventional method (denoted by the letter "P") applied to a plane square grid. RMS is the root mean square of the error, and STD is the standard deviation of the errors. Column 2 gives the angle yielding the maximum error.
Use of a Public Fishing Area Determined by Vehicle Counters with Verification by Trail Cameras
This study was performed in order to determine visitation at remote areas where on-site surveys would be unaffordable for logistical reasons. Four Trafx vehicle counters, each programmed with different settings, were placed along the lone access road to remotely sense the daily use activities and to assess count accuracy at the New Underwood Lake Public Water Access Area. Use was corroborated during daylight hours with game cameras. Data were stratified into weekday and weekend periods due to differences between the two. Two counter settings, threshold and delay, were best when set at a value of 8, but a value of 16 for delay provided almost equal results. Overall, there were 38 counts of use per day for a total of 2318 over the 61-day period. This study demonstrated how vehicle counters, in combination with game cameras for verification, can aid managers in determining use at remote access areas. Future work may lead to identifying details for producing a surrogate to traditional angler use surveys.
Introduction
Sport fisheries are a major component of inland fisheries in northern industrialized countries [1] [2] [3] [4]. From the perspective of the individual angler, sport fishing is defined as any method that is non-commercial in nature [3] [4]. Techniques for gaining insight into the use of specific fisheries are long established and have been modified over time [5] [6] [7] [8]. These techniques, with expansions into other related fields (i.e., marketing outreach, management plans, decision analysis and human dimensions), may increase the sustainability of inland fisheries [1].
Angler surveys can themselves be divided into several categories [5]. Contacting anglers can occur through many methodologies, with direct contact methods being commonly used. Roving creel and bus route are two designs commonly used in freshwater angler surveys, each with shortcomings, yet having specific strengths in certain situations [5]. Roving site design methods are extensively used to sample recreational fisheries and provide information essential to management activities, such as estimating fishing effort and catch rate, and are used on a variety of waters [9]. Bus route designs have been shown to provide an excellent alternative to roving creel surveys, but are based on large water systems with many access sites being visited in a single day. Alternatively, mail surveys, including internet surveys, can obtain information on anglers' past experiences and are at least perceived to have good accuracy and to save money, yet have issues and biases as well [10] [11] [12] [13].
One concern with the proposals of Arlinghaus et al. [1] is the discordance between determining angler use in sparsely populated, rural areas and fiscally justifying the collection when few contacts are anticipated. In some instances, a planned reduction of data-gathering effort can achieve similar results [9]. Yet, it is prudent for researchers to determine whether alternatives to data collection, different from the most common design approaches of creel surveys, can describe angler use in these situations. Estimating use by anglers is often necessary for managers to make informed decisions on fisheries management and can be used in prioritizing habitat and access improvements.
Use is commonly determined in situations where there is a need to quantify human activity in managed areas [14] [15]. To achieve this, a variety of methods for defining the processes of collecting data, and for determining the accuracy of the data collected, have been developed by Watson et al. [14]. One of the approaches described by Watson et al. [14] demonstrated a theoretical method using mechanical counters paired with visual calibration. A recent work combined the use of traffic counters with visual observation to estimate daily and seasonal use [13]. Often, human observers are used in the calibration method; however, this study considered whether trail cameras could be used in lieu of more expensive alternatives. Other researchers have utilized cameras to aid in determining angler effort or fishery closure compliance [13].
Study Area
New Underwood Lake Public Water Access Area (NULPWAA) is a 62.3 ha public recreation area northwest of New Underwood, South Dakota (population, 674) (Figure 1). The most proximate population center is Rapid City (population, 70,812), which lies 37.7 km west of New Underwood. About one-half of NULPWAA consists of a small dam that is managed for a number of warmwater fish species (i.e., largemouth bass (Micropterus salmoides), channel catfish (Ictalurus punctatus), and bluegill (Lepomis macrochirus)). Gravel roads allow public access to the shoreline of NULPWAA and were central in the project design.
Vehicle Counter Settings
Four Trafx™ vehicle counters (TRAFx Research Ltd., Canmore, Alberta, Generation 3) were placed just under the soil surface in separate watertight plastic boxes, 2.4 meters (8 feet) from the road edge where vehicles would be passing (Figure 2). These counters record a count with a change in the local electromagnetic field. Most combustion engines change the local electromagnetic field and thus would trigger a count by the TRAFx counter. Each counter was programmed with differing variants of the delay and threshold settings (TRAFx Manual: Part II, Vehicle Counter G3, Jan 2010) (Table 1). TRAFx™ counters have three settings (i.e., delay, threshold and rate) which can be adjusted to fit particular situations. The delay setting allows for a break between counts so that only one count is made per vehicle. Threshold adjusts vehicle detection sensitivity.
Analysis of Counter Accuracy
The months of June and July 2016 were used in the analysis of counter accuracy, with each day considered a separate sample. This gave a total sample size for counter accuracy of 61 (30 days in June, 31 days in July). Each day was run separately for accuracy through a process defined by Watson et al. [14] for computing an estimate of use. This process required the counts to be tested for accuracy, which was accomplished in this study with the use of a trail camera system (Plotwatcher Pro™, http://day6outdoors.com/index.php/products/). The Plotwatcher system allows photos to be taken at predetermined intervals (every five seconds in this study); when finished, the photos are "stitched" into one video file per day via proprietary Tru-Video™ software (Day 6 Outdoors, LLC., Columbus, Georgia). Each video file was blind reviewed and later compared to the counter data. The paired data were used to establish a calibration relationship between the counter and observed counts; the analysis of accuracy, including a ratio estimator, for each counter setting was determined through the summation of each day period across the two-month period.
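A minimal sketch of this calibration step, assuming the paired data reduce to simple per-day totals (the numbers below are illustrative, not the study's data), is:

```python
# A minimal sketch of the daily calibration, assuming the paired data
# reduce to per-day totals; the numbers are illustrative, not the
# study's data. The ratio estimator takes the general form of summed
# verified counts over summed counter counts.

counter_counts = [40, 38, 41, 37]   # mechanical counts per day
camera_counts = [39, 38, 40, 37]    # blind-reviewed video counts per day

r = sum(camera_counts) / sum(counter_counts)
print(f"r estimator: {r:.2f}")      # share of counts attributable to visitors

# Raw counter data can then be corrected by the calibration ratio.
corrected = [round(c * r) for c in counter_counts]
print(corrected)
```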
Overall Use
Use at the NULPWAA was assumed to be related to angling during the sampling period. This assumption was based on the facts that few other activities were legal (i.e., no hunting seasons were open) and that much of the area's road use is associated with lake use activities. Additionally, a separate, part-time creel survey was being conducted, and no other nefarious activities were reported. Data were stratified by day of week to determine whether there were significant differences in visitation between weekday and weekend/holiday periods through chi-squared analysis.
Counts were derived from the totals of the TRAFx data, corrected using the most accurate settings and corrected for day-of-week bias as determined from data post-processing.
Comparison of Counter Settings
From the sixty-one sampling days, mechanical counts ranged from 37 to nearly 41 per day (Table 2). Strong correlations were observed between the average counts made by the counter and those verified by the camera system. An r estimator [14] showed that the strongest relationship was obtained with counter 1 (delay 8, threshold 8). This estimate indicated that 99 percent of the counts could be attributed to visitor traffic, with the remaining one percent due to other factors. The weakest relationship was indicated with counter 2 (delay 8, threshold 3), where 87 percent of the counts were verified between the systems as visitor use and 13 percent were from other causes. To account for any possible data bias based on day of the week, stratification by day was investigated.
A chi-squared test of independence was performed to examine the relation between the days of the week and mechanical counts. The relation between these variables was significant (χ² = 181.9, p < 0.05), and there were differences between visitation and days of the week. Due to this, the data were stratified by weekends and weekdays (Table 3). It was previously determined that counter one was the most accurate; using this counter alone gave, on average, 38 counts per day. Weekend counts of 50 and 51 were calculated from the various counter settings (Table 3). Results were similar for weekdays, where counts of 28 or 29 were calculated. The weekend and weekday calculated results produced overall average estimates of 35 or 38 by the separate counters. These counts equal twice the number of vehicles using the NULPWAA, as each vehicle would be counted going in and coming out.
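For reference, a chi-squared test of independence of this kind can be run as sketched below; the contingency table is illustrative, not the study's data:

```python
# A hedged sketch of a chi-squared test of independence; the
# contingency table values are illustrative, not the study's data.
from scipy.stats import chi2_contingency

# Rows: weekday vs. weekend/holiday counts; columns: June and July.
table = [[600, 580],
         [450, 480]]

chi2, p, dof, expected = chi2_contingency(table)
if p < 0.05:
    print(f"visitation differs by day type (chi2={chi2:.1f}, p={p:.3f})")
```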
Discussion
This study demonstrated that using counters to determine use in remote areas is a viable option. By testing four combinations of threshold and delay settings on TRAFx counters, it was revealed that one specific combination was more accurate than the others: setting both the threshold and delay at a value of eight was the most accurate of the four settings tested. Others have noted that calibration of remote counters is imperative in the study design for accurate results [18] [19] [20] [21]. Watson et al. [14] give a detailed review of alternatives for setting up studies, evaluation of counter data, and thoughts on reducing bias in the study design.
Interviewing every angler every time they fish is problematic and prohibitive based on monetary, spatial, and temporal limitations [8] [9]. A statistical design must be employed to overcome this problem, utilizing a subsample of the population based on a randomized schedule [5]. To produce statistically valid estimates of variables such as catch and effort, all anglers must theoretically have the chance to be surveyed. Without a detailed knowledge of the angling group in remote areas, the possibility of interviewing all groups may not be met. Knowing the extent of visits to angling destinations can aid managers in the design phase of surveys. Potential impacts might include determining whether to stratify the data between weekdays and weekends for reduced bias [14] [21]. Malvestuto [22] and Pollock et al. [5] demonstrated how it was appropriate to stratify weekends and weekdays in some angler survey situations, and Brandenburg and Ploner [23] found differences between day types and user groups visiting a natural area. Acknowledging that there are likely more anglers on weekends, and sampling these periods with slightly greater effort, produces results that are statistically appropriate. In many creel surveys, obtaining the needed number of interviews and associated catch data can be a limiting factor. Knowing that there are often more anglers recreating on weekends than during the normal weekday period can justify stratifying the sampling design towards the more heavily used period. Newman et al. [24] studied how stratified count designs compared with a total census and found that smaller sample sizes lead to large variance and poor confidence intervals for harvest rates.
Few studies derive fishing effort with the use of traffic counters. Douglas and Giles [19] did determine specific travel use in the planning of a creel survey. Lees + Associates [25] described the use of counters and their placement at boat ramps, and Ryan et al. [26] noted that they used counters to generate estimates of angler effort, but did not report those findings. A more developed study presented how estimates of fishing effort and harvest could be improved over common access point surveys with supplemental data gathered by traffic counters [6]. Detailed study of the extent of angler effort through technology such as trail counters is still in its infancy. Other related fields have examined specific phenomena and were aided in data collection by traffic counters [27].
TRAFx counters® have been successfully used to study the effects of roads on red squirrel movement patterns [27] and to determine the risk assessment for trees in an urban environment [28], but many efforts have been directed towards park and wilderness management and use. It has long been acknowledged that problems existed in data collection involving wilderness management [29]. In part, this was a stimulus for other projects [10] [15], in which mechanisms and procedures were defined for future studies.
Use, and the ability to utilize technology to determine a numerical estimate of it, has increased over time. Issues in collecting user data were noted, with possible "alternatives" towards the collection of data [29] [30]. Early work noted that many wilderness managers relied on best guesses for an estimate of use [31]. By utilizing advancements in technology, managers are now able to determine valuable information at lower cost than previous attempts. During this study, the use of remote counters in conjunction with verification through trail camera systems was demonstrated to be a valuable tool in determining use at one lake. Verification by images was useful in that it indicated the data needed to be stratified by weekend and weekday for better accuracy. While some standard angler use metrics cannot be quantified using this method (e.g., catch, harvest numbers, etc.), it provides an inexpensive way for basic use estimates to be generated passively and shows promise for collecting use data at remote western fisheries. Using just a trail camera system as a measurement is one option for determining pressure at remote fishing areas. However, the time to review the images can be extensive and costly compared to vehicle counters alone.
Figure 1. Location map of New Underwood Lake Public Water Access Area in relation to the northern plains states of the United States.
Delay and threshold were sequenced in order to determine accuracy. An assumption of the sampling design was that, due to the gravel road, vehicle speed would be less than 50 km/hr (30 mph); thus a rate setting of "slow" was used in this study. The TRAFx™ counters used in this study sense a change in the magnetometer field through associated software for individual counts.
Figure 2. Representation of buried counters (not to scale) in relation to the service road entrance to New Underwood Lake Public Water Access Area. The dark rectangles represent the counters contained within plastic containers and placed underground.
Table 1. Vehicle counter variants across four counters tested from May-August 2016.
Table 2. Results of vehicle counts by mechanical versus photographic techniques with calculated "r estimation" for overall accuracy from May-August 2016.
Table 3. Results of vehicle counts stratified by weekend/weekday from May-August 2016, with corrected estimator from Watson et al. [10].
Editorial: Cyclic Nucleotide Phosphodiesterases (PDEs) in Immune Regulation and Inflammation
Cyclic nucleotide phosphodiesterases (PDEs) are the only enzymes to hydrolyze cyclic nucleotides, providing spatial and temporal control over the levels of cAMP/cGMP within cells; the PDE families differ in their specificity for cyclic nucleotides and in their structural features, making them attractive targets for inflammatory diseases. PDE4-selective inhibitors are used for the oral treatment of chronic obstructive pulmonary disease, psoriatic arthritis, and plaque psoriasis, as well as for the topical treatment of atopic dermatitis. Recent advances demonstrate a role for inflammation in many neurodegenerative pathologies. Due to the unique roles of specific PDE isoforms in controlling distinct pools of cyclic nucleotides in leukocytes, additional PDEs have become a focus of the search for potentially novel therapeutics for inflammatory disorders. First, basic molecular functions of PDEs in cell regulation are addressed. Kurelic et al. show that PDE2A is upregulated during activation of effector/conventional CD4+ T cells in vitro. Combining selective inhibitor studies, approaches to induce accumulation of cGMP through various activators, and elegant real-time FRET imaging, they demonstrate an increase of cAMP in non-activated and a decrease of cAMP in activated effector/conventional CD4+ T cells, presumably dependent on the expression of PDE2A. These studies could explain the context-dependent regulation of effector/conventional T cell activation and function and provide the basis for developing new therapeutic approaches to T cell-mediated autoimmune disorders. Chinn et al. identify PDE4B as the primary PDE expressed in dendritic cells (DCs) and demonstrate, using DC-specific depletion of Gαs, that dynamic, bidirectional regulation of PDE4B expression acts as a key homeostatic regulator of cAMP levels in DCs. They further show that inhibition of PDE4B in Gαs-depleted DCs decreased Th2 cell differentiation, which, in concert with the known phenotype of DC-selective Gαs depletion in mice, suggests PDE4B inhibition as a target for Th2-allergic asthma. Golshiri et al. highlighted the complexities of exploring PDE function in disease settings, in this case the role of PDE1 in a mouse knockout model of smooth muscle cell (SMC) aging.
They found contrasting effects of the PDE1 inhibitor lenrispodun on different in vivo and ex vivo measures of accelerated SMC aging, dependent on acute vs. chronic exposure, in vivo or ex vivo administration, and which pathological outcomes were being measured. Reminiscent of Chinn et al., their data suggested that inhibition of PDE1C leading to elevated cAMP/cGMP might be met by a compensatory upregulation of PDE1A that abrogates some of the lenrispodun-mediated effects. Thus, a deeper exploration of these complementary pharmacologies is called for.
An area of intense research is covered by the contributions of Hoffman and Herrmann et al., describing the screening and testing of novel PDE inhibitors. A unique and novel approach to identifying PDE inhibitors is presented by Hoffman, who developed a platform for expressing cloned PDEs in the fission yeast Schizosaccharomyces pombe, which allows inexpensive but robust screening for small-molecule inhibitors that are cell permeable. An advantage of this screening method is that it is readily accessible to academic laboratories and does not require the purification of large quantities of target protein. In employing yeast as a model organism for screens of PDE inhibitors, as an alternative to enzyme-based screens, Hoffman demonstrates its significant potential both for the discovery and profiling of PDE inhibitors to treat inflammation and for inhibitors of targets such as pathogen PDEs to treat infections by parasitic nematodes. Herrmann et al. explore the therapeutic utility of novel PDE4 inhibitors in treating idiopathic pulmonary fibrosis. They present data on BI 1015550 as a candidate PDE4 inhibitor that has advanced into clinical trials for this indication; their report appears to be the first published dissemination of preclinical data on this novel inhibitor. Testing oral application of BI 1015550 in two established mouse models of lung fibrosis, the authors directly measured clinically relevant endpoints of pulmonary function. Importantly, the study directly addresses the established limitations of using PDE4 inhibitors in humans by demonstrating that BI 1015550 shows preferential inhibitory activity against the PDE4B isoform. This selectivity may be of advantage in reducing the nausea/emesis side effects that are known to occur with the therapeutic use of PDE4 inhibitors.
Due to the therapeutic efficacy of PDE inhibitors in several human diseases, major efforts are spent on broadening the scope of treatments by testing inhibitors in many preclinical disease models. The emerging role of PDEs at the intersection between CNS disorders and neuroinflammation has been investigated by Pilarzyk et al. The mRNA expression of PDE11A in brain, represented solely by the PDE11A4 subtype, is restricted to the hippocampus. This group previously showed that social isolation changes subsequent social behaviors in adult mice by reducing the expression of PDE11A4 in the membrane fraction of the ventral hippocampus. In this paper, they extend these findings to show that both acute and chronic social isolation decrease PDE11A4 in adult but not adolescent mice, and moreover that isolation-induced decreases in membrane PDE11A4 correlate with increased expression of interleukin-6 and activation of microglia. Hence, these findings suggest that alteration in PDE11A4 expression may be a key molecular mechanism by which social isolation increases neuroinflammation and alters social behavior. In the context of experimental periodontitis, Kuo et al. investigated the molecular and cellular actions of a unique xanthine and piperazine derivative (KMUP-1). KMUP-1 exhibits profound inhibition of PDE3, 4, and 5 and therefore increases cellular levels of cAMP and cGMP. Using in vitro models of inflammation, Kuo et al. showed that KMUP-1 displayed anti-osteoclastogenic and anti-inflammatory activities in a macrophage cell line, mainly through a cGMP/PKG-dependent pathway. The consequent reduction of proinflammatory signaling pathways and cytokine expression, and the downregulation of crucial regulators of osteoclasts, explain the KMUP-1-induced protective effects on osteoclast maturation and alveolar bone loss in two rat models.
Collectively, the articles of this Research Topic showcase recent investigations of PDE functions in inflammation and provide an overview of their potential as targets for established and novel inhibitors for the treatment of inflammation in animal models and human disease.
AUTHOR CONTRIBUTIONS
SB, PE, RN, and TV edited the Research Topic and wrote the Editorial. All authors contributed to the article and approved the submitted version.
FUNDING
The editors of this Research Topic were funded by the NMSS, FWO, Fondation Charcot Stichting and the Smart Family Foundation.
Combined pulmonary fibrosis and emphysema syndrome : a radiologic perspective
Introduction
Chronic obstructive pulmonary disease (COPD) and pulmonary fibrosis (PF) are chronic progressive lung diseases, both of which account for significant morbidity and mortality. COPD is currently predicted to become the third leading cause of death worldwide in 2020 [1,2]. The impact is so great because it affects such a large population during the prime of their life. Pulmonary fibrosis is a progressive disease with an average survival of 3 to 5 years following diagnosis [3]. Both of these diseases have different etiologies with distinct pathogenic, clinical, functional, radiological, and pathological characteristics. The syndrome of combined pulmonary fibrosis and emphysema (CPFE) is a newly recognized syndrome which has a much worse prognosis than emphysema or fibrosis occurring in isolation.
Combined pulmonary fibrosis and emphysema syndrome was first described in a series of eight patients by Wiggins et al. in 1990 [4]. Some have questioned whether CPFE is really a new and distinct disease, or simply represents the coexistence of the two distinct pathological alterations. However, more recently, CPFE has been characterized as an individual entity that is separate from both pulmonary fibrosis and emphysema alone because, when occurring together, they are associated with distinct symptoms and clinical manifestations [5]. Computed tomography (CT) of the chest is the most important modality used in the diagnosis of CPFE. It shows emphysema of the upper lung zones and fibrosis predominating in the lower lung zones. Fibrotic changes are seen on the chest CT as honeycombing, architectural distortion, reticular markings, traction bronchiectasis, and occasionally ground glass opacities (representing fine fibrosis), all with a peripheral distribution. Clinically, the patients demonstrate severe hypoxemia with normal to slightly reduced lung volumes and severely reduced diffusing lung capacity for carbon monoxide (DLCO). CPFE syndrome is often accompanied by pulmonary arterial hypertension, which worsens with disease progression and negatively impacts patient prognosis.
Given that radiology plays a key role in the early diagnosis of CPFE syndrome, our intention is to elucidate not only the clinical manifestations of this disease but also its radiologic features, reviewing fundamental points and findings of this syndrome.

Current classification distinguishes seven idiopathic interstitial pneumonias (IIPs) based on histopathological patterns. The seven clinicopathological entities include: nonspecific interstitial pneumonia (NSIP), organizing pneumonia (OP), acute interstitial pneumonia (AIP), respiratory bronchiolitis-interstitial lung disease (RB-ILD), desquamative interstitial pneumonia (DIP), lymphocytic interstitial pneumonia (LIP), and usual interstitial pneumonia (UIP) [5]. UIP is the most common of the IIPs, accounting for 50-60% of cases [6]. The histological pattern of UIP may develop secondary to dust exposure (e.g., asbestosis), drug toxicity, and collagen-vascular diseases (e.g., rheumatoid arthritis), or it can be seen in the setting of chronic hypersensitivity pneumonitis [5]. In many cases, after detailed clinical evaluation, no etiology is found, so the morphologic pattern of UIP is considered synonymous with the clinical syndrome of idiopathic pulmonary fibrosis (IPF).
Histopathologic characteristics of UIP include heterogeneous areas of pulmonary fibrosis with fibroblastic foci in association with areas of normal lung, interstitial inflammation, and honeycomb changes, with the temporal heterogeneity being the most characteristic feature [5,7]. There may also be mild inflammatory cell infiltration to a lesser extent [8]. The factors that initiate and maintain the inflammatory and fibrotic responses observed in IPF remain unknown, but the current hypothesis is that IPF pathogenesis involves different mechanisms that include repetitive lung injury, deposition of collagen and extracellular matrix, inflammation, and proliferation of fibroblasts, in the setting of an inappropriate healing response [9].
Idiopathic pulmonary fibrosis is a chronic and progressive fibrosing interstitial lung disease with a highly variable clinical course [10][11][12] and an average survival of less than 5 years after diagnosis. IPF has a worse prognosis when compared with other causes of pulmonary fibrosis. It affects approximately 14-20 per 100,000 people in the general population [13,14] and currently has no curative treatment. Lung transplant is an option for select patients and can help increase short-term survival but is not without its own complications.
The diagnosis of IPF is based on clinical, radiographic, and histopathologic characteristics, and accuracy increases with multidisciplinary discussion between pulmonologists, radiologists, and pathologists experienced in the diagnosis of interstitial lung disease [12,14]. According to the official ATS/ERS/JRS/ALAT 2011 consensus, the diagnosis of IPF requires: exclusion of other known causes of interstitial lung disease; presence of a UIP pattern on high-resolution computed tomography (HRCT) in patients not subjected to surgical lung biopsy; or specific combinations of HRCT and surgical lung biopsy patterns in patients who have undergone lung biopsy [12] (Table 1). Common clinical features include progressive dyspnea, dry cough, and the presence of basilar 'velcro' crackles on physical exam. Digital clubbing and signs of cor pulmonale may also be present. IPF is rare before the age of 50 and is more prevalent in men, with the majority of patients having a history of cigarette smoking [15][16][17][18].
High-resolution computed tomography plays a vital role in the diagnosis of IPF and can help distinguish UIP from other IIPs [19]. When classic features of UIP are present on HRCT, tissue biopsy of the lungs is not needed. If HRCT does not demonstrate the classic features, a definitive pathological diagnosis of UIP can be made from open or video-assisted thoracoscopic biopsy. The presence of coexisting pleural abnormalities such as pleural plaques, calcifications, or significant pleural effusion suggests an alternative etiology for the UIP pattern.
The course of IPF is variable: a few patients remain stable for long periods of time, while a significant proportion develop progressive dyspnea. The dyspnea is particularly present during exertion and worsens in periods of acute exacerbation. It can resolve rapidly or progress to respiratory failure and death, especially when associated with pulmonary arterial hypertension [20].
Pulmonary arterial hypertension (PAH) is defined as a mean pulmonary artery pressure of >25 mm Hg at rest. When present secondary to underlying pulmonary fibrosis, it increases the already high mortality rate in these patients [21]. Early studies showed that patients with IPF had higher mean pulmonary arterial pressures compared to patients with other non-IPF interstitial lung diseases [22]. Pulmonary fibrosis probably contributes to the pathogenesis of pulmonary arterial hypertension as a result of chronic alveolar hypoxia and vascular remodeling, but the most important mechanism is likely the reduction in the pulmonary vascular bed secondary to the extension of fibrosis itself [23]. Patients with pulmonary fibrosis are also at increased risk for pulmonary embolism, which is probably secondary to endothelial damage [24].
COPD: definition and background
Chronic obstructive pulmonary disease was the 12th leading cause of disability in the world during the 1990s [25] but is estimated to increase in prevalence, with current predictions that it will be among the top five leading causes of death by 2030 [26]. It results in an economic and social burden that is both substantial and increasing because it affects a population during their most productive work years. In the United States, COPD was responsible for 126,000 deaths in 2005 [27]. The estimated prevalence of COPD varies from 7 to 19% in several well-conducted studies around the world [28][29][30][31][32][33]. However, COPD still remains underdiagnosed, especially because it often stays clinically silent until it is at an advanced and irreversible stage [34,35].
COPD is a preventable disease characterized by airflow limitation that is not fully reversible and is staged on the basis of quantification of the severity of the irreversible airflow obstruction [36]. The airflow obstruction is secondary to an abnormal inflammatory response of the lungs to noxious particles or gases, primarily caused by tobacco smoking. It is usually progressive and can be the end result of multiple distinct events. Other important risk factors include occupational and organic dust exposures, socio-economic status, and genetic factors. COPD has a variable natural history, with different individuals developing distinct clinical courses [37]. With the progression of the disease, COPD patients demonstrate compromised gas exchange leading to respiratory failure with a poor prognosis. Death is frequently secondary to associated co-morbidities that are also related to cigarette smoking, such as heart disease and lung cancer [36,38].
Repetitive airway injury from inhaled tobacco smoke toxins, chronic oxidative stress, and an imbalance of proteinases and antiproteinases trigger an inflammatory response that leads to structural changes in the airways and lung parenchyma [39,40]. COPD comprises pathological changes in four different compartments: central airways, peripheral airways, lung parenchyma, and pulmonary vasculature. The main physiologic abnormalities in COPD are mucous hypersecretion and ciliary dysfunction; airflow limitation and hyperinflation; gas exchange abnormalities; and systemic effects and pulmonary hypertension [41].
Emphysema and obstructive airway disease are two commonly described subtypes of COPD. Considerable overlap, with different proportions of each of these processes, exists in COPD subjects [42]. Clinically, patients with COPD are classified according to Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages based on the severity of airway obstruction [36], but individuals with the same GOLD stage can exhibit markedly different degrees of emphysema radiographically. Emphysema is a pathologic diagnosis defined as "permanent abnormal enlargement of the airspaces distal to the terminal bronchiole associated with destruction of their walls and no obvious fibrosis" [43]. There are three main subtypes of emphysema based on anatomic distribution: centriacinar (also known as centrilobular), panacinar (also known as panlobular), and paraseptal (Table 2). These subtypes of emphysema are better distinguished morphologically during early stages, but when they become more severe, their distinction becomes more difficult.
Centriacinar emphysema primarily affects the respiratory bronchioles and alveoli in the central portion of the secondary pulmonary lobule. This subtype of emphysema predominantly involves the upper lung zones and typically results from cigarette smoking. With increasing severity of centriacinar emphysema, destruction progresses to involve the entire secondary pulmonary lobule, making it more difficult to distinguish from the panacinar subtype of emphysema.
Panacinar emphysema destroys the entire secondary pulmonary lobule uniformly, creating multiple areas of decreased lung attenuation with a paucity of vessels in the affected regions. It predominates in the lower lobes. This type of emphysema is classically associated with alpha-1 antitrypsin deficiency but can be seen without protease deficiency in smokers, elderly patients, and in association with certain drugs [43]. Paraseptal emphysema predominantly involves the alveolar ducts and sacs at the distal aspect of the acini, with the areas of destruction often marginated by interlobular septa. It can be an isolated finding in young patients, often associated with spontaneous pneumothorax, or can be seen in older patients with centrilobular emphysema [43,44]. This type of emphysema is usually seen in association with the centriacinar type in patients who are cigarette smokers but can also be idiopathic in etiology.
Pulmonary hypertension is present in a significant proportion of patients with advanced COPD due to hypoxic vasoconstriction and secondary remodeling of distal small pulmonary arterial branches. The loss of the pulmonary capillary bed in emphysema can also contribute to increased pressure in the pulmonary circulation [45,46]. The progression of pulmonary arterial hypertension usually leads to elevated right heart pressures and right ventricular hypertrophy, ultimately resulting in right-sided cardiac failure (cor pulmonale).
Combined pulmonary fibrosis and emphysema (CPFE)
The combination of pulmonary fibrosis and emphysema was initially thought to be an incidental coexistence of the two diseases, and it is still unclear whether both entities coexist by coincidence, each with a distinct pathogenesis, or whether some individuals share a common pathway that leads to both fibrosis and emphysema. Animal models suggested a shared mechanism through which cigarette smoke leads to both pulmonary emphysema and fibrosis [47][48][49]. Such a theory has yet to be demonstrated in human subjects; however, several studies have demonstrated an increased prevalence of IPF in patients with a smoking history, suggesting that smoking may indeed be a common pathogenic factor [15][16][17][18].
Nonetheless, CPFE has been treated as a distinct disease since Cottin et al. in 2005 presented a case series in heavy smokers [50]. Patients with combined pulmonary fibrosis and emphysema (CPFE) syndrome clinically demonstrate severe hypoxemia and decreased DLCO with normal to mildly reduced lung volumes. They often have severe secondary pulmonary arterial hypertension, which worsens the prognosis dramatically [50,51].
In the retrospective study conducted by Cottin et al. in 2005 [50], all of the 61 patients were smokers or ex-smokers and most were males, with a mean age of 65 years. The CPFE syndrome was diagnosed by chest computed tomography. This study was one of the first to describe several clinical characteristics of this syndrome: dyspnea on exertion was present in all patients; bibasilar crackles in 87% of patients; clubbing in 43% of patients; and pulmonary arterial hypertension in 47% of patients at diagnosis and in 55% during follow-up. Patients were followed in this study for a mean of 2.1 ± 2.8 years after diagnosis. Survival was 87.5% at 2 years and 54.6% at 5 years, with a median survival of 6.1 years. The presence of PAH at diagnosis was a critical determinant of prognosis in this syndrome.
Recently, Cottin et al. not only confirmed but also demonstrated, in a retrospective multicenter study conducted in 40 patients (38 males; age 68 ± 9 years; 39 smokers) with CPFE and pulmonary arterial hypertension (confirmed by right heart catheterization), that higher pulmonary vascular resistance, higher heart rate, lower cardiac index, and lower carbon monoxide diffusion transfer were associated with increased mortality [52]. This study provides several new and important insights. For example, they found that the mean time to diagnosis of PAH after diagnosing CPFE was 16 months, suggesting that pulmonary arterial hypertension develops rapidly after CPFE is diagnosed. They also showed that patients with CPFE who develop pulmonary arterial hypertension have a worse prognosis, with an estimated survival of only 60% at 1 year.
Pulmonary arterial hypertension is one of the leading causes of symptoms and negatively impacts the prognosis in these patients to a level much worse than that seen among patients with either pulmonary fibrosis or emphysema alone. The literature has shown that at the time the diagnosis of CPFE is made, 47% of patients are found to have PAH, with the percentage increasing with progression of disease [50][51][52]. It remains uncertain whether PAH is a response to CPFE as a whole or a product of emphysema or fibrosis separately.
Spirometry and lung volume components of pulmonary function testing (PFT) in patients with CPFE are usually normal or demonstrate only a mild obstructive or restrictive pattern. The basis for this finding, despite the presence of severe underlying parenchymal lung disease, may be the opposing physiologic forces of emphysema, which leads to outflow obstruction and hyperinflation, as opposed to fibrosis, which decreases lung compliance, decreases lung volumes, and increases outward tethering of small airways [53]. This may be one of the reasons why this syndrome remains underdiagnosed until an advanced stage, when it is already associated with severe PAH. However, the diffusing capacity component of PFTs is typically severely impaired due to loss of intact alveolar-capillary surface area for gas exchange, increased diffusion distance across the alveolar-capillary bed, and impaired pulmonary capillary blood flow. It is typical for CPFE patients to demonstrate hypoxemic respiratory failure with significantly reduced diffusing capacity and normal to only mildly reduced lung volumes [54].
Chest x-ray
Chest radiographs are a standard part of the routine clinical evaluation of subjects with COPD and IPF. Such images are not expensive, are readily available, and involve minimal radiation exposure when compared with computed tomography (CT) [55].
The chest x-ray of patients with pulmonary emphysema usually demonstrates the combination of increased lung volumes and upper lobe predominant destruction. Increased lung volumes are seen as flattening of the diaphragms, with the right diaphragm's height measuring less than 2.7 cm on the lateral projection, a sternodiaphragmatic angle measuring more than 90 degrees, widening of the retrosternal clear space to more than 4.4 cm when measured 3 cm below the manubrial-sternal junction, and an increased AP diameter. Upper lobe predominant destruction is manifested by relative increased lucency and paucity of vessels in the lung apices and apparent increased interstitial markings in the lower lung zones (fig. 1). Hyperinflation also results in a narrowed, elongated cardiomediastinal silhouette, but this is a secondary sign that is neither specific nor sensitive for emphysema [56][57][58][59][60]. Visualization of associated bullae is diagnostic of emphysema but is seldom seen radiographically. A sensitivity of 80% has been reported when these findings are used in diagnosis, but the likelihood of diagnosis based on chest x-ray depends on the severity of disease, with mild forms being almost impossible to detect [61] (fig. 2).
Approximately 10% of patients with IPF have normal chest radiography [62,63], particularly in early stages of the disease (fig. 3). The radiographic appearance of IPF is nonspecific and correlates poorly with histological findings and severity of disease [6]. When abnormal, the chest x-ray may show lower lobe and peripherally predominant reticular markings associated with decreased lung volumes and relative sparing of the lung apices (fig. 4). With more advanced disease, lung volumes become even smaller and the reticular markings progress from fine to coarse. Lower lobe predominant cystic spaces, which represent traction bronchiectasis and/or honeycombing, may also be apparent. The disease classically starts at the posterior costophrenic sulci, which are better assessed on the lateral radiograph.
While chest radiographs are neither sensitive nor specific in diagnosing emphysema or pulmonary fibrosis, when evaluating CPFE syndrome the sensitivity is even lower because the increased lung volumes seen in emphysema are usually masked by the decreased volumes secondary to the concomitant pulmonary fibrosis, resulting in normal lung volumes. Other findings of each disease separately may help in the diagnosis, but since lung volumes play an important role in diagnosing each entity separately, with the loss of this finding accuracy decreases even more. Some findings that may provide clues to the presence of underlying CPFE include increased lucency of the upper lungs, as seen in patients with emphysema, in conjunction with increased reticular markings in the lower lobes, as is characteristic of patients with pulmonary fibrosis (fig. 5). However, chest x-rays are of limited value and insensitive in diagnosing this syndrome, with computed tomography remaining the most sensitive and specific method of diagnosis.
Computed Tomography
Computed tomography is much more expensive than radiography and exposes patients to significantly higher radiation doses, with a mean effective dose using a 64-slice CT scanner of approximately 19.9 mSv [64]. On the other hand, with the advancement of technology, scanners have become faster and are now able to provide the same image quality with less radiation exposure. CT is as widely available as chest x-rays and remains the most accurate diagnostic modality for patients with pulmonary emphysema and fibrosis. It is much more sensitive and specific in diagnosing and classifying pulmonary emphysema when compared to chest radiography, and was recently proven to reduce mortality when used for lung cancer screening [65], for which these patients are at increased risk.
Emphysema is characterized on high-resolution computed tomography (HRCT) by areas of abnormal low attenuation contrasted by the normal surrounding lung parenchyma [66][67][68]. Areas of centriacinar emphysema are seen as focal lucencies centered in the middle of the secondary pulmonary lobule, surrounding the centrilobular artery, and without definable walls (figs. 6, 7). This type of emphysema is typically seen in cigarette smokers and has an upper lobe predominance. Panacinar emphysema is seen as widespread abnormal low attenuation areas marginated by the interlobular septa and also centered on the centrilobular artery. It maintains the polyhedral shape of the secondary pulmonary lobule and predominates in the lower lobes (fig. 8). Paraseptal emphysema involves the distal aspect of the secondary pulmonary lobule and therefore has a subpleural distribution. It has an elongated shape with perceivable thin walls, which generally correspond to the interlobular septa. With paraseptal emphysema, there is a single row of subpleural cystic spaces (bullae). This is in contrast to honeycombing secondary to pulmonary fibrosis, in which, by definition, there are at least two rows of subpleural cysts as well as other associated findings of fibrosis (i.e., architectural distortion and traction bronchiectasis). Paraseptal emphysema predominates in the upper lobes. When an involved area is larger than 1 cm in diameter, it is termed a bulla (fig. 9). Bullae are diagnostic of emphysema but are seldom seen on chest radiograph or CT. Computed tomography scanning of the chest allows quantitative assessment of the extent and severity of pulmonary emphysema as well as elucidation of additional features such as the distribution (e.g., apical, basal, diffuse) and subtype (e.g., centriacinar, panacinar, paraseptal) of emphysema [69]. Visual scoring by one or more observers [70] was the initial approach utilized when evaluating CT scan data in patients with pulmonary emphysema. As with any subjective form of evaluation, interobserver variability remains a concern. In a study of severe COPD patients [71], interobserver agreement was good for the overall severity of emphysema but poor in the determination of lobar predominance of emphysema.
Many studies have used CT of the chest to quantify the extent and severity of emphysema and to document progression, but this ability depends on several technical factors, including scanner calibration, collimation, threshold values, window settings, radiation dose, phase of the respiratory cycle, reconstruction algorithm, and use of intravenous contrast [72][73][74][75][76][77]. Subjective quantification of emphysema is the simplest method and is based on visual assessment of the CT images [78][79][80]. Each part of the lung that appears emphysematous is graded from 1 to 4 (1 = 1% to 25%, 2 = 26% to 50%, 3 = 51% to 75%, and 4 = 76% to 100% of the area), with the total score expressed as the percentage of total lung involved at that assigned level [81,82].
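As an illustration of objective, threshold-based quantification in the same spirit, the sketch below assumes a CT slice is available as an array of Hounsfield units; the -950 HU cutoff is one commonly used density threshold (not taken from this article), and the grade cut-points mirror the 1-4 visual scale above:

```python
# A sketch of threshold-based (densitometric) quantification, assuming
# a CT slice is available as an array of Hounsfield units. The -950 HU
# cutoff is one commonly used density threshold (not from this article);
# the grade cut-points mirror the 1-4 visual scale described above.
import numpy as np

def emphysema_grade(hu_slice, lung_mask, threshold=-950):
    """Return the percent low-attenuation area (%LAA) and a 1-4 grade."""
    lung_voxels = hu_slice[lung_mask]
    laa_percent = 100.0 * np.mean(lung_voxels < threshold)
    # 1-25% -> 1, 26-50% -> 2, 51-75% -> 3, 76-100% -> 4
    grade = int(np.ceil(laa_percent / 25)) if laa_percent > 0 else 0
    return laa_percent, grade

# Illustrative toy data: a tiny "slice" in which every voxel is lung.
hu = np.array([[-980.0, -400.0], [-960.0, -700.0]])
mask = np.ones_like(hu, dtype=bool)
print(emphysema_grade(hu, mask))  # -> (50.0, 2)
```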
Conventional chest CT and HRCT are also much more sensitive than routine chest radiography when assessing patients with IPF, and their findings correlate well with symptoms and pulmonary function test results [83]. It has also been proven that the extent of ground glass opacity correlates well with the severity of dyspnea and the reduction in carbon monoxide diffusing capacity (DLCO), with patients having lower DLCO demonstrating more extensive ground glass opacities by CT [84].
Computed tomography of patients with IPF demonstrates lower lobe and peripherally predominant intra- and interlobular septal thickening, traction bronchiectasis, honeycombing, and architectural distortion, all reflecting underlying parenchymal fibrosis (fig. 10). Honeycombing is the most reliable CT finding of fibrosis. Ground glass opacities may also be appreciated, especially early in the course of the disease or during acute exacerbations, but are never the dominant finding in patients with idiopathic pulmonary fibrosis in the chronic progressive stage. The ground glass opacities can represent either active alveolitis or very fine fibrosis; thin-section CT images can sometimes help distinguish between the two. In about 70-95% of patients, reticulations involve mainly the subpleural areas and have the classic apicobasal gradient of severity, with the abnormality being worse at the lung bases, particularly at the costophrenic angles [19,85,86]. Because this is a restrictive lung disease, decreased lung volumes result in elevation of the hemidiaphragms with decreased craniocaudal dimension, a typical finding in patients with lung fibrosis. The interpretation of HRCT scans in patients with diffuse parenchymal lung diseases (DPLD) can be difficult, as findings may be nonspecific in 50% of cases [87,88]. While difficulty exists with diagnosing DPLDs overall, specialized thoracic radiologists can become quite good at interpreting HRCT scans of patients with IPF and other idiopathic interstitial pneumonias [19,89,90]. The positive predictive value of an HRCT diagnosis of UIP is 90-100% in several studies [91][92][93].
A multidisciplinary approach, with review of the case by pulmonologists, pulmonary pathologists, and thoracic radiologists, is most beneficial when evaluating these often complicated patients. As discussed above, HRCT plays a key role in distinguishing a UIP pattern from other IIPs. The HRCT criteria for diagnosing a UIP pattern are based on the presence of basal and subpleural predominant reticular abnormalities in conjunction with honeycombing, and the absence of findings that suggest an alternative diagnosis, such as peribronchovascular predominance, extensive ground glass abnormality, discrete cysts, segmental/lobar consolidations, and diffuse mosaic attenuation [12].
In clinical practice, the visual assessment of disease extent by HRCT can be used in association with pulmonary function test results to monitor progression of IPF, as well as to evaluate response to therapy and aid with prognosis. This measurement should be made at intermediate-interval rather than short-term-interval follow-up, since longitudinal changes in HRCT scans during short-term follow-up are less predictive of survival when compared to physiologic measurements [94,95].
Given that CT is accurate in diagnosing both pulmonary emphysema and fibrosis alone, and that each distinct disease demonstrates different imaging characteristics, it is not difficult to make the diagnosis of CPFE using computed tomography of the chest. Patients demonstrate lucent areas of lung destruction in the upper lobes associated with lower lobe predominant reticular markings, honeycombing, and traction bronchiectasis reflecting underlying pulmonary fibrosis (fig. 11).
As discussed above, pulmonary arterial hypertension is seen in patients with emphysema and pulmonary fibrosis. Chest radiography of patients with PAH classically demonstrates the "pruned tree" appearance made by enlargement of the central pulmonary arteries and an abrupt decrease in caliber of the pulmonary vasculature peripherally (fig. 12). Chest CT of these patients can demonstrate enlargement of the central pulmonary arteries, but this finding is neither specific nor sensitive. It may also show abrupt narrowing and tapering of the peripheral pulmonary vessels, right ventricular hypertrophy, right ventricular and right atrial enlargement, dilated bronchial arteries, and a mosaic pattern of attenuation due to variable lung perfusion (fig. 13) [96]. IPF patients are also at increased risk of pulmonary embolism, which is likely secondary to endothelial damage. Chest CT, when tailored to evaluate the pulmonary arteries, is very sensitive and specific in diagnosing pulmonary embolism, especially when using multidetector CT scanners [97,98]. Patients with pulmonary fibrosis are also at increased risk of developing bronchogenic carcinoma. CT is more sensitive than chest radiography at detecting lung cancers and can pick them up when they are smaller and at an earlier stage [99][100][101][102].
Radiation Considerations
It is important to keep in mind that patients who undergo multiple serial CT scans are exposed to a higher cumulative radiation dose over time. Therefore, the benefits and risks related to this exam should always be weighed prior to imaging. In the early days of CT, the radiation exposure could not be reduced, since the technology available at that time did not allow for a reduction in dose without increasing image noise. A large amount of image noise degrades image contrast and quality, thus impairing diagnostic accuracy [77]. With the advancement of technology, CT scanners are now able to produce the same image quality with a much lower radiation dose to the patient. While the risk of radiation still depends mostly on the patient's age, the body part being scanned, and how much of the patient's body is covered (the length of the scan), the effective radiation dose received during a routine computed tomography of the chest is about 8 mSv, which is equivalent to approximately 100 chest x-rays [103].
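As a rough check of that equivalence, assuming a typical effective dose of about 0.08 mSv per chest radiograph (a value not stated in the text):

$$\frac{8\ \text{mSv (chest CT)}}{0.08\ \text{mSv (chest x-ray)}} \approx 100\ \text{chest x-rays}$$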
In conclusion, combined pulmonary fibrosis and emphysema is an important but still underdiagnosed syndrome that has imaging findings of both pulmonary emphysema and fibrosis but is associated with a worse prognosis when compared to each of these diseases alone. Since radiology plays a key role in making this diagnosis, it is important that radiologists be familiar with the existence and appearance of this syndrome, which will facilitate early and accurate diagnosis and potentially improve prognosis.
Fig. 1. Frontal radiograph of the chest (a) shows upper lobe predominant pulmonary parenchymal destruction due to emphysema, seen as lucency and paucity of vessels in the upper lung zones. The lateral projection demonstrates flattening of the diaphragms, widening of the retrosternal clear space, and increased AP diameter of the thorax.
Fig. 2. Normal frontal (a) and lateral (b) radiographs of the chest in a patient with mild pulmonary emphysema. Two axial images from the chest CT of the same patient demonstrate upper lobe predominant emphysema.
Fig. 3. Normal chest radiograph (a, b) of a patient with an early stage of pulmonary fibrosis. Two axial images from the chest CT of the same patient demonstrate lower lobe predominant reticular markings.
Fig. 4. Frontal (a) and lateral (b) radiographs of the chest in a patient with idiopathic pulmonary fibrosis demonstrating decreased lung volumes and lower lobe predominant increased reticular markings with relative sparing of the upper lobes. Axial (c) and coronal reformatted (d) CT images of the same patient better depict the peripheral and lower lobe predominant involvement by fibrosis with honeycomb formation.
Fig. 5.
Fig. 6. Axial CT images (a, b) of a patient with centriacinar emphysema demonstrating multiple areas of increased lucency involving the secondary pulmonary lobule, centered around the centrilobular artery (white arrows). Note the absence of perceivable walls in the lucent areas.
Fig. 7. Single CT image of a patient with centriacinar emphysema shows the central involvement of the secondary pulmonary lobule with preservation of its polyhedral shape. The interlobular septa appear as walls surrounding the lucent area (arrow), and the centrilobular artery is seen in the middle of it.
Fig. 9. Axial (a) and coronal reformatted (b) CT images show upper lobe paraseptal emphysema. Note the subpleural location and elongated shape of the lucent areas with thin visible walls (arrow) that represent the surrounding interlobular septa. A few of these areas measure more than 1 cm in diameter and are therefore termed bullae.
Fig. 10. A 71-year-old male with biopsy-proven idiopathic pulmonary fibrosis. Axial CT image shows peripheral and lower lobe predominant end-stage fibrotic changes with architectural distortion, traction bronchiectasis, and subpleural honeycombing.
Fig. 11. A 60-year-old female with a smoking history, worsening shortness of breath, and dry cough. Axial (a, b) and coronal reformatted (c) CT images showing the concomitant presence of upper lobe predominant emphysema, with lucent areas of lung destruction, as well as peripheral and lower lobe predominant fibrotic changes seen as increased reticular markings and traction bronchiectasis.
Fig. 12. An 87-year-old male with a history of chronic pulmonary arterial hypertension. Frontal (a) and lateral (b) radiographs of the chest demonstrate enlargement of the main pulmonary arterial trunk as well as the right and left pulmonary arteries (white and black arrows). Also note the "pruned-tree" appearance due to the abrupt reduction in the caliber of the vessels in the peripheral two-thirds of the lungs, classically seen in this clinical setting.
Fig. 13. An 87-year-old male with a history of chronic pulmonary arterial hypertension. Axial (a) and coronal reformatted (b) images from a contrast-enhanced chest CT show enlargement of the pulmonary arteries (white arrows).
Table 1. High-resolution CT criteria for UIP.
A Study on Online Shopping Scams
Online shopping is becoming increasingly popular, though it carries several risks. Hence, this study investigates the language used by customers of an international e-commerce platform to voice their dissatisfaction, the types of online shopping risks on the international e-commerce platform, and the strategies acquired by the platform in addressing scamming issues on its site. A qualitative approach was used to analyse the data gathered from consumers' feedback and reviews of the products they purchased on the international e-commerce platform. Data were analysed thematically. The study's findings revealed that delayed delivery and receiving counterfeit products were the most frequent complaints reported. The shopping risks identified were product quality and delivery risks. Furthermore, the findings also revealed that the international e-commerce platform used negotiation, mediation, confrontation, and litigation strategies to combat scamming issues on its platform. The study concludes with recommendations. In the present study, Kolb's Theory (1984), consisting of concrete experience, reflective observation, abstract conceptualisation, and active experimentation, is applied. The international e-commerce platform, as an organisation, has substantial experience with fraud and scams occurring while consumers shop online; in the first stage, the goal is for the platform to learn from these concrete experiences. In the second stage, reflective observation, the platform reflects on the experiences of consumers before making any solutions and judgments when encountering consumers' fraud and risk matters. Then comes abstract conceptualisation, where the platform generates ideas by identifying the issues and recurring problems in customers' fraud experiences; this helps learning by identifying solutions for customers. In the final stage, the platform applies these experiences of customers' feedback and problems with fraud and scams to other situations; here, the goal for the platform is to discover ways to improve the online shopping platform. In relation to our findings, the behavioural intention of consumers indicates their attitude towards the risks observed during online shopping. Further, customers' behaviour when encountering problems related to scams and fraud can be seen from the responses in our findings. Through the Theory of Planned Behaviour (TPB), we can see the various risks consumers have encountered from fraud and scams during the online shopping experience. Here, consumers' actions on encountering product risks, delivery risks, and many others show the relations between the outcomes and consumers' perceived behaviours during fraud and scams. In this connection, the parties need to find a peaceful solution in order to resolve problems among themselves. These disputes arise out of consumers' dissatisfaction and online risks, as shown in our findings. Negotiation, mediation, confrontation, and litigation are resolutions used to resolve these issues of online shopping. Accordingly, Kolb's Experiential Learning Theory (1984) and the Theory of Planned Behaviour (TPB) encourage the international e-commerce platform and consumers to find conflict resolution when encountering issues of online shopping, in this situation, issues of fraud and scams.
Introduction
Electronic commerce, commonly known as e-commerce, is defined as the buying and selling of products and the transmission of funds or data via an electronic network such as the Internet (Market Business News, 2019). Although e-commerce has been around since 1991, data from Google Trends show that e-commerce in Malaysia appeared ten years ago and was initially dominated by customer-to-customer businesses such as Lelong and Fashion Valet. In 2012, e-commerce started to flourish in Malaysia with the birth of three famous players in the industry, namely Zalora, Lazada and Hermo. Subsequently, in 2015, it continued to progress rapidly with the addition of new players in e-commerce such as Shopee, 11Street, and GoShop (iPrice Group, 2019). The report from Google Trends, as shown on iPrice insight, also revealed strong competition between Lazada and Shopee in 2017, and by September 2019, Shopee ranked first in monthly web visits, App Store, Play Store, and the number of employees.
In Malaysia, the progressive development of the Internet and the convenience of being online have created a new online shopping market environment (Lahsasna, 2018). In line with current trends, this development has produced a modern consumerist community in which the banking sector offers online banking platforms as a preferred payment medium (Lahsasna, 2018). However, the Malaysia Computer Emergency Response Team (MyCERT), linked to Cyber Security Malaysia (CSM), has revealed alarming statistics on cybercrime. It has been disclosed that, since 2008, cyber fraud has consistently topped the number of cases reported each year. Hence, organisations involved in e-commerce, such as Shopee, have to deal with scammers taking advantage of their site to scam consumers, and must draw up various strategies to gain customers' trust and eradicate scammers from their platform.
One of the earliest strategies put forward by Shopee was Shopee Guarantee. On this platform, Shopee acts as the mediator: buyers pay Shopee for the goods they purchase, and Shopee notifies the seller of the purchase. Once Shopee receives the ordered item from the seller or supplier, Shopee sends the item to the buyer, and only after the buyer has received and reviewed the purchased item does Shopee release the payment to the seller. Shopee Guarantee allows buyers to check the item they have received and request a refund if they are not satisfied with the items received, and Shopee would refund accordingly (Milo, 2016).
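The guarantee mechanism described above is essentially an escrow protocol: the platform holds the buyer's payment until the buyer confirms satisfaction. Below is a minimal sketch of that flow as a simple state machine; the states, method names, and refund rule are an illustrative reading of the text, not Shopee's actual system or API.

```python
# A minimal sketch of an escrow-style guarantee flow, assuming a simple
# linear order lifecycle; this is illustrative, not Shopee's implementation.
from enum import Enum, auto

class OrderState(Enum):
    PAID = auto()       # buyer has paid the platform, not the seller
    SHIPPED = auto()    # seller notified and item dispatched
    DELIVERED = auto()  # buyer has received the item
    RELEASED = auto()   # payment released to the seller
    REFUNDED = auto()   # payment returned to the buyer

class EscrowOrder:
    def __init__(self, amount):
        self.amount = amount
        self.state = OrderState.PAID    # platform holds the funds

    def ship(self):
        assert self.state == OrderState.PAID
        self.state = OrderState.SHIPPED

    def deliver(self):
        assert self.state == OrderState.SHIPPED
        self.state = OrderState.DELIVERED

    def review(self, satisfied: bool):
        # Funds move only after the buyer has inspected the item.
        assert self.state == OrderState.DELIVERED
        self.state = OrderState.RELEASED if satisfied else OrderState.REFUNDED

order = EscrowOrder(amount=100)
order.ship(); order.deliver(); order.review(satisfied=False)
print(order.state)  # OrderState.REFUNDED
```

The design point is that the seller never holds the money before the buyer's review, which is what removes the incentive for a seller to take payment and never ship.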
However, scammers are getting more innovative, and even with Shopee Guarantee, scams such as asking buyers to deal outside the platform via chat messengers and account hijacking are still on the rise today. For example, a case was reported on mStar online in which a buyer was almost scammed after purchasing a flat-screen television at an incredibly low price, having received a verification text from Shopee asking him to verify his account. However, Shopee found the seller's account to be fake and closed it within 24 hours (Mohd Khaliza, 2019).
There are many types of e-commerce platforms in Malaysia. As of 2019, according to The Star Online, Shopee saw a significant gross rise of 92.7% compared to a year earlier in 2018, placing it further ahead of competitors such as Tokopedia, Lazada, Mysale, and many more (Tan, 2019). In addition, Shopee is the third most visited e-commerce portal, replacing Lelong, and has surpassed Lazada as the best application on the iOS App Store and Google Play Store (Chew, 2018). With the rise of online shopping scams, partly due to insufficient online security awareness, inadequate securing of personal devices (computers, laptops, mobile phones, tablets), and an uncultured Malaysian mindset, consumers must be cautious in disclosing personal information (Tang, 2019).
As online fraud is facilitated by various strategies such as trickery, persuasion, impersonation, or emotional manipulation (Cross et al., 2016), the manipulation process that occurs between scammers and victims shows that victims are willing to communicate with those scammers (Chiluwa, Chiluwa, & Ajiboye, 2017). In this situation, victims are often blamed, as fraud victims are seen to violate the notion of an ideal victim and to be culpable in their own victimisation. Morad and Raman (2015) explained that some consumers are fully aware that they might face scamming risks when purchasing items online, such as financial risk, product experience risk, and privacy risk.
People who use the Internet frequently need to be aware and ensure their online activities are secure (Wang, 2018). In most cases, the impact of a scam stops victims from stepping forward to lodge a report. Hence, many cases of scams go unresolved, and Internet users must protect themselves from becoming the next victims.
Problem Statement
In Malaysia, it is reported that the online shopping market reached RM5 billion in 2013. The Malaysia Digital Association (2012) claimed that the Malaysian Communication and Multimedia Commission had ranked online shopping 11th out of 15 different reasons why Malaysians access the Internet. In gaining access to online purchasing, individuals experience a host of risks during online shopping (Ariff, 2014). According to Ariff (2014), online shopping risks directly affect the purchasing power of consumers: the higher the risk in online shopping, the lower the chances of a consumer buying or making an online purchase. Several past studies have looked into consumers' awareness of online fraud in e-commerce (Zahari, Bilu, & Said, 2019), risks faced by consumers in e-commerce (Morad & Raman, 2015), and e-consumers' rights in online shopping from the legal perspective (Mohd Nor, Md Salleh, Omain, & Selamat, 2019). However, little research has examined the language used by consumers in communicating scamming complaints in online shopping and the strategies taken by the organisation to solve these issues. Thus, this study aims to fill this gap by looking at the language used by consumers in communicating their dissatisfaction during their online shopping experience with the international e-commerce platform, the risks observed, and Shopee's strategies to overcome these issues.
Research Questions
1) How do the international e-commerce platform customers voice their dissatisfaction and observe the online shopping risks?
2) What are the strategies used by the international e-commerce platform to overcome scamming issues on its platform?

Conceptual Framework

Figure 1. Conceptual framework of the study

The conceptual framework in Figure 1 draws on Kolb's Theory of Learning Styles (1984) and the Theory of Planned Behaviour (TPB), which are used to address conflict resolution in solving the issues of online fraud and scams. This amalgamation of theories is expected to lead consumers to examine different conflict resolutions when facing problems in online shopping. Perceived risk, in turn, affects online shopping experiences and creates barriers to shopping online (Forsythe & Shi, 2003).

Kolb's Experiential Learning Theory (ELT) is a learning theory created by David A. Kolb in 1984. The theory works on two (2) different levels: four distinct learning styles and a four-stage learning cycle (Stice, 1987). Kolb's theory takes a holistic point of view, encompassing experience, perception, cognition, and behaviour. The experiential learning cycle includes four different stages: concrete learning, reflective observation, abstract conceptualisation, and active experimentation. Accordingly, effective learning is observed when the learner advances through the four stages. McLeod (2013) stated that an individual can enter the experiential learning cycle at any stage, provided the logical sequence is followed.
Martin and Camarero (2009) found that perceived risk leads consumers to consider various signals when forming their feelings and attitudes concerning a website. Several perceived risks affect the purchasing behaviour of online shoppers, such as financial risk, time risk, delivery risk, product risk, information security risk, and social risk (Almousa, 2011; Javadi et al., 2012; Morad & Raman, 2015). Ariff (2014) shared that Malaysians generally hold various perceived risks, especially regarding the level of fear.
Icek Ajzen first introduced the Theory of Planned Behaviour (TPB) in 1985. The TPB is formulated to predict an individual's intention to engage in a behaviour at a particular time and place. At the same time, the theory was designed to illustrate an individual's capacity to exercise self-control. In the present study, TPB concerns the individual's intention towards online e-commerce, which refers conceptually to consumers' behavioural intentions and subjective norms.
Methodology
The study adopted a qualitative approach. The samples were obtained using stratified random sampling from consumers' feedback and reviews of products they had purchased on the international e-commerce platform. Stratified random sampling is a sampling method that divides the population into subgroups, from which units are randomly selected (Frey, 2018). The population subgroups for this study were divided into five categories, and 38 samples of high-value content were analysed, as shown in Figure 2. Subsequently, the samples were transcribed verbatim, a content description analysis was performed on them, and emerging themes were categorised following the conceptual framework. The themes were then compared to the strategies used by the international e-commerce platform to resolve conflicts, guided by Kolb's Theory as set out in the proposed conceptual framework of the study; a minimal illustration of the proportional sampling step follows.
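To make the sampling step concrete, the sketch below shows proportional stratified random sampling over feedback records; the category labels and records are hypothetical placeholders, not the study's dataset.

```python
# A minimal sketch of proportional stratified random sampling; the
# feedback records and categories below are hypothetical examples.
import random
from collections import defaultdict

def stratified_sample(records, total_n, seed=42):
    """Draw from each category in proportion to its share of the population."""
    random.seed(seed)
    strata = defaultdict(list)
    for category, text in records:
        strata[category].append(text)
    population = len(records)
    sample = {}
    for category, items in strata.items():
        n = round(total_n * len(items) / population)  # proportional allocation
        sample[category] = random.sample(items, min(n, len(items)))
    return sample

# Hypothetical usage: feedback split into categories, 38 samples overall.
feedback = [("delivery", "slow delivery"), ("product", "fake product"),
            ("product", "poor product quality"), ("service", "poor response")]
print(stratified_sample(feedback, total_n=38))
```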
Findings
The findings highlight customer dissatisfaction with the international e-commerce platform and how customers voiced their disappointment. This is aligned with the risks that customers face when shopping online on the platform. The strategies the international e-commerce platform initiated to overcome scamming issues on its shopping platform are also presented.
Voicing Customer Dissatisfaction
The customers voiced their dissatisfaction using words and phrases that described the negative features of the products and services. Those words and phrases were stated directly in the international e-commerce platform's feedback sections and could easily be viewed by others. Table 1 shows the language used by the consumers in giving their feedback on the international e-commerce platform's website. The most frequent phrase found was 'slow delivery', mentioned 21 times, followed by 'fake product' and 'very disappointed', each mentioned 14 times, 'poor product quality' and 'poor response', each mentioned 11 times, and 'bad packaging', mentioned eight (8) times. All these descriptions through words and phrases impact the products by giving direct information to other customers. These findings are supported by Boadi, Li, Sai and Antwi (2017), who highlighted that such direct descriptions by customers on websites show the connection between their dissatisfaction and their complaints about poor product quality.
The complaints can also be seen as judgments by the customers. This is supported by Singh and Holani (2017), who mentioned that the judgments made by customers are based on their experience of online shopping and the problems they have encountered. It shows that the customers were very particular about the procedures involved in purchasing through the international e-commerce platform, and the phrases used in the complaints sections emphasised the procedures as well as describing the unsatisfactory products received. They also explained that online shopping experiences, especially encounters with such dissatisfaction, led customers to write those negative words and phrases in their complaints. This is the customers' way of conveying direct messages to the sellers, in the hope that the seller will address the issue and respond.
The results clearly show that customers express their dissatisfaction through descriptions of the products or services, using direct words such as "bad" and "poor". These descriptions are viewed negatively by other customers and will impact the products.
Risks Observed on the International E-Commerce Platform
Customers cannot physically inspect products when using e-commerce. This means there are inherent risks in using such platforms, which customers need to accept; it should not be surprising that they encounter delivery risks and product risks when using the international e-commerce platform. There are two main risks in shopping through the international e-commerce platform: delivery risks and product risks. Delivery risks are the risks involved in delivering the products, that is, in the shipment process. The categories of delivery risk consist of lateness, receiving the wrong item, receiving a damaged product, and bad courier service. Table 2 shows the online shopping risks that customers encountered when using the international e-commerce platform. Among the delivery risks, lateness was encountered most often, with 21 complaints, followed by bad courier service with 11 complaints; the least frequent was the risk of not receiving the product, with four (4) complaints.
Product risks involve dissatisfaction with the products received by the customers. These risks consist of receiving a product that is misleading, of poor quality, fake, of an unsuitable size, with an inappropriate smell, in the wrong colour, close to its expiry date, or damaged. In their research, Singh and Holani (2017) stated that these are among the risks observed by customers when using e-commerce platforms. Among these risks, the wrong product size was encountered most often, with 21 complaints, followed by receiving damaged products with 18 complaints; the least frequent was receiving fake products, with 14 complaints.
The results show that customers faced these e-commerce risks when they purchased through the international e-commerce platform. However, even though they realised these were risks they would probably encounter, purchases still took place. Customers therefore use the platform provided to give feedback and voice their complaints; the international e-commerce platform itself has provided this channel as a way to communicate with customers, so dissatisfaction and bad experiences can be expressed as complaints. This is a typical experience in e-commerce, as Wu and Ke (2015) mentioned in their article. Customers should therefore be aware of those risks and be extra alert to any disappointment they encounter. Furthermore, the international e-commerce platform has its own initiatives to handle customers' disappointment, among them negotiation strategies intended to keep issues from dragging on and to regain customers' trust.
Strategies Solving the International E-Commerce Platform Scamming Issues
The international e-commerce platform has taken steps to handle the issues faced by customers. Those issues are highlighted by the customers' complaints and can be identified through the words and phrases used to express discontentment. These disappointments reflect risks that the customers have already observed. Therefore, to create a better environment, the strategies taken by the international e-commerce platform consist of negotiation strategies: mediation, compromising, confrontation, and litigation.

Table 3. Strategies by the international e-commerce platform in combating online shopping scams

The international e-commerce platform introduced its Guarantee system to protect buyers from scammers and gain consumers' trust. The guarantee acts as a mediator between buyers and sellers: buyers make payment for their purchase to the platform, and the payment is released to sellers only after the buyers have received the product (Milo, 2016). This mediation strategy, initiated a year after the platform launched its online shopping site, resulted from numerous reports of scamming activities.
The international e-commerce platform is faced with scammers setting up fake profiles and disguising themselves as sellers. As a compromising strategy, the platform has set up a 'Preferred Seller' system in which sellers are automatically bound to a seller penalty point system. The platform's team manages this system, and points are added to or deducted from a seller's account based on performance; one of the violations is the keyword 'spam' being found in the seller's product reviews, which leads to a deduction of two penalty points from the seller's account.
The international e-commerce platform drew on both experience and reflection on scamming issues to further protect its site from scammers. In this conceptualising process, the platform used a confrontation strategy in which buyers are required to click on 'order received' if they are satisfied with the purchased product, or on 'return/refund' along with a photo of the product and the reason the buyer is asking for a refund. This is considered a win-win negotiation strategy, as buyers get their money back or their product replaced, and sellers are given clear evidence as to why the platform does not pay them.
The international e-commerce platform also engages in active experimentation, in which the organisation reflects on experience and devises immediate action for the present problem (Owen, Brooks, & Curnin, 2018). In this scenario, cases involving financial risks and the hacking of buyers' accounts to release payment are taken seriously. After confirming that a seller's account is fake, the platform closes it within 24 hours of the report being made (Mohd Khaliza, 2019).
The risks in using e-commerce are well observed. However, customers remain interested in using the international e-commerce platform, expressing their disappointment through the words and phrases written in the complaints section. These strategies are therefore essential tools for the platform to ensure that it can handle the issues.
Discussions
This study highlighted the common scenario between consumers and scammers in online shopping using an e-commerce database, the international e-commerce platform. As highlighted by Kolb's Experiential Learning Theory, consumers and organisations experience the four (4) stages of learning from online shopping, as observed in the findings. The Theory of Planned Behaviour (TPB) further highlights consumers' behaviour when voicing their dissatisfaction and facing the risks of online shopping.
In the present study, Kolb's Theory (1984) consists of concrete experience, reflective observation, abstract conceptualisation, and active experimentation. The international e-commerce platform, as an organisation, has substantial experience with fraud and scams occurring while consumers shop online; here, the goal is for the platform to learn from these experiences. In the second stage, reflective observation, the platform reflects on consumers' experiences before making any judgments or offering solutions when encountering consumers' fraud and risk matters. Then comes abstract conceptualisation, where the platform generates ideas by identifying the issues and recurring problems in customers' fraud experiences; this aids learning by identifying solutions for customers. In the final stage, the platform applies these experiences of customers' feedback and problems with fraud and scams to other situations; here, the goal is to discover ways to improve the online shopping platform.
In relation to our findings, consumers' behavioural intention indicates their attitude towards the risks observed during online shopping. Customers' behaviour can also be seen in the responses given when encountering problems related to scams and fraud. Through the Theory of Planned Behaviour (TPB), we can see the various risks consumers have encountered from fraud and scams during the online shopping experience. Consumers' actions on encountering product risks, delivery risks, and other risks show the relations between the outcomes and consumers' perceived behaviours during fraud and scams.
In this connection, the parties need to find a peaceful solution to the problems between them. These disputes arise out of consumers' dissatisfaction and online risks, as shown in our findings. Negotiation, mediation, confrontation, and litigation are the resolutions used to resolve these online shopping issues. Accordingly, Kolb's Experiential Learning Theory (1984) and the Theory of Planned Behaviour (TPB) encourage the international e-commerce platform and consumers to seek conflict resolution when encountering online shopping issues, in this case fraud and scams.
Conclusion
In conclusion, as highlighted by the findings and conceptual framework of this study, there is a clear relationship between the risks of online shopping scams and conflict-resolution solutions. The risk of online shopping scams is detrimental to consumers, both sellers and buyers, as well as to the organisation. All things considered, this study could create awareness among customers of the international e-commerce platform while minimising the potential risk of scams and fraud in online shopping.
Value Chain Analysis of Smallholder Dairy Production in Debark District, Ethiopia
Milk production in Ethiopia is carried out largely by smallholder farmers in both the highland and lowland areas of the country. However, the Ethiopian dairy production and market systems face severe constraints: poor genetics, insufficient access to proper animal feed, and poor management practices all contribute to low productivity levels. Similarly, dairy producers and downstream actors in the value chains face many challenges in getting milk to market. Hence, this study analyzes the dairy products value chain. Methods: A value chain analysis framework and financial analysis were used. Results and conclusion: The dairy value chain involves multiple direct and indirect actors; the major direct actors include farmers, village collectors, cooperatives, a semi-processor, hotels and cafés, and consumers. The financial analysis along the entire value chain shows that producing and marketing dairy products is profitable and creates higher value added. Concerned bodies should improve access to services and collective actions to enhance local value addition, and smallholder milk producers should be organized into dairy cooperative groups and subsequently into dairy unions.
Introduction
Ethiopia is reported to be endowed with the largest livestock population in Africa. According to the 2010 report of the Central Statistical Agency (CSA) the cattle population was estimated at about 50.9 million.
The indigenous breeds accounted for 99.19 percent, while the hybrids and pure exotic breeds were represented by 0.72 and 0.09 percent, respectively. The CSA survey further indicates that 12%, 2.8% and 30% of the cattle, goat and camel population, respectively, are kept for milk production. Annual milk production is estimated at 2.8 billion liters from cattle and 165.12 million liters from camels. The dairy sector constitutes about 13.7% of the total agricultural production and 39.4% of the total livestock production in 2011 (FAOSTAT).
Smallholder dairy farming, which is defined here as the production, on-farm processing and marketing of milk and milk products, can in Ethiopia be broadly classified into the lowland system, comprising the pastoral and agro-pastoral systems, and the highland system in the mixed crop-livestock areas (Azage et al. 2013).
Ethiopia has a complex dairy value chain, with both formal and informal channels. Only 5% of the milk produced in Ethiopia is sold in commercial markets. The dairy value chain has a variety of entrepreneurial actors -smallholder and commercial producers, small and large processors, service and inputs providers, farmers' organizations and cooperatives. Value chain actors are investing in milk production, collection and processing, and increased demand would likely lead to increased investment. Market opportunity is anticipated to lead to value chain deepening and upgrading, more solid horizontal and vertical relationships, and investment in core value chain operations as well as needed services and inputs (Ponguru and Nagalla V, 2016). Smallholder farmers in the highlands produce fresh milk and processed products such as butter and local cheese (ayib). In the rural areas, fresh milk is used for household consumption, and processing into butter and sold in near or far away markets. Zegeye (2003) also asserts that butter dominates dairy marketing and the transaction in the form of raw milk is limited to the surroundings of major urban centers.
The low consumption of milk and milk products coupled with the huge potential for dairy development clearly indicates that there is ample opportunity to improve the sector. This is even more appealing given the considerable potential of dairy production in creating income-generation opportunities and its further contribution in improving human nutrition, particularly for women and children (Ahmed et al. 2004).
According to Gizaw et al. (2016), the major constraints were: a low scale of production; low productivity that varies across systems; failure to maintain exotic inheritance at farm level, resulting in herds with mixed genotypes that are not amenable to recommendations for value chain interventions; poor access by the rural system to artificial insemination (AI) services, with urbanites questioning its efficiency; heifer supply being the least satisfactory among breeding services; concentrate feed costs threatening urban/peri-urban dairies; unhygienic milk handling and consumption, particularly in rural areas; and milk prices generally too low for producers, especially rural farmers.
On the other hand, the dairy sub-sector is currently facing a number of problems that have persisted for decades. Productivity of the dairy herd is low (CSA, 2008). The population of exotic and crossbred dairy cows accounts for less than 1% of the total dairy cattle. High mortality rates occur due to poor nutrition, which makes cattle vulnerable to disease. Producers in rural areas lack access to markets and extension services, which reduces the ability of smallholder producers to be competitive, and feed production and distribution are not coordinated (Tefera, 2010). Moreover, market orientation of the production systems and the possibility of exporting Ethiopian dairy products are limited by high transaction costs despite low costs of production (Ahmed et al. 2004). Development of a vertically integrated and coordinated milk value chain is thus an important option to reduce operational and transaction costs, meet consumers' demand, and encourage partnerships along the chain (Costales et al. 2006).
There is a serious concern, however, that smallholder agricultural producers are often excluded from participation in value chain since they usually lack access to credit, make limited investment in their human capital (including skills and entrepreneurship training), and are isolated by physical distance from the market (Mendoza and Thelen, 2008).
The general objective of this study is to analyze the dairy value chain in Debark district, North Gondar zone, Ethiopia. Specifically, it aims to identify and examine the actors involved, their functions, interactions and market channels along the value chain, and to estimate the distributional equity and value added along the value chain (profit margins along the VC).
Description of the study area
The value chain analysis was conducted in Debark district, located in the former North Gondar zone of the Amhara National Regional State (ANRS) of Ethiopia, about 80 km north-east of Gondar town. According to the agro-ecological classification used in Ethiopia, the climatic condition of the area is characterised as the Dega climatic zone. Average annual rainfall of the district is about 1,200 mm, and the mean annual temperature ranges from 12.9°C to 26.4°C. The households are engaged in rearing animals and plantation activities. Milk production is a widespread economic activity in the area and a major source of livelihood for numerous smallholder farmers, in terms of both subsistence use and cash income generation.
Sampling technique and sample size
Sampled smallholder households were selected using multi-stage sampling. In the first stage, Debark district was selected purposively as a potential area for dairy production; three kebeles (Debir, Yekirar and Mikara) were then selected purposively, in discussion with district agricultural office experts, for being potential kebeles with better experience in dairy production. A total of 60 households were then selected randomly, proportional to the number of producing households, using the list of dairy producers in each kebele. Moreover, 20 sample traders (village collectors, cooperatives, cafés and restaurants, and semi-processors) were selected using the snowball sampling method.
Data source and data collection method
This study employed both primary and secondary data relevant to answering the research objectives.
Primary data were collected using an interview schedule and key informant interview checklists for the focus group discussions. They were collected from sample household head farmers and value chain actors such as traders at different levels along the value chain. Secondary data were collected and reviewed from published and unpublished official sources such as the district administrative and agricultural office reports.
Method of data analysis
Data were analyzed using both descriptive statistics and econometric analyses. The value chain analysis was carried out by first identifying the actors and activities in the chain, examining the roles played by each actor, and mapping the flow of the dairy products. Then, profit margins and value added for the actors at different stages of the value chain were estimated, possible upgrading mechanisms were indicated, and the strengths, opportunities, weaknesses and threats in the value chain were identified.
Quantitative data on the cost and revenue structures, value added and benefit distribution were analyzed using the expressions described briefly by Marshall et al. (2006). The profit margin at each stage was calculated to evaluate the benefits along the value chain actors.
Profit margin = Revenue − Total cost, where Revenue = Sales volume × Unit price.

Value added can be broken down into different segments of the value chain as net profits, the amount of personal remuneration, and taxes (Vedeld et al., 2004). Accordingly, value added is not just an element of income; it also represents the distribution of that income among the fundamental agents of the national economy: households (the recipients of the return to labor), financial institutions (interest charges), government administration (taxes), and enterprises (gross or net profit) (Tallec and Bockel, 2005; Vedeld et al., 2004). A minimal numerical sketch of these calculations is given below.
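As a worked illustration of the expressions above, the sketch below computes a profit margin and the percentage distribution of value added across actors; all actor names and prices are hypothetical placeholders, not the survey values reported later in this study.

```python
# A worked sketch of the profit-margin and value-added calculations;
# figures are illustrative only, not the study's survey data.
def profit_margin(sale_volume, unit_price, total_cost):
    revenue = sale_volume * unit_price   # Revenue = sales volume x unit price
    return revenue - total_cost          # Profit margin = revenue - total cost

# Value added per actor as selling price minus buying price per liter,
# then each actor's share of the chain total.
actors = {  # (buying price, selling price) in birr per liter, illustrative
    "farmer": (0.0, 10.0),
    "village_collector": (10.0, 13.0),
    "semi_processor": (13.0, 22.0),
    "hotel_cafe": (22.0, 30.0),
}
value_added = {a: sell - buy for a, (buy, sell) in actors.items()}
total = sum(value_added.values())
shares = {a: round(100 * v / total, 1) for a, v in value_added.items()}

print(profit_margin(sale_volume=365, unit_price=17.0, total_cost=2520.0))
print(shares)  # percentage of total value added created by each actor
```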
Socioeconomic and institutional characteristics
The average age of the smallholders was 47.58 years, with a standard deviation of 11.7391 (Table 1). The results also indicate that the smallholder producers were between 22 and 75 years of age. Regarding family size, the respondent households had adult equivalents ranging from 1.2 to 10.5, with an average adult equivalent of 5.21 persons. The mean landholding was 2.99 ha of total farm land. The experience of the smallholder producers in dairy production ranges from 1 to
Regarding to the family size, the respondent households had an adult equivalent that ranging from 1.2 to 10.5 and the average adult equivalent ratio is found to be 5.21persons. The mean land size was 2.99 ha total farm land. The experience of the smallholder producers in dairy production ranges from 1 to The survey result showed in Table 2 that 83.33% of the farmers had got extension service and it also revealed that 71.67% of the farmers was participating in the training on dairy production and processing from OoARD, DA's and NGOs (). 88.33% of the respondents had not access to credit. Among the small holder farmers, 38.33% had not a contractual agreement, while more than 61% of the smallholder farmers had not a contractual agreement this indicates there is a high demand of milk. More than half (56.67%) of smallholder farmers were member of dairy cooperatives.
Dairy Value Chain
The simplified value chain map of milk originating from Debark district is illustrated in Fig. 1. There is a complex chain through which the dairy products flow, with different direct and indirect actors involved until the products are delivered to the final consumer. The value chain map shows the multiple direct and indirect actors involved and interacting in the dairy value chain; a minimal sketch of this map as a directed graph is given after the actor descriptions below.
A diverse group of actors, including producers, traders, and support and regulatory service providers, is involved in the value chain. Services such as concentrate feeds (industrial by-products) and green fodder, artificial insemination (AI), veterinary services, credit/finance, and land and labor are supplied by smallholder farmers, the District Agriculture Office, veterinary clinics and NGOs (International Livestock Development Project). The indirect actors mainly provide support and regulatory services along the value chain; these are the actors who enable the value chain to perform successfully by facilitating it through extension services, credit access, training, market information and trade licensing.
Farmers are dairy producers actively engaged in the production and marketing of dairy products. They are the major actors and perform many functions, from dairy cow selection, feeding and building shelter to milking, processing the milk traditionally and selling it to the market. They sold the surplus milk produced to the local markets, either as liquid milk or in the form of butter or cheese.
Village collectors are local traders who collect milk from smallholder farmers and sell it directly to the cooperatives, hotels and consumers in the district. Collecting raw milk from milk-producing farmers in the study area is their major function; most of the milk produced is purchased from farmers by village traders.
Dairy cooperatives collect milk from village collectors and their members. In the process of milk collection and bulking, a lactometer test is applied to assure the quality of the milk, which is one of the guarantees that keeps the market, since consumers rely on it. They add value, for example by processing the raw milk into yoghurt and butter.
Semi-processors: in the district there was only one semi-processing actor, which had many roles in the milk value chain. Smallholder farmers and producers are its source of milk. In the process of milk collection, it conducts lactometer tests to check the quality of the milk. It carried out semi-processing activities such as changing the form of the milk, packaging and pasteurization, and delivered the dairy products to hotels, cafés and consumers in the district.
Hotels and cafés are the last link, delivering dairy products to the consumers in Debark district. They obtained milk from all the actors involved in the milk value chain and sold all milk products directly to consumers.
Consumers are individual households living in the area, as well as restaurants and hotels. They bought the commodity for their own consumption, purchasing from farmers, the semi-processor, cooperatives, and hotels and cafés.
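As a compact representation of the map described above, the sketch below encodes the chain as a directed graph and enumerates producer-to-consumer marketing channels; the edge set is a reading of the actor descriptions in the text, not the exact channels of Fig. 1.

```python
# A minimal sketch of the milk value chain as a directed graph, based on
# the actor descriptions above; edges are an illustrative reading of the text.
value_chain = {
    "farmers":            ["village_collectors", "cooperatives",
                           "semi_processor", "consumers"],
    "village_collectors": ["cooperatives", "hotels_cafes", "consumers"],
    "cooperatives":       ["hotels_cafes", "consumers"],
    "semi_processor":     ["hotels_cafes", "consumers"],
    "hotels_cafes":       ["consumers"],
}

def channels(chain, start="farmers", end="consumers", path=None):
    """Enumerate marketing channels (paths from producers to consumers)."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    routes = []
    for nxt in chain.get(start, []):
        routes.extend(channels(chain, nxt, end, path))
    return routes

for route in channels(value_chain):
    print(" -> ".join(route))
```

Representing the chain this way makes it straightforward to identify the longest channel, the one used in this study for the cost, benefit share and value-added analysis.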
Cost, Benefit Share and Value Added along the Dairy Value Chain
The average profit margin was computed by estimating the cost and revenue of actors in the value chain. Financial performance, or profitability, is analyzed by considering the average cost and revenue of milk production and marketing along the value chain actors, starting from Debark district. In this study, the cost, benefit share and value added along the value chain actors were computed for the longest channel, which transmits the largest quantity of milk, because in this channel many actors are involved and the largest amount of milk is transacted.

Source: Own survey data, 2018.

As displayed in Table 3, the labor cost (opportunity cost of labor) and production material cost account for about 7.1% and 90.8%, respectively, of the total production and marketing cost at farm gate level. Transportation cost accounts for 1.9% of the costs at farm gate level; the largest share of the production and marketing cost at farm gate level was taken by production cost. For the smallholder farmers, dairy production is economically profitable: the average net income earned was 59 birr per liter, a profit margin of about 59.27%.
The traders, namely village collectors, the semi-processor and hotels & cafés, all earned a positive net profit margin. Of all the actors involved in the dairy value chain, hotels & cafés obtained the highest profit margin and benefit most from milk marketing activity along the dairy value chain in Debark district.

Value addition and its proportion

The distribution of value added among the different actors in the value chain is presented in Table 2; the total average value added was estimated at about 30.66 birr per liter. The largest proportion of value was created by the semi-processor, accounting for about 48% of the total average value added by the traders and smallholder farmers (producers). Hotels and cafés (42%) constitute the next largest proportion of total value creation. The value created by the village collectors and smallholder farmers was about 21% and 17%, respectively, of the total average value added.
Price and commercialization margin
Table 5 shows the distribution of commercialization margins among the dairy marketing actors in the value chain. The total commercialization margin across all marketing actors was estimated at 17 birr per liter.
Conclusions And Recommendations
Dairy production and marketing is a major source of livelihood for different actors in the area. All actors in the value chain had a positive profit and value added, so dairy production and marketing was profitable. The semi-processor created the largest proportion of value addition and benefited more from dairy marketing than the other actors involved in the value chain. Smallholder farmers and other actors in the value chain face many challenges: price fluctuation, lack of supply, poor infrastructure such as electricity, lack of awareness regarding artificial insemination, and seasonal change were some of the challenges faced by value chain actors in the production and marketing of milk.
In order to improve the dairy value chain, awareness should be created about the nutritional value of milk and milk products, and smallholder milk producers should be organized into dairy cooperative groups and subsequently into dairy unions to increase milk production, the volume of milk consumed and marketed, and to commercialize the subsistence type of smallholder milk production system.

Declarations

Zegeye Yigezu. 2003. Imperative and challenges of dairy production, processing and marketing in Ethiopia. In: Jobre, Y. and Gebru, G. (eds), Challenges and opportunities of livestock marketing in Ethiopia. Proceedings of the 10th annual conference of the Ethiopian Society of Animal Production (ESAP) held in Addis Ababa, Ethiopia: ESAP.

Figure 1. Milk value chain map in Debark district.
THE EFFECT OF FINANCIAL COMPENSATION FOR FARMLAND ACQUISITION ON HOUSEHOLD WELFARE: THE YOGYAKARTA INTERNATIONAL AIRPORT DEVELOPMENT CASE
Background Problems: Land acquisition is a major issue in development policy, and compensation is often described as being inadequate; meanwhile, adequate compensation is the key element of fairness. Main Objectives: The objective of this study is to examine the impact on household welfare of financial compensation for farmland acquisition for new airport development. Novelty: This study utilizes the land acquisition for the new Yogyakarta International Airport (YIA), because this area provides a reasonable case for evaluation. Research Methods: A quasi-experimental design is used to draw a causal relationship. A questionnaire survey was conducted with 452 households, consisting of 207 households in the treatment group and 245 households in the control group. Findings/Results: On average, the financial compensation for the farmland acquired for the new airport development increased a household's total annual income by as much as 32.06%, especially income generated from self-owned businesses and farmland activity, and it also increased total annual expenditures by as much as 26.55%, especially those related to food, energy (LPG and fuel), vehicles, internet and phone, religion, social relationships, and insurance. Conclusion: This study highlights that financial compensation for farmland acquisition for a tertiary industry, specifically a new airport development, has a positive impact on both total annual income and total annual expenditures.
INTRODUCTION
Large-scale land acquisition has drawn substantial attention in recent years, and it has become one of the most important issues in development policy. An important part of land acquisition is financial compensation which is defined as the amount of money paid to those whose land and livelihood are taken away due to the acquisition; this amount has been described as being insufficient. The issue of insufficient compensation might lead to massive political and social tension (Ghatak & Mookherjee, 2014). Meanwhile, adequate financial compensation is one of the key elements to ensuring that the affected landowners perceive land acquisition as being fairly compensated (Holtslag-Broekhof, van Marwijk, Beunen, & Wiskerke, 2016). Although the compensation mechanism has been improving, there are still many cases in which the losses are higher than the compensation given, which makes the affected landowners worse off and thus unintegrated from the development activities (Rao, 2019). Rousseau (2020), in Southwest China, found that the financial compensation for land acquisition for hydropower dam construction did not account for the nonland-aspect losses suffered by the affected villagers even though the calculation of the compensation followed the legal guidelines.
The phase of economic development is characterized by industrialization, natural resource extraction, urbanization, and infrastructure development. These activities require relatively large amounts of space, and large-scale land acquisition is a way to provide it (Ghatak & Ghosh, 2011). Irawan, Hartono, Irawan, & Yusuf (2012) analyzed the impact of infrastructure on several economic parameters in Indonesia using a computable general equilibrium (CGE) model and found that improvements to any type of infrastructure are expected to boost economic growth, increase government earnings and factor income, and alleviate poverty.
The new Yogyakarta International Airport (YIA) development project was a part of the National Strategic Project, according to the Republic of Indonesia Presidential Regulation Number 58 of 2017 Concerning the Implementation of National Strategic Projects. In 2012, the Indonesian government planned to build 45 new airports, including YIA, within 10 years to support the rapid growth of the national aviation industry and to address the problem of airport overcapacity in Yogyakarta (Rachman, Satriagasa, & Riasasi, 2018). Yogyakarta had been facing a rapidly growing number of visitors because it was one of the nation's favorite tourist destinations; in fact, the region was ranked as the second-most popular destination out of all Indonesia's provinces (Kadarisman, 2019).
According to inventory data obtained from the Ministry of Agrarian Affairs and Spatial Planning, the construction of YIA involved five affected villages: Jangkaran, Sindutan, Palihan, Kebon Rejo, and Glagah (Figure 1). The total land area was 585.18 hectares, and it consisted of 3,497 land plots. Additionally, the amount of money provided by the government to acquire these lands was equal to IDR 4.15 trillion (USD 296.43 million); thus, the affected households would receive a certain amount of money as compensation for the government's acquisition of their land.
The land procurement for the YIA project was carried out in accordance with Law No.2/2012, which, it was claimed, had a fairer paradigm than the previously applicable law in terms of compensation. Law No.2/2012 uses the specific term "fair and worthy compensation", while the previous law referred less specifically to "giving compensation". This research also attempts to evaluate the implementation of Law No.2/2012 by estimating the impact of financial compensation on household welfare, such that the fairness and worthiness of the compensation can be inferred. Evaluating the implementation of this new law on land procurement is extremely important because the Indonesian government plans to build other transportation infrastructure, including more new airports; in the near future, the government will therefore require much more land to provide space for its infrastructure development.
LITERATURE REVIEW
Land acquisition for agricultural investment tends to have a positive effect on the affected households. Bottazzi, Crespo, Bangura, & Rist (2018) found that large-scale land acquisition for a sugar cane plantation by a biofuel firm in Sierra Leone decreased the farmers' food production and yields; however, it increased the farmers' total revenue and their spending on food consumption. Another study simulated a scenario of large-scale land transactions for agricultural investment in Ethiopia; the findings showed that such transactions might cause the affected poor to suffer from the loss of forestland, although the losses are traded off against advantages obtained from investments, such as business opportunities and job creation (Baumgartner, von Braun, Abebaw, & Müller, 2015). Stickler (2012) studied large-scale land acquisition for agricultural investment in Uganda and found that such acquisition should have brought about economic development, such as income generation and job creation, for the affected communities, specifically the landowners; in fact, however, the outcome of the land acquisition could not be determined due to the unavailability of data.
Besides land acquisition for agricultural investment, another purpose of acquisition is industrialization, which is usually characterized by infrastructure development. China has experienced a huge amount of land acquisition for urbanization and industrialization, which has had negative effects on the health of farmers who lost their land, since the acquisition affected both their income and their psychological wellbeing (Wang, Li, Xiong, Li, & Wu, 2019). Meanwhile, Ty, Van Westen, & Zoomers (2013) found that affected households were worse off after land acquisition because of the unfair compensation and resettlement for the construction of a hydropower dam in Vietnam: the farmer households showed a decline in their food expenditures after resettlement because the land had often been appraised below the market price. In West Bengal, India, land acquisition for a steel plant development reduced the monthly total revenue of the affected farmer households by 50%, and only a small number of households were capable of sustaining their income by generating off-farm income (Shee & Maiti, 2019). Land acquisition for an oil and gas industry development in Uganda also had a negative impact on most of the affected people, whose livelihoods were uprooted as they faced food security issues, cultural shocks, and reductions in social services (Ogwang & Vanclay, 2019).
There have been changes in the livelihood patterns of the affected households as a result of the YIA development, related to the compensation. Some households used their compensation for livelihood improvement and sustainability, while others failed to use the compensation to improve their livelihoods (Rijanta, Baiquni, & Rachmawati, 2019). Another study compared the compensation value to the property value for aquaculture; the compensation value was almost nine times higher than the aquaculture property value (Rachman et al., 2018). Edita (2019) stated that the YIA project caused the affected people to suffer due to resettlement, displacement, loss of farming jobs, and poor compensation. Furthermore, the expectation that the YIA project would reduce the economic gap in Yogyakarta seems to have been impossible to achieve; even worse, the economic gaps now tend to be more severe, because those who have not been able to adjust to the urbanization created by the airport's existence remain marginalized, while those who obtain advantages from the existence of YIA have tended to improve their livelihoods.
According to several previous studies on land acquisition, especially in developing countries, land acquisition for industrial infrastructure tends to bring about negative impacts, whereas land acquisition for agricultural investment usually brings about positive impacts. By using another type of industry as a case study on the purpose of land acquisition, namely the transportation industry (categorized as a tertiary industry), this study aims to examine whether financial compensation for land acquisition for industrial infrastructure always has negative impacts on household welfare, especially on income and expenditures.
Since it is unclear whether transportation facilities have a positive impact on welfare with regard to the compensation given, and the studies that have focused on this area are still very limited, the objective of this study is to examine the impact of financial compensation for farmland acquisition for a new airport development on the welfare of households in terms of income and expenditure.
To attain the research objective, this study utilizes the land acquisition for a new airport development in Indonesia, specifically Yogyakarta International Airport (YIA). This area provides a promising case for evaluation because the transportation industry has different characteristics from the other industries examined in previous studies. The transportation industry is a tertiary industry, while the previously studied cases of land acquisition around the world have mostly concerned primary and secondary industries. Moreover, agricultural investment obviously has a direct association with farmers' livelihoods, whereas the existence of new transportation infrastructure seemingly has no direct connection to the welfare of farmers. To date, the previous studies conducted regarding the YIA development are mostly qualitative, and it is still unclear whether such compensation has a negative or positive impact on household livelihoods. Once again, it is important to reconfirm the effect of financial compensation for farmland acquisition on household welfare.
METHOD, DATA, AND ANALYSIS
To draw causal inferences about the effect of financial compensation for land acquisition on households' welfare, this study uses a quasi-experiment. In a quasi-experiment, the treatment groups can be assigned (other than by the researcher) by self-selection or based on a policymaker's judgment. In this study, the government determined the airport location; households who had farmland inside the planned airport area formed the treatment group, and those who had farmland outside it formed the control group. The treatment assignment occurred in September, October, and November 2016.
Data
A household questionnaire survey was conducted to collect primary data. This study took three villages (Figure 2) out of the five affected villages as the sample: (1) Jangkaran, (2) Sindutan, and (3) Palihan. The survey was conducted on 452 households, with 207 households comprising the treatment group and 245 households comprising the control group.
The survey was conducted from 7 February 2020 until 9 March 2020. The location of the study was in Temon Sub-district, Kulonprogo District, Special Province of Yogyakarta. The full sample of observations was taken only from (1) Jangkaran and (2) Sindutan; (3) Palihan could not be fully observed because of time constraints for the survey. However, the maximum number of observations that could be obtained during the survey time frame in this study were acquired.
Figure 2. The Village Samples and Number of Observations
Before conducting the survey, the treatment group was identified using textual data from the 2018 New YIA Land Procurement Implementation Result from the Ministry of Agrarian Affairs and Spatial Planning, while the control group was identified using data and information from the village government offices. The treatment group was defined as the households who had farmland within the planned airport area, whereas the control group was defined as the households who had farmland outside that area. The survey also involved almost all the heads of Dukuh (a village consists of several Dukuh) to pinpoint the respondents' locations and to further confirm whether a household met the criteria for inclusion in the treatment or control group. Treatment was defined as having obtained financial compensation for farmland taken by the government, and the outcome variables were annual income and expenditures.
Analytical Method
This is a quasi-experimental study, a research method that can be used to test the causal consequences of a long-term treatment. All experiments aim to determine whether a treatment has made a difference to a particular outcome, rather than to explain why the difference occurred. A quasi-experiment differs from a controlled or randomized experiment in that the treatment groups can be assigned by self-selection or by a policymaker's judgment rather than by the researcher. Nevertheless, the nature of all experiments, including quasi-experiments, supports causal description more than causal explanation (Shadish, Cook, & Campbell, 2002). This study has only a single observation time point, with the outcome variables observed after the treatment assignment only.
To strengthen the causal inference, an assumption that must hold in this study is that the household characteristics and farmland plots within these three adjacent villages are not systematically different. To support this assumption, the pre-treatment variables are expected to be balanced between the treatment and control groups. The impact of financial compensation for farmland acquisition is presented as an average treatment effect, obtained by analyzing the differences in the outcome variables (household total annual income and expenditure) between the treatment and control groups. A standard t-test of the mean is used to capture the difference in the outcome variables as the main research finding.
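To make the estimator concrete, the sketch below computes the average treatment effect as the difference in outcome means and tests it with a two-sample t-test. The figures are hypothetical placeholders rather than the survey data, and Welch's unequal-variance variant is shown, whereas the study reports a standard t-test of the mean:

```python
# Minimal sketch: average treatment effect (ATE) as the difference in
# outcome means, tested with a two-sample t-test (Welch's variant).
# All values are hypothetical placeholders, not the survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
# Hypothetical total annual incomes (IDR million) for the two groups
control = rng.normal(loc=50, scale=15, size=245)    # 245 control households
treatment = rng.normal(loc=66, scale=18, size=207)  # 207 treated households

ate = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"ATE = IDR {ate:.1f} million, t = {t_stat:.2f}, p = {p_value:.4f}")
```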
RESULT AND DISCUSSION
The pre-treatment variable balance was checked by analyzing the descriptive statistics for these groups in 2016, as shown in Table 1 below.
There was a significant difference in the number of farmland plots between the treatment and control groups. On average, the treatment group had approximately 0.6 more land plots than the control group. This possibly occurred because partially treated households counted their fragmented land plots as separate plots when asked how many land plots they had before the land acquisition; from the perspective of farmland size, however, there was no significant difference between the groups before the treatment occurred. There were significant differences in only two of the 21 characteristics: the vocational school level education of the household head and the number of land plots owned. Generally, the balance check table shows that, on average, the treatment group is broadly similar to the control group. Unfortunately, the study was unable to control for the location of farmland ownership. The treated households were more likely to have farmland parcels in the southern part of the rural area, while the control group mostly owned farmland in the northern part. The farmlands in the southern part were mostly dryland farms (used to cultivate watermelon and chili), while those in the northern part were mostly wetland farms (used to cultivate rice). This is a typical limitation of a quasi-experimental design, where the treatment assignment is determined beyond the researcher's judgment.
Effect on Household Income
Based on Table 2, this study finds the average effects of financial compensation for farmland acquisition for the new airport development are as follows: 1) An increase in the household total annual income in 2019 by around IDR 16.3 million (32.06%); 2) A positive impact on farmland income amounting to around IDR 6.2 million (58.13%) annually; 3) A positive impact on the income from self-owned businesses, which increased by IDR 12.2 million (88.93%) annually. The income generated from self-owned businesses shows the largest positive impact of financial compensation compared to other income sources; 4) A negative impact on the income from transfers, amounting to a decrease of almost IDR 2 million (62.93%) annually. Finding 1 contrasts with the previous study by Shee & Maiti (2019), which found that land acquisition for industrialization reduced the monthly total income of the affected farmer households by 50%. However, the finding supports the previous study by Bottazzi et al. (2018), which found that land acquisition for agricultural investment increased the income of the affected households. Finding 2 might be explained by the treatment group having additional financial resources from the compensation that were used to improve the inputs in their farmland activities, or they might have bought more productive farmland to substitute for that lost. This finding provides new evidence that financial compensation for farmland acquisition can have a positive impact on farmland income. Finding 3 shows that the treated households responded by engaging in local business opportunities brought about by airport development. This finding supports the previous study by Baumgartner et al. (2015), which stated that large-scale land transactions can bring about advantages such as business opportunities and job creation. Finding 4 shows that the treated households seem less likely to engage in urban activities when allocating their resources. Instead, they prefer allocating their resources to local self-owned businesses rather than to urban activities. This is supported by finding 3 on income from self-owned businesses, which experienced the largest positive impact.
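For orientation, the percentage figures above presumably express each effect relative to the corresponding control-group mean; this is an assumption, since the table base is not reproduced here. Under that reading, finding 1 implies a control-group mean total income of roughly IDR 50.8 million:

```latex
\text{Effect (\%)} = \frac{\bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}}}{\bar{Y}_{\text{control}}} \times 100
\quad\Rightarrow\quad
\bar{Y}_{\text{control}} \approx \frac{\text{IDR 16.3 million}}{0.3206} \approx \text{IDR 50.8 million}
```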
Effect on Household Expenditure
Based on Table 3, this study finds, on average, the effects of financial compensation for farmland acquisition for the new airport development are as follows: 1) An increase in the households' total annual expenditure in 2019 by IDR 7.6 million (26.55%), with no significant effect on education and health expenditure; 2) A positive effect on food expenditure amounting to around IDR 1.6 million (14.95%), the biggest positive impact among the expenditure items; 3) A positive impact on LPG expenditure amounting to IDR 131,000; 4) An increase in all annual expenses related to vehicles (fuel, maintenance, and tax) by around IDR 2.3 million; 5) A positive effect on electricity consumption and annual spending on internet and phones amounting to around IDR 778,000; 6) An increase in annual spending on religious matters and social relationships by around IDR 1 million and on insurance by around IDR 102,000; 7) A reduction in spending on water by around IDR 55,000, the only annual expenditure showing a negative impact.
Finding 1 provides new evidence that financial compensation for farmland acquisition has a positive impact on households' total annual expenditures. However, in terms of health expenditure, it does not support the previous study by Wang et al. (2019), which found that land acquisition harms the health status of the affected households in China. Finding 2 corresponds to the study by Bottazzi et al. (2018), who found that land acquisition for agricultural investment increases food consumption significantly. Meanwhile, this finding contrasts with a study by Ogwang & Vanclay (2019), who found that land acquisition harms food security. Finding 2 also supports the study by Susilo (2010), who found that income is one of the main factors affecting food security in rural areas of Yogyakarta Province. He concluded that an increase in income raises the probability of improved food security by 1.09 times.
Finding 3 supports finding 2: together they show that the positive impact on food expenditure is echoed in the expenditure on LPG, because more LPG is needed to process more food. Finding 4 indicates that the treatment group has more resources to increase their mobility than the control group because they have more vehicles and spend more on them. This finding supports the previous study on the YIA by Rijanta et al. (2019), which stated that there was a tendency for population mobility to increase in the affected villages since many private vehicles were being purchased with the compensation money. Furthermore, they stated that this increase in mobility emerged as a response to the change in rural dynamics brought about by the new airport development.
Finding 5 implies that the treatment group had more money, from the increase in income or from the compensation, which was spent on purchasing new electronic gadgets and appliances for daily or business purposes. It seems that the affected households have tended to improve their lifestyle in terms of communication and technology. Finding 6 shows that the affected households were more generous than the control group, because the social relationship and religious spending were mostly in the form of charity. Additionally, the treatment group is more aware of insurance, possibly because their increased income left more resources to allocate to insurance and charitable donations. Finding 7 implies that the treated households had to pay less to obtain clean water compared to the control group. The treatment group may be less dependent on the water company because they had more money to install artesian wells or other clean water sources, while the control group still depended on the water company for clean water and thus had to pay monthly bills.
A possible reason behind these findings is that the households in the treatment group seem to be trying to meet the rising demand created by the airport's existence as a way to sustain their livelihoods. The existence of a new airport has a positive impact on economic development through the passenger volume and the number of flights (Bilotkach, 2015). Rijanta et al., (2019) classified seven types of livelihood change seen in the affected households as they adjusted to the new circumstances. The compensation money was used for: 1) Broadening their economic base, such as purchasing substitute land, investing in new buildings, and buying new vehicles; 2) Enhancing asset utilization, such as renovating their current buildings for commercial purposes or buying agricultural machinery; 3) Diversifying asset utilization, such as creating new businesses or getting new jobs; 4) Depositing the compensation money in the bank; 5) Investing in new ventures, such as building boarding rooms or rented houses; 6) Speculating, such as buying new land parcels in the city for future profit; 7) Increasing the spatial mobility; this is because some in the treatment group had to look after their businesses or work outside the village.
These types of livelihood change seen in the households in the treatment group indicate that they utilized the financial compensation wisely because most of the actions they took were for sustaining their livelihoods.
There was a significant increase in the land price around the airport after the issuance of the airport location permit (Guild, 2019). It seems that the existence of the new airport has caused land values around the location to rise. This advantage affects all residents living near the airport as their property values soar. A study by Andini & Falianty (2022) found that property prices have a positive and significant effect on the stability of the financial system. The result of this study thus also echoes the previous study by Purbawa (2021), who argues that the YIA development brought a positive impact on economic growth and urbanization.
CONCLUSION AND SUGGESTION
On average, the financial compensation for farmland acquisition for the new airport development has had a positive impact on the households' total annual income amounting to 32.06%, especially for income generated from self-owned businesses and farmland activity. Meanwhile, it has had a negative impact on the income generated from transfers. On the expenditure side, it has had a significantly positive impact on the households' total annual expenditures amounting to 26.55%, especially expenditures on food, energy (electricity, LPG, and fuel), vehicles, internet and phones, religion, social relationships, and insurance. Meanwhile, it has had a negative and significant impact on expenditure on water. This study highlights the fact that financial compensation for farmland acquisition for a tertiary industry, the new airport development, has had a positive impact on both total annual income and total annual expenditures. It also demonstrates that an increase in total income is associated with an increase in total expenditure. According to the findings of this study, the tertiary industry seems to experience an agglomeration effect, and the financial compensation examined here implies fair compensation that conforms to what has been specified in the new policy.
One obvious limitation of a quasi-experimental study is the lack of randomization; it is often considered a non-randomized design (Harris et al., 2006). Although the balance check in this study showed that the households in the treatment group are not very different from the households in the control group, bias remains, especially from the perspective of the policymaker or government. The treatment group was selected based on the policymaker's judgment, and this study shows that the agricultural land parcels owned by the treatment households were concentrated in the southern part of the affected rural area. Furthermore, this study is unable to present the outcome variables at two different time points, pre-treatment and post-treatment; instead, it has only a post-treatment measurement. The pre-treatment measurement could not be obtained because it would be difficult for respondents to recall their income and expenditure from five years earlier, so it was not possible to apply the difference-in-differences technique in this study. Since the balance check shows no difference between the treatment and control groups, it was unnecessary to apply the propensity score matching technique. Many unobservable variables may affect the differences between the treatment and control groups and, unfortunately, these cannot all be observed. However, this is the most rigorous study that could be conducted in view of all the constraints.
Considering the study limitations mentioned above, especially the inability to apply the difference-in-differences technique, it is essential that future studies first conduct a preliminary study to establish a baseline before the main survey is carried out. After the preliminary study, pre-treatment and post-treatment observations of the two groups (control and treatment) can then be made. This sort of study requires multi-year observations to achieve more robust results. It is also suggested that future studies investigate further how the revenue from the land sale is used and how it changes household living arrangements, because the changes to the welfare of the treated households can be explained by looking at how they utilized the compensation money and coped with the land acquisition.
Morphological characteristics and genetic evidence reveals a new species of Manihot (Euphorbiaceae, Crotonoideae) from Goiás, Brazil
Abstract During botanical expeditions between 2010 and 2015, as part of a taxonomic study of Manihot in the Midwest region of Brazil, approximately 500 specimens of the genus were collected. Some of these specimens presented similarities to Manihot irwinii. However, after careful morphological analyses, associated with genetic evidence, we propose here Manihot pulchrifolius as a new species. The new species is described, illustrated, and compared to Manihot irwinii, its most similar species. Furthermore, geographic distribution, conservation status, and period of flowering and fruiting of the novel species are also provided.
Introduction
Manihot Mill. encompasses over 100 Neotropical species and therefore stands out as one of the largest genera of Euphorbiaceae in Brazil, with ca. 80 species (Silva 2014). In the Cerrado Biome, over 50 species of Manihot have already been documented, among which 40 are endemic. Nevertheless, recent studies of the genus in the Chapada dos Veadeiros region, in the state of Goiás, Brazil, revealed some new species (Silva 2014, Silva and Sodré 2014, Silva 2015a, b, Silva et al. 2016a), which demonstrates that there is still a lot to discover about this genus, whose taxonomy remains relatively poorly known in the Cerrado Biome. Some species of Manihot endemic to the Cerrado Biome have leaves that are considerably diverse morphologically, a fact that has aroused the interest of botanists and geneticists (Duputié et al. 2011, Silva et al. 2016b). During botanical expeditions to the Serra Dourada State Park, in the state of Goiás, Brazil, since October 2010, as part of a floristic survey of Euphorbiaceae, approximately 500 specimens of Manihot were collected, some of them showing similarities to M. irwinii D.J. Rogers & Appan regarding habit and foliage type. After careful morphological analyses of these collections, associated with genetic evidence, we propose herein the new species Manihot pulchrifolius. A detailed description, comments on flowering, fruiting, distribution, environmental preferences, conservation status, and comparisons with morphologically similar species are provided.
Morphological Studies
The description of the new taxon was based on observations of populations in the field since 2010, analyses of available specimens from the herbaria UFG, NY, K, RB, and UB (acronyms follow Thiers, continuously updated), and a review of the literature (Rogers and Appan 1973, Allem 1989). The terminology used to describe the types of inflorescences and leaves follows Rogers and Appan (1973). The illustrations were based on fresh material fixed in 70% alcohol during collection in the field. Holotypes of the new species are deposited at UFG, and isotypes will be sent to NY, K, RB, and UB. Photographs of natural populations were taken in the field. The conservation status of the species follows IUCN (2016).
Data analysis
The genetic diversity of the populations studied was assessed based on estimates of the average number of alleles per locus (A), rarefied allelic richness (AR), observed heterozygosity (Ho), expected heterozygosity under Hardy-Weinberg equilibrium (He), and the intrapopulation fixation index (f). The genetic structure of the populations was evaluated according to Weir and Cockerham (1984). These analyses were conducted using the package Hierfstat for the statistical software R (Goudet 2005). The genetic structure was further assessed by a Bayesian approach, conducted using the software STRUCTURE 2.3.4 (Pritchard et al. 2000), assuming a model that allows mixing of alleles between populations, for four independent runs with K values ranging from one to ten. The tests were performed using the Markov Chain Monte Carlo (MCMC) method, with a burn-in period of 10,000 steps followed by 1,000,000 replicates. The average of the likelihood values for each K across all runs was evaluated with the ΔK statistic developed by Evanno et al. (2005).
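As an illustrative sketch of the per-locus diversity statistics named above — not the Hierfstat implementation actually used — Ho, He, and f can be computed directly from genotype counts; the genotypes below are hypothetical:

```python
# Per-locus diversity statistics from genotype counts (illustrative only;
# the study used the R package Hierfstat). Sample-size corrections and
# rarefaction of allelic richness are omitted. Hypothetical data.
from collections import Counter

# Hypothetical genotypes at one microsatellite locus (one allele pair per individual)
genotypes = [(1, 1), (1, 2), (2, 2), (1, 3), (2, 3), (1, 2), (3, 3), (1, 1)]

n = len(genotypes)
allele_counts = Counter(allele for pair in genotypes for allele in pair)
freqs = {a: c / (2 * n) for a, c in allele_counts.items()}

ho = sum(1 for a, b in genotypes if a != b) / n   # observed heterozygosity
he = 1.0 - sum(p ** 2 for p in freqs.values())    # expected heterozygosity under HWE
f = 1.0 - ho / he                                 # intrapopulation fixation index
print(f"A = {len(freqs)}, Ho = {ho:.3f}, He = {he:.3f}, f = {f:.3f}")
```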
The pattern of differentiation among populations was evaluated by calculating the genetic distance between pairs of populations, based on pairwise fixation index (FST) estimates. To visualize the pattern of differentiation among populations, the genetic distance matrix was subjected to a cluster analysis using the unweighted pair-group method with arithmetic averages (UPGMA). To assess the degree of representativeness of the dendrogram, the cophenetic correlation coefficient was estimated with 10,000 permutations. These analyses were conducted using the R packages Hierfstat (Goudet 2005) and adegenet (Jombart 2008).

Diagnosis. Shrubs up to 2.5 m tall, erect, glabrous; young branches and young leaves reddish to purplish, green-vinaceous to violet; adult leaves 5-lobed at the plant base, 3-lobed along the stem, or rarely unlobed near the inflorescence; long racemes or panicles (up to 27 cm long), erect to pendent, axes reddish to purplish; calyx of staminate flowers reddish or purplish with yellow margins, filaments pubescent; bracts and bracteoles of flowers of both sexes reddish to purplish; fruits dark green with violet to purplish wings.
Distribution and Ecology. Manihot pulchrifolius is endemic to the state of Goiás, where it was found growing in Serra Dourada (Figure 3), one of the most beautiful and best-preserved mountainous areas in the state. This mountain range encompasses the Serra Dourada State Park, an area of over 30,000 hectares protected by law since 1965. The species grows in Cerrado sensu stricto, on rocky outcrops, rocky slopes, and Cerrado rupestre, in clayey, clayey-stony, and sandy soils, or even in rocky crevices, between 900 m and 1,000 m.
Phenology. The species has been collected with flowers and fruits from November to July. However, the flowers are more usual from January to March, whereas the fruits are more abundant from April to July.
Etymology. The specific epithet "pulchrifolius" alludes to the beautiful foliage of the species, especially in the leaf flushing stage, when the leaves are reddish or purplish to green-vinaceous to violet.
Conservation status. Given that the populations have more than 50 individuals and the vegetation where they grow is commonly found in the central part of the state of Goiás (in the municipality of Goiás and neighboring municipalities), we consider M. pulchrifolius as Least Concern (LC) according to IUCN (2016).
Remarks
Manihot pulchrifolius was previously identified as M. irwinii sensu Rogers and Appan (1973), probably because M. irwinii is a little known species, scarcely represented in Brazilian herbaria (UB, CEN, HPB, and UFG), and with a distribution restricted to the Serra dos Pireneus and neighboring areas, where it grows in open areas of Cerrado sensu stricto in clayey soils. However, in the last two years, morphological and genetic studies developed by the authors of this paper have shown that the populations from Serra dos Pireneus and the Serra Dourada State Park previously identified as M. irwinii present differences regarding leaf, inflorescence, and flower morphology, as well as genetic structure, as described below. Therefore, we concluded that the specimens from the Serra Dourada State Park belong to a new species, herein named M. pulchrifolius. Both species share a shrubby habit, leaves with lobes overlapping basally, thickened midrib veins, subparallel secondary veins, flowers of both sexes with sepals pubescent internally, and winged fruits.
Systematically, the new species can be placed in Manihot section Quinquelobae Pax following Silva (2014), by having a shrubby habit, leaves widely spaced along the branches, basal petiole attachment, a deeply lobed leaf blade, lobes of various shapes (but not linear), monoecious racemose or panicled inflorescences, foliaceous or setaceous bracts and bracteoles, and winged or wingless fruits. However, since section Quinquelobae constitutes a polyphyletic group (Silva et al. unpublished), we prefer not to ascribe M. pulchrifolius to any section of the genus.
Genetic studies
A total of 50 alleles were found for the seven loci evaluated in populations of M. irwinii sensu lato, ranging from three to eleven alleles per locus. The populations studied showed high and similar genetic diversity, which has also been observed in other species of the genus (Roa et al. 2000). Only the population from the municipality of Corumbá de Goiás presented genetic diversity values significantly smaller than the others, suggesting that this population may be suffering from fragmentation of its habitat, since it grows in an area surrounded by agriculture and disturbed by anthropic actions related to tourism. The inbreeding values estimated in the four populations were not significant, indicating adherence to the frequencies expected under Hardy-Weinberg equilibrium for the evaluated loci (Table 2).
The population genetic structure analysis showed an estimated θ value of 0.363, indicating that 36.3% of the genetic diversity lies in the between-population component. This level of genetic structure is considered very strong (Wright 1978), especially taking into account the geographical distance among the assessed populations (Figure 3) and the fact that M. irwinii sensu lato is probably allogamous (Loveless and Hamrick 1984).
The estimated values of F (0.359) and f (-0.006, not significant) show that the observed genetic structure is related to the effects of genetic drift and low gene flow between populations and not to the reproductive system of the plant. A strong pattern of genetic structure was also observed in the analysis using the Bayesian approach, which resulted in the formation of two genetic groups (K = 2, Figure 4). The first group determined by STRUCTURE is restricted to the population found in Serra Dourada, municipality of Mossâmedes, whereas the second group contains the other three populations studied (Figure 5). The pattern of attribution of the individuals to the groups was very similar to the patterns observed in different species of the genus Quercus that show a lack of gene flow between groups (Valencia-Cuevas et al. 2015). This pattern of genetic differentiation between populations can be observed in the dendrogram constructed from the pairwise FST matrix, which clearly separates the population collected in Serra Dourada into a distinct group (Figure 6). This result, associated with the strong genetic structure detected, suggests that populations of M. irwinii sensu lato are quite distinct from each other, particularly from M. pulchrifolius (population from Serra Dourada), which seems to be genetically isolated.
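To make the clustering step concrete, the sketch below rebuilds a UPGMA dendrogram from a pairwise FST matrix and reports the cophenetic correlation, mirroring the analysis described in the Data analysis section. The matrix values and the "Pireneus A/B" labels are hypothetical placeholders, not the study's estimates:

```python
# UPGMA clustering of a pairwise Fst matrix with a cophenetic correlation
# check (cf. the Data analysis section). Fst values are hypothetical
# placeholders. Plotting the dendrogram requires matplotlib.
import numpy as np
from scipy.cluster.hierarchy import average, cophenet, dendrogram
from scipy.spatial.distance import squareform

pops = ["Serra Dourada", "Pireneus A", "Pireneus B", "Corumba de Goias"]
fst = np.array([
    [0.00, 0.40, 0.38, 0.42],
    [0.40, 0.00, 0.08, 0.12],
    [0.38, 0.08, 0.00, 0.10],
    [0.42, 0.12, 0.10, 0.00],
])

condensed = squareform(fst)        # condensed pairwise distance vector
linkage = average(condensed)       # 'average' linkage is UPGMA
coph_corr, _ = cophenet(linkage, condensed)
print(f"Cophenetic correlation coefficient: {coph_corr:.3f}")
dendrogram(linkage, labels=pops)
```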
Editorial: Young People's Sexual and Reproductive Health in Sub-Saharan Africa (SSA): Bridging the Research-to-Practice Gap
INTRODUCTION
Sexual and reproductive health (SRH) among young people, aged 10-24 years, in Sub-Saharan Africa (SSA) remains a major public health challenge, with evident gaps in access to SRH services and increased risk of poor SRH (1). For example, less than 20% of young people in SSA are aware of their HIV status, despite the region accounting for almost 90% of the world's HIV cases among adolescents and youth (2). Point-of-care tests exist and have the potential to revolutionize the prevention and care of HIV and other STIs, thus interrupting transmission and preventing the sequelae of untreated infections (3,4). However, the awareness and uptake of such SRH preventive services remain sub-optimal among young people (5). While this research-to-practice gap is widely known, there is limited discussion on how it can be bridged. The collection in this Frontier Research Topic begins to partly remedy the problem by recounting collective efforts to promote young people's SRH in SSA. Here we share our learnings with the hope of advancing the discussion on how to bridge the research-to-practice gap. We summarize the articles in this special issue in three main (and overlapping themes).
KNOWLEDGE AND UPTAKE OF SEXUAL AND REPRODUCTIVE HEALTH SERVICES
Two articles highlight knowledge and uptake of youth sexual and reproductive health services with helpful insights. Yuya et al. assessed the level and predictors of knowledge of reproductive rights among Haramaya University students in Ethiopia. The study revealed that lack of awareness of and information on reproductive health issues and the absence of reproductive health service utilization were important predictors of participants' knowledge of reproductive rights. This warrants further investigation into whether knowledge of reproductive rights can promote uptake of sexual and reproductive health services. Lawrence et al. shared the work undertaken by their team as part of the Public Health Adolescent Services Evaluation, a national evaluation of adolescent HIV services in Kenya. The evaluation revealed a preference for adolescent autonomy in seeking sexual and reproductive health services and in contacting healthcare workers for sexual and reproductive health information. This study confirms the value of adolescent- and youth-friendly health services and the need for contextually appropriate strategies to enhance uptake and adoption.
IMPLEMENTATION SCIENCE AND YOUNG PEOPLE'S SEXUAL AND REPRODUCTIVE HEALTH
Echoing the call for bridging the research-to-practice gap, three articles highlight the application of implementation science in young people's sexual and reproductive health. Mgoli Mwale et al. evaluated the sustainability of the Community Score Card to improve adolescent sexual and reproductive health in Ntcheu, Malawi. Their work provides compelling evidence of the importance of agents and context in the sustainability of interventions. They specify real-world facilitators, barriers, and opportunities for promoting sustainability within resource-constrained settings. Libous et al. offer a new generalizable approach to guide intervention adaptation that builds on existing infrastructure, culture, and resources to inform implementation strategies. Their work focused on the adaptation of a trauma-informed cognitive behavioral therapy intervention addressing mental and sexual health for adolescents and young adults living with HIV, guided by the ADAPT-ITT framework. Obiezu-Umeh et al. presented compelling findings from a systematic review of 23 articles on the implementation outcomes and implementation strategies of youth-friendly sexual and reproductive health services in sub-Saharan Africa. The authors underlined the need for further rigorous studies to better understand which implementation strategies, and in what combination, would yield the greatest gains in promoting the uptake and sustainment of youth-friendly sexual and reproductive health services. These articles provide concrete suggestions for engaging communities and young people in intervention development, adaptation, and implementation processes, to foster co-creation and ownership within communities. These are critical for the adoption, scale-up, and sustainability of interventions as we seek to bridge the research-to-practice gap.
CONCLUSION
Individually, these contributions address an array of critical issues. Collectively, they build momentum around the importance of contextually appropriate, youth-friendly, and youth-engaged approaches to enhance the uptake, scale-up, and sustainability of young people's SRH interventions in SSA. We acknowledge that the articles presented in this collection are not exhaustive of the work in the field. Nevertheless, the findings, insights, and perspectives discussed in each paper could inspire new research and interventions.
This special issue continues to highlight substantial structural, social, and individual barriers that limit the uptake of SRH services. Nonetheless, a few articles highlight the importance of active community engagement and active youth participation to circumvent these barriers. Community engagement and youth participation can enhance the uptake, appropriateness, and sustainability of SRH intervention and can help move us closer to the goal of bridging the research-to-practice gap in promoting SRH among young people in SSA. Participatory approaches such as crowdsourcing that involve community members in solving a problem and then publicly sharing innovative solutions could provide an opportunity to co-create with young people and community members (6,7). This can improve the appropriateness and acceptability of SRH interventions for young people in SSA and, ultimately, sustainability. We, therefore, call for more community-engaged and participatory approaches in the area of SRH for young people in SSA. In closing, we hope this collection will become a reference for continuing research in bridging the knowledge-to-do gap in sexual and reproductive health among young people in SSA.
Implant-Supported Overdentures: Current Status and Preclinical Testing of a Novel Attachment System
Numerous attachment systems exist for implant-supported overdentures, with each having specific limitations in terms of retention, cost, wear, maintenance and cleanability. A retrospective analysis of patients restored with implant-supported overdentures using bars, telescopic crowns and Locator-type attachments was performed and the patients were interviewed. An in vitro strain gauge study compared telescopic crowns, Locator-type attachments and a novel flexible attachment system employing a shape memory alloy (NiTi) with respect to peri-implant strain development during insertion, loading and removal of an overdenture. A significantly lower number of attachment-related complications was observed in bars as compared to telescopic crowns (p = 0.00007) and Locator-type attachments (p = 0.00000), respectively. Greater overall patient satisfaction was noted in bar-retained restorations while Locator-type attachments led to lower levels of satisfaction regarding prosthesis retention. In vitro, telescopic crowns caused maximum strain development during prosthesis insertion and loading, while during removal this was observed in Locators with white retentive inserts. NiTi attachments caused significantly lower strain development during insertion as compared to telescopic crowns (p = 0.027). During loading, NiTi attachments caused significantly lower strain development than Locators with blue retentive inserts (p = 0.039). During removal, NiTi attachments caused significantly less strain development as compared to Locators with white retentive inserts (p = 0.027). Positional discrepancies between male and female attachment parts affected the retention and reaction force between both components, which may be minimized by using the novel NiTi attachment system. This may be beneficial in terms of component wear and implant loading.
Introduction
While not yet constituting the standard of care [1] in all countries, implant-retained removable prostheses have been shown to have a positive effect on functional and patient-centered outcomes [2][3][4]. In this context, a large body of literature comparing various implant and attachment configurations [5] does exist, at least partially reporting contradictory results.
There seems to be a consensus that despite the overall good performance, technical complications in such restorations are common [6], do occur more frequently as compared to fixed restorations [7,8], and correlate with the time in service [9]. Despite that, patients have been shown to have an unsatisfactory level of knowledge about potential complications [10].
Attachment-related problems, in particular the loss of retention due to wear at the retentive interface [9,[11][12][13], as well as prosthesis fractures, predominantly occurring in the region of the attachment, seem to constitute the complications with the greatest incidence [14,15]. While some studies suggest that only minor differences in prosthodontic maintenance and peri-implant condition exist between different attachment systems [3,16],
Retrospective Analysis of Complications
Figure 1. Prototype attachment system employing a cylindrical male attachment part connected to an abutment base via a 0.8 mm NiTi wire. (a) Attachment system mounted on an implant shoulder; (b) A horizontal force acting on the male attachment part causes a deflection relative to the base, which is supposed to compensate for misalignments of implants and transfer inaccuracies between patient situation and laboratory cast.

Electronic patient charts were filtered for all patients who had received an implant-retained overdenture during the years 2006 to 2020 in the Department of Prosthodontics, Saarland University Dental School, Homburg, Germany. Patients with any teeth remaining in the jaw were excluded. All maintenance interventions directly related to the attachment system were recorded. Furthermore, only patients who had received bars, Locator-type attachments or telescopic crowns were included (Figure 2).

For comparative statistical analysis, the total numbers of complications recorded for the total numbers of implants used with a specific attachment system were considered. Neither the number of implants in a specific jaw nor the timepoint of the occurrence of a complication were considered. Assuming a binomial distribution of complications, Fisher's exact test was applied for pairwise comparisons, followed by the Holm-Bonferroni correction to compensate for multiple testing. All calculations were performed using the R software package (R, The R Foundation for Statistical Computing, Vienna, Austria; www.R-project.org; accessed on 16 December 2022) with the level of significance set at α = 0.05.
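A minimal sketch of this comparison — complication counts per implant treated as binomial outcomes, pairwise Fisher's exact tests, then the Holm-Bonferroni correction — is given below in Python rather than R; the counts are placeholders, not the chart data:

```python
# Pairwise Fisher's exact tests on (complications vs. complication-free
# implants) per attachment system, with Holm-Bonferroni correction.
# Counts are hypothetical placeholders, not the chart data.
from itertools import combinations
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# (number of complications, total implants) per attachment system
systems = {"bar": (9, 88), "telescope": (34, 86), "locator": (56, 85)}

labels, pvals = [], []
for (a, (ca, na)), (b, (cb, nb)) in combinations(systems.items(), 2):
    _, p = fisher_exact([[ca, na - ca], [cb, nb - cb]])
    labels.append(f"{a} vs {b}")
    pvals.append(p)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for label, p, sig in zip(labels, p_adj, reject):
    print(f"{label}: adjusted p = {p:.5f}, significant = {sig}")
```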
Patient Survey
All patients included in the retrospective chart analysis described above were requested to answer a series of questions as part of a regular recall appointment. Using a semi-structured questionnaire [10], patients were asked to rate their satisfaction with their prosthesis overall and the retention in particular on a scale from 1 (low) to 5 (high). In addition, they were asked to name the most significant problem experienced with their prosthesis, whether or not they had been informed about maintenance costs, and if they had been willing to pay a higher fee for initial treatment if complications could have been avoided.
Statistical analysis (R, The R Foundation for Statistical Computing, Vienna, Austria; www.R-project.org; accessed on 16 December 2022) of patients' responses on satisfaction with overall treatment and with retention of the prostheses was based on pairwise comparisons using the Kruskal-Wallis rank sum test. Correction for multiple testing was performed using the Bonferroni method and the level of significance was set at α = 0.05 for all operations.
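A short sketch of this pairwise rank-based comparison, again in Python rather than R, with Bonferroni correction applied across the three pairs; the satisfaction ratings are hypothetical, not the survey responses:

```python
# Pairwise Kruskal-Wallis rank sum tests on satisfaction ratings (1-5)
# with Bonferroni correction. Ratings are hypothetical placeholders.
from itertools import combinations
from scipy.stats import kruskal
from statsmodels.stats.multitest import multipletests

ratings = {
    "bar": [5, 5, 4, 5, 5],
    "telescope": [4, 3],
    "locator": [3, 4, 3, 4, 4, 3, 4],
}

labels, pvals = [], []
for a, b in combinations(ratings, 2):
    _, p = kruskal(ratings[a], ratings[b])
    labels.append(f"{a} vs {b}")
    pvals.append(p)

_, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for label, p in zip(labels, p_adj):
    print(f"{label}: Bonferroni-adjusted p = {p:.3f}")
```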
Testing of a Novel Attachment System
A clinical situation of an edentulous mandible with two interforaminal implants (Straumann Standard Plus 4.1 × 10.0 mm, Straumann AG, Basel Switzerland) was used for creating a realistic patient model out of denture resin (ProBase Cold, Ivoclar Vivadent, Schaan, Liechtenstein) including a gingival mask [25] with an approximate thickness of 2-3 mm (Adisil blau, Siladent, Goslar, Germany).
Five open-tray impressions were made using the respective screw-retained transfer copings, custom trays (PalatrayXL, Kulzer GmbH, Hanau, Germany) and polyether impression material (Impregum, 3MEspe, Seefeld, Germany). Definitive casts (Fujirock EP, GC Europe) incorporating implant analogs were then fabricated following standard laboratory procedures. On each master cast, three mandibular prostheses were fabricated using telescopic crowns, Locators (RN LOCATOR abutment tissue cuff height 1.0 mm, Straumann AG) and NiTi-based attachments (prototype, not commercially available), respectively.
The primary cylindrical telescopic crowns were fabricated on the basis of screwretained burn-out plastic copings (RN synOcta Plastic Coping for synOcta 1.5; for crown, Straumann AG) using dental training alloy (Phantom-Metall NF, Dentsply Sirona Deutschland GmbH, Bensheim, Germany) employing standard casting and milling procedures. The secondary telescopic crowns were made from pattern resin (Pattern resin, GC Europe, Bad Homburg, Germany) and mounted in the prostheses using composite resin (Rebilda DC, Voco, Cuxhaven, Germany).
The male parts of the prototype NiTi-based attachments were assembled out of single components by a proprietary welding process allowing the maintenance of the superelasticity of the material (Admedes, Pforzheim, Germany). The NiTi rod connecting the base and retentive element had a diameter of 0.8 mm (Figure 3). Silicone rings were used as female parts and mounted in the prostheses using autopolymerizing resin (ProBase Cold, Ivoclar Vivadent, Schaan, Liechtenstein).

For the Locator prostheses, regular housings for retentive elements (Zest Dental Solutions, Carlsbad, CA, USA) were positioned in the prostheses following the manufacturer's guidelines using autopolymerizing resin (ProBase Cold, Ivoclar Vivadent, Schaan, Liechtenstein). Female elements varying in retention force (blue 0.7 kg; pink 1.4 kg; white 2.3 kg) were positioned in the metal housings using the manufacturer's service tool.

Linear strain gauges (LY11-0.6/120, 120 Ω reference resistance, Hottinger Baldwin Messtechnik GmbH, Darmstadt, Germany) were attached to the model material mesially and distally adjacent to the implants [22], utilizing a measurement amplifier (Quantum X, Hottinger Baldwin Messtechnik GmbH, Darmstadt, Germany) and analyzing software (jBEAM, AMS GmbH, Chemnitz, Germany) for recording strains in the surroundings of the supporting implants at a sampling rate of 50/s. For determining the level of misfit strain generated by prosthesis insertion [22], the strain gauges were set to zero and the prostheses were manually positioned, ensuring the engagement of the attachments by applying finger pressure. Strain development in the peri-implant region as a consequence of masticatory loading of the prostheses was measured with the patient model positioned in a universal testing machine (Z020, Zwick/Roell, Ulm, Germany), applying a static load of 50 N (Figure 4a) in the second premolar/first molar region. As a final step, the prostheses were removed from the supporting implants in the universal testing machine (Figure 4b), recording maximum separation force and strain development in the peri-implant area (Figure 5).

Comparative statistical analysis (R, The R Foundation for Statistical Computing, Vienna, Austria; www.R-project.org; accessed on 16 December 2022) was based on absolute mean strain values recorded at the four sensors mesially and distally adjacent to the implants. In subsequent order, the statistical tests applied were the Shapiro-Wilk normality test, Levene's test for homogeneity of variance, the Kruskal-Wallis test for non-parametric one-way analysis, and pairwise comparisons using Nemenyi's all-pairs test. Bonferroni correction was carried out in order to compensate for multiple testing, and the level of significance was set at α = 0.05 for all operations.
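The test sequence can be sketched as follows — in Python, with scikit-posthocs assumed as a substitute for the R implementation of Nemenyi's test; the strain values are hypothetical placeholders:

```python
# Sketch of the strain-analysis test sequence: Shapiro-Wilk normality,
# Levene's homogeneity of variance, Kruskal-Wallis, then Nemenyi's
# all-pairs post-hoc test. Strain values (um/m) are hypothetical.
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # assumed substitute for the R implementation

groups = {
    "telescope": [214.0, 180.0, 250.0, 200.0, 230.0],
    "locator_blue": [60.0, 55.0, 70.0, 58.0, 62.0],
    "niti": [46.0, 40.0, 50.0, 44.0, 48.0],
}
values = list(groups.values())

print("Shapiro-Wilk p:", stats.shapiro(np.concatenate(values)).pvalue)
print("Levene p:", stats.levene(*values).pvalue)
print("Kruskal-Wallis p:", stats.kruskal(*values).pvalue)
print(sp.posthoc_nemenyi(values))  # pairwise p-value matrix
```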
Retrospective Analysis of Complications
During the period from 2006 to 2020, a total of 78 edentulous jaws had been restored with implant-supported overdentures and were included for analysis (Table 1). The mean number of implants used per jaw ranged from 3.4 for Locator-type attachments to 4.2 for bar constructions and 4.5 for telescopic crowns. Only one implant was lost in the group of bar-retained overdentures, while three implants were lost in Locator-type prostheses, and a total of five implants were lost in the group of telescopic crowns. Only nine attachment-related complications were observed in bar-retained restorations, which was significantly fewer as compared to telescopic crowns, where 34 complications occurred (p = 0.00007). The greatest number of complications occurred with Locator-type attachments, requiring 56 interventions, which was significantly more as compared to telescopic crowns (p = 0.0004) and bars (p = 0.00000), respectively.
The most frequent complication in telescopic crowns was the decementation of primary crowns, followed by an adjustment of friction, and the greatest number of complications recorded in one jaw was sixteen. Apart from that, in one patient with telescopic crowns, an extremely high frequency of denture tooth fractures occurred. The replacement of retentive clips was the most frequent intervention in bar-retained overdentures, but no cumulation of complications occurred in these patients. The replacement of plastic inserts and loosening of male attachment parts were the most frequent complications observed in Locator-type attachments, and the greatest number of complications recorded in one jaw was ten. Despite using single standing Locator-type attachments, one patient was not able to adequately clean the supporting implants, requiring repetitive professional cleanings.
In summary, despite comparable patient characteristics, the use of Locators as attachment systems led to the greatest number of attachment-related complications as compared to individually fabricated bars or telescopic crowns.
Patient Survey
Only 2 out of 19 patients having received telescopic-crown-retained overdentures participated in the survey, while 7 out of 25 patients with Locator-type attachments and 5 out of 21 patients with bar-retained prostheses could be interviewed.
Out of the 14 survey participants only 4 had been informed upfront about the maintenance cost of their removable restorations, and 8 patients would have been willing to pay higher initial treatment costs if complications and maintenance needs could have been avoided or if the retention of the prosthesis could have already been optimized upon delivery. In the groups of patients restored with bars and telescopic crowns, only one patient complained about a somewhat bulky restoration, while the others claimed not to have any major problem with the restoration. Out of the seven patients restored with Locator-type attachments, five described a lack of retention or repeated changes in the female attachment parts as the main problem.
On a scale of 1-5, the mean overall satisfaction ranged from 3.50 for telescopic crowns to 3.57 for Locator-type attachments and 4.80 for bars (Figure 6). While no statistically significant difference could be observed between the attachment systems employed (Table 2), a trend towards greater satisfaction with bar-retained restorations was noted (overall satisfaction: bar vs. telescope p = 0.052; bar vs. Locator p = 0.070). Similarly, no significant difference in satisfaction with regard to retention was observed between the different attachment systems. However, Locator-type attachments showed a trend towards lower levels of satisfaction, which in comparison with bars (p = 0.022) was statistically significant prior to the Bonferroni correction for multiple testing (Table 2).
Overall, the greatest consistency in the treatment outcome was seen in bar-retained restorations, which is evidenced by comparably low standard deviations for overall satisfaction and prosthesis retention. The greatest variation was observed in prostheses employing Locator attachments.
Testing of a Novel Attachment System
The means of the absolute strain values were calculated for each attachment system and for each loading scenario (insertion, loading, removal), which are shown in Figure 7 in addition to the mean retention forces. Prostheses retained by telescopic crowns showed maximum strain development in all loading scenarios with the exception of prosthesis removal, where prostheses with white Locator inserts exhibited a slightly larger mean value. Additionally, the standard deviations calculated for telescopic-crown-retained prostheses were much greater as compared to all other groups. The NiTi attachments showed the lowest mean values for all the parameters recorded.

The Shapiro-Wilk normality test showed significant values for the loading scenarios of insertion (p = 0.012) and loading (p = 0.000), while no significant values were seen for removal (p = 0.354) or retention force (p = 0.629). Similarly, Levene's test for homogeneity of variances indicated a non-normal distribution of values and a non-homogeneity of variances for the loading scenarios of insertion (p = 0.016) and loading (p = 0.072), while no significant values were seen for removal (p = 0.066) or retention force (p = 0.472). The non-parametric one-way analysis applying the Kruskal-Wallis test showed significant values for all the parameters (insertion p = 0.012; loading p = 0.009; removal p = 0.012; retention force p = 0.000). The results of the subsequent pairwise comparisons are given in Table 3.

Table 3. Results (p-values) of pairwise comparisons (Nemenyi's all-pairs test; Bonferroni correction) between the attachment systems tested for prosthesis placement (a), loading (b), prosthesis removal (c) and retention force (d).
During removal, the prostheses with NiTi attachments caused significantly less strain development as compared to those with Locators with white retentive inserts (p = 0.027). An increase in retention force for Locator attachments from blue to pink to white as indicated by the manufacturer was observed, although the differences were not significant; however, a noticeable increase in strain development during prosthesis removal was seen when white retentive inserts were used. The NiTi attachments showed significantly lower retention values as compared to telescopic crowns (p = 0.021) and Locators with white retentive inserts (p = 0.004). The novel attachment system under all the loading scenarios led to the lowest strain recordings in the peri-implant area while maintaining prosthesis retention. During placement and removal, the flexibility of the male attachment part allowed for a common path of draw, while during loading, the settling of the prosthesis was accomplished by the flexing of the attachment system.
Discussion
Given the variety of treatment concepts, personal preferences, and socioeconomic settings, numerous authors have examined the clinical performance of attachment systems for implant-supported overdentures [3,5,[16][17][18]. As such, the retrospective clinical data presented here is rather confirmatory and seems to be in line with the current knowledge despite the inhomogeneity of the patients analyzed in terms of implant brand, distribution of implants, attachment system, and opposing dentition. With the focus of this paper being on the attachment system, only specific complications were reported, but of course unrelated problems such as fractures of denture teeth also occurred in the small cohort studied here. Furthermore, we did not report on biologic problems associated with implant therapy leading to destruction of both hard and soft tissues, nor on the maintenance regimen [27][28][29].
The use of bars as the attachment system led to the lowest number of technical complications [20], and hence to the highest level of patient satisfaction with both the overall treatment and retention of the prostheses. Despite constituting a labor- and cost-intensive option [12,16,21] similar to bars, telescopic crowns led to a much greater number of complications, which was surpassed by prefabricated Locator-type attachments, with loss of retention by far constituting the major problem [9,[11][12][13]. This also led to a significantly lower level of patient satisfaction [26].
From an industry perspective, replacing female retentive inserts might be seen as being positive and generating revenue. From a clinician's perspective, maintenance interventions are rarely profitable but often cause organizational problems with respect to patient scheduling, and the patients often are frustrated. Although a meaningful cost analysis was not possible based on the comparably small number of patients treated over a period of 15 years using a variety of dental laboratories and materials, it became obvious that patients had an unsatisfactory level of knowledge about potential complications and maintenance needs, including the associated costs [10,13]. Two clinical studies [4,12] have shown that maintenance costs reach considerable amounts relative to the initial treatment costs.
It may be seen as a further limitation of this retrospective clinical study that patients who had received implants supplementing natural abutments for the retention of removable restorations were not considered. Additionally, the questionnaire given to the patients was limited to technical aspects of the prostheses instead of using, e.g., the OHIP questionnaire [41].
Derived from conventional, tooth-supported, removable restorations, telescopic crowns are quite popular in some countries as attachment systems for implant-supported overdentures. Setting the retention of telescopic crowns is demanding, requires experience, and seems to differ drastically between tooth-supported and implant-supported restorations. On natural teeth, a drastic decrease in retention can be seen a few days after delivery following tooth movement directed by the removable prosthesis. Due to the lack of a periodontal ligament, such extensive movement seems not to occur when dental implants are being used as abutments for removable restorations [42]. Consequently, reducing the friction of telescopic crown attachments constituted a frequent problem in the patient cohort considered, which is in accordance with previous reports [17,19].
From a biomechanical perspective, the ability of attachment systems to transfer misfit loads and masticatory loads to the supporting implants and bone also has to be considered. Rigid attachment systems have been shown in independent in vitro studies to transfer potentially critical amounts of moment loads to the supporting implants during the masticatory loading of the removable prostheses [43,44]. Based on a clinical study using two implants to restore the edentulous maxilla and showing compromised outcomes, it may be inferred that mechanical overloading due to stiffness of the telescopic crowns used was a critical co-factor [45]. Similarly, a clinical study using four maxillary implants and Locator-type attachments reported an implant survival rate of only 86.2% after a one-year observation period, while the mean peri-implant bone loss was 1.01 ± 0.77 mm [41].
In an attempt to explain the clinical performance of telescopic crowns and Locatortype attachments as well as to search for solutions to the problems described above, the in vitro strain gauge study tried to compare these attachment systems. As hypothesized above, the stiffness of the attachment system obviously had an effect on the retention force and peri-implant strain development, as telescopic crowns and Locator attachments with white retentive inserts (greatest retention examined here) showed maximum values. As expected, the individually fabricated telescopic crowns showed the largest standard deviations, indicating that adjusting the retention is demanding in these attachments.
A trend of increased retention force was seen in the Locator-type attachments from blue to pink to white retention inserts, reflecting the manufacturer's information. Interestingly, this increase in retention force did cause an increase in peri-implant strain during prosthesis removal, but no clear trend was seen during prosthesis insertion and loading. It was noted that the retention force measured in Locator prostheses with blue and pink retention inserts was higher than the retention force expected based on the manufacturer's data (Table 4). It may be argued that positional mismatch between female and male attachment parts resulting from inevitable fabrication errors as well as the non-parallelism of the supporting implants may have caused this discrepancy. When using inserts with maximum retention, this effect may be hidden. Based on experience, patients remove their prostheses two to three times per day on average, making this a relevant loading scenario that may be detrimental not only to the attachment system but also to the implant. While wear at the retentive interface of the attachment system [24,25] was not tested as part of this study, it may be argued that changes in separating force may be caused by wear resulting from positional mismatch between male and female parts.
The novel NiTi-based attachment system showed retention values not significantly differing from Locator-type attachments with blue or pink retention inserts, but drastically reduced strain development during all loading scenarios, showing the lowest mean values for all parameters recorded. As it was not possible to experimentally determine the reaction force between the male and female parts of the attachment systems used here, the strains recorded in the surroundings of the supporting implants were used as a surrogate measure. Given the lower strain values in the NiTi-based attachments, it may be expected that wear phenomena of the attachment system can be reduced by providing sufficient lateral flexibility of the male part. Of course, additional and more sophisticated investigations addressing the long-term stability of such an attachment system, as well as its biocompatibility [33,34], are needed. Comparable to the situation of NiTi-based endodontic instruments, repeated deformation of the attachment may fatigue the material and ultimately lead to fractures. As it was the purpose of this study to test the basic applicability, fatigue testing has not yet been performed.
Besides the in vitro nature of the investigation based on one specific patient situation, several limitations have to be taken into account. The NiTi attachment represented the first prototype, requiring far too much space for clinical use, as the exact dimensions of the flexible element could not yet be properly set due to a lack of experimental data. While best representing clinical reality, finger pressure during prosthesis insertion could not be standardized and may have affected the strain readings. Similarly, despite a common vertical path of insertion having been established by adjusting the model base, it may be argued that the peak loading of implants was related to an offset direction of pull during prosthesis removal. Similar to variations in patient anatomy and implant positions, numerous loading positions do occur under clinical conditions. Based on a previous experiment [30], loading in the molar/premolar area seemed to be most relevant and hence was chosen as the sole loading scenario. Assuming that compressive and tensile strain are equally detrimental to the attachment system, implant components and alveolar bone, the means of the absolute strain values were calculated per attachment system. This approach not only led to high standard deviations affecting comparative statistics but also only allowed for comparisons on a relative scale.
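As a small illustration of the pooling step described above, the following sketch averages the absolute values of compressive (negative) and tensile (positive) strain readings per attachment system; the readings are invented placeholders, not measured data.

```python
# Sketch: pooling compressive (negative) and tensile (positive) strain readings
# into a single mean absolute strain per attachment system (placeholder data, µm/m).
readings = {
    "telescopic": [-220.0, 190.0, -240.0, 205.0],
    "niti": [-50.0, 42.0, -48.0, 44.0],
}

mean_abs = {name: sum(abs(v) for v in values) / len(values)
            for name, values in readings.items()}

for name, value in mean_abs.items():
    print(f"{name}: mean |strain| = {value:.1f} µm/m")
```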
Low Dose Rabbit Anti-Thymocyte Globulin as an Induction immunosuppression in High-Risk Renal Transplant Recipients
High immunologic risk recipients pose a major challenge in transplantation. High immunologic risk is associated with early acute rejection post-transplantation and leads to decreased patient and graft survival. The need for effective immunosuppression early post-transplantation has resulted in the emergence of induction protocols. Biologic agents used for induction are classified as monoclonal or polyclonal, and as either depleting or non-depleting antibody therapies. Choosing the best agent and optimal dose remains a controversial issue and depends on many factors including the proper assessment of the immunologic risk, efficacy and safety of the used agent. There is no consensus for the optimal agent used as induction immunosuppression in kidney transplantation. rATG use is associated with decreased incidence of delayed graft function and acute rejection episodes.
Introduction
The number of high-risk patients coming for renal transplantation is growing. Modern immunosuppression minimises the rate of acute rejection dramatically; however, this does not necessarily translate into improved patient and long-term allograft outcomes [1]. Defining patients as 'high risk for renal transplantation' is essential to improve transplant pharmacotherapy and depends on understanding variables of the recipient, donor and allograft [2]. There are several important reasons why induction therapy is utilised. First, the induction agents are highly immunosuppressive, allowing for a significant reduction in acute rejection rates and improved 1-year graft survival [3]. Second, due to their unique pharmacologic effect, these agents are often considered essential for use in patients at high risk of poor short-term outcomes [4].
The improvements in short-term outcomes gained from the use of induction therapies cannot be denied. Despite these advances, studies detailing the impact of induction therapy on long-term allograft function or survival are lacking. On the other hand, when using these potent immunosuppressive agents, the host defences are often profoundly impaired, which increases the risk of opportunistic infections and malignancy, as we indicated in our earlier publication (Table 1) [2,5]. The definition of high-risk renal transplantation is now extended to include factors such as donor and recipient age, donor cause of death, time to transplant, as well as several other pre- and post-procurement variables that influence graft survival. Induction with biologic agents is used in renal allograft recipients for two reasons (Table 2): (1) to augment the effect of initial immunosuppression in patients with high immunological risk [poor matching of human leukocyte antigen (HLA), sensitised patients, marginal donor transplant or retransplantation], or (2) to support minimisation protocols aimed at decreasing the use of steroids or calcineurin inhibitors (CNI), or both [2].

Table 1: Induction immunosuppressive strategies in renal transplantation [85].

Conventional drug induction
- Drugs used in the protocol: initial high doses of CNI + antimetabolite + steroids.
- Indications: low immunological risk transplantation.
- Advantages: cheap; less risk of infection; easier to monitor.
- Disadvantages: not suitable in high-risk cases; higher doses of drugs mean more adverse effects (e.g. DGF with CNI).

Antibody induction
- Drugs used in the protocol: depleting or non-depleting biological antibodies + lower doses of conventional drugs.
- Indications: high immunological risk transplantation.
- Advantages: potent and effective immunosuppression; allows lower doses of maintenance immunosuppression (and even avoidance of some of them).
- Disadvantages: complex prescription and monitoring.
Definition of Sensitization
Sensitization is defined as the presence of preformed alloantibodies in the serum of a prospective transplant recipient. In other words, it is pretransplant humoral alloimmunization. These alloantibodies are usually anti-HLA class I antibodies but may also include anti-HLA class II or non-HLA antibodies [6].
They are formed in response to prior exposure to foreign antigens [7]. Of note, in addition to this humoral sensitization, there also appears to be donor-reactive T cell sensitization ("cellular sensitization"), which is measured with a delayed-type hypersensitivity assay [8]. To what degree this phenomenon may be present pretransplant and manifest post-transplant is not clear.
The importance of Sensitization
United Network for Organ Sharing (UNOS) data indicated that approximately 20% of patients on the transplant waiting list have a panel reactive antibody (PRA) of greater than 20%, and in France, more than 50% of patients have reactive PRAs [9]. Hyperacute rejection (HAR) with immediate and universal graft loss was historically a common occurrence in these patients. This problem has been largely eliminated using sensitive crossmatch techniques. Still, sensitised patients wait longer for a compatible allograft, are at increased risk for early acute humoral rejection and have worse short-term and long-term outcomes [10][11][12].
The presence of anti-HLA antibodies post-transplantation is associated with acute and chronic rejection as well as decreased graft survival in various organs transplanted, including kidneys [13]. Renal transplant patients with post-transplant HLA alloantibodies were 5-6 times more likely to develop chronic rejection [14,15]. Animal and in vitro human models suggested that a repair response to donor-specific antibodies might result in arterial thickening associated with chronic rejection [16]. To what extent the patients with pretransplant sensitization may be at risk of post-transplant alloantibody-induced graft pathology is not quantifiable. It is not clear as to what extent the post-transplant humoral mediated rejection involves a de novo versus anamnestic response.
Causes of Sensitization
The primary sources for sensitization of kidney transplant patients are pregnancy, blood transfusions, and prior transplants. These situations may expose the recipients' immune system to foreign antigens, especially HLA molecules. If the immune system is not suppressed, the patient will appropriately produce antibodies against these alloantigens. These sensitising events appear to have a cumulative and interacting impact on the PRA [17]. The rate of sensitization seems greater in retransplantation as compared to first transplantation. The sensitising effect of pregnancy appears to be more important in first transplants than in retransplants. The greater the number of transfusions, the higher the level of PRA, and the effect seems to be modulated by sex and pregnancies [18]. It has been hypothesised that pregnant women may become sensitised at the time of delivery through exposure to paternal HLA antigens expressed by foetal cells [19].
Limiting alloimmunizing events
Avoiding transfusions in patients who may eventually need kidney transplantations would lessen exposure to alloantigens [20].
Pretransplantation Crossmatching
It is used primarily to prevent hyperacute rejection, an immediate and irreversible HLA class I antibody-mediated rejection of the allograft. Sophisticated techniques of crossmatching have extended their use to identifying the presence of alloantibodies that do not produce hyperacute rejection but are associated with less fulminant adverse immunologic outcomes [21].
Immunosuppression
Aspects of immunosuppression of highly sensitised kidney transplant patients include desensitisation therapy prior to transplantation, induction therapy at transplantation, maintenance therapy after transplantation, and rescue therapy in the event of acute humoral rejection [22]. The corresponding author uses ATG in highly sensitised patients and also in transplantation across a positive crossmatch (in conjunction with plasmapheresis).
Strategy for rabbit Anti-Thymocyte Globulin (rATG) dose
The seven-day course consisted of rATG at 1.5 mg/kg intravenously intra-operatively, then daily for six days.
Dose preparation and Infusion
i. 100 mg IV hydrocortisone, 10 mg IV chlorpheniramine and 1 g oral paracetamol given 60 minutes before each dose.
ii. The initial main infusion should be 1.5 mg/kg dissolved in 500 ml of 0.9% normal saline (maximum concentration 1 mg/2 ml) infused centrally over 12 hours.
iii. Measure and record vital signs every 15 minutes for the first hour, every 30 minutes for next hour and then hourly until the infusion is complete.
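As an illustrative aid (not part of the protocol above), the following sketch computes the per-dose rATG amount and checks the dilution constraint for a given body weight; the patient weight and the function names are hypothetical.

```python
# Illustrative sketch (hypothetical): per-dose rATG amount and dilution check
# for the 1.5 mg/kg regimen described above.

DOSE_MG_PER_KG = 1.5
MAX_CONC_MG_PER_ML = 1 / 2          # maximum concentration: 1 mg per 2 ml
DILUENT_VOLUME_ML = 500             # 0.9% normal saline

def ratg_dose(weight_kg):
    """Single rATG dose in mg for the 1.5 mg/kg regimen."""
    return DOSE_MG_PER_KG * weight_kg

def dilution_ok(dose_mg, diluent_ml=DILUENT_VOLUME_ML):
    """True if the dose diluted in the given volume stays at or below 1 mg/2 ml."""
    return dose_mg / diluent_ml <= MAX_CONC_MG_PER_ML

weight = 70.0                        # hypothetical patient weight in kg
dose = ratg_dose(weight)             # 105 mg per dose for this example
print(f"Dose: {dose:.0f} mg; dilution within limit: {dilution_ok(dose)}")
# Seven-day course (intraoperative dose plus six daily doses): 7 * dose mg in total.
```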
Rabbit ATG in combination with maintenance therapy
Intravenous methylprednisolone (7 mg/kg) is given intraoperatively, then daily with rapid tapering until oral prednisolone 20 mg daily is reached. Antimetabolites (such as azathioprine or mycophenolate mofetil) are discontinued during the rATG course and are recommenced only when the total white cell count (WCC) is > 4.0 × 10⁹/L. CNIs as maintenance immunosuppression are commenced during the postoperative period when the urine output is good and the serum creatinine drops to < 3.5 mg/dl. Tacrolimus is initiated at 0.1 mg/kg orally twice daily and adjusted to achieve a target trough level depending on the patient's degree of immunologic risk and the time elapsed since the transplant. If cyclosporine is used, it is initiated at 4 mg/kg orally twice daily. Doses of cyclosporine are subsequently adjusted to attain a target trough level of 150 to 300 ng/mL (whole blood by immunoassay) during the first three months post-transplant and 75 to 150 ng/mL thereafter. As an alternative to measuring cyclosporine trough levels, the C2 (peak) level two hours post dose can also be used. Doses of cyclosporine are then adjusted to target C2 levels of 800 to 1400 ng/mL during the first three months post-transplant and 400 to 600 ng/mL after this period.
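The period-dependent cyclosporine targets above can be expressed compactly. The following sketch is only an illustration of that lookup, with hypothetical function names; it is not clinical decision software.

```python
# Illustrative lookup of the cyclosporine target ranges described above
# (trough C0 and two-hour C2 levels, ng/mL), keyed on months post-transplant.

def cyclosporine_target(months_post_tx, level_type="C0"):
    """Return the (low, high) target range in ng/mL for C0 (trough) or C2 (peak)."""
    early = months_post_tx <= 3
    if level_type == "C0":
        return (150.0, 300.0) if early else (75.0, 150.0)
    if level_type == "C2":
        return (800.0, 1400.0) if early else (400.0, 600.0)
    raise ValueError("level_type must be 'C0' or 'C2'")

print(cyclosporine_target(1, "C0"))   # (150.0, 300.0) during the first three months
print(cyclosporine_target(6, "C2"))   # (400.0, 600.0) thereafter
```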
Non-immunosuppressive components
Both the donor and recipient should be tested for cytomegalovirus (CMV) status; valganciclovir is used as primary prophylaxis in patients at risk for CMV infection. Prophylactic trimethoprim-sulfamethoxazole should also be administered to prevent Pneumocystis jiroveci pneumonia, sepsis, and urinary tract infection.
Other induction agents
Many agents are clinically available to target the components of alloimmunity that are amplified shortly after transplantation. Several agents have been studied in clinical trials in combination with standard maintenance regimens and shown to be efficacious, but few prospective studies have compared the prominent agents, and none of them has distinguished itself as clearly superior compared to the others. Most trials have used the surrogate endpoint of early acute rejection rather than a more definitive endpoint of patient or graft survival. Almost all trials have layered "specialised" induction agents on a background of bolus methylprednisolone. As with all forms of aggressive immunosuppression, induction therapy using rATG is associated with increased incidences of infectious and malignant complications [23]. The risks of post-transplant lymphoproliferative disease (PTLD) and death from malignancy are higher when combined with conventional maintenance immunosuppression [24,25]. Induction therapy refers to the use of depleting antibodies [rabbit anti-thymocyte globulin (rATG), muromonab-CD3 (OKT3), anti-CD52 (alemtuzumab), anti-CD20 (rituximab)] or non-depleting antibodies (anti-CD25 monoclonal antibodies/IL-2 receptor antibodies) within the first 2-6 weeks after transplantation [26].
Mechanism of action of ATG
Rabbit ATG is a polyclonal lymphocyte-depleting antibody that contains a wide variety of antibody specificities: immune response antigens, adhesion and cell trafficking molecules, and molecules involved in various pathways. The exact mechanism of immunosuppressive action is not fully elucidated; it appears to include clearance of peripheral antigen-reactive T lymphocytes (T cells) and modulation of T-cell activation, homing, and cytotoxicity. Although rATG has been shown to deplete a variety of immune cells, the primary mechanism of action is T-cell depletion. rATG also modulates cell surface expression of adhesion molecules and chemokine receptors, thus interfering with leukocyte-endothelial interactions that play a role in ischaemia/reperfusion injury (IRI) [27]. It also has a wide range of anti-B cell actions, as illustrated in Figure 1.
Dosing
The use of rATG as induction therapy for the high-risk renal transplant recipient is off-label, so there is no universal agreement on the induction dose [27]. Strategy 2: seven days' induction, with 1.5 mg/kg administered intraoperatively before anastomosis of the transplant, followed by 1.5 mg/kg/dose for six consecutive days [29]. There was no significant difference in acute rejection rate or patient and graft survival between the 3-day and 7-day induction strategies; the three-day regimen is associated with a shorter hospital stay and leads to more sustained lymphocyte depletion [29]. Strategy 4: rabbit ATG total cumulative dose of 1-6 mg/kg/dose for a duration ranging from one to ten days. When differences between total cumulative doses in high-risk patients were assessed, there were no differences in immunosuppressive results between totals above or below 7.5 mg/kg; a total dose of less than 7.5 mg/kg was associated with less infection and lymphoma [31], whereas decreasing the total cumulative dose to less than 3 mg/kg was associated with more rejection episodes [32].
Rabbit anti-thymocyte globulin induction versus no induction
Two short-term randomised trials report that recipients who had received a kidney from a deceased donor had a reduced rejection rate with rATG [40,41]. A parallel six-month multicentre European trial included 555 recipients who had received a kidney transplant from a deceased donor. Patients were randomised to receive a CNI (either tacrolimus or cyclosporine), steroids, azathioprine, plus rATG induction (1.25 mg/kg for ten days), or tacrolimus, azathioprine, plus steroids, without induction therapy [40]. Patients treated with rATG-tacrolimus had the lowest incidence of biopsy-proven acute rejection (BPAR) (15.1% vs. 21.2% for rATG-cyclosporine and 25.4% for tacrolimus without induction; P = ns). However, patient and graft survival rates, as well as serum creatinine levels, were similar in both groups at six months post-transplantation. The patients receiving rATG experienced a significantly higher incidence of leucopenia, thrombocytopenia, serum sickness, fever, and CMV infection (P < 0.05) [40]. Nevertheless, these trials have demonstrated that ATG induction therapy can delay the initiation of CNIs and lower the incidence of early rejection episodes, at the expense of reversible thrombocytopenia and leucopenia and increased infections, particularly CMV. This raises the question as to whether universal anti-CMV prophylaxis in this setting is required and, if it is, whether it will increase the financial costs incurred during the early post-transplant period [41].
Rabbit anti-thymocyte globulin induction versus nondepleting agents
A recent meta-analysis of six randomised studies that included a total of 853 patients showed no differences between ATG (all preparations) and basiliximab regarding outcomes, including BPAR, delayed graft function, graft loss, and patient death; results were similar across the two treatment groups with regard to biopsy-proven rejection. Conversely, a recent study that included de novo high-risk HLA-sensitised recipients who had received a kidney from a deceased donor showed different results. A total of 227 patients were randomly assigned to receive either rATG or daclizumab if they met one of the following risk factors: current panel reactive antibody (PRA) > 30%, peak PRA > 50%, loss of a first kidney graft from rejection within two years of transplantation, or loss of two or three previous grafts. Maintenance immunosuppression was based on tacrolimus, mycophenolate mofetil, and steroids. Compared with the daclizumab group, patients treated with rATG had a lower incidence of both BPAR (15% vs. 27.2%; P = 0.016) and steroid-resistant acute rejection (2.7% vs. 14.9%; P = 0.002) at one year. However, one-year graft and patient survival rates were similar between the two groups. When those who had suffered a graft rejection were compared with those who had not, overall graft survival was significantly higher in the rejection-free group (87.2% vs. 75%; P = 0.037). The authors concluded that, among high immunological risk renal transplant recipients, rATG was superior to daclizumab in preventing BPAR, but there was no significant benefit to one-year graft or patient survival rates [43].
Rabbit anti-thymocyte globulin induction versus other depleting agents
Alemtuzumab, a monoclonal humanised depleting antibody targeting CD52 receptors on both B-cells and T-cells, has been used off-label for induction therapy and to treat acute rejection post-transplantation. The typical induction dose in renal transplantation has not yet been established. When alemtuzumab was compared to rATG as induction therapy in a randomised controlled trial, patient and graft survival rates were similar [44]. Patients were divided into two groups: a high-risk group receiving induction with either alemtuzumab or rATG, and a low-risk group receiving induction with either alemtuzumab or basiliximab. At one-year follow-up, the alemtuzumab group was superior to the other groups with regard to the rate of BPAR (5% versus 17%, P 0.001) [44]. At 3-year follow-up, alemtuzumab was significantly superior to basiliximab concerning the rate of BPAR (P = 0.003); when alemtuzumab was compared to rATG, there was no difference in outcomes (P = 0.63) [44]. In a prospective randomised trial comparing alemtuzumab versus rATG, the immunological risk and maintenance immunosuppression were comparable, and there was no difference in graft survival between alemtuzumab and rATG (95% vs. 96%) [45]. Acute rejection episodes were significantly more frequent in the rATG group than in the alemtuzumab group (P = 0.007); most of the acute rejection episodes in the rATG group occurred in standard criteria donor transplants. At three years, similar outcomes were reported from a larger study of the same group (n=222).
Safety and Monitoring
General tolerability: Rabbit ATG is relatively safe and well tolerated in renal transplantation. Adverse events of rabbit ATG differ in onset and severity, but in general they are reversible and can be treated [47][48][49][50]. Compared with other induction agents, rabbit ATG was comparable to basiliximab concerning the incidence of side effects, and no significant difference in side effects was seen when compared to daclizumab [51-53].
Haematological complications: Haematological abnormalities are common with the use of rabbit ATG due to the non-specific effect on all blood cells. Cross-reacting antibodies against nonlymphoid cells could develop and may lead to thrombosis, thrombocytopenia, neutropenia and haemolytic anaemia. Leukopenia is the most common haematological abnormality and could affect up to 60% of the patients during the period of induction [54].
Cytokine release syndrome: It is an acute and immediate complication of rabbit ATG. It is characterised by a systemic inflammatory response, as activated T lymphocytes release large amounts of cytokines such as tumour necrosis factor-α and interleukin-6. Clinically, it may be manifested by fever, chills, nausea, diarrhoea, vomiting, headache, rashes, tachycardia, dyspnoea and/or pulmonary oedema. Severe manifestations, though rare, can occur, such as cardiorespiratory dysfunction or, more rarely, death [55]. The incidence of cytokine release syndrome can be decreased by premedication with corticosteroids, diphenhydramine and acetaminophen, or by slowing the infusion rate. If any serious manifestation occurs, the infusion must be stopped and epinephrine used immediately [55].
Serum sickness: It is a delayed-onset, immunologically mediated reaction directed against rabbit protein present in the serum preparation. The onset usually occurs 6-21 days following rabbit ATG administration. Clinically, the patient may develop fever, arthralgia, rashes and lymphadenopathy. Jaw pain is a common manifestation and should raise suspicion in patients treated with rabbit ATG. The onset of serum sickness may be preceded by pain, erythema, and pruritus at the injection site. Symptoms vary according to the intensity of the antigen-antibody reaction [56]. Serum sickness is a self-limiting disease, as it subsides when the antigen is removed from the circulation. Various treatment options can be used to hasten recovery. The first line of therapy is a high dose of systemic corticosteroid. Antihistamines, analgesics and antipyretics are used as adjuvants to steroids. Therapeutic plasma exchange (TPE) can be used as rescue treatment if other measures fail; TPE removes circulating immune complexes and decreases the time to symptom recovery [56-58].
Post-transplant diabetes mellitus (PTDM):
Post-transplant diabetes mellitus (PTDM) is a well-known complication that may follow renal transplantation. Many factors could contribute to increasing the incidence of PTDM, including rabbit ATG induction in a proportion of patients. The rate varies between different groups (1.1-7.3%) and is in part related to maintenance immunosuppression [59-61].
Infections
Bacterial infections: It is unclear whether rabbit ATG has an impact on the incidence of bacterial infection in the post-transplant period. Many factors may increase the incidence of bacterial infection, such as surgical complications, increased immunosuppression, and urinary or vascular catheters. The most commonly reported infections after rabbit ATG induction are bacterial. The most common infection after kidney transplantation is urinary tract infection, followed by wound infection [62][63][64][65][66][67].
Viral infections
Cytomegalovirus (CMV): The association between rabbit ATG and CMV infection is well established. The incidence and timing of infection depend on the dose of rabbit ATG during the induction period, the CMV serology of donor and recipient before transplantation, and the CMV prophylaxis status after transplantation. Many prospective randomised controlled trials could not detect any significant increase in the incidence of CMV infection after rabbit ATG induction compared with other types of induction when adequate CMV prophylaxis was given. On the contrary, the rate of CMV infection was higher with rabbit ATG compared to basiliximab when CMV prophylaxis was not given [68][69][70][71].
BK virus (BKV):
The rates of BK viremia and BKV-associated nephropathy are linked to rabbit ATG induction in renal transplantation, and rabbit ATG induction is an independent risk factor for BKV-associated nephropathy. BKV-associated nephropathy affects 1-10% of renal transplant recipients and is linked to the strength of immunosuppression. A higher incidence of BK viremia is related to a higher dose of rabbit ATG [72][73][74][75].
Fungal infections:
In the absence of prophylaxis, invasive fungal infections in renal transplantation have been reported with rabbit ATG induction. The rate of invasive fungal infections is linked to rATG when it is used in the treatment of steroid-resistant acute rejection, whereas induction with rabbit ATG does not seem to increase the risk of fungal infections [52,61,[76][77].
Malignancy:
In general, the risk of malignancy, including PTLD, is increased after organ transplantation and is linked to various mechanisms. However, the studies that reported a lower incidence of malignancies, including PTLD, in patients who had induction therapy with rabbit ATG may be blighted by inadequate sample size, non-standardization of doses of maintenance immunosuppression, and inadequate follow-up periods [78][79][80][81][82][83][84][85].
Conclusion
There is no consensus for the optimal agent used as induction immunosuppression in kidney transplantation. rATG use is associated with decreased incidence of delayed graft function and acute rejection episodes. Its use facilitates steroid-minimising protocols without compromising long-term graft survival. The safety profile is acceptable and is comparable to other induction regimens using antibodies. There is no standardised protocol for the optimal dose of rATG, and there is wide variability between centres. In order to maximise allograft survival and patient outcomes, it is crucial to tailor the regimen for each patient.
The cAMP effector EPAC activates Elk1 transcription factor in prostate smooth muscle, and is a minor regulator of α1-adrenergic contraction
Background Prostate smooth muscle tone is regulated by α1-adrenoceptor-induced contraction and cAMP-mediated relaxation. EPAC is an effector of cAMP, being involved in smooth muscle relaxation and cell cycle control outside the lower urinary tract. Here, we investigated the expression and function of EPAC in human prostate tissues from patients undergoing radical prostatectomy. Results mRNA and protein expression of EPAC was detected in all prostate tissues by RT-PCR and Western blot analysis. Immunoreactivity was observed in stromal cells, and colocalized with immunofluorescence for α-smooth muscle actin and calponin. Under normal conditions, noradrenaline- or phenylephrine-induced contraction of prostate strips in the organ bath was not affected by the EPAC activator pCPT (SP-8-pCPT-2′-O-Me-cAMPS.NA) (30 μM). However, when the cyclooxygenase inhibitor indomethacin (50 μM) was added, EPAC activators pCPT and OME (8-CPT-2′-O-Me-cAMP.Na) (30 μM) significantly reduced contractions by low concentrations of phenylephrine. These effects were not observed on noradrenaline-induced contraction. OME and pCPT caused phosphorylation of the transcription factor Elk1 in prostate tissues. Elk1 activation was confirmed by EMSA (electrophoretic mobility shift assay), where OME and pCPT increased Elk1 binding to a specific DNA probe. Conclusions EPAC activation may reduce α1-adrenergic prostate contraction in the human prostate, although this effect is masked by cyclooxygenases and β-adrenoceptors. A main EPAC function in the human prostate may be the regulation of the transcription factor Elk1.
Background
Cyclic adenosine-3′,5′-monophosphate (cAMP) mediates smooth muscle relaxation in the prostate and other organs [1][2][3]. Prostate smooth muscle tone depends on β-adrenoceptor/cAMP-mediated relaxation and α1-adrenoceptor-induced contraction, besides other mechanisms [4][5][6]. In patients with benign prostate syndrome (BPS), enhanced prostate smooth muscle tone and prostate enlargement may cause lower urinary tract symptoms (LUTS) [7][8][9]. Prostate tone and growth may be targeted by treatment with α1-blockers and 5α-reductase inhibitors, which are important therapeutic options for medical treatment of LUTS in patients with BPS [10]. Due to the high incidence of BPS and LUTS, together with the importance of smooth muscle contraction for therapy, the function of the adrenergic system in the prostate and its pharmacologic modulation are of high interest [10]. cAMP is produced by adenylyl cyclases, upon stimulation of β-adrenoceptors or with cyclooxygenase-derived prostaglandins [11,12]. After its formation, cAMP activates protein kinase A (PKA) to induce relaxation, but causes parallel interventions into gene transcription [1]. As an alternative to PKA activation, cAMP may activate "exchange proteins directly activated by cAMP" (EPAC) [11,13]. EPACs represent a group of cAMP effectors, which mediate cAMP effects independently from PKA [11,13]. Both isoforms, EPAC1 and EPAC2, were recently described from different cell types and organs, including smooth muscle outside the lower urinary tract [11,[13][14][15][16]. Motoric effects of EPACs in smooth muscle have been addressed only recently, using EPAC-specific activators [16]. Activation of EPACs by these activators caused relaxation of airway smooth muscle [16]. Besides these motoric effects, EPAC activation by cAMP or specific activators results in activation of different transcription factors [17], which is involved in EPAC-mediated regulation of cell cycle [11,13,17]. Previous studies suggested that Elk1 may be activated by cAMP-dependent mechanisms in different organs and cell types [18][19][20][21][22][23]. Of note, EPAC and cAMP-dependent Elk1 activation are involved in hyperplastic alterations outside the lower urinary tract [13,18,19,24,25]. Although hyperplasia is of utmost importance for BPS, EPAC-driven Elk1 activation has not been investigated in the prostate.
Prostate smooth muscle tone is balanced by cAMP-mediated relaxation and α1-adrenergic contraction, while prostate growth requires the activation of transcription factors [24]. Prostate growth depends on the concerted interaction between growth factors, hormones and G protein-coupled receptors, although little is known about their intracellular mediators [26,27]. Prostate growth and contraction were regarded as separate phenomena for decades ("dynamic" and "static" component) [9]. However, it has been recently postulated that both components may be coupled to each other, although detailed mechanisms still remain an enigma [4,8,28].
In the lower urinary tract, expression and function of EPACs has not been investigated to date. Here, we investigated the expression of EPAC1 and EPAC2 in the human prostate, and studied the effects of EPAC activators on adrenergic prostate contraction and on the transcription factor Elk1.
Human prostate tissue
Human prostate tissues were obtained from patients undergoing radical prostatectomy for prostate cancer, but without previous transurethral resection of the prostate (TURP) (n=61). The research was carried out in accordance with the Declaration of Helsinki of the World Medical Association, and has been approved by the ethics committee of the Ludwig-Maximilians University, Munich, Germany. Informed consent was obtained from each patient. All samples were taken from the periurethral zone, and analyzed anonymously. These tissue samples did not exhibit histological signs of neoplasia, cancer, or inflammation. Most prostate tumors are located to the peripheral zone [29,30].
Sampling and in vitro stimulation
For analysis of EPAC expression, samples of prostate tissue were shock frozen in liquid nitrogen directly after prostatectomy and pathological examination. For myographic measurements of contractility, tissues were handled as described below. For in vitro stimulation with EPAC activators, prostate tissue specimens were prepared as small strips (2-3 mm × 1 mm) and allocated to three dishes of a 6-well plate containing Custodiol solution. During the experiments, plates were kept at 37°C under continuous shaking. For stimulation with EPAC activators, 10 mM stock solutions were added in the required intervals and volumes to obtain a final concentration of 30 μM pCPT or OME, while another sample remained unstimulated. After 2 h, stimulated and unstimulated samples were simultaneously shock frozen in liquid nitrogen. Samples were stored at −80°C until Western blot analysis was performed.
Quantitative RT-PCR
RNA from frozen prostate tissues was isolated using the RNeasy Mini kit (Qiagen, Hilden, Germany). For isolation, 30 mg of tissue was homogenized using the FastPrep®-24 system with matrix A (MP Biomedicals, Illkirch, France). RNA concentrations were measured spectrophotometrically. Reverse transcription to cDNA was performed with 1 μg of isolated RNA using the Reverse Transcription System (Promega, Madison, WI, USA). RT-PCR for EPAC1 and EPAC2 was performed with a Roche Light Cycler (Roche, Basel, Switzerland) using primers provided by SA Biosciences (Frederick, MD, USA) as ready-to-use mixes, based on the RefSeq accession numbers NM_006105 for EPAC1 (human RAPGEF3) and NM_007023 for EPAC2 (human RAPGEF4). PCR reactions were performed in a volume of 25 μl containing 5 μl LightCycler® FastStart DNA Master Plus SYBR Green I (Roche, Basel, Switzerland), 1 μl template, 1 μl primer, and 18 μl water. Denaturation was performed for 10 min at 95°C, and amplification with 45 cycles of 15 sec at 95°C followed by 60 sec at 60°C. The specificity of primers and amplification was demonstrated by subsequent analysis of melting points, which revealed single peaks for each target. The results were expressed as the number of cycles (Ct) at which the fluorescence signal exceeded a defined threshold.
Western blot analysis
Frozen prostate tissues were homogenized in a buffer containing 25 mM Tris/HCl, 10 μM phenylmethanesulfonyl fluoride, 1 mM benzamidine, and 10 μg/ml leupeptine hemisulfate, using the FastPrep ® -24 system with matrix A (MP Biomedicals, Illkirch, France). After brief centrifugation, supernatants were assayed for protein concentration using the Dc-Assay kit (Biorad, Munich, Germany) and boiled for 10 min with sodium dodecyl sulfate (SDS) sample buffer (Roth, Karlsruhe, Germany). Western blot analyses of samples were performed as previously described [31]. For detection, mouse anti EPAC1 (5D3) antibody, mouse anti EPAC2 (5B1) antibody (both from New England Biolabs, Ipswich, MA, USA), mouse anti phospho-Elk1 (serine 383) antibody (B-4), mouse anti Elk1 antibody (3H6D12), mouse anti pan-cytokeratin antibody (C11), mouse anti prostate specific antigen (PSA) antibody (A67-B/E3), or mouse anti β-actin antibody (all from Santa Cruz Biotechnology, Santa Cruz, CA, USA) were used. Blots were developed with enhanced chemiluminescence (ECL) using ECL Hyperfilm (GE Healthcare, Freiburg, Germany). Intensities of the resulting bands were quantified using Image J (NIH, Bethesda, Maryland, USA). In stimulation experiments with EPAC activators, samples without and with activator were compared on one blot, and subjected to semiquantitative quantification. For quantification, samples without activator were set to 100%, and data of stimulated samples from the same prostate were expressed as % of these unstimulated samples.
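The normalization step described above (the unstimulated sample of each prostate set to 100%, the stimulated sample from the same prostate expressed relative to it) can be sketched as follows; the band intensity values are invented placeholders, not measured data.

```python
# Sketch of the semiquantitative normalization described above:
# the unstimulated sample of each prostate is set to 100%, and the stimulated
# sample from the same prostate is expressed as a percentage of it.
# Intensity values are hypothetical placeholders.

samples = [
    {"prostate": "P1", "unstimulated": 1250.0, "ome_stimulated": 3400.0},
    {"prostate": "P2", "unstimulated": 980.0, "ome_stimulated": 2650.0},
]

for s in samples:
    percent_of_control = 100.0 * s["ome_stimulated"] / s["unstimulated"]
    print(f'{s["prostate"]}: OME-stimulated = {percent_of_control:.0f}% of unstimulated control')
```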
Immunohistochemistry
Sections (6-8 μm) from frozen tissues were stained by an indirect immunoperoxidase technique, as previously described [31]. For detection of EPAC1 and EPAC2, mouse anti EPAC1 antibody (5D3) or EPAC2 antibody (5B1) (New England Biolabs, Ipswich, MA, USA) were used in dilutions of 1:200. Biotinylated secondary horse anti mouse antibody (Vector Laboratories, Burlingame, CA, USA) and avidin-biotin-peroxidase complex (Vector Laboratories, Burlingame, CA, USA) were sequentially applied for 30 minutes each. Staining was performed with the AEC peroxidase substrate kit (Vector Laboratories, Burlingame, CA, USA). Finally, all sections were counterstained with hemalaun. Control stainings without primary antibodies did not yield any immunoreactivity.
Tension measurements
Prostate strips (6×3×3 mm) were mounted in 5 ml aerated (95% O2 and 5% CO2) tissue baths (Danish Myotechnology, Aarhus, Denmark), containing Krebs-Henseleit solution (37°C, pH 7.4). Preparations were stretched to 0.5 g and left to equilibrate for 45 min to attain a stable resting tone. After the equilibration period, maximum contraction induced by 80 mM KCl was assessed. Subsequently, chambers were washed three times with Krebs-Henseleit solution for a total of 30 min. Cumulative concentration-response curves for noradrenaline or for the α1-adrenergic agonist phenylephrine were created before and after addition of EPAC activators (30 μM pCPT, or 30 μM OME). In experiments including indomethacin, this was added already during the equilibration period, and applied throughout the entire experiment.
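Organ bath data of this kind are commonly summarized by fitting cumulative concentration-response curves to a Hill equation. The following sketch is only an illustration of that common approach with invented placeholder values; it is not the analysis performed by the authors.

```python
# Sketch (illustrative only): fitting a cumulative concentration-response curve
# to a Hill equation, as is common for organ bath data. Values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(log_conc, emax, log_ec50, n_h):
    """Hill equation on a log10 concentration scale."""
    return emax / (1.0 + 10 ** ((log_ec50 - log_conc) * n_h))

# Hypothetical phenylephrine concentrations (M) and contractions (% of KCl maximum).
log_conc = np.log10([1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3])
response = np.array([2.0, 8.0, 30.0, 70.0, 95.0, 100.0])

params, _ = curve_fit(hill, log_conc, response, p0=[100.0, -6.0, 1.0])
emax, log_ec50, n_h = params
print(f"Emax = {emax:.1f}% of KCl, EC50 = {10 ** log_ec50:.2e} M, Hill slope = {n_h:.2f}")
```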
EMSA
Activation of Elk1 was investigated by non-radioactive electrophoretic mobility shift assay (EMSA). In this assay, the binding of Elk1 to a biotin-labelled, Elk1-specific DNA probe is determined. Assays were performed using a commercially available kit (Affymetrix, Santa Clara, CA, USA) according to the manufacturer's instructions. In brief, prostate tissues were homogenized as described for Western blot analysis, but not boiled with sample buffer. After protein determination, 20 μg of protein were incubated with the biotin-labelled DNA probe with the sequence 5′-TTTGCAAAATGCAGGAATTGTTTTCACAGT-3′. After incubation, samples were subjected to electrophoresis in native, non-denaturing acrylamide gels (6%), and subsequently blotted on nylon membranes, where detection for biotin was performed with peroxidase-coupled streptavidin in combination with ECL. Intensities of the resulting bands were quantified using Image J (NIH, Bethesda, Maryland, USA). Experimental conditions were verified by preparation of a negative control using an unlabelled probe provided by the manufacturer. This cold probe was added to a sample besides the labelled probe, resulting in competition and disappearance of bands.
Statistical analysis
Data are presented as means ± standard error of the mean (SEM) with the indicated number (n) of experiments. Two-tailed Student's t test was used for paired or unpaired observations. P values <0.05 were considered statistically significant.
Quantitative RT-PCR
Expression of EPAC1 and EPAC2 mRNA was detected in prostate samples from all investigated patients (n=5). Average Ct was 26 ± 0.3 for EPAC1, and 25 ± 0.2 for EPAC2, while the housekeeping gene 18S rRNA was detectable with an average Ct of 11 ± 0.2 (Figure 1A).
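For readers unfamiliar with Ct values, the following sketch illustrates how relative abundance against the 18S rRNA reference could be derived from the reported mean Ct values via the ΔCt method; this calculation is an illustration (assuming ideal amplification efficiency) and is not part of the authors' analysis.

```python
# Illustrative delta-Ct calculation using the mean Ct values reported above.
# Lower Ct means more template; assuming 100% efficiency, abundance relative
# to 18S rRNA is 2**(-dCt).
ct = {"EPAC1": 26.0, "EPAC2": 25.0, "18S_rRNA": 11.0}

for gene in ("EPAC1", "EPAC2"):
    d_ct = ct[gene] - ct["18S_rRNA"]
    rel = 2 ** (-d_ct)
    print(f"{gene}: dCt = {d_ct:.1f}, abundance relative to 18S rRNA = {rel:.2e}")
```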
Western blot analysis of EPAC expression
Western blot analysis using isoform-specific EPAC antibodies demonstrated variable protein expression of EPAC1 and EPAC2 in prostate tissues of all investigated patients (n=5). Detected bands matched the expected sizes for both isoforms (96 kDa for EPAC1, 115 kDa for EPAC2) (Figure 1B). The intensity of bands for EPAC1 and EPAC2 varied between different patients (Figure 1B). The content of the epithelial markers pan-cytokeratin (37-55 kDa) and PSA (30 kDa) varied between prostates of different patients (Figure 1B). The content of β-actin was similar in samples of different patients (Figure 1B).
Double fluorescence staining
Fluorescence staining of prostate sections resulted in immunoreactivity for EPAC1 and EPAC2, and for the smooth muscle markers α-smooth muscle actin (αSMA) and calponin in prostate tissues from all investigated patients (n=5) (Figure 2). Virtually all αSMA- and calponin-positive cells were immunoreactive for EPAC1 and EPAC2 (Figure 2). This colocalization was indicated by yellow color in merged pictures after overlay. No immunoreactivities were observed in control experiments, where the primary antibodies were replaced by PBS.
After double labelling for EPAC1 and EPAC2, immunoreactivity for EPAC1 was strongest in epithelial cells, but also observed in the stroma (Figure 3). In contrast, immunoreactivity for EPAC2 was strong in the stroma, but almost absent in epithelial cells (Figure 3). Colocalization of EPAC1 and EPAC2 was not observed (Figure 3). After double labelling for Elk1 and calponin, immunoreactivity for Elk1 was observed in the stroma (Figure 3). In epithelial cells, almost no Elk1 immunoreactivity was observed ( Figure 3). In merged pictures, yellow color indicating colocalization of Elk1 and calponin was weak, but detectable ( Figure 3).
Immunohistochemical staining
Immunohistochemical staining of prostate sections using EPAC1 and EPAC2 antibodies resulted in immunoreactivites in stromal cells (n=5 patients) ( Figure 4). In control experiments, where antibodies were replaced by PBS, no immunoreactivities were observed ( Figure 4).
In contrast, pCPT was without effects on noradrenaline-induced contractions, regardless of whether indomethacin was added or not (Figure 6A,B). Similarly, OME was without effect on noradrenaline-induced contractions in the presence of indomethacin (Figure 6C).
Western blot analysis of Elk1 phosphorylation
Using a phospho-specific antibody, the effect of OME and pCPT on Elk1 phosphorylation was determined by Western blot analysis. Incubation of prostate tissues with OME or pCPT for 2 h significantly increased the phosphorylation state of Elk1. After incubation with OME (30 μM), Elk1 phosphorylation was 276 ± 33% of unstimulated controls (p<0.0004) (Figure 7). After incubation with pCPT (30 μM), Elk1 phosphorylation was 370 ± 56% of unstimulated controls (p<0.0007) (Figure 7).
Figure 5: Effects of pCPT and OME on phenylephrine-induced contraction of human prostate strips. Phenylephrine-induced contraction was measured before and after application of pCPT, OME, or water (solvent). In (B) and (C), the inhibitor of cyclooxygenases, indomethacin (50 μM), was added during the entire experiment. Data are means ± SEM from experiments with prostates from n=2 patients in (A), n=7 patients in (B), and n=8 patients in (C) (* p<0.05 vs. pCPT, ** p<0.02 vs. OME).
EMSA
Using an electrophoretic mobility shift assay (EMSA), we investigated Elk1 activation by EPAC activators. In this assay, the binding of Elk1 to the DNA sequence 5′-TTTGCAAAATGCAGGAATTGTTTTCACAGT-3′ is assessed. Incubation of prostate tissues (n=5 patients) with pCPT or OME (30 μM, 2 h) resulted in binding of Elk1 to this sequence (Figure 8). DNA binding after incubation with pCPT was 264 ± 62% of the binding in unstimulated samples (p<0.04). Similarly, DNA binding after incubation with OME was 375 ± 110% of the binding in unstimulated samples (p<0.04).
Figure 6: Effects of pCPT and OME on noradrenaline-induced contraction of human prostate strips. Noradrenaline-induced contraction was measured before and after application of pCPT, OME, or water (solvent). In (B) and (C), the inhibitor of cyclooxygenases, indomethacin (50 μM), was added during the entire experiment. Data are means ± SEM from experiments with prostates from n=6 patients in (A), n=7 patients in (B), and n=5 patients in (C).
Discussion
In the prostate and other organs, cyclic adenosine-3′,5′-monophosphate (cAMP) is a second messenger mediating smooth muscle relaxation [1]. In addition to its role for smooth muscle tone, cAMP is involved in non-motoric functions, including regulation of gene transcription or cell cycle in many cell types and organs [17,24,25]. cAMP-dependent effects may be mediated either by PKA or by EPAC [11,13]. Via PKA and EPAC, cAMP may be directed to different intracellular compartments, and consequently to divergent cellular functions [11,13]. In smooth muscle outside the lower urinary tract, cAMP-dependent EPAC activation mediates relaxation and regulates cell cycle, besides its involvement in other functions [14][15][16]. Smooth muscle tone and growth are important factors contributing to the pathophysiology and therapy of LUTS in patients with BPS [7][8][9][10]. To the best of our knowledge, the expression and function of EPAC in the prostate has not been investigated to date. Here, we studied EPAC expression and EPAC functions in human prostate smooth muscle, using EPAC-specific activators.
Using RT-PCR, Western blot analysis, and immunohistochemistry, we observed expression of EPAC1 and EPAC2 in prostate samples from all investigated patients. In Western blot analysis, EPAC expression levels varied together with the epithelial markers PSA and pan-cytokeratin between prostates of different patients. Despite these variations, EPAC was detectable in all samples, indicating that a constitutive expression exists. Nevertheless, our analyses demonstrate that EPAC expression is subject to regulation. The different content of epithelial markers may reflect different degrees of prostate hyperplasia. In fact, almost all patients undergoing radical prostatectomy show hyperplastic prostates, although to different extents [34,35]. Therefore, we assume that our findings reflect the situation in hyperplastic tissue, where the expression level of EPAC may vary with the degree of hyperplasia. A comparison to non-hyperplastic tissues was not possible, as these tissues are not available. The aim of our present study was to demonstrate a new principle of EPAC signaling in non-malignant prostate tissue, independent of pathophysiological context. Immunoreactivity for EPAC1 and EPAC2 was located to stromal cells. To confirm that these cells are smooth muscle cells, we performed double immunofluorescence stainings of prostate sections. Indeed, immunoreactivity for both EPAC isoforms colocalized with αSMA, which is a common marker for smooth muscle cells.
Recently, different cell-permeable EPAC activators have been developed, which are indispensable tools for investigations of EPAC functions [32,33]. These activators are analogs of cAMP that do not activate PKA, but are resistant to hydrolysis by phosphodiesterases [32,33]. Although OME and pCPT are specific activators of EPAC, they do not discriminate between EPAC1 and EPAC2. In our organ bath experiments, activation of EPAC caused inhibition of contractions induced by low concentrations of phenylephrine when cyclooxygenase activity was blocked by indomethacin. In experiments where indomethacin was omitted or contraction was induced by noradrenaline, EPAC activation was without effects on contraction. In contrast to noradrenaline, which activates α- and β-adrenoceptors, phenylephrine selectively activates α1-adrenoceptors. Of note, these effects were confirmed using two different EPAC activators, OME and pCPT. In conclusion, a contribution of EPAC to prostate smooth muscle tone may exist, although to a minor extent. Cyclooxygenases and noradrenaline-induced β-adrenoceptor activation cause cAMP production [11,12]. Under physiologic conditions, this may raise EPAC activity to a level where further EPAC activation by OME or pCPT is ineffective on prostate smooth muscle tone. When this background of cAMP was deleted in our experiments (by indomethacin and selective activation of α1-adrenoceptors), the effect of EPAC activators on contraction became visible.
Relaxation in response to EPAC activators has been recently described from airway smooth muscle, where EPAC-mediated relaxation may exceed the effects in the prostate [16]. We assume that any difference to our study may be explained either by the divergent, organ-specific contractile systems in both organs, or by a tissue-specific set of molecular EPAC effectors. Whether EPAC has a role in other smooth muscle types of the lower urinary tract, in particular within the bladder, may be the subject of further studies.
Regulation of gene transcription by cAMP has been known for decades [17,36]. By interventions into transcriptional activity, cAMP is involved in various central functions, including cellular growth, differentiation and regulation of cell cycle [32,36]. In fact, different transcription factors were identified which may be activated by cAMP and EPAC [17]. Although the focus of previous studies was on the regulation of CREB by cAMP, several studies suggested that cAMP activates Elk1 in different organs and cell types [18][19][20][21][22][23]. Therefore, we investigated whether EPAC activators may trigger Elk1 activation in the prostate. We observed that stimulation of human prostate tissues with EPAC activators results in activation of Elk1.
Elk1 is activated by phosphorylation, which results in binding of the factor to a specific DNA sequence within the promoter region of target genes [37,38]. Activation of Elk1 in our samples was confirmed by Western blot analysis using a phospho-specific antibody, and by EMSA, in which the binding of transcription factors to a specific, biotin-labelled DNA probe is assessed. Among the different functions of Elk1, Elk1-dependent proliferation, growth and differentiation have been described in smooth muscle cells and other cell types [38][39][40][41]. In the liver, cAMP-mediated Elk1 activation mediates hyperplasia of bile ducts [18,19]. In prostate cancer cells, Elk1 is involved in proliferation and tumor growth [42,43]. To the best of our knowledge, our study is the first to address Elk1 in non-malignant prostate cells, and the first to link EPAC to Elk1 activation in any cell type. Elk1-specific inhibitors, which would enable detailed studies of Elk1 function, have not been developed to date. We assume that EPAC uses different effectors besides Elk1 in the prostate. However, a role for Elk1 in the control of smooth muscle tone appears unlikely. Future studies may focus on the identification of Elk1 target genes in the prostate.
Non-motoric EPAC functions have been studied in a panel of cell types, including smooth muscle cells outside the lower urinary tract. In airway smooth muscle cells, EPAC regulates the smooth muscle phenotype and inhibits growth factor-induced proliferation [14,15]. Apart from smooth muscle cells, the role of EPAC has been studied in different cell types, with diverging results. In prostate carcinoma cells, both an antiproliferative effect and EPAC-driven proliferation have been observed [44,45]. EPAC triggers proliferation in endothelial cells, macrophages, thyroid cells, and osteoblasts [46][47][48][49]. Indeed, the opposing character of EPAC functions, in particular with regard to cell cycle regulation, has already attracted attention [32]. Interestingly, EPAC functions are often associated with the same biological processes, despite opposing effects [11].
Together, EPAC-specific activators induce activation of the transcription factor Elk1 in the human prostate. In contrast, EPAC-mediated relaxation of prostate smooth muscle may be at best minor. Nevertheless, cAMP is an important mediator causing prostate smooth muscle relaxation via PKA [1]. This may suggest possible connections between smooth muscle tone and growth in the prostate. Although such links have been proposed by various investigators, little is known about their intracellular mediators [4,8,[26][27][28]. In cardiomyocytes, EPAC activation causes hypertrophic responses by modulating the transcription of hypertrophic genes [13,24,25]. In conclusion, a role of EPAC in prostate hyperplasia may be postulated.
Conclusions
Our findings point to a role of EPAC in transcriptional regulation in smooth muscle cells of the human prostate. EPAC-dependent regulation of prostate smooth muscle tone may be masked by cyclooxygenases and β-adrenoceptors. Together, EPAC may represent a missing link connecting the dynamic with the static component in BPH.
Competing interests
The authors declare that they have no competing interests.
Nonlocality of three-qubit Greenberger-Horne-Zeilinger-symmetric states
Mixed states, rather than pure states, are what arise naturally in experiments. Hence, for studying different notions of nonlocality and their relation with entanglement in realistic scenarios, one needs to consider mixed states. In a recent article [Phys. Rev. Lett. 108, 020502 (2012)], a complete characterization of the entanglement of an entire class of mixed three-qubit states with the same symmetry as the Greenberger-Horne-Zeilinger state, known as GHZ-symmetric states, was achieved. In this paper we investigate different notions of nonlocality for the same class of states. By finding analytical expressions for the maximum violation of the most efficient Bell inequalities, we obtain the conditions for standard nonlocality and genuine nonlocality of this class of states. The relation between entanglement and nonlocality is also discussed for this class. Interestingly, genuine entanglement of GHZ-symmetric states is necessary to reveal standard nonlocality. However, it is not sufficient to do so.
I. INTRODUCTION
Quantum nonlocality is an inherent feature of quantum theory, and Bell inequalities [1] are used as witnesses to test for its appearance. Recently, the analysis of quantum nonlocality has become an interesting topic not only from a foundational viewpoint (see [2] and references therein) but also because it is extensively used in various quantum information processing tasks: quantum communication complexity [3], randomness amplification [4], no-signaling [5], device-independent quantum key distribution [6], device-independent quantum state estimation [7,8], randomness extraction [9,10], etc. It is well established that the presence of entanglement is necessary for nonlocality of quantum correlations. But determining which entangled states reveal nonlocality (i.e., violate a Bell inequality) is a difficult task. Any pure entangled state of two qubits violates the Bell-CHSH inequality [11,12], the amount of violation being proportional to the degree of bipartite entanglement [13]. However, no such conclusion holds for mixed bipartite entangled states, as there exists a class of mixed entangled states which admits a local hidden variable model and therefore cannot violate any Bell inequality [14,15]. To date, nonlocality in two-qubit systems has been explored in detail. The multiqubit case is much more difficult to analyze: complexity increases when shifting from bipartite to multipartite systems, mainly because multipartite entanglement has a far more complex and richer structure than bipartite entanglement [16,17]. Thus any study of multipartite entanglement or multipartite nonlocality requires a deeper insight into the physics of many-particle systems, which in general differ extensively from single- or two-party systems. At the same time, the study of many-particle systems gives rise to new and interesting phenomena, such as phase transitions [18] or quantum computing. In this context, it is quite interesting to study the relationship between entanglement and nonlocality for multipartite systems. To extend the two-qubit relationship between entanglement and quantum nonlocality, one needs to classify both entanglement and nonlocality in multipartite systems. In particular, the entanglement of any tripartite state is either biseparable or genuine [16,17]. The nonlocal character of a tripartite system can be categorized broadly into standard nonlocality and genuine nonlocality [2]. In the former case, nonlocality is revealed in at least one possible grouping of the parties, whereas a state is said to be genuinely nonlocal if every possible grouping of the parties reveals nonlocality. In [19], Śliwa gave the complete class of Bell inequalities whose satisfaction is necessary and sufficient for the absence of standard nonlocality. The relation between this notion of nonlocality and tripartite entanglement has been studied for three-qubit pure states [20][21][22][23][24][25], where it has been shown that entanglement (biseparable or genuine) of a pure state suffices to produce standard nonlocality. The notion of genuine tripartite nonlocality, which represents the strongest form of nonlocality for tripartite systems, has been discussed in [26][27][28]. There exists a relation between genuine tripartite nonlocality and the 3-tangle [29] (a measure of genuine tripartite entanglement), which has been analyzed for some important classes of pure tripartite states [28,[30][31][32][33]].
Interestingly, Bancal et al. conjectured that all genuinely entangled pure quantum states can produce genuinely nonlocal correlations [28]. While tripartite nonlocality turns out to be a generic feature of all entangled pure states, the situation becomes much more complex for mixed states, as there exist genuinely tripartite entangled states which admit a local hidden variable model [34,35]. In this context, it is interesting to characterize the state parameters of a class of tripartite mixed states on the basis of the different notions of tripartite nonlocality and their relation with entanglement. Our paper goes in this direction. Recently, a new type of symmetry for three-qubit quantum states was introduced [36], the so-called Greenberger-Horne-Zeilinger (GHZ) symmetry, and the whole class of states possessing this symmetry was provided. This class is referred to as the GHZ-symmetric states, and a complete classification of the different types of entanglement of this class of tripartite mixed states was given in [36]. In this work we classify the GHZ-symmetric states on the basis of the different notions of tripartite nonlocality, so that this class of states can be used in different information-theoretic tasks. This allows us to establish the relationship between entanglement and nonlocality for this class of tripartite mixed states. The relation implies that genuine entanglement is necessary to reveal any type of nonlocality (standard or genuine) for this class of states.
The paper is organized as follows. In Section II, we give a brief introduction to some concepts and results used in later sections. In Section III, we obtain the condition under which GHZ-symmetric states reveal standard nonlocality, by deriving analytical expressions for the maximum violation of the two most efficient facet inequalities. In Section IV, we classify this class of states on the basis of genuine nonlocality. Section V shows how the different types of entanglement are related to the different notions of nonlocality for this class of mixed states. Finally, we conclude with a summary of our results in Section VI.
II. BACKGROUND
A. GHZ-symmetric three-qubit states

As an important class of mixed states from the quantum-theoretical perspective, GHZ-symmetric three-qubit states have received much attention [36][37][38][39]. In particular, within the eight-dimensional state space of three qubits, the set of GHZ-symmetric states defines a two-dimensional affine section, specifically a triangle, of the full set of states [40]. In this section, we review the properties of GHZ-symmetric three-qubit states [36]. GHZ-symmetric three-qubit states are defined to be invariant under the following transformations: (i) qubit permutations, (ii) simultaneous three-qubit flips (i.e., application of σ_x ⊗ σ_x ⊗ σ_x), (iii) qubit rotations about the z axis of the form U(φ1, φ2) = e^{iφ1 σz} ⊗ e^{iφ2 σz} ⊗ e^{−i(φ1+φ2) σz}. Here σ_x and σ_z are the Pauli operators. The GHZ-symmetric states of three qubits can be written as

ρ(p, q) = (2q/√3 + p) |GHZ+⟩⟨GHZ+| + (2q/√3 − p) |GHZ−⟩⟨GHZ−| + (1 − 4q/√3) (1/8) 1_8,   (1)

with |GHZ±⟩ = (|000⟩ ± |111⟩)/√2 and 1_8 the identity on the three-qubit space. The requirement ρ(p, q) ≥ 0 gives the constraints

|p| ≤ 1/8 + (√3/2) q,   −1/(4√3) ≤ q ≤ √3/4.   (2)

This family of states forms a triangle in the state space and includes not only the GHZ states but also the maximally mixed state 1/8 (located at the origin, see Fig. 1). Any point inside that triangle represents a GHZ-symmetric state. The generalized Werner states are found on the straight line p = 2q/√3 (the black dashed line in Fig. 1).

FIG. 1: The triangle of the GHZ-symmetric states for three qubits [36]. The upper corners of the triangle are the standard GHZ states |GHZ+⟩ and |GHZ−⟩. The maximally mixed state 1/8 is located at the origin. The black dashed line represents the generalized Werner states. Different types of three-qubit entanglement are indicated: GHZ (green), W (grey), biseparable (magenta), separable (yellow).
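As a quick self-check of Eqs. (1) and (2), the following minimal Python sketch (ours, not from the paper; it assumes the parametrization reconstructed above) builds ρ(p, q) and verifies that positivity holds exactly inside the triangle.

```python
import numpy as np

def ghz_symmetric_state(p, q):
    """Return the 8x8 density matrix rho(p, q) of Eq. (1)."""
    ghz_plus = np.zeros(8); ghz_plus[[0, 7]] = 1 / np.sqrt(2)
    ghz_minus = np.zeros(8); ghz_minus[0], ghz_minus[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    return ((2 * q / np.sqrt(3) + p) * np.outer(ghz_plus, ghz_plus)
            + (2 * q / np.sqrt(3) - p) * np.outer(ghz_minus, ghz_minus)
            + (1 - 4 * q / np.sqrt(3)) * np.eye(8) / 8)

# Positivity should hold exactly on the triangle of Eq. (2).
for p, q in [(0.5, np.sqrt(3) / 4), (0.0, -1 / (4 * np.sqrt(3))), (0.4, 0.1)]:
    rho = ghz_symmetric_state(p, q)
    inside = (abs(p) <= 1 / 8 + np.sqrt(3) * q / 2 + 1e-12) and (q <= np.sqrt(3) / 4)
    positive = np.min(np.linalg.eigvalsh(rho)) >= -1e-12
    print(f"p={p:+.3f}, q={q:+.3f}: inside triangle: {inside}, rho>=0: {positive}")
```

The third test point lies outside the triangle and, accordingly, ρ acquires a negative eigenvalue.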
B. Genuine multipartite concurrence (C_GM)
In order to facilitate the discussion of our results, we briefly describe the genuine multipartite concurrence, a measure of genuine multipartite entanglement, defined as [41]

C_GM(|ψ⟩) := min_j √(2(1 − Π_j(|ψ⟩))),

where Π_j(|ψ⟩) is the purity of the j-th bipartition of |ψ⟩. The genuine multipartite concurrence of three-qubit X states has been evaluated in [42]. It is given by

C_GM = 2 max_i {0, |z_i| − w_i},   (3)

with w_i = Σ_{j≠i} √(a_j b_j), where a_j, b_j and z_j (j = 1, 2, 3, 4) are the elements of the density matrix of a tripartite X state, i.e., the 8×8 matrix with diagonal (a_1, a_2, a_3, a_4, b_4, b_3, b_2, b_1) and anti-diagonal (z_1, z_2, z_3, z_4, z_4*, z_3*, z_2*, z_1*).

C. Notions of tripartite nonlocality

In this section we provide a brief overview of the various notions of tripartite nonlocality and the corresponding detectors, for use in subsequent discussions. Consider a Bell-type experiment consisting of three space-like separated parties, Alice, Bob and Charlie. The measurement settings are denoted by x, y, z ∈ {0, 1} and the outputs by a, b, c ∈ {−1, 1} for Alice, Bob and Charlie, respectively. The experiment is thus characterized by the joint probability distribution (correlations) p(abc|xyz). These correlations can exhibit different types of nonlocality. A tripartite correlation p(abc|xyz) is said to be local if it admits the decomposition

p(abc|xyz) = Σ_λ q_λ P_λ(a|x) P_λ(b|y) P_λ(c|z)   (4)

for all x, y, z, a, b, c, where 0 ≤ q_λ ≤ 1 and Σ_λ q_λ = 1. Here P_λ(a|x) is the conditional probability of getting outcome a for measurement setting x given the hidden variable λ; P_λ(b|y) and P_λ(c|z) are defined similarly. Otherwise the correlations are standard nonlocal. We denote by L_3 the set of local correlations that can be produced classically using shared randomness. The local set L_3 was fully characterized by Pitowsky and Svozil [43] and by Śliwa [19]. It has 53856 facets defining 46 different classes of inequalities that are inequivalent under relabeling of parties, inputs, and outputs [19]. Violation of any of these facet inequalities guarantees standard nonlocality; a tripartite correlation is local if it satisfies all 46 facet inequalities. Inequality 2 (we follow Śliwa's numbering) is the Mermin inequality [44]:

M ≡ |⟨A_1 B_0 C_0⟩ + ⟨A_0 B_1 C_0⟩ + ⟨A_0 B_0 C_1⟩ − ⟨A_1 B_1 C_1⟩| ≤ 2.   (5)

Note that it is possible to violate the Mermin inequality maximally (i.e., M = 4) using |GHZ±⟩. However, in the tripartite scenario, Svetlichny [26] showed that certain quantum correlations can exhibit an even stronger form of nonlocality, namely, correlations that cannot be decomposed in the form

P(abc|xyz) = Σ_λ q_λ P_λ(ab|xy) P_λ(c|z) + Σ_μ q_μ P_μ(ac|xz) P_μ(b|y) + Σ_ν q_ν P_ν(bc|yz) P_ν(a|x).   (6)

Here 0 ≤ q_λ, q_μ, q_ν ≤ 1 and Σ_λ q_λ + Σ_μ q_μ + Σ_ν q_ν = 1. Correlations of this form are not fully local as in Eq. (4): nonlocal correlations are allowed between two particles (the pair that is nonlocally correlated may change between runs of the experiment) while they are only locally correlated with the third. If a correlation P(abc|xyz) cannot be written in this form, then it is said to exhibit genuine tripartite nonlocality; in [28], this type of nonlocality is referred to as Svetlichny nonlocality. Focusing on correlations of the form of Eq. (6), Svetlichny designed a tripartite Bell-type inequality (the Svetlichny inequality):

S ≡ |⟨A_1 B_0 C_0⟩ + ⟨A_0 B_1 C_0⟩ + ⟨A_0 B_0 C_1⟩ − ⟨A_1 B_1 C_1⟩ + ⟨A_0 B_1 C_1⟩ + ⟨A_1 B_0 C_1⟩ + ⟨A_1 B_1 C_0⟩ − ⟨A_0 B_0 C_0⟩| ≤ 4.   (7)

Violation of this inequality implies the presence of genuine tripartite nonlocality, and hence of genuine tripartite entanglement. Inequality (7) is violated by GHZ and W states [30,31,45]. While Svetlichny's notion of genuine multipartite nonlocality is the one most often referred to in the literature, it has certain drawbacks.
As has been pointed out in [27,28], Svetlichny's notion of genuine tripartite nonlocality is so general that no restrictions are imposed on the bipartite correlations appearing in Eq. (6). They are allowed to be arbitrary, in the sense that there may be one-way or two-way signaling between a pair of parties, or both parties may perform simultaneous measurements. As a result, grandfather-type paradoxes arise [28] and inconsistencies appear from an operational viewpoint [27]. Moreover, some genuinely nonlocal correlations satisfy this inequality [27,28,33]. In order to remove this sort of ambiguity, Bancal et al. [28] introduced a simpler definition of genuine tripartite nonlocality based on the no-signaling principle, in which the correlations are no-signaling for all observers. Suppose P(abc|xyz) is a tripartite correlation satisfying Eq. (6) with the no-signaling criterion imposed on the bipartite correlation terms, i.e.,

Σ_b P_λ(ab|xy) = Σ_b P_λ(ab|xy′) for all a, x, y, y′,  Σ_a P_λ(ab|xy) = Σ_a P_λ(ab|x′y) for all b, y, x, x′,

and similarly for the other bipartite correlation terms P_μ(ac|xz) and P_ν(bc|yz). Correlations of this form are called NS_2 local; otherwise, we say that they are genuinely 3-way NS nonlocal (NS_2 nonlocal). In [28], 185 Bell-type inequalities are given which constitute the full set of facets of the NS_2 local polytope. Violation of any of these facets (Bell-type inequalities) guarantees NS_2 nonlocality. The Svetlichny inequality constitutes the 185-th class. Throughout the paper, we use this notion of nonlocality as genuine tripartite nonlocality.
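Before moving on to Section III, a small illustration: the following Python sketch (ours, not from the paper) evaluates the X-state concurrence of Eq. (3) for a GHZ-symmetric state, whose X-state entries follow directly from Eq. (1); the (a_j, b_j, z_j) labelling is the one described above.

```python
import numpy as np

def cgm_x_state(a, b, z):
    """Genuine multipartite concurrence of a three-qubit X state, Eq. (3)."""
    vals = []
    for i in range(4):
        w_i = sum(np.sqrt(a[j] * b[j]) for j in range(4) if j != i)
        vals.append(abs(z[i]) - w_i)
    return 2 * max(0.0, max(vals))

# GHZ-symmetric states are X states: z_1 = p, a_1 = b_1 = 2q/sqrt(3) + m,
# and all remaining diagonal entries equal m = (1 - 4q/sqrt(3)) / 8.
p, q = 0.4, np.sqrt(3) / 4 - 0.05
m = (1 - 4 * q / np.sqrt(3)) / 8
a = [2 * q / np.sqrt(3) + m, m, m, m]
b = [2 * q / np.sqrt(3) + m, m, m, m]
z = [p, 0.0, 0.0, 0.0]
print(cgm_x_state(a, b, z))                           # numerical value
print(max(0.0, 2 * abs(p) + np.sqrt(3) * q - 3 / 4))  # closed form, for comparison
```

Both lines print the same number; the closed form in the last line matches the expression quoted as Eq. (20) in Section V.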
III. STANDARD NONLOCALITY OF GHZ-SYMMETRIC STATES
As discussed in the Introduction, every pure tripartite entangled state exhibits standard nonlocality, but this relation does not hold for mixed states. From this point of view, and also from an experimental perspective, the characterization of mixed states on the basis of their ability to generate nonlocal correlations is far more interesting than that of pure states. As stated above, we aim to characterize the state parameters of the mixed class of GHZ-symmetric three-qubit states on the basis of their nonlocal nature. In this section we not only classify this class on the basis of standard nonlocality but also derive a necessary and sufficient condition for detecting standard nonlocality.
A. Maximum violation of Mermin inequality
We have already mentioned that standard nonlocality of correlations is detected if the correlations violate at least one of the 46 inequivalent facet inequalities characterizing the local set L_3. Among these, the Mermin inequality is the most frequently used. In [24], a sufficient criterion for violating the Mermin inequality was given for pure three-qubit states. Here we find a necessary and sufficient condition for violation of the Mermin inequality by three-qubit GHZ-symmetric states. For this class of tripartite mixed entangled states, the maximum value of M (Eq. (5)) with respect to projective measurements is 8|p| (see Appendix). The Mermin inequality of Eq. (5) thus becomes

8|p| ≤ 2.

Hence ρ(p, q) violates the Mermin inequality if and only if |p| > 1/4. Due to this restriction on p, together with the state constraints of Eq. (2), the other state parameter q also gets restricted: q > 1/(4√3). So standard nonlocality of any three-qubit GHZ-symmetric state with |p| > 1/4 and q > 1/(4√3) is guaranteed via violation of the Mermin inequality (see Fig. 2(a)).
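The claim M_max = 8|p| can be checked numerically. The sketch below is ours; it uses the settings A_0 = B_0 = C_0 = σ_y, A_1 = B_1 = C_1 = σ_x (one optimal choice for GHZ-diagonal states, an assumption on our part), together with the ρ(p, q) of Eq. (1):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rho_pq(p, q):
    gp = np.zeros(8); gp[[0, 7]] = 1 / np.sqrt(2)
    gm = np.zeros(8); gm[0], gm[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    return ((2*q/np.sqrt(3) + p) * np.outer(gp, gp)
            + (2*q/np.sqrt(3) - p) * np.outer(gm, gm)
            + (1 - 4*q/np.sqrt(3)) * np.eye(8) / 8)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Mermin operator A1B0C0 + A0B1C0 + A0B0C1 - A1B1C1 with A0 = Y, A1 = X.
M_op = kron3(X, Y, Y) + kron3(Y, X, Y) + kron3(Y, Y, X) - kron3(X, X, X)

for p, q in [(0.30, 0.30), (0.26, 0.20), (0.10, 0.30)]:
    M = abs(np.trace(rho_pq(p, q) @ M_op).real)
    print(f"p={p}: |<M>| = {M:.3f}   (8|p| = {8*abs(p):.3f}; violation iff > 2)")
```

Only the two points with |p| > 1/4 exceed the local bound of 2, in agreement with the criterion derived above.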
B. Efficiency of the 15-th facet inequality over the Mermin inequality
The Mermin inequality, discussed in the last subsection, is not the most efficient detector of standard nonlocality. In this subsection we argue that there exists another facet inequality which is a better tool for detecting standard nonlocality than the Mermin inequality: the 15-th facet inequality turns out to be more efficient.
The 15-th facet inequality, L ≤ 4, is given in [19]. The maximum value of L for three-qubit GHZ-symmetric states with respect to projective measurements is derived in the Appendix; inserting it into the 15-th facet inequality (Eq. (11)) yields the corresponding violation criterion. It follows that for every q > (3/148)(8 + 3√3) there is at least one p for which the GHZ-symmetric states violate the 15-th facet inequality, as shown in Fig. 2(b). For example, for a suitable fixed value of p, the state does not violate the Mermin inequality for any value of q, yet violates the 15-th facet inequality for q > 0.37861. This in turn shows that the 15-th facet inequality is more efficient than the Mermin inequality over some restricted range of the state parameters.
C. Necessary and sufficient detection criteria for standard nonlocality

By comparing the necessary and sufficient criteria for violation of each of the remaining 44 inequivalent facet inequalities (obtained following a procedure similar to that used for the Mermin inequality; see Appendix) with those of the Mermin and 15-th facet inequalities, we have observed that the region of standard nonlocality detected by any of the remaining 44 inequivalent facet inequalities forms a subset of the region detected by the Mermin and the 15-th facet inequality. These two inequalities are therefore the most efficient detectors of standard nonlocality for this class of states, and the optimal region of standard nonlocality of GHZ-symmetric states is provided by the union of the regions detected by the Mermin and the 15-th facet inequality (see Fig. 2(c)). In total, the restricted state conditions for revealing standard nonlocality, Eq. (13), consist of (i) the Mermin conditions |p| > 1/4 and q > 1/(4√3), and (ii) the corresponding conditions for violation of the 15-th facet inequality. A state exhibits standard nonlocality under projective measurements if and only if it satisfies at least one of these two sets of conditions ((i) or (ii) of Eq. (13)).
IV. GENUINE NONLOCALITY OF GHZ-SYMMETRIC STATES
Genuine nonlocality is the strongest form of nonlocality, so for a tripartite correlation it is natural to ask whether all three parties are nonlocally correlated. Such correlations play an important role in quantum information theory, phase transitions and the study of many-body systems [18]. Moreover, the presence of genuine nonlocality implies the presence of genuine entanglement. Having discussed standard nonlocality, it is therefore interesting to explore the genuine nonlocality of this class of states. Specifically, in this section we derive necessary and sufficient criteria for detecting genuine nonlocality.

FIG. 2: (b) The 15-th facet inequality (Eq. (11)) emerges as a more efficient tool than the Mermin inequality (Eq. (5)) for revealing the nonlocal nature of GHZ-symmetric states. (c) The cyan areas give the optimal region of standard nonlocality of GHZ-symmetric states. States with parameters lying in this region, when shared between Alice, Bob and Charlie, do not admit any local hidden variable model.
A. Maximum violation of Svetlichny inequality
As discussed above, if all correlations between the observers are taken to be no-signaling, then the set of 185 facet inequalities provides a necessary and sufficient condition for detecting genuine tripartite nonlocality. Among them, the Svetlichny inequality is the one most frequently used. In [30], necessary and sufficient criteria for maximal violation of the Svetlichny inequality were derived for some classes of tripartite pure states. Here we derive the analogous criteria for the class of GHZ-symmetric states. For this class of tripartite mixed entangled states, the maximum value of S (Eq. (7)) with respect to projective measurements is 8√2|p| (see Appendix). Eq. (7) thus gives

8√2|p| ≤ 4.   (14)

Hence ρ(p, q) violates the Svetlichny inequality if and only if |p| > 1/(2√2). In Fig. 3(a) we present the range of the state parameters of the three-qubit GHZ-symmetric states for which Svetlichny nonlocality is observed.
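A numerical cross-check of S_max = 8√2|p| (our sketch, not from the paper): optimize the Svetlichny expression of Eq. (7) over measurement directions confined to the x-y plane of the Bloch sphere, an ansatz we assume to be sufficient for GHZ-diagonal states.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rho_pq(p, q):
    gp = np.zeros(8); gp[[0, 7]] = 1 / np.sqrt(2)
    gm = np.zeros(8); gm[0], gm[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    return ((2*q/np.sqrt(3) + p) * np.outer(gp, gp)
            + (2*q/np.sqrt(3) - p) * np.outer(gm, gm)
            + (1 - 4*q/np.sqrt(3)) * np.eye(8) / 8)

def neg_svetlichny(angles, rho):
    # Each party measures cos(t) X + sin(t) Y for two angle choices t.
    A, B, C = (np.array([np.cos(t)*X + np.sin(t)*Y for t in pair])
               for pair in (angles[0:2], angles[2:4], angles[4:6]))
    E = lambda i, j, k: np.trace(rho @ np.kron(np.kron(A[i], B[j]), C[k])).real
    S = (E(1,0,0) + E(0,1,0) + E(0,0,1) - E(1,1,1)
         + E(0,1,1) + E(1,0,1) + E(1,1,0) - E(0,0,0))
    return -abs(S)

p, q = 0.45, np.sqrt(3) / 4
rho = rho_pq(p, q)
rng = np.random.default_rng(1)
best = min(minimize(neg_svetlichny, rng.uniform(0, 2*np.pi, 6), args=(rho,)).fun
           for _ in range(20))  # multi-start to avoid local optima
print(f"numerical S_max = {-best:.4f}, 8*sqrt(2)*|p| = {8*np.sqrt(2)*abs(p):.4f}")
```

With p = 0.45 the optimized value is about 5.09, exceeding the Svetlichny bound of 4, consistent with |p| > 1/(2√2).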
B. Efficiency of the 99-th facet inequality over the Svetlichny inequality

As mentioned in Section II, the weaker definition of genuine nonlocality (genuine 3-way NS nonlocality) has advantages over Svetlichny's definition. Having completed the analysis of genuine nonlocality with respect to the Svetlichny inequality, we therefore search for an inequality that is more efficient. In [33], we showed that for detecting genuine nonlocality of some classes of tripartite pure entangled states, the 99-th facet inequality is more efficient than the Svetlichny inequality. Here too, for the class of GHZ-symmetric states, the 99-th facet inequality emerges as the more powerful detector of genuine nonlocality for some subclasses. The 99-th facet inequality is given by

NS ≤ 3,   (15)

where NS denotes the corresponding combination of correlators listed in [28]. For projective measurements, the maximum value of NS for GHZ-symmetric states is 4q/√3 + 2√(16q²/3 + 4p²) (see Appendix). The 99-th facet inequality of Eq. (15) thus becomes

4q/√3 + 2√(16q²/3 + 4p²) ≤ 3.

A detailed comparison of the violation criteria for each of the remaining 183 facets (following the same procedure as for the Mermin inequality) with those of the Svetlichny and 99-th facet inequalities shows that these two inequalities are the most efficient detectors of genuine nonlocality. The optimal region of genuine nonlocality, Eq. (17), is therefore given by the union of (i) the Svetlichny condition |p| > 1/(2√2) and (ii) the condition 4q/√3 + 2√(16q²/3 + 4p²) > 3. Genuine nonlocality of any state, up to projective measurements, is guaranteed if and only if it satisfies at least one of these two sets of conditions ((i) or (ii) of Eq. (17)).
V. RELATION BETWEEN ENTANGLEMENT AND NONLOCALITY
Entanglement of a state is necessary for its nonlocality. Having classified the GHZ-symmetric states with respect to the different forms of nonlocality, we now proceed to establish the relationship between nonlocality and entanglement for this class.
A. Relation between Biseparable entanglement and Standard nonlocality
Biseparable entanglement of a tripartite quantum state is necessary to produce standard nonlocality. The relationship was analyzed in [25] for three-qubit pure states, where it was shown that biseparable entanglement of tripartite pure quantum states is also sufficient to exhibit standard nonlocality. Here we examine whether it is sufficient for this class of tripartite mixed quantum states. The biseparable (i.e., not genuinely entangled) GHZ-symmetric states satisfy [36,39]

|p| ≤ 3/8 − (√3/2) q.

Interestingly, no biseparable GHZ-symmetric state can reveal standard nonlocality. The argument runs as follows. As shown above, the Mermin and the 15-th facet inequalities are the most efficient detectors of standard nonlocality for this class. Violation of the Mermin inequality requires q > 1/(4√3), but for q > 1/(4√3) biseparability restricts |p| ≤ 3/8 − (√3/2)q < 1/4, so the Mermin inequality cannot be violated; an analogous restriction makes violation of the 15-th facet inequality impossible for any biseparable state of this class. Hence no biseparable state is capable of showing standard nonlocality.
B. Relation between Genuine entanglement and Standard nonlocality
In general, for any tripartite state, genuine entanglement is necessary to reveal genuine nonlocality. For GHZ-symmetric states, as argued in the last subsection, genuine entanglement is necessary to reveal even the weaker notion of standard nonlocality. However, it cannot be claimed to be a sufficient criterion for revealing standard nonlocality for this class of states, as we now argue. Consider first the genuinely entangled states. Such states are restricted by [36,39]

|p| > 3/8 − (√3/2) q.   (18)

The locality criteria, Eq. (19), are obtained by negating both sets of conditions in Eq. (13). Clearly, the conditions of Eq. (18) and Eq. (19) are simultaneously feasible within a restricted range of the parameter q. This proves the existence of genuinely entangled local states (see the pink region of Fig. 4): any GHZ-symmetric state satisfying both Eq. (18) and Eq. (19) is genuinely entangled but local. Thus the strongest form of entanglement, genuine entanglement, turns out to be insufficient to generate even the weaker form of nonlocality, standard nonlocality. We are therefore able to present a class of genuinely entangled three-qubit states that does not violate any member of a complete set of facet inequalities for standard nonlocality. Recently a similar result was presented in [35] for another class of states.

FIG. 4: The red regions give the optimal area where genuine nonlocality (GNL) is revealed with respect to projective measurements for GHZ-symmetric states. The cyan regions indicate the optimal area where standard nonlocality (NL), but not genuine nonlocality, is revealed. Since genuine nonlocality implies standard nonlocality, the red regions are also standard nonlocal. Genuinely entangled but local states (GL) are represented by the pink regions. Clearly, no nonlocal region lies within the biseparable region (magenta).

In this context it is interesting to study the variation of standard nonlocality with the amount of genuine entanglement. Since the Mermin and 15-th facets are the most efficient Bell inequalities for detecting standard nonlocality, we consider the variation of their violation with the amount of genuine entanglement C^{ρ(p,q)}_GM. Since the three-qubit GHZ-symmetric states belong to the class of tripartite X states, their amount of entanglement can be measured by Eq. (3), which gives

C^{ρ(p,q)}_GM = max{0, 2|p| + √3 q − 3/4}.   (20)

For the state ρ(p, q) one thus has M_max = 8|p| = 4(C^{ρ(p,q)}_GM + 3/4 − √3 q), which increases monotonically with C^{ρ(p,q)}_GM for any fixed value of q; hence GHZ-symmetric states violate the Mermin inequality if C^{ρ(p,q)}_GM > √3 q − 1/4. As proven in Section III, GHZ-symmetric states with |p| > 1/4 and q > 1/(4√3) violate the Mermin inequality, and for each C^{ρ(p,q)}_GM > 0 there is a GHZ-symmetric state (i.e., a value of q) which violates it (see Fig. 5(a)). Similarly, using Eq. (20), the maximum value of the 15-th facet expression increases monotonically with C^{ρ(p,q)}_GM for any fixed value of q, and a corresponding violation criterion in terms of C^{ρ(p,q)}_GM follows.

FIG. 5: Variation of the violation with the state parameter q for standard nonlocal states. In panels (a) and (b) we consider C^{ρ(p,q)}_GM of the states whose standard nonlocality is guaranteed by violation of the Mermin and the 15-th facet inequality, respectively. Interestingly, for any arbitrarily small value of C^{ρ(p,q)}_GM there exists at least one GHZ-symmetric state which exhibits standard nonlocality.
C. Relation between Genuine entanglement and Genuine nonlocality
To date, no relationship between genuine entanglement and genuine nonlocality has been proved even for three-qubit pure quantum states. However, a conjecture reported in [28] states that all three-qubit genuinely entangled pure states exhibit genuine nonlocality, and more recently this conjecture was proved for some important classes of pure states [33]. No such straightforward conclusion can be drawn for three-qubit mixed states, as there exist genuinely entangled mixed states which do not exhibit genuine nonlocality [34]. Here we obtain the relationship between the two phenomena for three-qubit GHZ-symmetric states. In this context, we present a subclass of the GHZ-symmetric class of mixed tripartite states which is genuinely entangled yet fails to violate any of the 185 facet inequalities and is thereby not genuinely nonlocal (see subsection E). The discussion of genuine nonlocality (Section IV) shows that, out of the 185 facet inequalities, the Svetlichny inequality and the 99-th facet inequality are the two most efficient detectors of genuine nonlocality for this class of states. We therefore study the variation of the violation of these two efficient Bell inequalities with the amount of entanglement C^{ρ(p,q)}_GM. Using Eq. (20), the maximum violation value of the Svetlichny inequality (Eq. (14)) becomes

S_max = 4√2 (C^{ρ(p,q)}_GM + 3/4 − √3 q).

This algebraic expression clearly displays the relation between genuine nonlocality and entanglement (see Fig. 6(a)). As argued in Section IV, for violation of the Svetlichny inequality the state parameters are restricted to |p| > 1/(2√2) and, consequently, q > (2√2 − 1)/(4√3). Clearly, for any fixed value of q, the amount of genuine nonlocality (NS_max − 3) increases monotonically with the amount of entanglement C^{ρ(p,q)}_GM. Interestingly, for any positive value of C^{ρ(p,q)}_GM there exists a subclass which violates the 99-th facet inequality (see Fig. 6(b)); that is, there exists a subclass of GHZ-symmetric states which is genuinely nonlocal for any amount of C^{ρ(p,q)}_GM.

FIG. 6: Variation of the violation of the Svetlichny and the 99-th facet inequality with C^{ρ(p,q)}_GM, shown in panels (a) and (b), respectively. It is interesting to note that for any positive value of C^{ρ(p,q)}_GM there exist some states whose genuine nonlocality is observed via violation of the 99-th facet. For states whose genuine nonlocality is guaranteed by violation of the Svetlichny inequality, no such conclusion can be drawn; in fact, for violation of the Svetlichny inequality the range of C^{ρ(p,q)}_GM is bounded away from zero.
D. Genuinely nonlocal subclass
The genuinely nonlocal subclass is obtained by fixing one of the two state parameters. Putting q = √3/4 in Eq. (1), we get

ρ(p, √3/4) = (1/2 + p) |GHZ+⟩⟨GHZ+| + (1/2 − p) |GHZ−⟩⟨GHZ−|.

This subclass of GHZ-symmetric states is genuinely entangled, with the amount of entanglement given by (Eq. (20))

C^{ρ(p,√3/4)}_GM = 2|p|.   (23)

The optimal region of standard nonlocality of this subclass is detected by the 15-th facet inequality: for any nonzero value of p one finds L_max > 4. The relation between entanglement (C^{ρ(p,√3/4)}_GM) and standard nonlocality, Eq. (25), shows that the amount of standard nonlocality (L_max − 4) increases monotonically with the amount of entanglement C^{ρ(p,√3/4)}_GM; clearly, an arbitrarily small amount of entanglement is sufficient for violation of the 15-th facet inequality (see Fig. 7). A similar analysis can be made for the stronger notion of genuine nonlocality. The 99-th facet inequality is the most efficient detector of genuine nonlocality for this subclass. Using Eq. (23), the corresponding inequality becomes

1 + 2√(1 + (C^{ρ(p,√3/4)}_GM)²) ≤ 3,

so for any arbitrary amount of C^{ρ(p,√3/4)}_GM this subclass reveals genuine nonlocality (see Fig. 7). Interestingly, a comparison between NS_max for this class ρ(p, √3/4) and for the pure class of generalized Greenberger-Horne-Zeilinger (GGHZ) states [30,32,33] shows that genuine nonlocality varies with the corresponding entanglement content in the same way for the two classes, although one of them is pure (GGHZ) whereas the other is mixed [33].

FIG. 7: The curve showing the variation of GNL with C_GM for this mixed subclass of GHZ-symmetric states is the same as that for the pure generalized GHZ states [33].
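A quick check of the genuine-nonlocality claim for this subclass (our sketch, using the relation NS_max = 1 + 2√(1 + C²) reconstructed above, with C = C^{ρ(p,√3/4)}_GM = 2|p|):

```python
import numpy as np

# NS_max = 1 + 2*sqrt(1 + C^2) exceeds the local bound of 3 for every C > 0.
for C in [0.01, 0.1, 0.5, 1.0]:
    ns_max = 1 + 2 * np.sqrt(1 + C**2)
    print(f"C_GM = {C:4.2f}: NS_max = {ns_max:.4f}  (> 3 -> genuinely nonlocal)")
```

Even C = 0.01 yields NS_max ≈ 3.0001 > 3, so any nonzero entanglement in this subclass already implies genuine nonlocality.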
E. Genuinely entangled but not Genuinely nonlocal subclass
In this subsection we present a subclass of GHZ-symmetric states which is genuinely entangled but satisfies all 185 facet inequalities detecting genuine nonlocality. We first consider Eq. (18), which gives the criterion of genuine entanglement: |p| > 3/8 − (√3/2) q. A subclass with state parameters p and q restricted by this criterion cannot reveal genuine nonlocality if it violates neither the Svetlichny inequality nor the 99-th facet inequality, i.e., if |p| ≤ 1/(2√2) and the criterion for satisfying the 99-th facet inequality, 4q/√3 + 2√(16q²/3 + 4p²) ≤ 3, holds, which in particular requires q ≤ (1/28)(8√5 − 5√3). These three restrictions together define a feasible region in the state-parameter space (p, q), and any GHZ-symmetric state with parameters lying in this region fails to reveal genuine nonlocality in spite of being genuinely entangled.
VI. CONCLUSION
In summary, the above systematic study explores the different notions of nonlocality and thereby gives necessary and sufficient conditions for detecting nonlocality of an entire family of high-rank mixed three-qubit states with the same symmetry as the GHZ state. The Mermin inequality (a natural generalization of the CHSH inequality) is generally used to detect standard nonlocality. However, we have shown that this inequality is not the most efficient Bell inequality for this class of three-qubit mixed states: for some restricted range of the state parameters, the 15-th facet inequality has an advantage over the Mermin inequality. Our findings confirm that the nonlocality conditions given by the 15-th facet inequality and the Mermin inequality together form the best detector of standard nonlocality for this class of states. Genuine nonlocality of the class is discussed analogously: for its detection, the 99-th facet inequality and the Svetlichny inequality turn out to be the most effective tools. Further comparison between these two inequalities shows that the 99-th facet inequality is even better than the Svetlichny inequality for some restricted subclasses of this class, although the latter is the one most extensively used for detecting genuine nonlocality. Besides, our results illustrate the relationship between entanglement and nonlocality for this class of three-qubit mixed states. Interestingly, no biseparable state is capable of revealing standard nonlocality, which points to the necessity of genuine entanglement for this purpose. However, for the revelation of standard nonlocality, the existence of genuine entanglement is not sufficient; this becomes clear from the existence of the genuinely entangled but local subclass of GHZ-symmetric states. It will be interesting to explore the presence of hidden nonlocality (if any) [46][47][48] in this class of states. One may also try to activate the nonlocality of this class of states by using it in a suitable quantum network [49,50]. Moreover, the GHZ-symmetric class of states forms a two-dimensional affine subspace of the whole eight-dimensional space of three-qubit states [40], so a study analyzing the relation between entanglement and nonlocality of tripartite states from other subspaces, or, if possible, a characterization of the whole space itself, can be made in the future.
Atypical transcription of microRNA gene fragments
MicroRNAs (miRNAs) are short (∼22 nt) RNAs that impact gene expression by sequence-specific interactions with messenger RNA or promoter sequences of genomic DNA. Ectopic expression of miRNAs can be accomplished by placing fragments of the corresponding miRNA precursor under the control of RNA polymerase II or III (RNAP II/III). Here, we report that, in the absence of exogenous promoters, DNA fragments incorporating miRNA precursors can be delivered directly into a variety of human cells and give rise to the corresponding mature miRNA. Notably, the transcription of these miRNA DNA fragments appears resistant to conventional inhibitors of RNAP I/II/III activity. Taken together, our findings suggest the existence of a previously unrecognized atypical transcription program for miRNA precursor sequences.
INTRODUCTION
Several hundred genes encoding miRNAs are currently known in the human genome (1,2). Genes encoding miRNAs are under the control of RNA polymerase II or III (RNAP II/III) (3,4). According to the current model of miRNA biogenesis, promoter occupancy leads to the generation of a long primary transcript (pri-miRNA) that is cleaved by the nuclear RNase III enzyme Drosha into a precursor miRNA (pre-miRNA) (5). The latter is exported to the cytoplasm, where it is cleaved by the enzyme Dicer to yield the mature, single-stranded miRNA of 19-22 nt in length, the end effector of gene expression (6)(7)(8)(9). Here, we report on the ability of a subgroup of miRNA precursors of 200-400 nt in length to self-transcribe in the absence of exogenous promoters. In what follows, we examine the conditions under which this phenomenon occurs and present our findings from experiments with several miRNA/cell-line combinations.
Nucleic acid constructs
Polymerase chain reaction (PCR) amplifications for the creation of the various miRNA amplicons were performed in a 50-µl reaction mixture containing forward and reverse primers at 2 µM, 0.5 U of Pfu DNA polymerase (Stratagene) and dNTPs at 200 µM. Constructs created in this manner are denoted Amp miRNA-XX. Human genomic DNA from HEK 293T cells (0.2 µg) was used as the sole template for the creation of Amp miRNA-XX species. Primer sequences and the nucleotide lengths of the resulting amplicons are shown in Supplementary Table S1. The PCR cycling conditions were 94°C for 3 min; 36 cycles of 94°C for 30 s, 60°C for 40 s and 72°C for 50 s; and 72°C for 5 min. The PCR products were then separated by electrophoresis in a 1.5% Tris-acetate-EDTA (TAE) agarose gel, excised and gel-purified using a commercial kit (Qiagen). RNAP II- and RNAP III-promoter-driven miRNA-143 expression vectors were created by digesting the pri-miRNA-143 amplicon with BamHI and EcoRI prior to cloning into the pcDNA (Invitrogen) and pSIREN-RetroQ (Clontech) vectors, respectively. The chimeric miRNA-143/125a amplicon was generated by placing the miRNA-125a pri/pre sequence into a 3′ PstI site in the backbone of miRNA-143. The backbones of miRNA-143 and -30a were altered such that the mature and (*) species were replaced by those encoding the guide and passenger strands of an siRNA targeting the transactivator (tat) protein of HIV-1. These two latter constructs were directly synthesized as minigenes. Sensors for assessing miRNA/siRNA activity were created by inserting the respective antisense target sequences of the small RNA species into the 3′-UTR of the Renilla gene of the psiCHECK-2 reporter vector (Promega), which harbors firefly luciferase as an internal control. Amp miRNA-143 harboring a single-nucleotide mutation at position 16 of the mature miRNA was created using the QuikChange Site-Directed Mutagenesis kit (Stratagene). Constructs were verified by DNA sequencing. The Sanger miRBase Release 14.0 (September 2009) was used as the reference for all miRNA nomenclature/sequences. Short-hairpin RNAs targeting RNAP I and mitochondrial spRNAP-IV were created by inserting the respective siRNA sequences into a generic shRNA expression vector. All siRNA sequences are shown in Supplementary Table S1. miRNA amplicons were incubated at 37°C for 1 h with proteinase K (600 mAU/ml, Qiagen), followed by enzymatic inactivation at 75°C for 20 min. Amplicons subsequently underwent agarose gel purification prior to cellular transfection. Biotinylated constructs were created by PCR employing biotinylated primers. Amplicons were then gel-purified and equal molar amounts were incubated with HEK 293T cell lysates. Streptavidin bead preparation, immobilization of amplicons and release of immobilized biotinylated molecules were performed according to the manufacturer's instructions (Dynabeads MyOne Streptavidin T1). An antibody targeting RNAP II (Ab 8WG16) was used to visualize association with the respective biotinylated constructs.
Cell culture and transfections
Human cell lines used in this study included HEK 293T, Huh-7, HeLa, HCT116 and PBMC. Huh-7, HEK 293T and HeLa cells were cultured in DMEM/EMEM media, respectively, supplemented with 10% (v/v) FBS and 2 mM L-glutamine. HCT116 cells were maintained in McCoy's 5A medium supplemented with 10% (v/v) FBS. PBMC, obtained from an anonymous donor through the Rhode Island Blood Bank, were maintained in RPMI. Vector, PCR amplicon and siRNA transfections were performed using Lipofectamine 2000 (Invitrogen) following the manufacturer's protocol for all cell lines. PBMC were transfected using the Amaxa system (Amaxa Biosystems). The absolute amount of DNA transfected was 2 µg/well (six-well plates) and 0.4 µg/well (24-well plates). The relative ratio of transfected product (µg) was 1:1:0.25 for vectors, siRNA and amplicons, respectively. POLR3A and irrelevant siRNA were obtained from Santa Cruz and were initially transfected into HEK 293T cells. Cells were subsequently re-seeded (at 24 h) and then transfected with the various products (at 48 h) prior to mature miRNA quantification by real-time (RT)-PCR at 72 h. Actinomycin D and α-amanitin (Sigma) were used at final concentrations of 2 µg/ml and 50 µg/ml, respectively. All experiments involving miRNA quantification were performed as duplicate, independent experiments, and two replicate measurements were associated with each experiment.
miRNA detection and quantification
Northern blot. Ten to twenty micrograms of total RNA extracted from HEK 293T cells using Trizol (Invitrogen) was resolved on a 15% SequaGel (National Diagnostics) and transferred onto a GeneScreen Plus Hybridization Transfer Membrane (PerkinElmer) in 0.5× TBE buffer. The membrane was hybridized with a (γ-32P)-labeled miRNA-specific antisense LNA probe (Exiqon) overnight at 42°C in ULTRAhyb Ultrasensitive Hybridization Buffer (Ambion). The membrane was washed sequentially with 2× SSC containing 0.1% SDS three times and 0.1× SSC containing 0.1% SDS three times, 10 min each time at room temperature, and then exposed to Kodak film overnight at −80°C. The same membrane was stripped with 0.1% SDS for 5 min in a microwave oven, blocked with hybridization buffer, and then hybridized with the U6 snRNA probe (TGTGCTGCCGAAGCGAGCAC), which served as a loading control.
RT-PCR. TaqMan real-time RT-PCR detection kits were used to quantify mature miRNA levels in accordance with the manufacturer's instructions (Applied Biosystems), and 18S RNA was used for normalization.
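For readers less familiar with TaqMan read-outs, the sketch below illustrates one common way such data are converted to normalized fold changes, the 2^(−ΔΔCt) method; the specific formula and the Ct values are our assumptions for illustration, as the text states only that 18S RNA was used for normalization.

```python
def relative_expression(ct_mirna, ct_18s, ct_mirna_ref, ct_18s_ref):
    """Fold change of a miRNA vs. a reference condition by 2^(-ddCt)."""
    d_ct_sample = ct_mirna - ct_18s        # normalize to 18S within the sample
    d_ct_ref = ct_mirna_ref - ct_18s_ref   # normalize within the reference
    return 2.0 ** (-(d_ct_sample - d_ct_ref))

# Hypothetical Ct values: amplicon-transfected vs. empty-vector (reference) cells.
fold = relative_expression(ct_mirna=21.0, ct_18s=12.0,
                           ct_mirna_ref=28.0, ct_18s_ref=12.2)
print(f"{fold:.0f}-fold increase in mature miRNA")  # ~111-fold for these inputs
```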
RNA cloning. The miRCat-33 kit (IDT) was used to clone the mutated mature miRNA-143 from HEK 293T cells transfected with the miRNA-143 mutant amplicon. Briefly, total RNA was extracted from cells 48 h after transfection using Trizol (Invitrogen). Fifty micrograms of total RNA was loaded on a 12% denaturing PAGE gel (7 M urea) run in 1× TBE at 125 V for 1.5 h. Subsequently, the small RNA fraction was gel-excised and ligated to a 3′ linker, re-purified and finally ligated to a 5′ linker. The ligated product was amplified, subcloned into the TOPO TA Cloning vector (Invitrogen) and then sequenced.
Luciferase assays
Cells were seeded into 96-well plates 1 day before transfection. One hundred nanograms of psiCHECK reporter and 25 ng Amp miRNA-XX or 100 ng plasmid were transiently transfected into Huh-7 or HEK 293T cells. After 48 h, luciferase activity was measured using the Dual-Glo luciferase assay kit (Promega). Renilla luciferase activity was normalized to firefly luciferase activity. All sensor assays were performed as three independent experiments. For each sensor experiment, a control employing an empty vector construct (Ø) was used; the corrected luciferase values were averaged, arbitrarily set to a value of '1' and served as the reference for comparison of fold-differences in experimental values.

5′ rapid amplification of cDNA ends

5′ rapid amplification of cDNA ends (RACE) was performed to identify the primary transcripts of Amp miRNA-143-mut using the protocol suggested by the manufacturer (Invitrogen). Briefly, 1 µg total RNA was extracted from transfected HEK 293T cells and converted into cDNA using a miRNA-143-specific reverse primer (5′-accaggggaacttgtgtagag-3′), then purified on a S.N.A.P. column. After addition of an oligo-dC tail to the 3′ end of the purified cDNA with TdT (terminal deoxynucleotidyl transferase), PCR was performed with a kit-provided forward primer and a nested miRNA-143-specific primer (5′-cacaagtggctgatagtatggagtc-3′). Using a 1:100 dilution of the primary PCR product as template, a second PCR was carried out with the same forward primer and another nested miRNA-143-specific primer (5′-acttaccacttccaggctgatg-3′). The products were subcloned into the TOPO TA Cloning vector (Invitrogen) and then sequenced.
Statistical analysis
Mixed linear models (proc mixed, SAS version 9.2, SAS Institute, Cary, NC, USA) were used to test for differences among the various experimental conditions while modeling replicates from each sample as having correlated error. Nucleic acid species copy numbers were log-transformed (base 2) to stabilize variance and thereby ensure that fold changes could be modeled as additive. If conditions still had unequal variances despite log transformation, the variance was modeled for each condition individually within the model (heterogeneous variance model). The choice between homogeneous and heterogeneous variance models was based on which had the lower Bayesian information criterion (BIC) value. The denominator degrees of freedom for the models were based on the Satterthwaite method (10). Parameters were estimated by restricted (residual) maximum likelihood (REML). The P-values for follow-up comparisons were adjusted for alpha inflation from multiple comparisons using the Bonferroni method to maintain a family-wise alpha of 0.05.
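As an illustration of this analysis pipeline outside SAS, the following Python sketch fits a random-intercept mixed model to log2-transformed copy numbers and applies a Bonferroni correction. It is our re-implementation under simplifying assumptions (synthetic data, invented column names), not the authors' code, and it omits the heterogeneous-variance and Satterthwaite options of proc mixed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Synthetic example: 3 conditions x 4 samples x 2 replicate measurements.
rng = np.random.default_rng(0)
rows = [{"condition": c, "sample": f"{c}_s{s}",
         "copies": rng.lognormal(mean={"empty": 8, "ampA": 12, "ampB": 11}[c],
                                 sigma=0.3)}
        for c in ("empty", "ampA", "ampB") for s in range(4) for _ in range(2)]
df = pd.DataFrame(rows)
df["log2_copies"] = np.log2(df["copies"])  # log2 makes fold changes additive

# A random intercept per sample treats replicate measurements as correlated.
fit = smf.mixedlm("log2_copies ~ condition", df, groups=df["sample"]).fit(reml=True)
print(fit.summary())

# Bonferroni adjustment of the condition contrasts (family-wise alpha = 0.05).
idx = [name for name in fit.pvalues.index if name.startswith("condition")]
reject, p_adj, _, _ = multipletests(fit.pvalues[idx], alpha=0.05, method="bonferroni")
print(dict(zip(idx, p_adj)))
```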
Transcription of miRNA-143 in the absence of an exogenous promoter
Originally, we set out to study miRNA-143. Our interest in this miRNA stemmed from its low expression in a number of cancers and a desire to better understand its transcriptional control (Gao, J.S., manuscript in preparation) (11). For our experiments, we created a panel of expression vectors by employing a previously described strategy whereby a 316-nt sequence fragment that included the full-length precursor of miRNA-143 was placed under the control of an RNAP II or III (CMV or U6) promoter in a plasmid (12). In parallel, we also created PCR amplicons encompassing the RNAP II (CMV) or RNAP III (U6) promoter and the 316-nt fragment mentioned above (Figure 1a). As expected, both constructs led to mature miRNA-143 production upon transfection into HEK 293T cells as lipoplexes. As a control for the above experiments, we introduced the 316-nt-long fragment as a PCR amplicon generated from human genomic DNA (Amp miRNA-143). To our surprise, this amplicon, which contained no known exogenous promoter element, led to the production of mature miRNA-143, as assessed by both northern blot and real-time PCR, albeit at levels 100-fold lower than the RNAP II/III-driven constructs (Figure 1b and c).
In order to ensure that Amp miRNA-143 was indeed contributing to the increased levels of mature miRNA-143, we sought to distinguish the product derived from Amp miRNA-143 from any endogenous miRNA-143. To this end, we introduced a single-nucleotide mutation (T→C) at position 16 from the 5′ end of the mature miRNA, a location that did not affect miRNA processing or production, as confirmed by northern blot (Supplementary Figure S1a and b). RNA was isolated from cells transfected with the mutated, promoter-less construct, and small RNA species were gel-excised, ligated to 5′ and 3′ linker molecules, PCR-amplified and introduced into cloning vectors. DNA sequencing of clones confirmed the production of the designed, 'mutated' variant of mature miRNA-143 (Supplementary Figure S1c).
While northern blots revealed RNA production from our amplicons, we next sought to identify the sequence of the primary transcript of Amp miRNA-143. Thus, we used RACE to analyze the 5′-end of the primary transcript in cells transfected with the amplicon. Importantly, for these experiments we again worked with the mutated amplicon, allowing us to differentiate RACE transcripts corresponding to endogenous miRNA-143 production from transcripts derived from Amp miRNA-143-MUT. These experiments revealed a complicated picture whereby five transcripts were detected (Figure 1d). As expected, one of the five transcripts was identical to the known miRNA-143 precursor, whereas the others were slightly longer. Presently, it remains unclear whether the shorter RNA molecules detected in amplicon-transfected cells represent bona fide primary transcript variants or result from subsequent processing of the initial longer transcript. Nevertheless, the RACE results contributed further direct evidence that the miRNA amplicon was indeed transcribed into RNA.
Determining the minimal length of Amp miRNA-143 that preserves its ability to self-transcribe

The amplicon used in the above-described experiments was 316 nt in length, i.e. substantially larger than the corresponding miRNA-143 precursor sequence. Thus, we investigated whether shorter amplicons also had the ability to self-transcribe and lead to the biosynthesis of mature miRNA. A series of progressively shorter, PCR-generated variants of Amp miRNA-143 with truncations at both the 5′- and 3′-ends revealed that a minimum length of 102 nt was sufficient for production of mature miRNA-143 following transfection into HEK 293T cells, as assessed by RT-PCR (Figure 2b). Of the 26 nt preceding miRNA-143* in the known miRNA-143 precursor, a minimum of 12 nt directly adjacent to miRNA-143* was required for the (truncated) amplicon to be functional.
The functionality of the shorter molecules allowed us to determine whether synthetic oligonucleotides encoding miRNA-143 could also lead to mature miRNA synthesis upon cellular introduction. Oligonucleotides encoding truncated construct B (116 nt in length) were commercially synthesized, annealed and transfected into HEK 293T cells. Surprisingly, Oligo miRNA-143-B led to mature miRNA-143 levels comparable to those associated with the PCR-generated Amp miRNA-143-B (Figure 2c). All PCR-generated amplicons used human genomic DNA as a template and were gel-excised and purified prior to transfection. To eliminate the theoretical possibility that bacterial proteins in the PCR reaction were transferred along with the resulting amplicons, we subjected Amp miRNA-143-B to proteinase K digestion prior to transfection. As seen in Figure 2c, such treatment had no effect upon its biosynthetic capacity. Quantification of mature miRNA-143 in additional cell types (Huh-7, HCT116, PBMC and HeLa) that had been treated with Amp miRNA-143 readily confirmed its biosynthetic capacity in these cell types as well (Figure 3a). In each case, Amp miRNA-143 introduction was associated with a statistically significant increase in mature miRNA-143 levels compared to cells that had been transfected with an empty vector.
To determine whether atypical transcription was limited to miRNA-143, we generated and tested additional PCR amplicons comprising miRNAs with relatively low endogenous expression in HEK 293T cells: Amp miRNA-363 , Amp miRNA-145 and Amp miRNA-517a . In each case, the respective amplicons produced the corresponding mature miRNA in a statistically significant fashion just like Amp miRNA-143 as assessed by RT-PCR (Figure 3b).
We next created a PCR amplicon for miRNA-125a, which is expressed at relatively 'high' levels in HEK 293T cells. Here, RT-PCR confirmed that, as expected, amplicon transfection did not lead to any appreciable increase in the already high endogenous mature miRNA levels. However, when we introduced Amp miRNA-125a into Huh-7 cells that are characterized by low endogenous levels of miRNA-125a, we observed a significant (>1000-fold) increase in its expression (Figure 3c). We also made a similar observation for the endogenously abundant, liver-specific miRNA-122. Indeed, introduction of Amp miRNA-122 into hepatic cell lines did not lead to any appreciable increase in the levels of mature miRNA-122. However, introduction into HEK 293T cells again led to a >1000-fold increase in the measured levels of mature miRNA-122 (Figure 3c).
Given that Amp miRNA-143 exhibited transcriptional ability, we reasoned that we might be able to exploit the phenomenon to allow the transcription, in tandem, of other mature miRNAs. With that in mind, we introduced the precursor sequence of miRNA-125a at the 3′-end of the original Amp miRNA-143 and transfected the resulting construct into Huh-7 cells. Cellular introduction of this chimeric amplicon resulted in the synthesis of mature miRNA-143 as well as mature miRNA-125a at nearly equivalent levels (Figure 3d). Finally, northern blot was used to verify mature miRNA production in cells (Supplementary Figure S2).

Figure 3 caption (fragment): All three miRNA constructs produced statistically significant levels of mature product compared to Ø (each adjusted P < 0.001); however, none of the three was significantly different from the others in terms of biosynthetic potential.
Functionality of Amp miRNA-XX
While quantitative assays revealed the biosynthetic activity of Amp miRNA-XX, we next sought to assess functional capacity. We created miRNA sensors in which the antisense sequence of a particular mature miRNA was placed in the 3′-UTR of the gene encoding Renilla luciferase. In the presence of mature miRNA, levels of Renilla activity decrease through the classical RNAi pathway. Co-transfection of Amp miRNA-143 with its sensor led to a statistically significant reduction in Renilla activity, albeit at levels below those attained with the CMV-driven miRNA-143 expression vector (Figure 4a). Similar experiments employing the shorter variants Amp miRNA-143-A/B/C/D/E/F revealed their functional activity as well (Figure 4b). Finally, we used the sensor assay to examine the activity of nine additional Amp miRNA-XX, all of which were expressed at relatively low levels in HEK 293T cells. In each case, transfection of Amp miRNA-XX with its sensor led to a statistically significant reduction in Renilla activity (Figure 4c). Next, we asked whether cellular protein levels could be downregulated by exogenously introduced Amp miRNA-XX. Previous work has identified ERK5 as a target of miRNA-143 (13). Treatment of HEK 293T cells with Amp miRNA-143 led to a relative reduction of ERK5 protein levels, as seen in Figure 4d.

Figure 4 caption: Luciferase-based miRNA sensor assays were used to compare the functional activity of various miRNA expression units. We created reporter constructs in which the exact target sequence of a given miRNA was introduced into the 3′-UTR of the gene encoding Renilla luciferase, and quantified the relative reduction in luciferase levels compared to control experiments involving the same sensor but an empty vector (Ø). An internal firefly luciferase gene served to normalize the data. All sensor assays were performed as three independent experiments, and data are shown as mean reduction ± SD compared to control conditions. (a) Functional activity of Amp miRNA-143, Amp CMV-miRNA-143 and Pla CMV-miRNA-143. Co-transfection of the miRNA-143 sensor and Amp miRNA-143 led to a 57% decrease in luciferase activity, a statistically significant level (P = 0.02) approaching that achieved in transfections using a CMV-driven miRNA-143 expression plasmid (81% reduction) or amplicon (70% reduction) in HEK 293T cells. (b) Functional activity of the shorter amplicons Amp miRNA-143-A/B/C/D/E/F. All constructs retained functionality as assessed by sensor assays. Shown are statistically significant (adjusted P < 0.05) reductions in normalized relative light units (RLU) in cells transfected with the amplicons (A-F) in comparison to cells transfected with an empty vector (Ø). (c) Sensor assays were performed for an additional nine Amp miRNA-XX. In each case, sensor activity was decreased in a statistically significant manner (adjusted P < 0.05) compared to experiments involving an empty vector (Ø). (d) The ERK5 protein is a target of miRNA-143. Introduction of Amp miRNA-143 into HEK 293T cells was associated with a reduction in ERK5 protein levels as determined by western blot. An RNAP II-driven plasmid and an empty vector (Ø) served as positive and negative controls, respectively.
Of the 12 miRNA with functional Amp constructs, we noted that four (miRNA-26a-1, 107, 122 and 517a) have had previously detailed characterization of their promoter regions (14,15). Significantly, in each case the corresponding Amp miRNA-XX species did not include these promoter motifs, yet still led to mature miRNA production and function upon cellular introduction.
On the atypical transcriptional nature of Amp miRNA-143
To date, RNAP II/III promoters have been implicated in the transcriptional control of miRNA genes, with the majority of the studied genes being under the control of RNAP II (4,11,14-16). In an attempt to gain a better understanding of the transcriptional machinery involved in the processing of Amp miRNA-143, we first sought to identify which of the currently known transcriptional programs mediated endogenous miRNA-143 production in human cells.
We used chromatin immunoprecipitation (ChIP) to look for the physical presence of RNAP II on the 2.5-kb genomic region immediately upstream of miRNA-143. These experiments utilized HCT116 cells, which produce high levels of mature miRNA-143. Chromatin was immunoprecipitated with IgG against RNAP II (anti-Pol II: 8WG16), and PCR primers amplifying ~100-200-nt fragments of the putative promoter region revealed RNAP II localization at ~1 kb upstream of the pre-miRNA-143, i.e., in an area well outside of the sequence captured by Amp miRNA-143 (Figure 5a).
Next, we made use of the limited number of drugs that are available for differentiating between the RNAP II and III mammalian transcriptional programs. The mushroom toxin α-amanitin is known to adversely affect RNAP II/III activity in a dose-dependent manner, with RNAP II activity being inhibited at far lower drug concentrations. We thus treated HEK 293T cells with α-amanitin and transfected them, in turn, with either the RNAP II/III-driven miRNA-expressing plasmids or Amp miRNA-143. In the case of the plasmids, quantification of mature miRNA-143 levels 8 h post-transfection revealed differential sensitivity to α-amanitin, with RNAP II activity being inhibited by >90% and RNAP III activity being inhibited by 40%. However, and importantly, when we transfected with Amp miRNA-143, α-amanitin had no effect on the amplicon's biosynthetic activity and the production of mature miRNA-143 (Figure 5b). We next assessed the effect of α-amanitin on endogenous miRNA-143 expression. Treatment of HCT116 cells with the drug led to a >90% reduction in pri-miRNA-143 levels compared to control cells treated with vehicle alone, as expected from our ChIP experiments that localized RNAP II to the miRNA-143 gene locus. In sum, these data raised the intriguing possibility that RNAP II was not associated with the transcription of Amp miRNA-143. To formally investigate this possibility, we created biotinylated versions of Amp miRNA-143 with and without RNAP II/III promoter sequences and incubated them in HEK 293T cell lysate. Streptavidin-coated bead-based retrieval of the molecules allowed us to localize RNAP II to Amp RNAP II-miRNA-143 but not to Amp RNAP III-miRNA-143 or Amp miRNA-143 (Figure 5c).
Next, we used actinomycin, a general and more potent inhibitor of RNA transcription that acts by intercalating between successive GC base pairs and preventing RNAP-mediated elongation of the nascent transcript (17). In this case, both the RNAP II- and RNAP III-driven plasmids were exquisitely sensitive to actinomycin, with a >97% reduction in their respective biosynthetic function. However, Amp miRNA-143 was less sensitive, exhibiting only a 50% reduction in function (Figure 5d).
The extremely compact nature of Amp miRNA-143 and its products led us to hypothesize that components of RNAP III machinery may be involved in its transcription (18). Furthermore, RNAP III has been recently implicated in the cytosolic transcription of a poly(dA-dT) template into 5 0 -ppp RNA (19). Thus, we treated cells with short interfering RNA (siRNA) targeting the largest subunit of RNAP III (POLR3A) and measured the effect on the output of our RNAP III-driven plasmid and Amp miRNA-143 , respectively (20). As seen in Figure 5e, siRNA-mediated silencing of POLR3A inhibited the RNAP III-driven plasmid but had no effect on Amp miRNA-143 in two independent experiments.
Further support for the lack of involvement of RNAP III in Amp miRNA-XX transcription derived from more detailed experiments involving miRNA-517a. As mentioned, Amp miRNA-517a was associated with mature miRNA production after introduction into HEK 293T cells. The genetic organization of miRNA-517a is somewhat unique in that it is a substrate for RNAP III, with a concise organization including the traditional A and B boxes required for RNAP III docking (Figure 6a) (4). Importantly, our initial Amp miRNA-517a was 262 nt in length and did not contain either of the known A or B boxes located upstream of its pre-miRNA sequence. We created a series of progressively longer variants of the original Amp miRNA-517a, the longest of which contained both the A and B boxes. Interestingly, segmental truncations of Amp miRNA-517a, with partial or complete removal of the A and B box motifs, also led to detectable levels of functional mature miRNA-517a, albeit at levels far lower than the two amplicon variants (#3 and #4) that included both boxes (Figure 6b and c). Not surprisingly, cells treated with a siRNA targeting POLR3A exhibited a severely compromised ability to transcribe the miRNA-517a variants that contained either or both of the A and B boxes. However, in the presence of the siRNA targeting POLR3A, the same cells fully retained their ability to transcribe the shorter Amp miRNA-517a that lacked the A and B boxes (Figure 6d). Thus, although wild-type miRNA-517a is under the control of RNAP III through the A and B boxes, an amplicon that does not include the A and B boxes appears to operate upon transfection in a manner similar to the other amplicons we described earlier.

Figure 5. (a) We assigned RNAP II promoter occupancy to the miRNA-143 gene in HCT116 cells by chromatin immunoprecipitation using antibodies specific to RNAP II. Sheared chromatin from HCT116 cells that had been cross-linked with formaldehyde was immunoprecipitated with anti-RNAP II antibodies; cross-links were removed and the DNA was purified. The promoter region of miRNA-143 was arbitrarily deconstructed into six segments of ~100-200 nt each, and specific PCR primers were designed to amplify each component, revealing relative enrichment in sector D, corresponding to nucleotide position -978 with respect to the pre-miRNA start site. All values are relative to nonimmune IgG and normalized to an intergenic control region. Antibodies to GAPDH served as a positive control and revealed ~30-fold enrichment in the respective promoter region. The corresponding location of Amp miRNA-143 is shown and lies outside the region of RNAP II enrichment. (b) We used various inhibitors of RNAP to test their effect on the biosynthetic activity of Amp miRNA-143 as well as of RNAP II/III-driven expression constructs. Cells were treated with α-amanitin (50 µg/ml) or actinomycin (2 µg/ml) and transfected with the various constructs prior to harvesting and mature miRNA-143 quantification by real-time PCR. Data are shown as fold reduction in mature miRNA (continued).
Besides RNAP II/III, two other polymerases are operational in mammalian cells: RNAP I and the more recently described single polypeptide nuclear RNA polymerase (spRNAP-IV) (20). RNAi-based depletion of RNAP I or spRNAP-IV had no effect upon the biosynthetic activity of Amp miRNA-XX (Supplementary Figure S3).
Effect of Amp miRNA-XX on endogenous miRNA expression
Numerous studies have demonstrated that ectopic expression of small RNA molecules may perturb normal cellular function through multiple mechanisms. For example, ectopic expression of shRNA appears to deregulate endogenous miRNA expression in vitro and, perhaps more importantly, in vivo (21,22). To determine whether Amp miRNA-XX species impacted endogenous miRNA expression, we introduced Amp miRNA-143 into HEK 293T cells and quantified the levels of miRNA-let7a, 125a and 125b, three species that are relatively abundantly expressed in HEK 293T cells. As seen in Figure 7, there was no significant difference in the endogenous expression of these three miRNA species in cells transfected with Amp miRNA-143 compared to cells transfected with an empty vector or an RNAP II-driven miRNA-143 expression plasmid.

Figure 6 (panels b-d). (b) The relative amount of mature miRNA-517a produced upon cellular introduction of the various constructs was quantified by real-time PCR and compared to cells transfected with an empty vector (Ø). While the A and B box-inclusive amplicons produced the highest levels of mature miRNA, the 206-bp amplicon without the A/B boxes retained the ability to produce mature miRNA. Experiments were performed in duplicate, and constructs #1-#4 all produced significantly higher levels of mature product than Ø (P < 0.05). (c) The functionality of Amp miRNA-517a was assessed by sensor assays. While the RNAP III (U6)-driven expression vector retained the most potency (~80%), shorter amplicons with and without the A/B boxes also reduced Renilla relative light units by ~30-40%. All constructs significantly inhibited sensor activity (P < 0.005). (d) Effect of silencing RNAP III on the Amp miRNA-517a amplicons. Cells were treated with anti-POLR3A or an irrelevant siRNA and transfected with the amplicons prior to harvesting and quantifying mature miRNA-517a levels. RNAP III knockdown had no effect on Amp miRNA-517a biosynthetic activity (P = 0.17), yet significantly decreased the synthetic capacity of both the RNAP III-driven expression vector (P = 0.0006) and the amplicons harboring the A/B boxes (P = 0.0001).
Engineered miRNA-like molecules do not participate in atypical transcription
Our experiments suggested that the miRNA species examined above possessed either sequence and/or structural determinants that enabled the recruitment of an unconventional transcriptional machinery. To gain mechanistic insight into the perplexing nature of atypical transcription, we first compared the nucleotide sequences of the various miRNA associated with functional amplicons and could not identify a conserved sequence motif. All amplicons, however, shared two major attributes upon transcription: their hairpin-like RNA structure and subsequent stereotypic processing by RNase III enzymes into shorter fragments of mature and (*) miRNA, and possibly other 21-22-nt fragments such as the recently characterized miRNA-offset RNA (23,24). We reasoned that as long as the hairpin structure and Dicer/Drosha processing sites were maintained, the primary sequence of any given Amp miRNA-XX species could be altered yet remain functional. Previously, we and others have demonstrated that the miRNA backbone can be modified to include sequences of a given siRNA and thereby serve as an efficient vector of delivery upon incorporation into RNAP III-driven expression cassettes (25). Thus, we introduced sequences encoding a previously validated siRNA targeting HIV-1 tat into the backbone of miRNA-143 (Figure 8a). Unexpectedly, when miRNA-143/tat was PCR-amplified and introduced as a promoter-less Amp, we could not detect tat siRNA expression. However, introduction of an RNAP III-driven expression cassette harboring miRNA-143/tat into HEK 293T cells led to tat shRNA detection, as assessed by northern blot (Figure 8b). Thus, the sequence replacement did not prevent the molecule's entry into and processing by the cellular RNAi machinery upon placement of the construct under the direction of a strong promoter. Similar negative results were encountered when tat siRNA sequences were embedded in miRNA-30a (data not shown). The only apparent distinction between the functional Amp miRNA-XX and the non-functional Amp miRNA-143/tat was the replacement of the mature and (*) miRNA sequences with those encoding the tat siRNA guide and passenger strands, respectively. It was entirely unclear to us why engineered miRNA/tat molecules failed to undergo atypical transcription when delivered as Amp. We next modified the miRNA-143 backbone such that the mature and (*) sequences were replaced with those encoding miRNA-145. Introduction of this hybrid molecule led to the production of mature, functional miRNA-145 in a statistically significant manner (Figure 8c and d). Thus, alteration of the miRNA backbone in and of itself does not appear to impact the biosynthetic potential of Amp as much as the choice of cargo.
DISCUSSION
In summary, we report our findings of atypical self-transcription upon cellular introduction for several DNA segments, collectively referred to here as Amp miRNA-XX, which contain the precursors for known human miRNA and are devoid of any known exogenous promoters. The majority of the miRNA (8/12) that we studied were intergenic and thus expected to be transcribed with their host genes (15). A recent promoter analysis localized the transcription start sites of miRNA-26a, 30a, 107 and 122 several thousand nucleotides from the beginning of their respective pre-sequences. Our experiments, which included these and other miRNA, however, indicated that a cryptic transcriptional element was present in the pri/pre-sequences of the intronic and intergenic miRNA genes examined. Our analysis also revealed altered sensitivity of Amp miRNA-XX to pharmacologic and/or genetic knockdown of RNAP I/II/III, thus raising the possibility of an atypical transcriptional program involving pri-miRNA gene fragments. The absence of any apparent shared sequence features among the different amplicons that we employed suggests the formation of a likely novel scaffold that is able to recruit the necessary factors required for transcription. The precise components of this scaffold await identification, but replacement of the mature and (*) sequences of a bona fide miRNA with those encoding an siRNA leads to a non-functional molecule. These results raise the intriguing possibility that the mature and (*) sequences are perhaps themselves involved in enabling miRNA fragment transcription, but this conjecture clearly needs rigorous investigation.
From the experiments described earlier, we conclude that at least some of the miRNA precursors currently known in the literature have the ability to self-transcribe their own sequence fragments, and can do so in a number of different human cell types. Transcription efficiency is lower than that of plasmid systems or PCR amplicons employing RNAP II/III promoters; consequently, the phenomenon can be observed readily only against a backdrop of minimal expression of the studied miRNA.
To date, means of expressing miRNA in cells include their delivery in plasmid or viral vectors as well as direct delivery as synthetic molecules. Our findings suggest that it might be possible to engineer the expression of at least a handful of miRNAs simply by introducing the corresponding miRNA precursor fragments, in the form of synthetic or PCR-generated DNA automatons devoid of traditional promoters, into targeted cells in vitro or entire organisms in vivo.
Unsupervised Domain Adaptive Re-Identification: Theory and Practice
We study the problem of unsupervised domain adaptive re-identification (re-ID), which is an active topic in computer vision but lacks a theoretical foundation. We first extend existing unsupervised domain adaptive classification theories to re-ID tasks. Concretely, we introduce some assumptions on the extracted feature space and then derive several loss functions guided by these assumptions. To optimize them, a novel self-training scheme for unsupervised domain adaptive re-ID tasks is proposed. It iteratively makes guesses for unlabeled target data based on an encoder and trains the encoder based on the guessed labels. Extensive experiments on unsupervised domain adaptive person re-ID and vehicle re-ID tasks, with comparisons to state-of-the-art methods, confirm the effectiveness of the proposed theories and self-training framework. Our code is available at https://github.com/LcDog/DomainAdaptiveReID.
Introduction
To re-identify a particular is to identify it as (numerically) the same particular as one encountered on a previous occasion [24]. Image/video re-identification (re-ID) is a fundamental problem in computer vision, and re-ID techniques serve as an indispensable tool for numerous real-life applications, for instance, person re-ID for public safety [34] and travel time measurement via vehicle re-ID [5]. The key component of re-ID tasks is the notion of identity, which makes re-ID tasks quite different from traditional classification tasks in the following ways. Firstly, determining the label involves two samples, i.e., there is no label for an individual sample. Secondly, in re-ID tasks the samples in the test set belong to previously unseen identities, while in classification tasks the test samples all fall into known classes. Take the person re-ID task as an example: our target is to spot a person of interest in an image set, and this person does not belong to a specific class and is not present in the training set.
In many practical situations, we face the problem that the training data and testing data are in different domains. Returning to the person re-ID example, when a new camera is placed in a new environment, a new domain is added whose data are too costly and impractical to annotate; a serviceable re-ID model should therefore achieve satisfactory accuracy on such unlabeled data. Unsupervised domain adaptation means learning a model for the target domain given both a fully annotated source dataset and an unlabeled target dataset. Existing algorithms for unsupervised domain adaptive re-ID tasks typically learn domain-invariant representations or generate data for the target domain through newly designed networks, which are practical solutions but lack theoretical support [14,30,6]. Meanwhile, current theoretical analyses of unsupervised domain adaptation are concerned only with classification tasks [2,3,17] and are not suitable for re-ID tasks. A theoretical guarantee for the domain adaptive re-ID problem is needed.
In this paper, we first theoretically analyze unsupervised domain adaptive re-ID tasks based on [3], in which three assumptions are introduced for classification. In [3] it is assumed that the source domain and the target domain share the same label space. However, in re-ID tasks the notion of label is defined for pairwise data, and the label indicates whether a data pair belongs to the same ID or not. We therefore adapt the three assumptions to the input space of pairwise data. Moreover, instead of imposing the assumptions on the raw data as in [3], we assume a resemblance between the feature spaces of the two domains. The first assumption is that the criteria for classifying feature pairs are the same between the two domains, which is referred to as the covariate shift assumption. The second is Separately Probabilistic Lipschitzness, indicating that the feature pairs can be divided into clusters. The last assumption is the weight ratio, concerning the probability that a repeated feature exists among all the features from the two domains. Based on these three assumptions, we show the learnability of unsupervised domain adaptive re-ID tasks. Moreover, since our guarantee is built on well-extracted features from real images, the encoder, i.e., the feature extractor, is trained via a novel self-training framework, a scheme originally proposed for NLP tasks [19,20]. Concretely, we iteratively refine the encoder by making guesses on the unlabeled target domain and then training the encoder with these samples. In light of the mentioned assumptions, we propose several loss functions on the encoder and on the samples with guessed labels, and the problem of selecting which guessed-label samples to train on is cast as minimizing the proposed loss functions. For the Separately Probabilistic Lipschitzness assumption, we wish to minimize the intra-cluster and inter-cluster losses; the sample-selection problem then becomes a data clustering problem, and minimizing the loss functions is transformed into finding a distance metric for the data. Another metric is designed for the weight ratio. After combining the two metrics, we obtain a distance evaluating the confidence of the guessed labels. Finally, the DBSCAN clustering method [7] is employed to generate data clusters according to a threshold on this distance. With pseudo-labels on the selected data clusters from the target domain, the encoder is trained with the triplet loss [32], which is effective for re-ID tasks.
We carry out experiments on diverse re-ID tasks, which demonstrate the superiority of our framework. First, the well-studied person re-ID task is tested, and we show adaptation results between two large-scale datasets, Market-1501 [33] and DukeMTMC-reID [25]. We then evaluate our algorithm on the vehicle re-ID task, in which the larger datasets VeRi-776 [16] and PKU-VehicleID [15] are involved. To sum up, the structure of our paper is shown in Figure 2 and our contributions are as follows:
• We introduce theoretical guarantees for unsupervised domain adaptive re-ID based on [3]. A learnability result is shown under three assumptions concerning the feature space. To the best of our knowledge, this paper is the first theoretical analysis of domain adaptive re-ID tasks.
• We theoretically turn the goal of satisfying the assumptions into tractable loss functions on the encoder network and data samples.
• A self-training scheme is proposed to iteratively minimize the loss functions. Our framework is applicable to all re-ID tasks, and its effectiveness is verified on large-scale datasets for diverse re-ID tasks.
Related work
Unsupervised domain adaptation has been widely studied for decades, and the algorithms are divided into four categories in a survey [18]. Using the notions of the survey, our proposed method can be viewed as a combination of feature representation and self-training. Recently, unsupervised domain adaptation has also been widely studied for the person re-ID task.
Unsupervised domain adaptation and feature representation. Feature-representation-based methods try to find a latent feature space shared between domains. In [26], the authors minimize a distance between the means of the two domains. In a more general manner, [22] and [4] try to find a feature space in which the source and target distributions are similar, employing the Maximum Mean Discrepancy (MMD) statistic. Also, [10] utilizes features that cannot discriminate between source and target domains. Our method shares with these methods the intuition that some features generalize across the source and target domains. However, unlike these methods, our way of approximating this intuition is iterative, and we do not directly optimize the distribution of target-domain features.
Unsupervised domain adaptation and self-training. Self-training methods make guesses on the target domain and iteratively refine them, and are closely related to the Expectation Maximization (EM) algorithm [21]. In [27], the weight on the target data is increased at each iteration, which effectively alters the relative contributions of the source and target domains. A more similar work is [1], in which the model is initially trained on the source domain and the top-1 recognition hypotheses on the target domain are then used for adapting the language model. In our algorithm, we do not guess the labels directly, since different re-ID datasets have entirely different labels (identities); instead, we perform clustering on the data.
Unsupervised domain adaptive person re-ID. Due to the rapid development of person re-ID techniques, several useful unsupervised domain adaptive person re-ID methods have been proposed. [23] adopts a multi-task dictionary learning scheme to learn a view-invariant representation. Besides, generative models are also applied to domain adaptation in [6,31]. Wang et al. [30] design a network that learns an attribute-semantic and identity-discriminative feature representation. Similarly, Li et al. [14] leverage information across datasets and derive domain-invariant features via an adaptation network and a re-ID network. Though all of the above methods address the adaptation problem, they are not supported by a theoretical framework, and their generalization abilities have not been verified on other re-ID tasks. Fan et al. [8] propose a progressive unsupervised learning method consisting of clustering and fine-tuning the network, which is similar to our self-training scheme. However, they focus only on unsupervised learning, not unsupervised domain adaptation. In addition, their iteration framework is not guided by specific assumptions and thus has no theoretically derived loss functions as ours does.
Notations and Basic Definitions
In classification tasks, let X ⊆ R^d be the input space and Y ⊂ R be the output space; each sample from the input space is denoted by a bold lower-case letter x ∈ X. We denote the source domain by S and the target domain by T, both of which are probability distributions over the input space X. Moreover, the real label of each sample is given by a labeling function l : X → Y. However, the above notation cannot be used directly to analyze re-ID tasks, because there is no shared identity between the two domains, i.e., S and T do not have the same output (label) space. Fortunately, for re-ID tasks, by treating re-ID as classifying pairs of data as same or different, we are still able to utilize this notation and former results with some simple reformulations.
Specifically, in re-ID tasks we have a training set consisting of data pairs, which means that the input space is Z × Z with Z ⊆ R^n, and the output space is Y = {0, 1}, where 1 means the identities in the pair are the same and 0 means they differ. Observe that in re-ID tasks the two domains do share some overlapping cues, such as the color of clothes or whether a person wears a backpack in person re-ID. That is, we can encode the original data from the two domains with feature (or latent) variables, and it is then reasonable to assume that the distributions of features from the two domains satisfy certain criteria, just as in the assumptions used in [3] for classification tasks. Formally, we denote the feature encoder by x(·) with x : Z → R^d; the labeling function is then l : X × X → {0, 1}, where X ⊆ R^d is the extracted feature space. For simplicity, we write l(x(z_1), x(z_2)) = l(x_1, x_2), where z_1 and z_2 are two raw data points. Note that the labeling function is symmetric, i.e., l(x_1, x_2) = l(x_2, x_1).
Assumptions and DA-Learnability
In this section, we first introduce some assumptions reflecting how the source domain interacts with the target domain. Then, under these assumptions, we show the learnability of unsupervised domain adaptive re-ID.
The first assumption is covariate shift, which means that the criteria for classifying data pairs are the same in the source and target domains. In other words, for classification tasks we have l_S(x) = l_T(x), and we can define covariate shift for re-ID tasks analogously on the extracted feature space.

Definition 1 (Covariate Shift). We say that the source and target distributions satisfy the covariate shift assumption if they have the same labeling function, i.e., if l_S(x_1, x_2) = l_T(x_1, x_2) for all feature pairs (x_1, x_2).

Another assumption is inspired by "Probabilistic Lipschitzness", which was originally proposed for semi-supervised learning in [28] and then investigated with application to domain adaptation tasks in [3]. This assumption captures the intuition that, in a classification task, the data can be divided into label-homogeneous clusters separated by low-density regions. However, in re-ID tasks the labeling function is a multivariable function, so the original Probabilistic Lipschitzness is not applicable. The corresponding intuition for re-ID tasks is that similar pairs can form a cluster: the similar data of one instance can be grouped into a cluster that is separated from the rest of the data space by a low-density gap. Formalizing this intuition yields our second assumption, Separately Probabilistic Lipschitzness (Definition 2), referred to below as the φ-SPL property with respect to a function φ.

To ensure the learnability of the domain adaptation task, we still need a critical assumption concerning how much overlap there is between the source and target domains. We again follow the assumption used in [3] on the source and target distributions, which is a relaxation of the pointwise density ratio between the two distributions.

Definition 3 (Weight Ratio). Let B ⊆ 2^X be a collection of subsets of the input space X measurable with respect to both S and T. For some η > 0, we define the η-weight ratio of the source and target distributions with respect to B as

C_{B,η}(S, T) = inf_{b ∈ B : T(b) ≥ η} S(b) / T(b).

Further, we define the weight ratio of the source and target distributions with respect to B as

C_B(S, T) = inf_{b ∈ B : T(b) > 0} S(b) / T(b).

Following the notation in [3], we also assume that our domain is the unit cube X = [0, 1]^d and let B denote the set of axis-aligned rectangles in [0, 1]^d. For our re-ID tasks, the risk of a classifier h on the target domain is

R_T(h) = P_{(x_1, x_2) ∼ T×T} [ h(x_1, x_2) ≠ l(x_1, x_2) ].

Let the Nearest Neighbor classifier be h_NN; then the following theorem implies the learnability of domain adaptive re-ID, and its proof is included in the supplemental materials.
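As a toy illustration of Definition 3 (the numbers below are ours, not the paper's), consider two measurable boxes and compare the η-restricted and unrestricted ratios:

```latex
% Illustrative weight-ratio computation (hypothetical values, not from the paper).
% Let b_1, b_2 \in B with T(b_1)=0.4,\; S(b_1)=0.2 and T(b_2)=0.05,\; S(b_2)=0.001.
\[
  C_{B,\,0.1}(S,T) = \inf_{b\,:\,T(b)\ge 0.1} \frac{S(b)}{T(b)} = \frac{0.2}{0.4} = 0.5,
  \qquad
  C_{B}(S,T) \le \frac{S(b_2)}{T(b_2)} = \frac{0.001}{0.05} = 0.02 .
\]
```

The η-restriction thus discards rare regions of the target domain, so a bound of the form C_{B,η}(S, T) ≥ C is much weaker, and easier to satisfy, than its unrestricted counterpart.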
Reinforcing the Assumptions
In the previous section, we showed that, under some assumptions on the extracted feature space, unsupervised domain adaptation is learnable. We are thus concerned with how to train a feature extractor, i.e., an encoder, satisfying the mentioned assumptions. Briefly speaking, we first derive several loss functions according to the assumptions and then iteratively train the encoder to minimize these loss functions via a self-training framework.
Self-training framework. Assume that we have an encoder x and a set of samples D with guessed labels l on the target domain, and that the loss function is L(x, D, l). In self-training, the current encoder x^(i) is first used to extract features from all available unlabeled samples, and the target is to minimize the loss by selecting samples, that is, min_{D,l} L(x^(i), D, l). In the next round, with these selected samples, the encoder is updated by solving the minimization problem min_x L(x, D^(i), l^(i)).
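A minimal sketch of this alternation in Python; `extract`, `select`, and `train` are caller-supplied placeholders (our names, not the authors') standing in for the feature extractor, the clustering-based sample selection, and the triplet-loss update described below:

```python
def self_train(extract, select, train, source_data, target_data, n_iters=20):
    """Alternate sample selection and encoder updates (Algorithm 1 skeleton).

    extract(data) -> feature array for all samples in `data`
    select(t_feats, s_feats) -> pseudo-labels for target samples
                                (low-confidence samples receive no label)
    train(data, pseudo) -> refined feature extractor (min over x)
    """
    for _ in range(n_iters):
        t_feats = extract(target_data)      # encode all unlabeled target samples
        s_feats = extract(source_data)
        pseudo = select(t_feats, s_feats)   # step 1: min over (D, l) for fixed encoder
        extract = train(target_data, pseudo)  # step 2: min over x on selected samples
    return extract
```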
It is worth noting that the covariate shift assumption depends only on the labeling function; thus, in this section we consider only the proposed SPL and the weight ratio.
Reinforcing the SPL
Recall that the original data are z ∈ Z and we wish to iteratively find an encoder x(·) such that the SPL property is satisfied in the feature space as far as possible. We therefore first need a definition to evaluate whether one encoder is better than another with respect to the SPL property.
Definition 4. Encoder x_a(·) is said to be more clusterable than x_b(·) with respect to a labeling function l and a distribution D over Z if there exist ε ∈ (0, 1) and λ ∈ {λ_1, λ_2} with λ_1 λ_2 < 0 such that the corresponding SPL-style probability inequality over pairs z_1, z_2 ∼ D holds. This condition differs from the original SPL (1) because the original form is too strict to be satisfied. Now we can define a loss function L(x, D, l), where D denotes a set of samples and l is the guessed labeling function. However, directly optimizing this loss is infeasible since its analytical form is unknown. To overcome this difficulty, we adopt the intra-cluster distance L_intra and the inter-cluster distance L_inter, and show through the following theorem that minimizing L_intra and L_inter is appropriate for becoming more clusterable.

Theorem 2. For two encoders x_a and x_b, a distribution D and a labeling function l, if x_a attains smaller L_intra and L_inter than x_b, then x_a is more clusterable than x_b. For the proof we refer the reader to the supplemental materials.

Here, Definition 4 and Theorem 2 describe how to evaluate an encoder with a fixed distribution D and labeling function l. Clearly, we can instead fix the encoder and rewrite the results to evaluate the samples with guessed labels; for conciseness, the details are omitted. When D and l are fixed during the iteration procedure, minimizing L_intra and L_inter is straightforward. By contrast, we have to focus more on the strategy of picking out samples with guessed labels.
Selecting samples via clustering. In spite of the similarity between L_intra and L_inter, they do not share the same strategy in the sample-selection step. For L_intra, if all the data in T are encoded with x, then for each pair (x_i, x_j) it is natural to assume that a smaller ||x_i − x_j|| implies a higher confidence that l(x_i, x_j) = 1. Likewise, a larger ||x_i − x_j|| implies a higher confidence that l(x_i, x_j) = 0. But choosing a high-confidence different pair as training data does not really improve the actual performance, because the accuracy is more sensitive to the minimal distance between different pairs, i.e., inf_{(i,j) : l(x_i, x_j) = 0} ||x_i − x_j||. So rather than directly selecting different pairs, we treat the selected samples as a series of clusters, and dissimilar pairs are selected on the basis of different clusters. That is to say, in order to minimize L_intra and L_inter simultaneously, we perform clustering on the data with guessed labels.
Distance metrics and loss functions. Up to this point we face an unsupervised clustering problem, whose outcome is largely determined by the distance metric. In other words, designing a sample-selection strategy to minimize a loss turns into designing a distance metric between samples, and a better distance should lead to lower L_intra and L_inter. It is common practice in image retrieval that contextual similarity measures [12] are more robust and therefore beneficial for a lower L_intra.
In our practice, we adopt the k-reciprocal encoding of [35] as the distance metric, which is a variation of the Jaccard distance between nearest-neighbor sets. Precisely, with an encoder x, all samples from T are encoded, and from these features a distance matrix M ∈ R^{m_t × m_t} is computed, where M_ij = ||x_i − x_j||_2 and m_t is the total number of target samples. Then M is updated by the Jaccard distance between robust sets,

d_J(x_i, x_j) = 1 − |I_i ∩ I_j| / |I_i ∪ I_j|,

where the index set I_i is the so-called robust set for x_i. I_i is determined by first choosing the mutual k nearest neighbors of the probe and then incrementally adding elements. Specifically, denote the indices of the mutual k nearest neighbors of x_i as K_k(x_i); then, for each j ∈ K_k(x_i), the mutual k/2 nearest neighbors K_{k/2}(x_j) are added to I_i when they overlap sufficiently with K_k(x_i).
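A simplified sketch of this distance in NumPy. Only the mutual-k sets are used here; the incremental expansion of the robust sets described in [35] and above is omitted for brevity, so this is an approximation rather than the exact metric:

```python
import numpy as np

def k_reciprocal_jaccard(feats, k=20):
    """Approximate k-reciprocal Jaccard distance d_J over target features."""
    m = feats.shape[0]
    # Pairwise Euclidean distance matrix M (O(m^2) memory; fine for a sketch).
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    knn = np.argsort(dist, axis=1)[:, 1:k + 1]      # k nearest neighbors, self excluded
    neighbor = np.zeros((m, m), dtype=bool)
    neighbor[np.arange(m)[:, None], knn] = True
    mutual = neighbor & neighbor.T                  # mutual k-NN sets ("robust" sets I_i)
    d_j = np.ones((m, m))
    for i in range(m):
        inter = (mutual[i] & mutual).sum(axis=1).astype(float)   # |I_i ∩ I_j| for all j
        union = (mutual[i] | mutual).sum(axis=1).astype(float)   # |I_i ∪ I_j| for all j
        d_j[i] = 1.0 - inter / np.maximum(union, 1.0)
    np.fill_diagonal(d_j, 0.0)
    return d_j
```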
Reinforcing the weight ratio
As mentioned before, the weight ratio is a crucial ingredient supporting the learnability of domain adaptation. Rather than directly defining a loss from the original weight-ratio definition, an approach similar to the SPL case is to minimize the loss

L_WR(x) = E_{z_t ∼ T} [ inf_{z_s ∼ S} ||x(z_t) − x(z_s)|| ],

where S is the source domain. The intuition here is to enhance the degree of similarity, meaning that each target feature should be close to some source feature. We denote by C_{B,η}(S, T; x) the weight ratio when using x as the encoder, where B is defined in Section 3. The following theorem demonstrates that our L_WR makes sense, and its proof is in the supplemental materials.

Theorem 3. For two encoders x_a and x_b and a distribution D, if η is a random variable whose support is a subset of R_+, then the encoder with the smaller L_WR attains the larger expected weight ratio C_{B,η}(S, T; x).

However, unlike L_inter and L_intra, it is hard to optimize x for L_WR because of the infimum. On the other hand, selecting samples is easily done by giving more confidence to samples with a smaller inf_{z_s ∼ S} ||x(z_t) − x(z_s)||. More specifically, for each x_i from T we search for its nearest neighbor in S. The confidence of each x_i is measured by

d_W(x_i) = ||x_i − N_S(x_i)||,

where N_S(x_i) denotes the nearest neighbor of x_i in the source domain S; a smaller d_W means a higher confidence. To bring d_W and d_J onto the same scale, we perform a simple normalization on d_W (dividing by its maximum over the target samples) and combine the two metrics as

d = (1 − λ) d_J + λ d_W,

where λ ∈ [0, 1] is a balancing parameter.
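A sketch of d_W and the combined distance in NumPy. Since d_W is a per-sample score while d_J is pairwise, some lift to pairs is needed; averaging the two endpoints is our assumption, as the paper leaves this detail to the implementation:

```python
import numpy as np

def combined_distance(d_j, target_feats, source_feats, lam=0.1):
    """Combine the pairwise Jaccard term d_J with the source-proximity term d_W."""
    # d_W(x_i): distance from each target feature to its nearest source feature.
    cross = np.linalg.norm(target_feats[:, None, :] - source_feats[None, :, :], axis=-1)
    d_w = cross.min(axis=1)
    d_w = d_w / d_w.max()                        # normalize onto the scale of d_J
    # Lift the per-sample score to pairs by averaging the two endpoints (assumption).
    pair_w = 0.5 * (d_w[:, None] + d_w[None, :])
    return (1.0 - lam) * d_j + lam * pair_w
```

Setting lam = 0 recovers the variant "without d_W" used in the ablation studies.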
Overall algorithm
So far, the general outline of reinforcing the assumptions has been elaborated, except for the details of the clustering method. In our framework, a good clustering method should possess the following properties: (a) it does not require the number of clusters as an input, because a cluster in fact corresponds to an identity, and the number of identities is unknown in advance; (b) it is able to avoid pairs of low confidence, that is, it allows some points to belong to no cluster; (c) it is scalable enough to incorporate our theoretically derived distance metric. We employ the clustering method DBSCAN [7], which has stood the test of time and has exactly the mentioned advantages.
Now we provide some other practical details of our domain adaptive re-ID algorithm. At the beginning, an encoder x^(0) is well trained on S and all pairwise distances are computed with Eqn. (11). Next, we describe how we set the threshold controlling whether a pair should be used for training. Intuitively, the threshold should be task-independent, since the scale of d varies across tasks. So in our method, we first sort all distances from lowest to highest and set the threshold τ to the average value of the top pN pairs, where N is the total number of possible pairs and p is a percentage. On the data with pseudo-labels, the encoder is then trained with the triplet loss [32]. Our whole framework is summarized in Algorithm 1.
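A minimal sketch of this threshold rule and the clustering step in Python, assuming scikit-learn's DBSCAN with a precomputed distance matrix (our choice of API; the paper does not tie itself to a particular implementation):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_targets(d, p=1.6e-3, min_samples=4):
    """Cluster target samples given the combined pairwise distance matrix d.

    tau is the mean of the smallest p*N pairwise distances, where
    N = m(m-1)/2 is the number of possible pairs; min_samples plays
    the role of N_1 in Algorithm 1.
    """
    m = d.shape[0]
    pair_dists = d[np.triu_indices(m, k=1)]          # all N pairwise distances
    smallest = np.sort(pair_dists)[: max(1, int(p * pair_dists.size))]
    tau = smallest.mean()                             # task-independent threshold
    labels = DBSCAN(eps=tau, min_samples=min_samples,
                    metric='precomputed').fit_predict(d)
    return labels, tau   # label -1 marks unclustered (low-confidence) samples
```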
Experiments
In this section, we test our unsupervised domain adaptation algorithm on person re-ID and vehicle re-ID. Performance is evaluated by the cumulative matching characteristic (CMC) and mean average precision (mAP), which are multi-gallery-shot evaluation metrics defined in [33].
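As a reference point for these metrics, here is a bare-bones single-shot CMC/mAP evaluator in NumPy. The camera-aware filtering of the standard protocol [33] is deliberately omitted, so this is a simplification rather than the exact evaluation code:

```python
import numpy as np

def cmc_map(dist, q_ids, g_ids, topk=(1, 5, 10)):
    """dist: (num_query, num_gallery) distances; q_ids/g_ids: identity labels."""
    order = np.argsort(dist, axis=1)                 # gallery ranked per query
    matches = g_ids[order] == q_ids[:, None]         # boolean hit matrix per rank
    cmc = matches.cumsum(axis=1) > 0                 # hit within top-r for each r
    aps = []
    for row in matches:
        hits = np.where(row)[0]                      # 0-based ranks of true matches
        if hits.size == 0:
            continue                                 # query identity absent from gallery
        precision = (np.arange(hits.size) + 1) / (hits + 1)  # precision at each hit
        aps.append(precision.mean())                 # average precision for this query
    return {k: cmc[:, k - 1].mean() for k in topk}, float(np.mean(aps))
```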
Algorithm 1 (closing steps, as extracted):
9: Compute M^(i) on T^(i), S^(i);
10: Select D^(i) = DBSCAN(M^(i); τ, N_1);
11: Train x^(i+1) on D^(i);
12: end

Parameter settings and implementation details. In all the following re-ID experiments we empirically set λ = 0.1, p = 1.6 × 10^−3, N_1 = 4 and N_2 = 20. The encoder is ResNet-50 [11] pre-trained on ImageNet. Both the triplet and softmax losses are used for initializing the network on the source domain, while only the triplet loss is used for refining the encoder on the target domain. More details about the network and training parameters, together with visual examples from the different domains, are included in the supplemental materials, where we also investigate other distance metrics and clustering methods.
Person re-ID
Market-1501 [33] and DukeMTMC-reID [25] are two large-scale datasets frequently used for unsupervised domain adaptation experiments. Both datasets are split into a training set and a testing set; details, including the numbers of identities and images, are shown in Table 1.

Comparison methods are selected in three respects. Firstly, we show the performance of direct transfer, i.e., directly applying the initial source-trained encoder to the target domain. We also compare against the plain self-training scheme as a baseline, in which sample selection depends only on the Euclidean distance. Secondly, our method is compared with three recent state-of-the-art methods: SPGAN [6], TJ-AIDL [30] and ARN [14]; we report the original results quoted from their papers. Thirdly, we show the results of our method with and without d_W, which can be viewed as ablation studies. The results are shown in Table 2, from which we observe the following facts. (a) The accuracy of the self-training baseline is high and even better than two of the recent methods, indicating that our clustering-based self-training scheme is fairly strong. (b) The version without d_W is better than the self-training baseline, which shows the effectiveness of d_J, and after incorporating d_W the final method achieves the highest accuracy, reflecting the advantage of d_W. Thus, according to the ablation studies, both of our assumptions are useful. (c) Although the proposed d_W is beneficial, the accuracy gain it brings varies across tasks. We believe this is related to the distributions of the source and target domains; please refer to Section B.3 for more discussion of this point.
Furthermore, we draw the mAP curves during the iterations of the adaptation task Duke→Market (Figure 1, convergence comparison), in which the self-training baseline, the distance without d_W, and λ ∈ {0.05, 0.1, 0.5, 0.7} are compared. We can see that, except for the baseline, all the curves have a similar tendency toward convergence. A subtle distinction is that after 18 iterations the methods with smaller λ become unstable, while the methods with larger λ move toward convergence.
Vehicle re-ID
We use VeRi-776 [16] and part of PKU-VehicleID [15] for the vehicle re-ID experiments; details are included in Table 1. Unlike person re-ID, there are currently no unsupervised domain adaptation algorithms designed for vehicle re-ID. Thus, we use the existing solutions for person re-ID as comparisons. As shown in Table 3, not only are the conclusions from person re-ID verified again, but the generalization ability of our method is also demonstrated. We found that the compared SPGAN generates quite presentable images (shown in the supplemental materials), but its accuracy is still lower than the self-training baseline, not to mention our proposed method.
Conclusion and Future Work
In this work, we bridge the gap between the theory of unsupervised domain adaptation and re-ID tasks. Inspired by previous work [3], we make assumptions on the extracted feature space and then show the learnability of unsupervised domain adaptive re-ID tasks. Treating the assumptions as the goal of our encoder, several loss functions are proposed and then minimized via a self-training framework.
Though the proposed solution is effective and outperforms state-of-the-art methods, some problems remain unsolved in our algorithm. Firstly, with regard to the weight-ratio assumption, we propose the loss function L_WR, which is ignored when updating the encoder because of the intractable infimum; designing another feasible loss function is an interesting direction for research. Another promising issue is to improve the data-selection step in the self-training scheme. We turn the data-selection step into a clustering problem, which can be thought of as a hard-threshold version; this suggests that there may be a better strategy utilizing the relative values between distances. We hope that our analyses can open the door to developing new domain adaptive re-ID methods and can lift the burden of designing large and complicated networks.
A Theorems and Proofs
To prove Theorem 1, we first give a lemma on the upper error bound of R_T(h_NN). Let B denote the set of axis-aligned rectangles in [0, 1]^d and, given some η > 0, let B_η denote the class of axis-aligned rectangles with side-length η. For a sample set S from the source domain, we have:

Lemma 1. Let the domain be the unit cube X = [0, 1]^d, and for some C > 0 and some η > 0, let (S, T) be source and target distributions over X satisfying the covariate shift assumption with C_{B,η}(S, T) ≥ C, and let their common re-ID labeling function l : X × X → {0, 1} satisfy the φ-SPL property with respect to the target distribution, for some function φ. Then, for all m and all such (S, T),

E_{S ∼ S^m} [ R_T(h_NN) ] ≤ 2φ(√d · η) + 2 / (η^d C m e).

Proof. A test pair (x_1, x_2) gets the wrong label under two conditions: (a) at least one test point does not have a close neighbor among the m training samples; (b) (x_1, x_2) has a close neighbor pair with the opposite label. For (a), we can use the results of Lemma 7 and Theorem 8 in [3]. Specifically, let C_1, C_2, …, C_{1/η^d} be a cover of the set [0, 1]^d using boxes of side-length η. If x lies in the box C_x, then the probability of (a) is the probability that C_x contains no training sample. Observing that P(C_x ∩ S = ∅) = Σ_{i : S ∩ C_i = ∅} P(C_i), term (a) is bounded by 2 / (η^d C m e). For (b), denote the nearest neighbor of x in S by N_S(x); then (b) means that within a box there exists a neighbor pair with the opposite label, whose probability is bounded by the φ-SPL property, since points in the same box are at distance at most √d · η. Combining the two bounds, we conclude our proof.
If we have the stronger weight-ratio assumption C_B(S, T) ≥ C, we obtain the following learnability result for domain adaptation.

Proof. From the proof of Theorem 1, the error was bounded under two circumstances. For (a), we apply Markov's inequality; for (b), we bound the corresponding term through 2φ(·). Finally, requiring the failure probability to be smaller than δ yields that, for a sufficiently large sample size m, with probability at least 1 − δ, the target error of the Nearest Neighbor classifier is at most ε.
B Additional Experimental Details and Results
We present the structure of the paper in Figure 2; the most important contributions of our work are Theorems 2 and 3, both of which aim to turn the abstract and somewhat theoretical assumptions into practical loss functions. Although Theorem 1 may seem like a straightforward extension of previous work [3], it plays a fundamental role in the paper. Through the DA-learnability shown in Theorem 1, we can see that the three assumptions imposed on the distributions of the two domains in Section 3 are sufficient for solving the domain adaptive re-ID problem. In other words, the sufficiency of reinforcing the three assumptions in Section 4 is established via Theorem 1.
B.1 Visualization of datasets and results
To understand the variations between different domains more clearly, Figure 3 presents some samples from the datasets used in our experiments. These datasets all have their own special characteristics. For instance, people riding a bicycle are common in Market-1501, while such people are rare in DukeMTMC-reID. More importantly, the images in these re-ID datasets are heavily camera-dependent, meaning that they contain information closely tied to the camera, such as background, viewpoint or lighting conditions.
Moreover, we present some samples generated by SPGAN for vehicle re-ID. As shown in Figure 4, their image-to-image translation indeed works but fails to produce satisfactory re-ID results as it does for person re-ID. This indicates that either their proposed generative method is not suitable for unsupervised domain adaptive vehicle re-ID, or its parameters need careful tuning for a new task.
Figure 4: Sample images generated by SPGAN on vehicle datasets (original versus generated).
B.2 Encoder network
Basically, the encoder network is ResNet-50 [11] pre-trained on ImageNet and the whole network is presented in Figure 5.
Person re-ID. The size of the input images is 256 × 128 × 3, so the output of conv5 is 8 × 4 × 2048, and an average pooling layer is added after conv5 to produce an output of size 1 × 1 × 2048. We denote the output of this layer as feat1. During training on the source domain, feat1 is connected to a fully-connected (fc) layer with output dimension 2048, denoted fc0, which is in turn connected to an fc layer with output dimension 751 (Market-1501) or 702 (DukeMTMC-reID). Let the output of the final fc layer be fc1. The loss functions are Softmax(fc1) and Triplet(feat1), which are added directly (without an extra balancing parameter). The model is trained with the Adam optimizer [13]. Training parameters are set as follows: batch size 128 (PK sampling with P = 16, K = 8); maximum number of epochs 70; learning rate 3e-4.
When training with data from the target domain, there is no fc1 layer and we use two triplet losses, Triplet(feat1) and Triplet(fc0); the trick of using two triplet losses comes from [29]. The model is trained by stochastic gradient descent, and in each iteration step we perform data augmentation (random flip and random erasing) on the data. Training parameters are set as follows: batch size 128 (PK sampling with P = 32, K = 4); momentum 0.9; maximum number of epochs 70; learning rate 6e-5. The networks are trained with two TITAN X GPUs.
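A sketch of the target-domain update with two batch-hard triplet losses [29,32], in PyTorch. The margin value and the assumption that the model returns both feature levels are illustrative choices of ours, not settings taken from the paper:

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet(feats, labels, margin=0.3):
    """Batch-hard triplet loss for a PK-sampled batch (P identities, K images each)."""
    d = torch.cdist(feats, feats)                         # pairwise Euclidean distances
    same = labels[:, None] == labels[None, :]
    pos = (d - 1e9 * (~same).float()).max(dim=1).values   # hardest positive per anchor
    neg = (d + 1e9 * same.float()).min(dim=1).values      # hardest negative per anchor
    return F.relu(pos - neg + margin).mean()

def target_step(model, images, pseudo_labels, optimizer):
    """One SGD step on pseudo-labeled target data: triplet terms on feat1 and fc0.

    `model(images)` is assumed to return (feat1, fc0), the two feature levels
    described above.
    """
    feat1, fc0 = model(images)
    loss = batch_hard_triplet(feat1, pseudo_labels) + batch_hard_triplet(fc0, pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```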
Vehicle re-ID. All parameters, including the network architecture, are the same as for person re-ID, except the size of the input data: the input images are resized to 224 × 224 × 3 and the output of conv5 is 7 × 7 × 2048.
B.3 More results
Effectiveness of d_W. From Table 2, Table 3 and Figure 1, we observe that, from a practical point of view, using d_W is not always appealing. We believe the reasons are twofold. Firstly, the effectiveness of d_W depends on the distributions of the source and target domains. In Figure 6, we design a simple example in a 2D feature space to show the validity of d_W. In the left figure, the grey points denote features extracted from the source domain and the colored points denote features of target data with their real labels. The middle figure shows the pseudo-labels generated by DBSCAN when setting λ = 0, i.e., not using d_W, and the right figure shows the results with d_W. Comparing the middle and right figures, we can see that d_W is important in such situations. The key idea in this demo is that the "easy" target points happen to lie near the source data, where "easy" means points belonging to the same ID are close in the feature space extracted by the current encoder. This example also applies to classification tasks, since the weight ratio is an assumption shared between our work and [3]. Secondly, d_W is derived from L_WR, but the potential value of L_WR is not fully exploited in our algorithm; thus, using d_W in practical applications is not always appealing. However, even without d_W the results are stable and good enough, already outperforming existing methods by a large margin, which shows the power of the self-training scheme for domain adaptive re-ID problems. For real applications with limited computational resources, we recommend simply setting λ = 0 rather than making the effort to search for an optimal λ.
Comparison of distance metrics. As for other contextual distance metrics, we test the performance of the original Jaccard distance with and without d_W (also with λ = 0.1). For the Jaccard distance, we first compute the k = 20 nearest-neighbor sets and then compute the distance between the sets. One conclusion is that taking d_W into account is also beneficial for the Jaccard distance. However, both of these distance metrics are worse than the self-training baseline, i.e., Euclidean distance. The reason is that the Jaccard distance only considers the nearest-neighbor sets, so pairs without overlapping nearest neighbors have a Jaccard distance of 1, which is too strict to generate enough training pairs. This shortcoming also leads to slow, or even halted, accuracy improvement; for more details see the convergence comparison paragraph and Figure 1. As shown in Table 4, the k-reciprocal encoding employed in our method positively improves the performance over the plain Jaccard distance.
Comparison of clustering methods. Due to the restrictions on a suitable clustering method, we only test a version with affinity propagation [9]. For the task DukeMTMC-reID→Market-1501, we investigate the effectiveness of affinity propagation with other distance metrics. It is clear that affinity propagation is not a proper clustering method here, because all data are used for clustering, which means it cannot avoid pairs of low confidence. As shown in Table 5, an interesting fact is that with affinity propagation, using plain Euclidean distance is better than our proposed distance. The reason behind this phenomenon is that the number of IDs (clusters) generated by affinity propagation is much larger when using our proposed distance. In Figure 7, we show the number of IDs at each iteration step. Using our distance leads to a larger number of clusters because it enlarges the gap between dissimilar pairs, which ought to be beneficial for discarding unhelpful stray samples. However, affinity propagation assigns every sample to some cluster, and therefore using our distance performs worse than Euclidean distance.

Parameter analysis. Among all the parameters in our algorithm, the most influential are the percentage p and the balancing parameter λ. Since the influence of λ has been reported, here we perform experiments with a series of different p from DukeMTMC-reID to Market-1501; the results are shown in Table 6. As we can see from the table, even a small change (2 × 10^−4) of p has a discernible impact on the final accuracy. This is because we use large-scale datasets and the number of all possible pairs from the target dataset is large. Take Market-1501 as an example: the number of training images is 12,936, so the number of data pairs is over 8 × 10^7. Thus a small change of p can cause a large change of the threshold.
Convergence comparison. In Figure 8, we use DukeMTMC-reID as the source domain and Market-1501 as the target domain, and we first show the convergence results with different distance metrics and clustering methods in (a) and (b). Several conclusions can be drawn from the curves. First, the Jaccard-distance-based version becomes more stable after adding d_W. Second, the accuracy of the Jaccard-distance-based version almost stops increasing after 14 iterations, which is caused by the special property of the Jaccard distance mentioned before. Third, the version using affinity propagation converges very fast, and after about 8 iterations the accuracy stops increasing; this is caused by the inaccurate number of clusters and by the fact that all samples are used to train the network, so the loss functions fail to be minimized through the sample-selection step. Moreover, we show the results with different p in (c) and (d). All the curves have a similar convergence tendency, which demonstrates that our iteration process is robust with regard to the crucial parameter p.
Characterization of the Complete Chloroplast Genome of the Dragonhead Herb, Dracocephalum heterophyllum (Lamiaceae), and Comparative Analyses with Related Species
Dracocephalum heterophyllum (Lamiaceae: tribe Mentheae) is an annual aromatic herb native to East Asia with a long record of human uses, including medicinal, alimentary, and ornamental values. However, no information is available about its molecular biology, and no genomic study has been performed on D. heterophyllum. Here, we report the complete chloroplast (cp) genome of D. heterophyllum and a series of comparative genomic analyses between this and closely related species of Lamiaceae. The results indicated that the cp genome has a typical circular structure of 150,869 bp in length, consisting of a long single-copy (LSC) region of 82,410 bp, a short single-copy (SSC) region of 17,098 bp, and two inverted repeat (IR) regions of 51,350 bp. A total of 133 genes were identified, including 37 tRNA genes, 8 rRNA genes and 88 protein-coding genes, with a GC content of 37.8%. The gene content, organization, and GC values observed here were similar to those of other Dracocephalum species. We detected 99 different simple sequence repeat loci, and the codon usage analysis revealed a preferential use of Leu codons with an A/U ending. Comparative analysis of cp genome sequences revealed five highly variable regions with remarkably high Pi values (>0.03). The mean Ka/Ks between D. heterophyllum and three other Dracocephalum species ranged from 0.01079 (psbB) to 1.0497 (ycf2). Two cp genes, ycf2 and rps11, were shown to have high Ka/Ks ratios, implying that these cp genes may have undergone positive selection in their evolutionary history. We performed multiple sequence alignments using the cp genomes of 22 species, constructed maximum likelihood (ML) and Bayesian trees, and found that D. heterophyllum was most closely related to D. moldavica and D. palmatum. In addition, the phylogenetic relationships between Dracocephalum and other members of Lamiaceae were consistent with previous results. These results are valuable for formulating effective conservation and management strategies for species of Dracocephalum, as well as providing a foundation for future research on the genetic resources of Dracocephalum.
Introduction
The mint family, Lamiaceae, comprises more than 7000 species from about 236 genera and seven subfamilies distributed worldwide [1]. Plants in this family include numerous herbs with medicinal and ornamental values, such as Agastache rugosa, Mentha canadensis, Perilla frutescens, and Scutellaria baicalensis [2]. The genus Dracocephalum, commonly known as "Dragonheads", comprises over 60 species of aromatic perennial herbs related to mints (tribe Mentheae) and distributed in the northern temperate parts of the world. In China, 32 species and seven varieties of Dracocephalum have been reported, mainly distributed in Northwest China [3]. Dragonheads have a long record in human history as medicinal herbs and have been part of traditional Tibetan and Uyghur medicine for centuries [4]. Previous studies have reported that the main active components of D. heterophyllum are effective in calming the mind, confer protection against hypoxic brain damage, are antibacterial, and can alleviate conditions such as high blood pressure, lymphadenitis, and cough [4,5]. Moreover, a previous study by Zhang et al. showed that essential oils of D. heterophyllum possess antimicrobial and antioxidant properties and can therefore be used as natural preservatives in food, cosmetics, and pharmaceutics [6]. With its high feeding value, D. heterophyllum can satisfy the food needs of most livestock. It is also developed as an auxiliary bee plant, with the advantage of producing abundant nectar but little pollen [7].
In angiosperms, the chloroplast is an essential plastid that is typically maternally inherited. As the site of photosynthesis, it plays a vital role in many biochemical pathways, such as the metabolism of starch, fatty acids, nitrogen, and amino acids, and in internal redox signaling [8]. The chloroplast is considered a semiautonomous genetic organelle with an independent transcription and transport system [9]. The cp genome usually encodes 110-130 genes, varies in size from 75 to 250 kb [10,11], and is divided into four parts: a long single-copy (LSC, 80-90 kb) region, a short single-copy (SSC, 16-27 kb) region, and two inverted repeat copies (IR, 20-28 kb) [12]. Frequent hybridization and variation have made plant nuclear genomes highly complex, so it is difficult to identify orthologous genes [13]. In addition, evolutionary and phylogenetic analyses based on complete cp sequences can provide more potential information, of higher quality, than that obtained by analysis of one or a few gene loci [14]. Existing studies have confirmed that cp gene order, gene content, and genome organization are highly conserved in plants [15]. Due to their conserved nature and comparatively high substitution rate, cp genomes are widely applied in studies of species delimitation, systematics and evolution, and genetic engineering [16,17]. Since the complete cp genome of tobacco was first sequenced in 1986 [18], and especially with the development of next-generation sequencing, more and more plant cp genomes have been sequenced. According to incomplete statistics, the complete cp genome sequences of more than 1100 plants have been deposited in the NCBI GenBank database [19]. These data provide a fresh approach for phylogenetic analyses, genetic diversity evaluation, and molecular identification of plants [20][21][22][23].
D. heterophyllum is rich in pharmacologically active ingredients and has several unique properties worth developing. In this contribution, we sequenced and analyzed the complete chloroplast genome of D. heterophyllum using the Illumina NovaSeq6000 platform. We also downloaded the cp genomes of other Dracocephalum species from public databases. Our main goals were to characterize their structure, perform comparative genomic analyses on them, and construct a phylogenetic tree using whole cp genome sequences of Dracocephalum and closely related species of Lamiaceae, in order to assess variation among their plastomes and confirm the evolutionary position of D. heterophyllum. Our results have multiple applications and can be used as a robust backbone for developing molecular markers to study the diversity of Dracocephalum chloroplast genomes, and to better understand the evolutionary relationships within this family. The findings of this research will provide guidance for the screening of quality germplasms and the design of conservation strategies for wild populations of D. heterophyllum.
Sampling and DNA Extraction
Fresh and healthy leaves of D. heterophyllum were collected in Gangcha County, Qinghai Province, China, at 3315 m elevation (37.20639°, 99.67556°). Leaves were rapidly dried in silica gel. The specimens were deposited in the Herbarium of the Northwest Institute of Plateau Biology (HNWP), Chinese Academy of Sciences, Xining, Qinghai Province, China, under accession number QTP-LJQ-CHNQ-026-1003. Total genomic DNA was then extracted from the dried leaves using a modified CTAB method [24], and evaluated for quality and concentration using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA).
Chloroplast Genome Sequencing and Assembly
The DNA samples were randomly fragmented into 300-500 bp pieces with an ultrasonic processor (Covaris, Woburn, MA, USA). Library preparation then comprised end repair and phosphorylation, A-tailing, index-adapter ligation, and amplification. A paired-end library of 2 × 150 bp with an insert size of ~350 bp was constructed and sequenced on the Illumina NovaSeq6000 platform. A total of 8,324,187 raw read pairs (~2.5 Gb) were obtained. After low-quality reads and reads with adaptors were filtered and trimmed using the Fastp program [25], approximately 2.43 Gb of clean reads remained for the D. heterophyllum sample. The cp genome was assembled from the high-quality clean reads with NOVOplasty 2.7.2 [26] (default k-mer of 33), using the cp genome of D. tanguticum (MT457746) as a reference and the rbcL coding sequence of Arabidopsis thaliana (NCBI accession number NC_000932.1) as a seed sequence.
Annotation and Analysis of Chloroplast Genomes
The online program GeSeq [27] (https://chlorobox.mpimp-golm.mpg.de/geseq.html, accessed on 28 September 2021) and the PGA software [28] were used to annotate the cp genome sequence. We compared the annotations from the two tools and made final adjustments manually in Geneious version 11.0.2 [29]. We checked the initial annotation, putative start and stop codons, and intron positions through comparison with homologous genes of the congeneric species D. tanguticum. Using the program OGDRAW v1.3.1 [30], we drew a circular map of the cp genome of D. heterophyllum to visualize its features. The cp genome sequence of D. heterophyllum was submitted to GenBank (accession number OM201748).
Simple Sequence Repeats (SSR) and Relative Synonymous Codon Usage (RSCU) Analysis
The online MISA program [31] was employed to detect SSRs using the following thresholds: ten, five, three, three, three, and three repeat units for mono-, di-, tri-, tetra-, penta-, and hexa-nucleotide SSRs, respectively. We also conducted a relative synonymous codon usage (RSCU) analysis based on the sequences of protein-coding genes (PCGs), to reveal biases in synonymous codon usage independent of amino acid composition, using CodonW (v1.4.2; https://downloads.fyxm.net/CodonW-76666.html, accessed on 14 January 2022) with default settings, considering only sequences longer than 200 bp.
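To make these thresholds concrete, the sketch below (a minimal illustration, assuming a plain DNA string as input) performs a regex-based scan for perfect SSRs using the same minimum repeat counts; compound-SSR merging and other MISA refinements are omitted.

import re

# MISA-style minimum repeat counts per motif length (mono- through
# hexa-nucleotide), matching the thresholds above: 10, 5, 3, 3, 3, 3.
MIN_REPEATS = {1: 10, 2: 5, 3: 3, 4: 3, 5: 3, 6: 3}

def find_perfect_ssrs(seq):
    """Return (start, motif, n_repeats) tuples for perfect SSRs in seq."""
    seq = seq.upper()
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        # Lookahead so candidates at every position are tested.
        pattern = re.compile(r"(?=(([ACGT]{%d})\2{%d,}))" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq):
            run, motif = m.group(1), m.group(2)
            # Skip motifs that are repeats of a shorter motif (e.g. "ATAT"),
            # which are already reported at the shorter motif length.
            if any(motif == motif[:k] * (motif_len // k)
                   for k in range(1, motif_len) if motif_len % k == 0):
                continue
            hits.append((m.start(), motif, len(run) // motif_len))
    return hits

seq = "GG" + "A" * 10 + "TT" + "CA" * 5 + "TT"
print(find_perfect_ssrs(seq))  # [(2, 'A', 10), (14, 'CA', 5)]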
Comparative Analysis of Complete Chloroplast Genomes
Boundaries among the four main chloroplast regions (LSC/IRb/SSC/IRa) of D. heterophyllum and six other related species of Lamiaceae were compared using IRscope [32]. In addition, D. heterophyllum was compared with three other species of Dracocephalum (D. moldavica, D. palmatum, and D. tanguticum) using mVISTA in Shuffle-LAGAN mode [33]. The nucleotide diversity of the four Dracocephalum species was calculated through sliding-window analysis in DnaSP (version 5.1) [34] with a window length of 600 bp and a step size of 200 bp.
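For readers unfamiliar with this statistic, a minimal sketch of a sliding-window pi computation with the same window and step settings is shown below; it assumes equal-length, pre-aligned sequences and omits the corrections DnaSP applies.

from itertools import combinations

def window_pi(aln, start, length):
    """Mean pairwise difference per site (pi) over one alignment window."""
    window = [s[start:start + length] for s in aln]
    n_pairs = len(list(combinations(range(len(aln)), 2)))
    total, sites = 0.0, 0
    for i in range(length):
        col = [w[i] for w in window]
        if "-" in col or "N" in col:   # skip gap/ambiguous columns
            continue
        sites += 1
        diffs = sum(a != b for a, b in combinations(col, 2))
        total += diffs / n_pairs       # mean pairwise difference at this site
    return total / sites if sites else 0.0

def sliding_pi(aln, window=600, step=200):
    """Window length 600 bp and step size 200 bp, as used above."""
    length = len(aln[0])
    return [(start, window_pi(aln, start, window))
            for start in range(0, length - window + 1, step)]

aln = ["ATGCATGCAT" * 120, "ATGCATGCAT" * 120]  # toy 1,200 bp "alignment"
print(sliding_pi(aln)[:3])  # identical sequences -> pi = 0.0 in every window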
Analysis of Synonymous (Ks) and Non-Synonymous (Ka) Substitution Rate
To analyze synonymous (Ks) and non-synonymous (Ka) substitution rates, the cp genome sequence of D. heterophyllum was compared with those of three other species of Dracocephalum. We selected and extracted the exons of 79 PCGs shared by the four species for this analysis. Sequences were aligned with MAFFT (version 7) [35], and substitution rates per exon were estimated using DnaSP software [36]. Rates were estimated for the following pairwise comparisons: D. heterophyllum vs. D. moldavica; D. heterophyllum vs. D. palmatum; and D. heterophyllum vs. D. tanguticum.
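The interpretation applied to these ratios later in the paper (Ka/Ks < 1 purifying, = 1 neutral, > 1 positive, with the ratio undefined when Ka or Ks is zero) can be expressed as a small helper, sketched below; the per-gene Ka and Ks inputs are hypothetical, chosen only so that the resulting ratios match the reported means.

def classify_selection(ka, ks, eps=1e-12):
    """Interpret Ka/Ks: <1 purifying, =1 neutral, >1 positive selection.

    When Ka or Ks is zero (no substitutions of that class observed),
    the ratio is undefined, as noted for 31 PCGs in this study.
    """
    if ka <= eps or ks <= eps:
        return None, "undetermined (Ka or Ks is zero)"
    ratio = ka / ks
    if ratio < 1.0:
        return ratio, "purifying"
    if ratio == 1.0:           # exact equality is rare in practice
        return ratio, "neutral"
    return ratio, "positive (candidate)"

# Hypothetical Ka/Ks inputs chosen only so the ratios match the reported
# means (psbB ~0.011, ycf2 ~1.05); the per-gene Ka and Ks are not given here.
for gene, ka, ks in [("psbB", 0.0010, 0.0927), ("ycf2", 0.1050, 0.1000),
                     ("rps4", 0.0000, 0.0150)]:
    print(gene, classify_selection(ka, ks))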
Phylogenetic Analysis
To examine the phylogenetic position of D. heterophyllum within Lamiaceae, an evolutionary tree was constructed based on 22 complete cp genome sequences from nine genera of Lamiaceae downloaded from NCBI, using Plantago depressa as an outgroup (Table S1). In addition, a molecular phylogenetic tree was constructed using 79 shared protein-coding genes from 21 cp genomes, with the exception of Elsholtzia splendens. All cp genome and shared protein-coding gene sequences were aligned with MAFFT (version 7) [35], and phylogenetic analyses were conducted with the maximum likelihood (ML) method under the best-fit substitution model GTR + F + G4 selected by ModelFinder [37], using IQ-TREE (version 1.6.11) [38]. The bootstrap support of each branch was calculated using 1000 replicates. Additionally, a Bayesian inference (BI) analysis was conducted in MrBayes (version 3.1.2) [39] on the 22 complete cp genome sequences under the same best-fit substitution model. Four Markov chain Monte Carlo (MCMC) chains were run simultaneously for two million generations, sampling one tree every 1000 generations, with 25% of trees discarded as burn-in. The remaining trees were used to construct a consensus tree and to calculate posterior probabilities (PP).
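A hedged sketch of how the alignment and ML steps might be scripted is given below; the file names are placeholders, and the -bb flag requests IQ-TREE's ultrafast bootstrap (the standard non-parametric bootstrap would instead use -b). MAFFT v7 and IQ-TREE v1.6.x are assumed to be on the PATH.

import subprocess

def align_and_tree(fasta_in, aln_out, model="GTR+F+G4", reps=1000):
    # MAFFT with automatic strategy selection writes the alignment to stdout.
    with open(aln_out, "w") as out:
        subprocess.run(["mafft", "--auto", fasta_in], stdout=out, check=True)
    # IQ-TREE under the best-fit model selected by ModelFinder above;
    # -bb requests ultrafast bootstrap with the given number of replicates.
    subprocess.run(["iqtree", "-s", aln_out, "-m", model, "-bb", str(reps)],
                   check=True)

align_and_tree("lamiaceae_22_plastomes.fasta", "lamiaceae_22_plastomes.aln")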
General Characterization of Chloroplast Genome
The assembled and annotated cp genome of D. heterophyllum was 150,869 bp in length and presented a typical tetra-partite circular structure with a pair of inverted repeats (IRs) totaling 51,350 bp (25,675 bp each), which separate two single-copy regions: a long single-copy (LSC) region of 82,421 bp and a short single-copy (SSC) region of 17,098 bp (Figure 1). The estimated GC content was 37.8%, similar to the values published for congeners (Table 1). The GC content of each of the three main region types of the cp genome (IR, LSC, and SSC) was calculated to be about 43, 36, and 32%, respectively, which is congruent with previous reports in related species (Table 1). A total of 133 genes, including 37 tRNA genes, eight rRNA genes, and 88 protein-coding genes, were successfully annotated in the cp genome of D. heterophyllum (Table 2). The number of genes predicted for D. heterophyllum was higher than that for D. palmatum (125 genes). Regarding gene distribution in the cp genome of D. heterophyllum, 85 genes were located in the LSC region, 14 genes in the SSC region, and 17 were duplicated in the IR regions. A total of 16 genes contained one intron, including six tRNA genes and ten protein-coding genes, while two genes (clpP and ycf3) contained two introns, as previously reported for other Dracocephalum species. Among these 18 genes, the smallest intron was found in the trnL-UAA gene, at 495 bp in length, whereas the largest was in trnK-UUU, at 2535 bp and containing the matK gene. We present a comparison of the structure and size of genes with introns in D. heterophyllum and closely related species in Table S2. (Fragment of Table 2: conserved open reading frames ycf1 (2×), ycf2 (2×), ycf3 (two introns), ycf4, ycf15 (2×); in the table, superscript 1 marks genes containing a single intron, superscript 2 marks genes containing two introns, and 2× marks duplicated genes.)
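For illustration, the per-region GC values above can be reproduced with the following minimal sketch; the region coordinates are placeholders reconstructed from the reported lengths (LSC 82,421 bp; each IR 25,675 bp; SSC 17,098 bp) rather than taken from the OM201748 annotation.

def gc_content(seq):
    """Percent G+C in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

LSC, IR, SSC = 82_421, 25_675, 17_098
REGIONS = {
    "LSC": (0, LSC),
    "IRb": (LSC, LSC + IR),
    "SSC": (LSC + IR, LSC + IR + SSC),
    "IRa": (LSC + IR + SSC, LSC + 2 * IR + SSC),  # ends at 150,869
}

def per_region_gc(genome):
    assert len(genome) == 150_869, "expects the full plastome sequence"
    return {name: round(gc_content(genome[a:b]), 1)
            for name, (a, b) in REGIONS.items()}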
Analysis of Simple Sequence Repeat (SSR) and Codon Usage
A total of 90 SSRs were found in the cp genome of D. heterophyllum, covering six types of SSR, of which tri-nucleotide repeats were the most common (43, 47.7%), followed by mono-nucleotide (29, 32.2%) and compound (10, 11.1%) repeats (Table S3). Three additional types of SSRs were less abundant: di-nucleotide (three, 3.3%), tetra-nucleotide (three, 3.3%), and penta-nucleotide (two, 2.2%) repeats (Table S3, Figure S1). A detailed analysis of the frequencies of SSRs per genomic feature found that most were distributed in protein-coding (42) and intergenic (IGS; 36) regions, while ten were located in introns and two crossed protein-coding-IGS boundaries. In addition, we assessed the distribution of the 90 SSRs across the LSC, IRb, SSC, and IRa regions, which held 59 (65.6%), 10 (11.1%), 11 (12.2%), and 10 (11.1%) SSRs, respectively (Table S4, Figure 2). We also detected 88, 91, and 87 SSRs in the cp genomes of D. tanguticum, D. moldavica, and D. palmatum, respectively (Table S5). All included six types of SSR; tri-nucleotide repeats were the most common (48.2, 53.8, and 55.1%), and no hexa-nucleotide SSRs were found (Figure 3, Table S5). A few tetra-nucleotide repeats (2.2, 3.2, and 2.2%) were detected in all three species, while only one penta-nucleotide repeat (1.1%) and six di-nucleotide repeats (6.8%) were found in D. moldavica and D. palmatum, respectively (Table S5). Taken together, the number, motif composition, and distribution of SSRs in Dracocephalum showed high consistency. We also determined the codon usage frequency for the cp genomes of D. heterophyllum and the three closely related species based on the sequences of their protein-coding genes (CDS), whose total lengths were 85,371, 80,154, 79,269, and 79,599 bp, respectively. A total of 28,457 codons were detected in D. heterophyllum and their RSCU values estimated, comprising 64 different codon types encoding 20 amino acids and three stop codons (Figure 4, Table S6); the other three species showed similar patterns. In all species, leucine was the most abundant amino acid, with 2982, 2864, 2840, and 2843 instances encoded by six codons, accounting for 10.5, 10.7, 10.7, and 10.7% of the total, respectively (Table S7). Both methionine (ATG) and tryptophan (UGG) had only one codon type, with 656, 644, 642, 635 and 489, 475, 470, 468 instances, respectively, and presented no bias (RSCU = 1.00) (Table S7). Remarkably, all codons with RSCU values higher than one ended with A or U, except for trnL-CAA and trnS-GGA (Table S7).
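To make the RSCU definition concrete, a minimal sketch follows; the synonymous-codon table is deliberately abbreviated (standard genetic code assumed) to leucine and the two single-codon amino acids discussed above.

from collections import Counter

# Abbreviated synonymous-family table: leucine (six codons, the most
# abundant residue above) plus the two single-codon amino acids. A full
# standard-code table would be handled analogously.
FAMILIES = {
    "Leu": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
    "Met": ["ATG"],
    "Trp": ["TGG"],
}

def rscu(cds):
    """RSCU = observed codon count / (family total / family size)."""
    counts = Counter(cds[i:i + 3] for i in range(0, len(cds) - 2, 3))
    out = {}
    for codons in FAMILIES.values():
        total = sum(counts[c] for c in codons)
        if total == 0:
            continue
        expected = total / len(codons)
        out.update({c: round(counts[c] / expected, 2) for c in codons})
    return out

# Single-codon families always yield RSCU = 1.00, matching Met and Trp above.
print(rscu("ATG" + "TTA" * 2 + "CTT" + "TGG"))
# {'TTA': 4.0, 'TTG': 0.0, 'CTT': 2.0, ..., 'ATG': 1.0, 'TGG': 1.0}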
IR Contraction and Expansion
We compared the cp genome of D. heterophyllum with those of six closely related species in Lamiaceae, including two congeners (D. moldavica, D. tanguticum) and four species from other genera (Elsholtzia densa, Mentha spicata, Ocimum basilicum, and Perilla frutescens var. hirtella). A detailed comparison was performed for the four genomic boundaries (LSC/IRa, LSC/IRb, SSC/IRa, and SSC/IRb) of the seven species (Figure 5). Our results showed that the lengths of the LSC, IR, and SSC regions were similar among the genomes of Lamiaceae. While the IR organization was highly conserved, ranging from 50,234 to 51,394 bp, there were minor variations due to expansions and contractions. The psbA and rpl22 genes were entirely located within the LSC region, and the rpl2 gene was entirely located within the IR regions. The rps19 gene of E. densa lay entirely inside the LSC region, but in the other six species it was positioned at the boundary between the LSC and the IRb. The pseudogene ycf1 and the gene ndhF were positioned at the junctions of the IRa/SSC and IRb/SSC regions, respectively, and their sequences showed length variability among species. Notably, the ndhF gene of the three Dracocephalum species, M. spicata, and P. frutescens var. hirtella overlapped with ycf1 in the IRb region. The trnH gene was consistently placed at the IRa/LSC border in all species except M. spicata, in which it was located far inside the LSC region. Instead, M. spicata was the only species with the rps19 gene positioned near the IRa/LSC border.
Evolutionary and Phylogenetic Analysis
Phylogenetic trees were constructed based on whole cpDNA sequences of 22 species from the subfamilies Lamioideae (2), Nepetoideae (18), and Scutellarioideae (2) of Lamiaceae, using Plantago depressa (Plantaginaceae) as an outgroup. The ML and Bayesian trees showed two major monophyletic clades within Lamiaceae, receiving strong support from both bootstrap values and Bayesian posterior probabilities (BP = 100%; PP = 1, respectively; Figure 9). The phylogenetic trees constructed with complete cp genome and CDS sequences had the same topology (Figure S2). Within the tree, taxa from the same genus clustered together. D. heterophyllum was sister to a clade consisting of D. moldavica and D. palmatum, and this group clustered together with D. tanguticum. The largest monophyletic clade corresponded to the subfamily Nepetoideae, which included 16 species from six genera distributed in three tribes, most of them strongly supported. Tribe Mentheae comprised three genera, with Dracocephalum sister to Mentha and Salvia at the base of this relationship. The second subclade in Nepetoideae, formed by Elsholtzieae and Ocimeae, received moderate statistical support (BP = 52%; PP = 0.741). However, Elsholtzia and Perilla were strongly supported as sisters (both in tribe Elsholtzieae), while Ocimum was monophyletic (tribe Ocimeae). Meanwhile, the subfamilies Lamioideae and Scutellarioideae were strongly supported as sisters, despite lacking resolution with respect to their relationship with Nepetoideae. Genera within each of these two subfamilies (i.e., Pogostemon, Stachys, and Scutellaria) were monophyletic, and the overall structure of the phylogeny is in agreement with current classification schemes of Lamiales.
Organization and Features of cp Genomes
Our study examined the features, content, and organization of the cp genome of D. heterophyllum and compared it with the previously published cp genomes of four other species of Dracocephalum, showing that all of them present the typical quadripartite structure found in vascular plants [40]. The cp genomes of the five Dracocephalum species ranged from 149,868 bp (in D. moldavica) to 150,976 bp (in D. taliense), indicating that they are highly conserved, exhibiting only minor differences in size. Intriguingly, the cp genomes of Dracocephalum had sizes similar to those of more distantly related species within Lamiaceae, such as Elsholtzia densa [41] (149,095 bp), Stachys byzantina [42] (149,749 bp), and Scutellaria baicalensis [43] (151,817 bp). The phenomenon of minimal differences in cp genome size within a genus or family has previously been reported in the genera Sorghum (Poaceae) [44] and Ilex (Aquifoliaceae) [45], and in the subfamily Coryloideae of Betulaceae [46], among others. Such size variations of cp genomes are usually the result of expansions of the IR regions during evolution [47].
Regarding GC content, the overall value detected in the cp genome of D. heterophyllum is 37.8%, identical to the values reported for the four other congeners evaluated. The GC content within the IR regions (43.2%) was higher than the estimates for the other two region types (LSC, 35.7%; SSC, 31.8%), which might be related to the high GC percentages of the genes rrn4.5, rrn5, rrn16, and rrn23, as previously reported [8,48,49].
Regarding gene content, our analysis detected 133 genes in the cp genome of D. heterophyllum, including 37 tRNA, 8 rRNA, and 88 protein-coding genes. The number and arrangement of genes were generally consistent with those of D. moldavica, D. taliense, and D. tanguticum, although some differences among them were detected. Gene number (133 genes) and cp genome arrangement were identical between D. heterophyllum and D. tanguticum, but comparisons with the other two species showed minor differences: D. moldavica had 132 genes, due to the loss of rps2, while D. taliense contained 134 genes because of a duplication of rps19. The duplicated rps19 pseudogene has also been reported in other species of Lamiaceae [50,51]; however, the loss of rps2 has not been reported in other Lamiaceae species. Thus, these minor variations in gene content among Dracocephalum cp genomes were likely caused by evolutionary events of gene deletion and insertion. Additionally, we found specific trans-splicing in the rps12 gene of D. heterophyllum, where the 5′ end of the exon was located within the LSC region while the 3′ end was in the IR region, a situation commonly observed in many angiosperms [52].
Although introns do not encode proteins, they play a critical role in regulating gene expression [53]. The cp genomes of the earliest diverging angiosperms contained a complete repertoire of 18 intron-containing genes [54]. Those 18 genes and their introns were also detected in the cp genome of D. heterophyllum, of which 16 contained one intron and two contained two. Although the number and arrangement of intron-containing genes in the cp genomes of several Dracocephalum species were identical and their exons highly conserved, we detected considerable variation in intron length, especially in genes such as clpP, ndhA, and ycf3. Intron polymorphisms (IP), including intron length polymorphisms (ILP) and intron single nucleotide polymorphisms (ISNP), can serve as efficient molecular markers, and have previously been developed and applied in analyses of multiple plant lineages [55].
Simple Sequence Repeats (SSRs) and Codon Usage Analysis
The variation in SSR copy number is distinctive among plant species and therefore provides an important source of genomic information suitable for phylogenetic studies [56,57]. Our study analyzed the number and distribution of different SSR motifs in the cp genome of D. heterophyllum and closely related species. The number, motif type, frequency, and distribution of the detected SSRs showed high uniformity across the cp genomes of the four Dracocephalum species. Among the SSRs, tri-nucleotide repeats were the most abundant, and more than half of the SSRs were located in non-coding spacers (NCS), suggesting that these regions are potential mutation hotspots. Our results suggest that SSRs can be used to develop DNA markers with the potential to delimit species, reconstruct phylogenetic relationships, and assess genetic diversity in Dracocephalum [58].
Previous studies have revealed that codon usage has essential functions in the expression of genetic information, and it is therefore critical for the evolution of cp genomes [47,59]. Here, a total of 28,457 codons were detected in the CDS of D. heterophyllum, with leucine being the most commonly encoded amino acid (10.5%) and cysteine the least common (1.2%), consistent with the findings in D. moldavica, D. tanguticum, and D. palmatum (Table S6). It has been reported that species with close phylogenetic relationships may adopt similar codon selection strategies [60]. In Dracocephalum, most amino acids showed codon bias with a high preference (RSCU > 1), except for methionine and tryptophan (RSCU = 1). All codon types with RSCU values greater than one ended in A or U, except for trnL-CAA and trnS-GGA. Research has shown that the codons with the highest AT content in the cp genomes of terrestrial plants preferentially end with A/U, which might be one of the reasons why these codons are more prevalent in dicotyledons [61][62][63]. Therefore, these findings may contribute to a further understanding of the evolutionary history of Dracocephalum, especially with respect to natural selection and mutation pressure [40].
IR Expansion/Contraction of cp Genomes
Variations in the size of angiosperm cp genomes are usually caused by expansions and contractions at the IR/SC borders [64]. Li et al. [65] suggested that the length of the IR regions in some species of Magnoliaceae is positively correlated with the total length of the cp genome sequence. In our study, a comparison of the SC/IR boundaries among the seven cp genomes of Lamiaceae revealed that the gene orders were identical, although there was some minor variation in the IR regions. Notably, the ycf1 gene, considered a pseudogene, was partially duplicated in all species except O. basilicum and E. densa. Our results showed that the sequences downstream of the IRb/SSC border were conserved and that the ndhF gene was adjacent to the IRb/SSC border, consistent with the general pattern described for angiosperms [66]. The gene content and order at the SC/IR boundaries of the three Dracocephalum species were identical, suggesting that the boundary regions between the SSC and the two IRs are relatively conserved. Lastly, the comparative genome analysis with mVISTA revealed that the D. heterophyllum cp genome is relatively conserved, with minor variation mainly localized in non-coding regions due to insertions and deletions [67].
Comparative Genomes and Characterization of Substitution Rates
Although the cp genomes of land angiosperms are highly conserved, mutational hotspots in cp genomes are usually conserved at the generic level and can be used to obtain multiple informative loci suitable for DNA barcoding and phylogenetic research [20]. Results obtained from the mVISTA cp genome divergence analysis in Lamiaceae suggest that Dracocephalum shows a low degree of sequence divergence, with relatively conserved plastomes. However, divergence hotspots in conserved non-coding sequence (CNS) regions and protein-coding regions were detected among all eight Lamiaceae genomes analyzed. Similar phenomena have been observed in the cp genomes of other genera such as Scutellaria [43], Physalis [59], and Zygophyllum [40]. Meanwhile, we also identified five highly variable regions with high Pi values (>0.03) in the sliding-window analysis, including three intergenic regions (rps16-trnQ, trnT-psbD, and ndhF-rpl32) and two genes with unknown function and conserved open reading frames (ycf1 and ycf3). Previous multispecies studies have found that intergenic spacers can serve as high-resolution markers for phylogenies [46]. For example, rps16-trnQ is highly variable in most plants and has been used for DNA barcoding in phylogenetic studies of 12 different genera across angiosperms [20]. Also, the gene ycf1 is more variable than existing candidate barcodes and is thus well suited as a barcode locus for land plants [68]. Therefore, these highly variable regions in the cp genomes of Dracocephalum are expected to provide sufficient genetic information for studies on species delimitation and the phylogenetic evolution of Lamiaceae.
Synonymous (Ks) and non-synonymous (Ka) nucleotide substitution patterns are considered essential parameters for gene evolution analysis [69], and the Ka/Ks ratio can indicate the selection pressure acting on genes [19,70]. Three modes of evolution can be distinguished through the Ka/Ks value, where genes undergo purifying (<1), neutral (=1), or positive (>1) selection [70]. In our study, the Ka/Ks values of the remaining 31 PCGs could not be calculated because their Ka or Ks was zero, suggesting that these genes are conserved, without any non-synonymous or synonymous nucleotide substitutions. The Ka/Ks values of the other 48 PCGs were all less than one, except for rps11 and ycf2, indicating that the majority of PCGs of the four Dracocephalum species have experienced purifying selection. These results are consistent with previous reports that non-synonymous substitutions are less frequent than synonymous ones and that the Ka/Ks ratios of most PCGs are less than one [71]. Given that purifying selection, as the primary pattern of natural selection, sweeps away deleterious mutations [72], we infer that most genes in the cp genomes of Dracocephalum have undergone extensive purifying selection to sustain their conserved functions, and that their evolutionary rate is relatively slow. This result agrees with the previous analyses showing that the cp genome of D. heterophyllum is relatively conserved, with minor variation mainly localized in non-coding regions.
Phylogenetic Analysis Based on cp Genomes
In the past three decades, many authors have focused on the phylogenetic relationships within Lamiaceae and have established a framework for the systematics and evolution of the family [73][74][75]. Cp genome sequences have been successfully used to reconstruct phylogenetic relationships among plant lineages [76], and a phylogenetic tree of Lamiaceae based on cp genomes can help us better understand the evolution of this diverse family. Our two data sets (the complete cp genome sequences and the concatenated protein-coding sequences) recovered the same topology, which was congruent with previous trees constructed using molecular data for Lamiaceae. There are two monophyletic clades within the family, forming six groups corresponding to the currently described lineages. The first monophyletic clade consisted of the tribes Mentheae, Elsholtzieae, and Ocimeae of subfamily Nepetoideae, while the second clade comprised Scutellarioideae together with the tribes Stachydeae and Pogostemoneae of subfamily Lamioideae, with the former sister to the latter two. Notably, the systematic relationships among the three tribes of Nepetoideae remain ambiguous [73]. Previous studies have shown three inconsistent relationships: (1) Ocimeae sister to the Mentheae-Elsholtzieae branch [77], (2) Mentheae sister to the Ocimeae-Elsholtzieae group [78][79][80][81], and (3) Elsholtzieae sister to the Mentheae-Ocimeae clade [82]. Our result supported the second relationship, where Mentheae is sister to the Ocimeae-Elsholtzieae branch [73], although with weak nodal support (BP = 52%, PP = 0.74). Hence, more studies are still needed to resolve the relationships among these three tribes. Overall, we consider that the phylogenetic relationships between Dracocephalum and other genera have been resolved by our study, confirming previous findings within Lamiaceae from Li et al. [1] and Zhao et al. [73]. Therefore, we propose that whole cp genomes can provide sufficient information to reconstruct the phylogenetic relationships of plants, particularly within the family Lamiaceae.
Conclusions
In the present study, we sequenced the cp genome of the Dragonhead herb Dracocephalum heterophyllum (Lamiaceae). After comparing this newly sequenced cp genome with those of closely related species, we found that cp genome size, GC content, and gene number and order were highly conserved among species. The location and distribution of 90 repeat sequences and several highly variable regions were identified. We expect that these variable regions and repeat markers can assist future studies on species delimitation, systematics and evolution, and genetic engineering of Dracocephalum. The mean Ka/Ks between D. heterophyllum and three other Dracocephalum species ranged from 0.01 (psbB) to 1.05 (ycf2). Two genes, ycf2 and rps11, were found to have high Ka/Ks ratios, implying that they may have undergone positive selection during their evolutionary history. Phylogenetic trees constructed using whole cp genomes and protein-coding sequences support D. heterophyllum as a member of the tribe Mentheae in the subfamily Nepetoideae. The results obtained in this study are expected to provide valuable genetic resources to help discern species, perform molecular breeding, and assess relationships among Dracocephalum species in the future.
Supplementary Materials: The following supporting information can be downloaded at: www.mdpi.com/article/10.3390/d14020110/s1. Figure S1: Number of different forms of SSRs identified in the cp genome of Dracocephalum heterophyllum (Lamiaceae), with colors representing different nucleotide repeats; Figure S2: ML phylogenetic tree of Dracocephalum species based on cp protein-coding gene sequences; Table S1: Information on the published chloroplast genomes used for phylogenetic tree construction; Table S2: Lengths of genes with introns in the cp genomes of D. heterophyllum, D. moldavica, D. tanguticum, and D. taliense; Table S3: Characteristics and distribution of 90 simple sequence repeats (SSRs) in the chloroplast genome of Dracocephalum heterophyllum (Lamiaceae); Table S4: Statistics of identified simple sequence repeats in the chloroplast genome of Dracocephalum heterophyllum (Lamiaceae); Table S5: SSR analysis of the cp genomes of four Dracocephalum species; Table S6: Codon usage frequency in the cp genome of D. heterophyllum; Table S7: Comparison of codon usage frequencies in the cp genomes of four Dracocephalum species; Table S8: Nucleotide diversity analysis of four Dracocephalum species in DnaSP. Author Contributions: The experimental design was completed by X.S., Y.L. and G.F.; sample collection and treatment were conducted by C.Z., T.L. and G.F.; G.F., C.Z., T.L. and Y.X. led the data analysis and prepared the figures and tables with assistance from C.Z. and T.L.; the manuscript was drafted by G.F. and X.S., and M.A.C.-O. edited the manuscript for structure, language, and scientific content. All authors have read and agreed to the published version of the manuscript. Institutional Review Board Statement: Not applicable.
Data Availability Statement:
The original contributions presented in this study are publicly available, and accession numbers can be found in Table S1. The datasets generated for this study can be found in GenBank; the accession number of the final plastome is OM201748. The associated BioProject ID, BioSample accession, and SRA accession for the raw sequencing data are PRJNA797531, SAMN25008769, and SRR17629219, respectively.
Conflicts of Interest:
The authors declare no conflict of interest.
COVID-19 diagnostic multiplicity and its role in community surveillance and control
Diagnosis of persons exposed to or infected with severe acute respiratory syndrome-related coronavirus 2 (SARS-CoV-2) is central to controlling the global pandemic of COVID-19. Currently, several diagnostic modalities are available for COVID-19, each with its own pros and cons. Although there is a global consensus to increase testing capacity, it is also essential to utilize these tests prudently to control the pandemic. In this paper, we review the current array of diagnostics for SARS-CoV-2, highlight the gaps in current diagnostic modalities, and discuss their role in community surveillance and control of the pandemic. The modalities of COVID-19 diagnosis discussed are: clinical and radiological, molecular based (laboratory based and point-of-care), immunoassay based (ELISA, rapid antigen and antibody detection tests), and digital diagnostics (artificial intelligence based algorithms). The role of rapid antigen/antibody detection tests in community surveillance is also described. These tests can be used to identify asymptomatic persons exposed to the virus and in community-based seroprevalence surveys to assess the epidemiology of the spread of the virus. However, there are some concerns about the accuracy of these tests, which need to be evaluated beforehand.
Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the seventh coronavirus that has crossed the species barrier and has emerged as a global health emergency (1). The first case of coronavirus disease (COVID-19) was reported in December 2019 in Wuhan, Hubei Province, China (2). On 11th March 2020, the World Health Organization (WHO) declared COVID-19 a pandemic (3). There were only 11,953 cases of COVID-19 with 259 reported deaths up to 1st February 2020; this increased exponentially to more than 3 million cases with 0.2 million deaths as of 30th April 2020 (Figure 1) (4). SARS-CoV-2 is an enveloped positive-sense single-stranded RNA virus belonging to the Betacoronavirus genus of the Orthocoronavirinae subfamily, in the Coronaviridae family of the order Nidovirales (5). Like other betacoronaviruses, SARS-CoV-2 has a spike glycoprotein (S), matrix protein (M) and outer envelope (E) encapsulating the RNA and nucleoprotein (N) (Figure 2). Apart from these, the viral genome also encodes proteins such as the RNA-dependent RNA polymerase (RdRp, within ORF1ab) and the accessory ORF3a, ORF6, ORF7a, ORF7b, and ORF8 proteins (6). Genomic analysis has shown that SARS-CoV-2 has 79.6% sequence identity to SARS-CoV and 96% identity with a bat coronavirus (BatCoV RaTG13) (7,8). The virus enters via the respiratory route, where the S protein mediates viral binding to cells expressing the ACE2 (angiotensin converting enzyme 2) receptor (8). The cellular serine protease TMPRSS2 present on the host cell is used by SARS-CoV-2 for S protein priming (9). After receptor-mediated endocytosis, the viral genome is released into the cytosol, where the replicase polyproteins are translated. These polyproteins are subsequently cleaved and assemble into the replicase-transcriptase complex, which carries out RNA replication and subgenomic RNA transcription (10). SARS-CoV-2 has evolved into two strains, designated L and S; the L strain is considered more aggressive and was prevalent during the early stages of the epidemic in Wuhan (11).
Screening is our window into the pandemic and its spread. Diagnosis of persons exposed to/infected with SARS-CoV-2 is central to controlling the global pandemic of COVID-19.
A few countries have scaled up diagnostic testing massively and successfully contained the spread of the pandemic. In contrast, resource-poor countries like India have prioritized testing for specific groups of persons. Real-time reverse transcriptase polymerase chain reaction (RT-PCR) based assays are considered the reference standard for COVID-19 diagnostics. The test protocol, however, is complex and expensive, and is mainly suited to large, centralized diagnostic laboratories, which has inhibited the upscaling of testing capacity. To overcome this barrier, point-of-care technologies and serologic immunoassays are rapidly emerging, but their performance has not been evaluated adequately. These challenges are even greater in low-resource settings.
Currently, several diagnostic modalities (clinical, molecular, immune-based and digital) are available for COVID-19, each with its own pros and cons (Figure 3). Although there is a global consensus to increase testing capacity, it is also essential to utilize these tests prudently to control the pandemic. In the current scenario of information overload in the field of COVID-19 diagnostics, we have reviewed the current array of diagnostics for SARS-CoV-2, highlighted the gaps in current diagnostic modalities, and discussed their role in community surveillance and control of the pandemic.
Clinical and radiological diagnosis of COVID-19
COVID-19 presents with three clinical stages of infection after an incubation period of 2-14 days: Stage 1, asymptomatic; Stage 2, upper airway and conducting airway response; and Stage 3, hypoxia, ground-glass infiltrates, and progression to acute respiratory distress syndrome (ARDS) (12). The stages and severity vary depending on the age and immune status of the individual and on associated co-morbidities (13). A high viral load can be an important marker of disease severity, and such patients also have a long virus-shedding period (14). The clinical triad of SARS-CoV-2 infection is fever, dry cough and shortness of breath, and the disease may lead to severe forms such as respiratory distress and failure (15). Respiratory failure that necessitates mechanical ventilation and support in an intensive care unit (ICU) can further cause multiorgan and systemic manifestations such as sepsis, septic shock, and multiple organ dysfunction syndromes. A case study by Li et al. showed that the mean age of patients with COVID-19 was around 59 years, ranging from 15 to 89 years (16). Patients with comorbidities (cardiovascular disease, diabetes, chronic respiratory disease, hypertension, and cancers) had higher case-fatality rates (10.5%, 7.3%, 6.5%, 6.0%, and 5.6%, respectively) than those without comorbidities (0.9%) (17). Based on the presentation of symptoms and respiratory parameters, disease severity is divided into mild to moderate, severe, and critical:
• Mild disease: non-pneumonia and mild pneumonia; this occurred in 81% of cases.
• Severe disease: dyspnoea, hypoxia, or more than 50% lung involvement on imaging; this occurred in 14% of cases.
• Critical disease: respiratory failure, septic shock, and/or multiple organ dysfunction or failure; this occurred in 5% of cases (17).
The CDC has added six new symptoms to its list for COVID-19: chills, muscle pain, headache, sore throat, repeated shaking with chills, and loss of taste or smell (18). Kaye et al. reported anosmia in 73% of patients prior to COVID-19 diagnosis, and it was the initial symptom in 26.6% of patients (19). COVID-19 infection causes a severe lower respiratory tract infection, with bilateral, basal and peripheral predominant ground-glass opacity, consolidation, or both as the most commonly reported chest radiological findings. These findings peak around 9-13 days and slowly begin to resolve thereafter (20). Laboratory-based molecular diagnostics are the hallmark of COVID-19 diagnosis. Currently, the diagnosis of COVID-19 is based on testing nasopharyngeal or oropharyngeal samples collected from suspected patients, and RT-PCR based tests are the standard reference for diagnosis. A study by Wang et al. showed higher positivity in nasopharyngeal swabs than oropharyngeal swabs, especially among hospitalized patients (21). A nasopharyngeal swab is the preferred choice for swab-based SARS-CoV-2 testing, but sometimes oropharyngeal, mid-turbinate and anterior nares samples are also tested. A study by Wu J et al. found that the positivity of SARS-CoV-2 nucleic acid in the sputum of 132 patients with COVID-19 was higher than that of nasopharyngeal swabs, and viral nucleic acids were also detected in blood and the digestive tract (faecal/anal swabs) (22).
Laboratory based Molecular Diagnostics
Since detection of SARS-CoV-2 nucleic acid in nasopharyngeal swabs alone does not yield high positivity, multi-sample SARS-CoV-2 nucleic acid detection can improve accuracy, reduce the false-negative rate, and better guide clinical treatment (22). Samples should be collected using flocked swabs to increase the collection of viral material and the release of cellular material. Certain swab types, such as those containing calcium alginate, wood or cotton, are not used for the collection of viral samples because they contain material that inhibits PCR assays.
RT-PCR is capable of providing relatively fast results through amplification of low amounts of viral RNA with high sensitivity and specificity. The oligonucleotide primers and probes for SARS-CoV-2 detection are usually derived from the RNA-dependent RNA polymerase (RdRp) gene in the open reading frame (ORF1ab), nucleocapsid (N), and envelope (E) regions of the virus (23). The RT-PCR assay can be either a one-step or a two-step assay. In a one-step assay, conversion of RNA to cDNA and subsequent PCR amplification are performed in a single reaction tube. Although this assay provides quick and reproducible results, optimizing the protocol is challenging. In contrast, the two-step assay is carried out sequentially in two separate tubes. In comparison to the one-step PCR assay, this format is more sensitive but time-consuming (24,25).
Limited evidence suggests that the viral load peaks during the first week of illness, then gradually declines over the second week (26). Viral presence has also been noted in some patients 28 days after onset of symptoms. High viral load during the early phase of illness suggests that patients could be most infectious during this period, and this might account for the high transmissibility of SARS-CoV-2.
Though RT-PCR provides a highly sensitive and specific method for the detection of infectious diseases, it is typically restricted to specialized clinical laboratories and is not suitable for quick, easy, point-of-care diagnostic applications. Currently, reverse transcription loop-mediated isothermal amplification (RT-LAMP) is in the development and testing phase for SARS-CoV-2 detection (27). This highly specific technique uses a DNA polymerase and specially designed primers that recognize distinct target sequences on the target genome. In general, two inner primers and two outer primers are designed to synthesize new DNA strands (28). The reaction occurs in less than an hour under isothermal conditions at 60-65 °C. The approach is efficient while still achieving a high level of precision, with low background signal and convenient visual detection, and it does not need sophisticated equipment (28).
CRISPR-based detection can also provide a rapid, highly sensitive and specific approach for molecular diagnostics. The CRISPR-based SHERLOCK (Specific High Sensitivity Enzymatic Reporter UnLOCKing) technique for the detection of COVID-19 uses the CRISPR effector Cas13, which is activated by binding to a SARS-CoV-2-specific guide RNA (29). Detection is through a fluorescent signal produced by Cas13-mediated cleavage of fluorophore-quencher probes. Another CRISPR-based assay, the DNA Endonuclease-Targeted CRISPR Trans Reporter (DETECTR), uses Cas12a to provide a faster alternative to the real-time RT-PCR assay (30). Several other novel diagnostic methods are in the developmental phase or under evaluation. The Foundation for Innovative New Diagnostics (FIND) is conducting independent evaluations of molecular tests and immunoassays available for COVID-19 diagnostics, in collaboration with the WHO, the University Hospitals of Geneva (HUG) and others (Supplementary Tables 1 and 2). Results of the first round of independent evaluations of COVID-19 PCR based tests have been released and are depicted in Table 1.
The Xpert Xpress SARS-CoV-2 test (Cepheid) (FDA Emergency Use Authorization) utilizes the GeneXpert platform, which is widely used for tuberculosis and HIV testing, especially in low-and middle-income countries.This capacity might be useful to scale up testing across the world, especially in resource poor settings.
Antigen detection tests
One type of rapid diagnostic test (RDT) detects the presence of viral proteins (antigens) expressed by the COVID-19 virus in a respiratory sample. If the target antigen is present in sufficient concentration in the sample, it binds to specific antibodies fixed to a paper strip and generates a visually detectable signal, typically within 30 minutes. The antigen(s) detected are expressed only when the virus is actively replicating; therefore, such tests are recommended to identify acute or early infection.
The performance of these tests depends on the time from onset of illness, the concentration of virus in the specimen, the quality of the specimen collected, and how it is processed. For antigen-based RDTs for other respiratory viruses such as influenza, sensitivity has been shown to vary from 34% to 80% (32).
Based on this information, half or more of COVID-19 infected patients might be missed by such tests. With the limited data now available, WHO does not currently recommend the use of antigen-detecting rapid diagnostic tests for clinical decision making, although research into their performance and potential diagnostic utility is highly encouraged.
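To quantify this point, the sketch below computes the expected number of missed infections and the predictive values per 10,000 people screened at the influenza-RDT sensitivities cited above; the specificity (98%) and prevalence (5%) are assumptions chosen purely for illustration.

def screen(sensitivity, specificity=0.98, prevalence=0.05, n=10_000):
    """Expected test outcomes per n people screened (assumed inputs)."""
    infected = prevalence * n
    tp = sensitivity * infected
    fn = infected - tp                      # infections missed by the test
    fp = (1 - specificity) * (n - infected)
    tn = (n - infected) - fp
    return {"missed": round(fn),
            "PPV": round(tp / (tp + fp), 2),
            "NPV": round(tn / (tn + fn), 2)}

for sens in (0.34, 0.80):
    print(f"sensitivity={sens:.0%}", screen(sens))
# sensitivity=34% -> 330 of 500 infections missed; sensitivity=80% -> 100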
According to Seo et al., a field-effect transistor (FET)-based biosensing device can be used to detect SARS-CoV-2 in clinical samples (33). The sensor was fabricated by coating graphene sheets of the FET with a specific antibody against the SARS-CoV-2 spike protein. The performance of the sensor was determined using antigen protein, cultured virus, and nasopharyngeal swab specimens from COVID-19 patients. The FET device could detect the SARS-CoV-2 spike protein at concentrations of 1 fg/mL in phosphate-buffered saline and 100 fg/mL in clinical transport medium (34). Monoclonal antibodies against the nucleocapsid protein of SARS-CoV-2 have also been generated, which might form the basis of a future rapid antigen detection test (35).
Antibody detection tests
It is well known that the identification of IgM/IgG antibodies is a much less complex process than molecular identification of the virus (36). The assays can be performed on samples collected from blood or saliva. These "serological" tests rely on the detection of antibodies, usually against the nucleocapsid or spike proteins, in the sample. A negative result in a serological assay does not rule out infection. Cross-reactivity with non-SARS-CoV-2 coronavirus proteins is also a potential problem (37). These IgM/IgG detection assays are more reliable when patients present to the hospital at a late stage of infection, when RT-PCR may be falsely negative due to a decrease in viral shedding (38).
After SARS infection, IgM antibodies could be detected in patient samples after 3-6 days, and IgG after 8 days (39). However, the antibody response to SARS-CoV-2 has shown a different profile according to the limited serological studies available. IgM and IgG appear within days to weeks after the onset of symptoms, with the median time to seroconversion being 10-13 days. Detection of IgM against SARS-CoV-2 tends to indicate recent exposure, whereas detection of IgG indicates prolonged exposure to the virus; detecting both IgM and IgG could thus provide useful information on the time course of infection. The available antibody kits may detect IgM, IgG, or combined IgM/IgG. Apart from these rapid kits, many ELISA based antigen or antibody kits have been approved for diagnostic or research purposes, with several others in development (Supplementary Tables 1 and 2). Unlike rapid test kits, ELISAs provide quantification of antibodies and are less vulnerable to false-positive and false-negative reactions.
Digital Diagnostics
In this era of machine learning, digital diagnostics has emerged as a new innovation in the medical field, as a complementary tool to standard screening and diagnostic tests. Current digital technologies are highly sensitive, specific, non-invasive and cost-effective. They can help reduce the timeframe and workload needed to deal with a high number of cases, hence minimizing the risk of transmission to other patients and hospital staff (42). The COVID-19 outbreak has provided another opportunity for Artificial Intelligence (AI) applications to prove their worth in health care settings. Two such examples are Infervision and Intrasense Myrian, algorithm-based AI technologies developed to read clinical images (40). These algorithms distinguish between lung lesions of COVID-19 and those of other respiratory infections; they measure the volume, shape, and density of multiple lung lesions and compare their changes across images to provide a quantitative report that assists healthcare workers in making quick decisions. Another AI-based deep learning architecture, COVIDiagnosis-Net, showed a high accuracy of 98.3% in processing and analysing X-ray images for early-stage detection of COVID-19 cases (41). A further digital diagnostic tool in development is AiroStotleCV19, a breath test for volatile organic compounds (VOCs). Being a viral infection, COVID-19 induces oxidative stress, and developers are working on the identification of oxidative stress biomarkers in breath tests for the early diagnosis of COVID-19.
Community surveillance and control
Being resource-intensive and costly, current molecular tests are used for confirmation of COVID-19 among possible suspects, most often symptomatic patients. However, apart from transmission from symptomatic patients, pre-symptomatic and asymptomatic transmission plays a key role in driving disease transmission across communities, especially because of the hidden nature of this spread.
Pre-symptomatic transmission:
The incubation period for COVID-19 is around 5-6 days, lasting up to 14 days. During this period, also known as the pre-symptomatic period, people can be contagious and transmission can occur. Pre-symptomatic transmission has been documented through contact tracing efforts and enhanced investigation of clusters of confirmed cases (43-45). Data suggest that some people can test positive for COVID-19 from 1-3 days before they develop symptoms, which makes it likely that people infected with COVID-19 can transmit the virus before significant symptoms develop (44).
Asymptomatic transmission:
An asymptomatic laboratory-confirmed case is a person infected with COVID-19 who does not develop symptoms, and asymptomatic transmission refers to transmission of the virus from such a person. A recent study in NEJM reported that the viral load detected in an asymptomatic patient was similar to that detected in symptomatic patients, indicating the potential for transmission from asymptomatic patients (46). On January 24, The Lancet reported a familial cluster of SARS-CoV-2 infection with a travel history to Wuhan, in which the asymptomatic child presented with no fever, respiratory tract symptoms or diarrhoea, but had ground-glass lung opacities on radiography (47). Subsequently, several asymptomatic patients were confirmed to have COVID-19 in many Chinese cities, most of them having an epidemiological history with the potential of infecting others. A study showed that during the outbreak of SARS-CoV, 7.5% of all exposed health care workers were asymptomatic SARS-positive cases (48).
Early detection and isolation of these hidden cases is necessary to reduce the size of the SARS-CoV-2 outbreak. Current strategies have focused on identifying, testing, and isolating COVID-19 suspects and symptomatic patients. However, this misses asymptomatic transmission, a major driver of community transmission of the coronavirus that may account for as much as 80% of transmission. Widespread testing of populations can play a key role in identifying asymptomatic people and isolating them, thus curbing further transmission. Countries such as South Korea have successfully controlled the pandemic by testing aggressively to identify possible carriers of infection and isolating them effectively (Figure 4). However, in resource-poor settings, where upscaling of conventional RT-PCR is cumbersome, the use of rapid test kits can be a feasible option for population-wide testing.
Rapid diagnostic tests (RDTs) are simple stand-alone antigen/antibody detection tests that can be used at the point of care, outside the laboratory or hospital, by minimally trained staff, and can provide results within 15 minutes. They are attractive for decentralized testing, particularly in low-resource settings. These rapid tests can be used to broaden the criteria for testing and to include asymptomatic persons with probable exposure to the virus. In India, RDTs have been approved for use in hotspots/cluster containment zones to identify asymptomatic persons exposed to the virus and isolate them to prevent further community transmission.
However, because of their limited sensitivity, RDT-negative results are further confirmed by RT-PCR.
The uses of rapid antibody tests are manifold. RDTs could be used in seroprevalence surveys to understand the dynamics of the spread of the virus in the community and to assess attack rates and the extent of an outbreak. They can verify the immune response to vaccines during clinical trials, be used in contact tracing weeks or longer after a suspected infection, and help inform public policy makers about the burden of asymptomatic cases in a population. This is useful for community surveillance and for understanding the epidemiology of COVID-19 in the country.
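When such surveys use an imperfect RDT, the apparent positivity rate can be corrected with the standard Rogan-Gladen estimator, sketched below; the sensitivity, specificity, and survey figures used here are illustrative assumptions.

def rogan_gladen(apparent, sensitivity, specificity):
    """Estimate true prevalence from apparent (test-positive) prevalence.

    p_true = (p_apparent + Sp - 1) / (Se + Sp - 1), clamped to [0, 1];
    only meaningful when Se + Sp > 1 (an informative test).
    """
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("test is uninformative (Se + Sp <= 1)")
    p = (apparent + specificity - 1.0) / denom
    return min(max(p, 0.0), 1.0)   # clamp to a valid proportion

# Example: 8% of RDTs positive with assumed Se = 0.85 and Sp = 0.97
print(round(rogan_gladen(0.08, 0.85, 0.97), 3))  # ~0.061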
A positive test result in the convalescent phase indicates that the person may be safe from another infection for at least some time, which means they could return to work or act as a shield for the vulnerable population until herd immunity is achieved. However, there is no evidence that people who have recovered from COVID-19 and have antibodies are protected from a second infection, and there have already been some reported cases of re-infection with the coronavirus.
Conclusions
Rapid and early detection of the SARS-CoV-2 virus is key to preventing the spread of the virus and controlling the pandemic. The first line of defence against any outbreak is always the development of diagnostic assays for the identification of confirmed cases and their isolation. Immunoassays against antigens or antibodies provide the second line of diagnostics and complement nucleic acid tests.
Worldwide lockdowns with strict social distancing and the recommended use of masks were adopted by most countries to curtail the spread of COVID-19. However, without doubting the efficiency of lockdowns, there is a high chance of secondary epidemic waves following their end. Thus, prompt and reliable diagnostic facilities, along with appropriate non-pharmacological interventions and vaccines, are the need of the hour. The future development of portable assays, such as isothermal amplification, barcoding, and microfluidic technologies, and the application of artificial intelligence algorithms could enable point-of-care testing and multiplex assays to be rapidly implemented in an outbreak situation. This approach can reduce mortality and help curtail the spread of zoonotic pathogens.
Figures
Figure 2: Diagrammatic representation of the structure of SARS-CoV-2. SARS-CoV-2 has an outer envelope encapsulating the RNA and nucleoprotein (N); the spike glycoprotein (S) and matrix protein (M) are transmembrane proteins embedded in the envelope (9,49,50). The coronavirus spike (S) glycoprotein mediates viral entry via ACE2 and is made of two subunits: S1 (binding to the host cell receptor) and S2 (fusion of the viral and cellular membranes).
Hepatocellular Carcinoma: a Comprehensive Review of Biomarkers, Clinical Aspects, and Therapy
Hepatocellular carcinoma (HCC) is a leading cause of cancer-related deaths worldwide. At an early stage, curative treatments such as surgical resection, liver transplantation and local ablation can improve the patient's survival. However, the disease is often detected at an advanced stage, when the available therapies are restricted to palliative care and local treatment. Early detection of HCC and adequate therapy are crucial to increase survival as well as to improve the patient's quality of life. Therefore, researchers have been investigating molecular biomarkers with high sensitivity and reliability, such as Golgi protein 73 (GP73), Glypican-3 (GPC3), Osteopontin (OPN) and microRNAs. MicroRNAs can regulate important pathways in carcinogenesis, such as tumor angiogenesis and progression, and can therefore be considered possible prognostic markers and therapeutic targets in HCC. In this review, we discuss the recent advances related to the causes (highlighting the main risk factors), treatment, biomarkers, clinical aspects and outcome of hepatocellular carcinoma.
Introduction
Hepatocellular carcinoma (HCC), the primary cancer of the liver, is derived from hepatocytes and accounts for approximately 80% of liver cancer cases (Jemal et al., 2011).
The mean age at diagnosis of HCC is between 55 and 59 years in China, and 63-65 years in North America and Europe. The highest rates of HCC are observed in East and Southeast Asia and in East and Western Africa (Jemal et al., 2011; El-Serag, 2012). The latest data estimated by GLOBOCAN showed about 782,000 new cases and 745,000 deaths from liver cancer worldwide in 2012. Accordingly, the World Health Organization (WHO) considers HCC the second leading cause of cancer deaths (Ferlay et al., 2015).
HCC development results from the interaction between environmental and genetic factors. Liver cirrhosis, hepatitis B virus (HBV) and hepatitis C virus (HCV) infection, excessive alcohol consumption, ingestion of aflatoxin B1, and nonalcoholic steatohepatitis (NASH) are important risk factors for HCC development (Pimenta et al., 2010;Gomes et al., 2013).
The life expectancy of patients with HCC depends on the stage of the cancer at diagnosis. At an advanced stage, when traditional chemotherapy has no satisfactory effect, survival of only a few months and a poor prognosis are expected (Liu et al., 2015). At an early stage of HCC, curative treatments such as surgical resection, liver transplantation and local ablation can improve the survival of patients, and five-year survival can be achieved. Therefore, early detection and adequate therapy are crucial to increase survival and improve the quality of life of HCC patients. When the tumor is classified as stage C (advanced stage), with the presence or absence of vascular invasion and preserved liver function according to the Barcelona Clinic Liver Cancer (BCLC) classification, the use of Sorafenib has been effective in improving these patients' survival (Gomes et al., 2013; de Lope et al., 2012).
Alpha-fetoprotein (AFP) has been used as a serum biomarker in HCC diagnosis. However, AFP is not a precise marker, since it provides low sensitivity and specificity (Morimoto et al., 2012; Lok et al., 2010). Therefore, a biomarker with higher diagnostic accuracy and reliability is needed. Recent studies have identified many tumor biomarkers in HCC, such as Golgi protein 73 (GP73), Glypican-3 (GPC3) and microRNAs (Ba et al., 2012; Feng et al., 2014; Bartel, 2004).
Current genetic research can contribute to the diagnosis, prognosis and therapy of HCC, as well as provide insights into further steps of molecular medicine applied to cancer (Villanueva et al., 2010). In this context, the present review highlights the recent advances related to the causes, treatments, biomarkers, clinical aspects and outcome of hepatocellular carcinoma.
Causes/Risk Factors
HCC carcinogenesis is often associated with liver cirrhosis resulting from chronic liver diseases such as chronic hepatitis, HBV or HCV infection, and autoimmune hepatitis. Other risk factors include excessive alcohol consumption, NASH, non-alcoholic fatty liver disease (NAFLD), exposure to and ingestion of aflatoxin, diabetes mellitus, tobacco and, sporadically, genetic diseases such as alpha-1 antitrypsin deficiency, hemochromatosis, tyrosinemia, porphyria and Wilson's disease (Mittal and El-Serag, 2013; McGlynn and London, 2011; Pimenta and Massabki, 2010). Figure 1 represents the risk factors for the development of hepatocellular carcinoma.
Yang and colleagues have studied other risk factors related to HCC development, such as male sex, advanced age, obesity, and co-infection with HBV. A large population-based study of 11,801 males from Taiwan (followed up for 15 years) concluded that an increased risk of HCC development was attributable to three factors, in the following proportions: HBV infection (55.7%), HCV infection (15.3%) and alcohol consumption alone (2.1%). The study observed an association between combined HBV and HCV infection and an increased risk of developing HCC (1.7%), and found that the combination of HBV infection with alcohol consumption increases the risk by 4.2% (Liao et al., 2012).
The presence of chronic HCV infection together with excessive alcohol consumption can double the risk of HCC compared with the infection alone (Jelic et al., 2010). Women are more susceptible than men to liver injury and the development of cirrhosis from alcohol ingestion because of the influence of hormones such as estrogen. The liver is the organ responsible for metabolizing these hormones, and the presence of alcohol in the liver results in increased oxidative stress and inflammation in the setting of high levels of female steroid hormones (Guy and Peters, 2013; Eagon, 2010).
In the liver, alcohol is metabolized to acetaldehyde, which is highly toxic to hepatocytes and can result in the formation of adducts, which inactivate the glutathione peroxidase system and cause mitochondrial damage. Deleterious effects of alcohol and its toxic metabolites include defects in DNA synthesis and repair in liver cells and the generation of reactive oxygen species (ROS), resulting in oxidative stress and up-regulation of proinflammatory signaling (Setshedi et al., 2010; Orman et al., 2013).
Some studies have shown that NAFLD is another risk factor for the development of HCC. Currently, metabolic disorders such as diabetes mellitus, obesity and metabolic syndrome are a public health problem on a global scale. NAFLD is present in approximately 70% of people with diabetes mellitus and 90% of people with obesity; it is therefore a potential risk factor for HCC (Gomes et al., 2013). Diabetic patients have a higher risk (2.5-fold) of developing HCC, independent of co-infection with HBV or HCV and of alcohol consumption (El-Serag, 2012). NAFLD can progress to NASH, cirrhosis and HCC, owing to the inflammation resulting from fat accumulation in the liver (Pocha et al., 2015; Starley et al., 2010; Michelotti et al., 2013). A study conducted in Europe suggested that tobacco consumption is a potential risk factor for hepatocarcinogenesis; tobacco was associated with liver disease and with an increased risk of mortality associated with alcohol consumption (Trichopoulos et al., 2011; Shih et al., 2012).
Another risk factor for hepatocarcinogenesis is the ingestion of and exposure to fungal aflatoxins, particularly aflatoxin B1, produced by the fungi Aspergillus flavus and Aspergillus parasiticus, which often contaminate stored crops, such as peanuts and wheat, in areas with limited resources. Mycotoxins bind to hepatocyte DNA, resulting in mutation of tumor suppressor genes or proto-oncogenes, particularly the p53 gene, in mitotically active cells, such as in livers with chronic HBV infection or cirrhosis (Pimenta et al., 2010; Yang et al., 2010).
Treatment
Currently, there are many curative and/or palliative treatments for HCC. The choice of the appropriate treatment should take into account the cancer stage, the expertise of the professionals, the available resources, and the age and comorbidities of the patient (Maida et al., 2014).
The BCLC classification considers the relevant parameters of HCC and classifies patients into very early, early, intermediate, advanced and terminal stages. Resection, ablation and transplantation are considered potentially curative options for patients in the early stages. Chemoembolization is indicated for the intermediate stage, while patients in the advanced stage can be treated with Sorafenib. Finally, patients in the terminal stage receive palliative care to improve their quality of life.
Arterial chemoembolization is defined as an intra-arterial infusion of a chemotherapeutic agent (doxorubicin or cisplatin) combined with embolization of the tumor's vascular supply, resulting in a cytotoxic and ischemic (hypoxic) effect (Waller et al., 2015). Hypoxia increases cell permeability and the local concentration of the chemotherapeutic agent. Chemotherapeutic agents are contraindicated in patients with impaired liver and portal vein function, encephalopathy or compromise of the biliary system. Studies have shown that vascular obstruction induces the release of angiogenic factors, which motivates combining Sorafenib with arterial chemoembolization in order to achieve greater therapeutic safety.
Currently, Sorafenib, an oral multikinase inhibitor with anti-angiogenic and anti-proliferative effects, is the treatment of choice for advanced HCC patients with preserved liver function (Carr et al., 2010; Bruix et al., 2011). Meta-analyses and multicenter trials have shown the efficacy of Sorafenib in prolonging survival and time to progression of the disease. Common side effects such as dermatologic toxicity, diarrhea, fatigue and an increased incidence of hypertension were reasonably well tolerated and manageable in clinical practice (Han et al., 2016). The cost of treatment with Sorafenib is high, and this medication is not provided by the Brazilian public health system (Carr et al., 2010).
Patients in the terminal stage receive palliative care, aiming to improve quality of life and increase survival. One method chosen by the healthcare team for patients at this stage is brachytherapy, an internal radiotherapy modality that is less aggressive than the conventional one (Forner et al., 2014; Maida et al., 2014; Mazzanti et al., 2016) (Figure 2).
Surgical resection is the treatment of choice for HCC patients at an early stage without cirrhosis. In cirrhotic patients, surgical resection is indicated only in cases with preserved liver function, a single nodule, and absence of portal hypertension. The five-year survival rate varies between 50 and 75% in patients undergoing resection, but the recurrence rate (tumor development and metastasis) can reach 50% (Bruix et al., 2011). Therefore, surgical resection is not recommended for HCC associated with vascular invasion or tumor metastasis (Maida et al., 2014).
A common therapy is ablation, which involves the destruction of tumors <5 cm using radiofrequency applications. Ethanol injection and cryosurgery are alternatives for patients who cannot undergo resection or transplantation. Ablation causes necrosis and ischemia in the tumor microcirculation (Ryan et al., 2016). Randomized controlled trials comparing surgical resection and radiofrequency ablation have reported no significant differences in survival or recurrence rates, though ablation was associated with lower rates of hospitalization and treatment-related complications (Kang et al., 2015).
In HCC patients with cirrhosis, transplantation is the indicated treatment because of the reduced recurrence rate and increased survival. However, owing to the shortage of organs available for transplantation, patients who are likely to have better outcomes are prioritized for liver transplant (Adam et al., 2012; Clavien et al., 2012). Because of the limited availability of deceased-donor livers, transplantation from living donors has been a strategy to reduce waiting times and decrease mortality, and it is an alternative treatment for patients with HCC and advanced cirrhosis. However, studies have shown that the relapse rate is higher with living-donor transplantation (Akamatsu et al., 2014; Chen et al., 2015; Rahimi et al., 2015). It is an important role of the healthcare team to monitor the patient in all aspects, physiological and psychological, since any alteration may be suggestive of metastasis (Kumar and Panda, 2014; Schlachterman et al., 2015).
The recent positive results of therapies with multi-target blocking activity, for example against KRAS (Kirsten Rat Viral Sarcoma Oncogene Homolog) and VEGF (Vascular Endothelial Growth Factor), represent progress in the treatment of HCC patients, demonstrating the better efficacy of molecular therapies in comparison with conventional chemotherapy and strengthening current clinical oncology (Villanueva et al., 2010; Baines et al., 2011).
A challenge in the search for new therapies is that HCC is genetically heterogeneous. Moreover, VEGF is involved in several mechanisms responsible for tumor progression, invasion and metastasis. The identification of new targets from experimental studies could help predict liver carcinogenesis and describe new biomarkers, delineating efficient therapeutic strategies to significantly reduce the number of HCC-related deaths (Schütte et al., 2015).
Biomarkers
Modern analytical techniques such as Next Generation Sequencing (NGS), mass spectrometry, proteomics and metabolomics can provide important information for medical oncology and contribute to the identification of novel molecular biomarkers for the diagnosis of HCC (Marquardt and Andersen, 2012). These biomarkers are identified by genomic platforms and other genetic analyses of blood, tissue, urine, feces and saliva, and could contribute to the development of individualized treatment according to genetic composition and exposure to environmental risk factors. Biomarkers may be used for diagnosis, prognosis and identification of the clinical stage of HCC (Carethers et al., 2015; Jameson et al., 2015).
Recent studies indicate promising molecular biomarkers, such as GPC3 (Glypican-3), OPN (Osteopontin), GP73 (Golgi protein 73), the VEGF gene (Vascular Endothelial Growth Factor), the EGF gene (Epidermal Growth Factor), the PDGF gene (Platelet-Derived Growth Factor), the IGF gene (Insulin-like Growth Factor), mTOR (Mammalian Target of Rapamycin) and microRNAs, as potential candidates to be clinically validated in the future (Mínguez et al., 2011; Schütte et al., 2015; Biselli-Chicote et al., 2012; Kedmi et al., 2015; Okada et al., 2015; Jung et al., 2015; Alqurashi et al., 2013). The most relevant genes, proteins and their related pathways involved in HCC are summarized in Table 1.

GPC3 is a member of the cell-surface proteoglycan family, anchored to the plasma membrane by a glycosyl-phosphatidylinositol bond and playing a role in the control of cell division and regulation. This protein can inhibit the activity of dipeptidyl-peptidase-4 (DPP4), inducing apoptosis in some cell types, and it is overexpressed in HCC, predicting a poor prognosis for patients. Studies suggest GPC3 as a potential marker of malignancy, with 77% sensitivity and 96% specificity in the detection of small dysplastic nodules (<2 cm). Currently, researchers are investigating new therapeutic strategies using GPC3 in HCC (Feng et al., 2014).
OPN is a protein involved in the fusion of osteoclasts to the mineralized bone matrix. It is a cytokine that up-regulates the expression of interferon-gamma and interleukin-12. Studies suggest that OPN can induce the mesenchymal-epithelial transition (MET) of HCC cells by increasing the stability of the vimentin protein, which promotes the maintenance of cell structure, cytoplasmic integrity and stabilization of cytoskeletal interactions. Thus, understanding the mechanisms in which OPN participates could lead to new therapeutic possibilities in metastatic HCC (Dong et al., 2016; Wen et al., 2016).
In a study in Thailand, OPN serum levels were significantly higher in HCC patients compared with controls or patients with nonmalignant chronic liver disease, suggesting that measurement of OPN could serve as a diagnostic marker for HCC (Chimparlee et al., 2015). To assess the diagnostic and prognostic value of OPN serum levels, a meta-analysis of eight clinical trials (n = 1399) observed that increased OPN levels were significantly associated with reduced overall survival and relapse-free survival. This protein could therefore have predictive significance for determining HCC survival, independently of AFP (Alpha-fetoprotein) (Cheng et al., 2014).
GP73 is a membrane protein belonging to the Golgi complex expressed in the liver and biliary epithelial cells. GP73 expression is higher in patients with HCC, suggesting that this protein can be involved in an important mechanism of liver carcinogenesis (Ba et al., 2012).
In a multicenter study conducted in China and the United States, serum levels of GP73 and AFP were evaluated in 4217 patients and controls. The sensitivity and specificity of GP73 for HCC were 74.6% and 97.4%, respectively, compared with 58.2% and 85.3% for AFP, indicating that GP73 is a potential tumor marker with higher sensitivity and specificity. Corroborating other studies, GP73 levels were significantly higher in HCC patients than in healthy subjects, and serum levels decreased after surgical resection and increased with tumor recurrence. Thus, GP73 can be a useful biomarker for the identification of HCC in high-risk populations (Mao et al., 2010).
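As a simple illustration of how such sensitivity and specificity figures are defined, the sketch below computes them from a confusion matrix; the counts are hypothetical, chosen only to land near the GP73 values quoted above, and are not taken from the study.

```python
# Minimal sketch of the definitions behind reported sensitivity/specificity.
def sensitivity(tp: int, fn: int) -> float:
    # Fraction of true disease cases the marker correctly flags.
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # Fraction of disease-free subjects the marker correctly clears.
    return tn / (tn + fp)

# Hypothetical marker results on 200 HCC patients and 200 controls:
tp, fn = 149, 51   # marker positive/negative among HCC patients
tn, fp = 195, 5    # marker negative/positive among controls
print(f"sensitivity = {sensitivity(tp, fn):.1%}")   # 74.5%
print(f"specificity = {specificity(tn, fp):.1%}")   # 97.5%
```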
In order to elucidate the mechanisms of gene expression regulation related to tumor formation, some studies have investigated microRNAs (miRNAs). MiRNAs are small non-coding RNAs of ~21 nucleotides that regulate gene expression and multiple cellular processes such as cell differentiation, maintenance of progenitor cells, and the epithelial-mesenchymal transition. Changes in the regulation of miRNAs are a common feature of malignant cancers. The mechanisms involved in cancer development include gene amplification/deletion, chromosomal rearrangements, and epigenetic regulatory mechanisms, including DNA methylation and histone modifications. MiRNAs can act as tumor suppressors (down-regulated) or oncogenes (up-regulated) depending on their target genes (Bartel, 2004).
Studies have highlighted fundamental roles of miRNAs in liver carcinogenesis, such as the modulation of cells in different phases including proliferation, maintenance, apoptosis and metastasis. One of the main advantages of using miRNAs in therapy is their capacity to regulate multiple genes and signaling cascades involved in tumor growth. Thus, some miRNAs represent potential targets in the treatment of HCC (Gramantieri et al., 2008; Liu et al., 2014).
A retrospective study found that, by combining clinical parameters with miRNAs, it is possible to develop a score classifying groups at low and high risk of recurrence and mortality. The study suggested that the use of a specific pattern of microRNA expression in combination with tumor classification criteria could provide a more accurate estimate of tumor recurrence. Thus, miRNAs can act as important biomarkers after liver transplantation to assess post-surgical recurrence (Liese et al., 2016).
A recent study evaluated in silico 829 miRNAs and their targets in HCC tissues and non-tumor tissues. It was observed that six of these miRNAs can regulate a significant number of targets involved in liver carcinogenesis. miR-26a, miR-122 and miR-130a were down-regulated in HCC and are involved in repair pathways, DNA replication and transcription, while miR-21, miR-93 and miR-221, related to the metabolic pathways of carbohydrates, amino acids and lipids and to the immune system, were up-regulated (Thurnherr et al., 2016).
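For readers unfamiliar with how such up- and down-regulation calls are made, the following is a minimal sketch based on the sign of the log2 fold change between tumor and matched non-tumor tissue; the expression values are invented for illustration and are not from the cited study.

```python
# Hedged sketch: flag a miRNA as up- or down-regulated in HCC from the
# log2 fold change of its mean expression in tumor vs. non-tumor tissue.
import math

expression = {              # (mean tumor, mean non-tumor), arbitrary units
    "miR-122": (20.0, 310.0),
    "miR-21":  (95.0, 12.0),
}
for mirna, (tumor, normal) in expression.items():
    lfc = math.log2(tumor / normal)
    status = "up-regulated" if lfc > 0 else "down-regulated"
    print(f"{mirna}: log2FC = {lfc:+.2f} -> {status} in HCC")
```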
A study in China analyzed the expression of microRNAs in 96 tumor samples and non-tumor tissues of HCC patients, of whom 88% were HBV-infected. MiR-122 was down-regulated in HCC and would be responsible for mitochondrial metabolic regulation. The absence of miR-122 can be deleterious to the preservation of liver function and can result in higher mortality and morbidity rates in HCC patients (Burchard et al., 2010).
A recent study found that low expression of miR-122 in HCC cell lines results in cell proliferation, migration and invasion, caused by activation of the epithelial-mesenchymal transition through the Wnt/β-catenin signaling pathway, which is involved in the control of proliferation and differentiation of hepatocytes. MiR-122 is one of the most abundant and liver-specific miRNAs and can be detected from embryogenesis to adulthood; it can therefore be an important biomarker for liver diseases (Waisberg and Saba, 2015; Wang et al., 2016).
Table 1. Functions (with references) of the most relevant genes, proteins and related pathways involved in HCC:
- Suppression/modulation of growth (Dargel et al., 2015; Haruyama et al., 2015, 2016)
- Cytokine which up-regulates expression of interferon-gamma and interleukin-12 (Dong et al., 2016)
- Membrane protein of the Golgi apparatus expressed in liver and biliary epithelial cells (Ba et al., 2012; Yang et al., 2015)
- Repair pathways, DNA replication and transcription (Thurnherr et al., 2016)
- Repair pathways, DNA replication, transcription and autophagy mechanism (Thurnherr et al., 2016)
- Metabolic pathways and the immune system (Thurnherr et al., 2016)
- Vascularization (Fish et al., 2009; Fátima and Papa, 2010; Liu et al., 2016)
- Angiogenesis (Fish et al., 2009; Fátima and Papa, 2010; Liu et al., 2016)
- Suppressor of migration and cell invasion (Fish et al., 2009; Fátima and Papa, 2010; Liu et al., 2016)
- Cell proliferation and differentiation (Kedmi et al., 2015)
- Angiogenesis (Shao et al., 2011; Okada et al., 2015)
- Blockade of the cell cycle and apoptosis (Li et al., 2016)
- Blockade of the cell cycle (Zhang et al., 2014)
- Tumor suppressive (Zhang et al., 2014)
- Angiogenesis (Moeini et al., 2012; Cheng et al., 2016)
- Proliferation and cell migration (Kedmi et al., 2015)
- Cell development, homeostasis and aging (Su et al., 2010; Elmashad et al., 2015; Jung and Suh, 2015)
- Cell growth, differentiation, proliferation and migration (Li et al., 2016; Buitrago-Molina and Vogel, 2012; Merkenschlager and Marcais, 2015)

A retrospective study evaluated the expression of miR-21 in 112 patients with HCC who underwent surgical resection. It concluded that miR-21 expression was significantly up-regulated in tumor tissues compared with adjacent non-tumor tissues, and patients with high expression of miR-21 had lower survival rates than patients with low expression. Thus, high expression of miR-21 appears to be associated with tumor progression and can be used as a biomarker for HCC.
Several signaling pathways are implicated in HCC pathogenesis, such as VEGF, EGF, PDGF, IGF and mTOR. Activation of these pathways promotes resistance to apoptosis as well as cell growth, angiogenesis, invasion and metastasis (Cervello et al., 2012). VEGF is an important regulator of liver angiogenesis because it is strongly involved in neovascularization and the infiltration of cancer cells into the tumor capsule (Moon et al., 2003). Several studies have shown that VEGF overexpression is associated with proliferation, portal vein thrombosis, tumor aggressiveness and poor prognosis in HCC patients (Moeini et al., 2012). VEGF inhibition has been effective in many types of cancer; blocking angiogenesis is a route to reducing tumor growth and increasing survival rates (Biselli-Chicote et al., 2012). Inhibition of angiogenesis may thus have therapeutic potential in HCC, and several anti-angiogenic agents are currently being evaluated in clinical trials to control tumor growth (Moeini et al., 2012). Some miRNAs are highly expressed in the carotid arteries of mice, and it has been proposed that they belong to a specific class of miRNAs related to vascularization. The expression of miR-210 is induced by hypoxia and promotes VEGF-driven endothelial cell migration and the formation of capillaries. MiR-126 induces angiogenesis in response to growth factors such as VEGF or fibroblast growth factor by repressing negative regulators of signal transduction pathways. The VEGF gene has been identified as a direct target of miR-101; this interaction suppresses cell migration and invasion through the inhibition of VEGF. Therefore, miR-101 could be used for the development of therapies for HCC treatment (Fish et al., 2009; Fátima and Papa, 2010; Liu et al., 2016).
EGF leads to morphological and biochemical alterations in target cells, resulting in proliferation and cell migration. It is known that miR-15b has the EGF gene as a target and is overexpressed in several tumor types, such as liver, colon and cervical cancer, causing an increase in cell proliferation and differentiation. These alterations can be deleterious and promote the development and progression of cancer (Kedmi et al., 2015).
PDGF acts on transcriptional gene regulation through specific receptors and regulates angiogenesis. A study investigated the expression of miR-214 in the liver of a transgenic mouse model expressing PDGF; overexpression of PDGF was associated with hepatic fibrosis, steatosis and HCC development in this study.
MiR-214 appears to be involved in the development of liver fibrosis through the modulation of EGFR (epidermal growth factor receptor) and TGF-β (transforming growth factor beta) signaling pathways. AntimiR-214 could be a therapeutic agent for the prevention of liver fibrosis and the development of HCC (Okada et al., 2015).
IGF is responsible for the regulation of several biological mechanisms, including cell development, homeostasis and aging. Dysregulation of these pathways is related to metabolic disorders, cancer and neurodegenerative diseases. IGF levels indicate liver function and are inversely related to the severity of liver disease. A study in Egypt examined IGF serum levels in 89 patients with HCC, divided into three groups: 30 patients treated with Sorafenib, 30 patients who received best supportive care and 29 patients who underwent arterial chemoembolization. Patients with controlled disease had significantly higher levels of IGF than patients without disease control. Therefore, measurement of IGF can reflect liver function and help estimate the prognosis of HCC patients (Elmashad et al., 2015; Jung et al., 2015).
A study showed that miR-145 can inhibit the expression of genes such as IGF, blocking the cell cycle and inducing apoptosis in HCC by modulating the Wnt/β-catenin pathway (Law et al., 2012). Other miRNAs are also associated with IGF expression, such as miR-122. Reduced expression of miR-122 made cells resistant to Sorafenib-induced apoptosis, and IGF was found to be a possible target of this miRNA, being repressed by miR-122. Therefore, miR-122 could act as a potential predictive biomarker of therapeutic resistance in HCC (Xu et al., 2016). Another study concluded that down-regulation of miR-122 in patients infected with HBV induces chronic inflammation in the liver, contributing to carcinogenesis. In this context, increasing miR-122 expression could be used as a strategy for preventing the development of HCC in patients with hepatitis B (Li et al., 2016).
The mTOR signaling pathway regulates important cellular processes such as responses to growth factors, oxidative stress and cellular metabolism, which are closely related to cell growth, differentiation, proliferation and migration (Buitrago-Molina et al., 2012). Activation of the mTOR pathway in HCC is associated with poor prognosis and increased tumor recurrence. Considering the role of miRNAs as modulators of several pathways related to tumor development and progression, increased levels of certain miRNAs could regulate mTOR pathways and control disease progression (Alqurashi et al., 2013).
MiR-99 is considered a regulator of the mTOR pathway, and mTOR levels were inversely related to miR-99 expression in HCC tissue (Li et al., 2013). High expression of miR-100 may also be involved in inhibition of the mTOR pathway, resulting in autophagy and apoptosis of HCC cells (Ge et al., 2014). A study showed that miR-149 acts as a tumor suppressor by modulating the mTOR pathway, reducing tumorigenesis in HepG2 cells (Zhang et al., 2014).
A potential advantage of using miRNAs as biomarkers is that they can be detected in various biological materials, such as primary HCC tissue, plasma, serum, urine and saliva, providing a remarkable means for non-invasive early diagnosis of cancer. The analysis of miRNA expression has been performed mainly by real-time PCR and microarray techniques. Recently, the introduction of new technologies, such as NGS, has facilitated the discovery of new miRNAs. However, the lack of standardization of sample collection procedures can affect the results (Anwar et al., 2015).
MiRNAs have emerged as a group of small RNAs that control gene expression at the post-transcriptional level, affecting about 30% of human genes and modulating biological mechanisms such as inflammation, carcinogenesis and fibrogenesis. The use of microRNAs, either alone or in combination with other biomarkers, can enable the categorization of samples with respect to liver function and the evolution of HCC. However, several problems have been identified, such as the lack of standardization of protocols, which hampers the validation of microRNAs for the diagnosis and therapy of cancer (Anwar et al., 2015; Hayes et al., 2016).
Clinical Issues, Healthcare Team and Palliative Care
The first symptoms suggestive of HCC are pain in the upper-right quadrant of the abdomen, the appearance of palpable masses, loss of appetite and weight loss, jaundice, ascites, edema of the lower extremities, malaise, diarrhea and fever. Worsening ascites, gastrointestinal bleeding, splenomegaly and encephalopathy can be directly related to impaired liver function (Pimenta et al., 2010; Gomes et al., 2013).
The discovery of new therapies for cancer has resulted in increased life expectancy of the Brazilian population and increased demand for palliative care. However, the term palliative care is still associated with many stigmas, such as the dying process, thus requiring extensive reflection by the multidisciplinary team and orientation of patients and their families (Abreu et al., 2013).
Regardless of the treatment chosen for HCC, the knowledge of the entire multidisciplinary healthcare team, including doctors, nurses, pharmacists, dietitians, social workers and others, is necessary. In this way, each patient can be treated individually with the appropriate therapies based on clinical and scientific evidence from clinical trials. These professionals should weigh the benefits and disadvantages of therapy for quality of life, and assist patients with their biopsychosocial needs and health education. A planned course of palliative care provides the patient with counseling and management of the disorder, ensuring good intervention results. The delivery of supportive care for patients with HCC is summarized in Table 2, including aspects such as analgesia, radiotherapy and nutrition (Sun et al., 2008; Gholz et al., 2010; Gish et al., 2012; Kumar and Panda, 2014).
HCC has a negative impact on patients, mainly on their quality of life, affecting aspects related to their physical, mental, social, emotional and spiritual wellbeing. The healthcare team should be able to notice any changes in the patient, both physiological and psychological, and individualized care should be provided to improve the patient's quality of life (Fan et al., 2010).
Final Considerations
Currently, HCC is one of the biggest challenges in clinical cancer management, owing to its diverse molecular pathways, causative agents and late diagnosis. However, the risk factors for carcinogenesis have been widely investigated and identified in recent years using modern techniques such as genomic sequencing, which can contribute to the discovery of new molecular biomarkers.
Improvement can also be associated with the discovery of new drugs, such as Sorafenib, a multikinase inhibitor validated in multicenter clinical studies, which has shown survival benefit for some advanced-stage patients. However, the best ways to significantly reduce HCC-related incidence and mortality rates remain preventing HBV and HCV infection and reducing alcohol consumption.
To achieve these goals, effective public health policies coupled with the work of health professionals in primary care should be implemented, promoting education about and prevention of these diseases, so that HCC may leave the ranking of the most lethal tumors.
In conclusion, the emergence of HCC is the result of a multifactorial process involving cirrhosis; HBV and HCV infection; alcohol consumption; the presence of NAFLD or NASH; ingestion of and exposure to fungal aflatoxins; tobacco; and genetic factors. Depending on the disease stage, there are several curative and/or palliative treatments, such as surgical resection, ablation, transplantation, chemoembolization and Sorafenib, an oral multikinase inhibitor with anti-angiogenic and anti-proliferative effects that is widely used with favorable results. However, all treatments can have side effects that impair the quality of life of patients. Therefore, researchers have sought new molecular biomarkers with enhanced sensitivity and reliability, such as GP73, GPC3, OPN and microRNAs. MicroRNAs can regulate different cellular pathways closely linked to tumor progression and angiogenesis, facilitating the diagnosis, prognosis and therapy of HCC. A better understanding of these biomarkers facilitates the diagnosis of HCC and could significantly benefit patients through the discovery of new drug targets and a subsequent increase in the cure rate. Further studies are thus needed to fully understand hepatic carcinogenesis.
Right-sided invasive metastatic thymoma of the heart
Cardiac tumours may produce diverse symptoms through potential involvement of any structure of the heart. We describe a case of a highly malignant thymoma involving different cardiac structures with important haemodynamic compromise. Given the high sensitivity of transthoracic echocardiography for the detection of intracardiac masses, computed tomography and magnetic resonance add essential preoperative structural information on the tumour and surrounding tissue such as vessels, pleura, lung and mediastinum.
Introduction
In heart tumours, a variety of symptoms can be found, depending on the cardiac structures involved. Pericardial involvement occurs in approximately two-thirds of cases, which can lead to effusion and tamponade. Epicardial and myocardial localisation are each found in a third of cardiac metastases, whereas endocardial localisation is rare (5% of cases) [1].
As found in autopsy series, metastatic tumours in the heart are much more common than primary cardiac tumours [1]. Haematogenous and lymphatic spread from intrathoracic tumours (lung cancer, breast carcinoma, and mesothelioma) or haematological malignancies (leukaemia and lymphoma) are most frequent, while direct extension has been described primarily from thymoma and oesophageal cancer [1]. Whereas most thymomas with cardiac involvement are limited to the pericardium [2], cases of extension into the caval vein and right atrium have also been described [3,4]. We describe a case of a highly malignant thymoma involving different cardiac structures with important haemodynamic compromise.
Case report
A 25-year-old Nigerian man presented to our emergency department with complaints of tiredness, night sweats, and dizziness on exertion. On clinical examination, the patient had facial swelling and venous distension in the neck. Blood pressure was 135/77 mmHg with pulsus paradoxus. Cardiac auscultation was normal. The liver was enlarged. There was no peripheral oedema. Electrocardiography showed sinus tachycardia of 110 beats/min, a vertical axis, a QS pattern in V1, a Q wave in V2 and low voltages in the limb leads. Laboratory testing showed lymphocytosis, increased creatinine (120 μmol/l) and abnormal liver enzymes (total bilirubin 47 μmol/l, aspartate aminotransferase 124 U/l, alanine aminotransferase 200 U/l and γ-glutamyl transferase 268 U/l). On chest X-ray, right pleural effusion and mediastinal widening were seen. Abdominal ultrasound showed a liver diameter of 17.5 cm, with ascites, pleural and pericardial effusion and hepatopetal flow in the portal vein. Transthoracic echocardiography revealed a mass almost completely occupying the right atrium and ventricle with a distended inferior vena cava (Fig. 1). The pressure gradient over the tricuspid valve was 25 mmHg (PAP 25+15 mmHg). Pericardial effusion was present. Computed tomography (CT) of the thorax with contrast injected into the right antecubital vein revealed a mass in the anterior mediastinum with no clear distinction from the surrounding structures. There was a lack of contrast in the innominate and superior caval veins, and contrast filled the right atrium through the inferior caval vein by way of collateral veins. A large right atrial filling defect was found with faint filling of the right ventricle (Fig. 2).
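The tricuspid gradient and pulmonary artery pressure estimate quoted above follow the standard simplified Bernoulli arithmetic used in Doppler echocardiography; the sketch below reproduces the numbers, with the regurgitant-jet velocity back-calculated for illustration rather than taken from the report.

```python
# Sketch of the simplified Bernoulli equation used for Doppler gradients:
# peak gradient (mmHg) = 4 * v^2, with v the peak jet velocity in m/s;
# estimated PAP = tricuspid gradient + estimated right atrial pressure.
def bernoulli_gradient(v_ms: float) -> float:
    """Peak pressure gradient in mmHg from peak jet velocity in m/s."""
    return 4.0 * v_ms ** 2

v = 2.5                            # m/s, back-calculated to give 25 mmHg
gradient = bernoulli_gradient(v)   # 4 * 2.5^2 = 25 mmHg
ra_pressure = 15.0                 # assumed right atrial pressure, mmHg
print(f"TR gradient: {gradient:.0f} mmHg; "
      f"estimated PAP: {gradient + ra_pressure:.0f} mmHg")
```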
Due to the important haemodynamic compromise, the induction of anaesthesia was possible only after initiation of cardiopulmonary bypass with cannulation of femoral vessels under local anaesthesia. A large mass was removed from the right atrium (7×5×5 cm), but tumour tissue extending into the superior vena cava was only partially removed. Histology was compatible with type B2 thymoma. Moreover, within the tumour, immature T cells were found, with positive CD4, CD8, CD5, TdT and cytoplasmatic CD3 immunostaining. After an uneventful recovery, the patient was referred for palliative chemotherapy with cisplatinum, adriamycin and cyclophosphamide. Six months later, after four cycles of chemotherapy, regression of the residual tumour was seen and additive radiotherapy was planned.
Discussion
Independent of the nature of the primary tumour, cardiac metastases can involve any structure of the heart [1]. The clinical presentation may therefore vary considerably, and imaging is needed to provide the correct diagnosis. The sensitivity of transthoracic echocardiography for the detection of intracardiac masses has been reported to be 93% [5].
CT and magnetic resonance (MR) offer a structural overview of the tumour and surrounding tissue such as vessels, pleura, lung and mediastinum. Extension into surrounding tissue and possible metastases can be evaluated for preoperative information. Intravascular contrast provides information about intracardiac extension and flow around the tumour [3]. Furthermore, cardiac MR can differentiate between tumour, myocardium, thrombus and blood-flow artifacts. MR can characterise tumours by different signal intensities on T1 and T2 and by different enhancement after contrast; most tumours have low intensity on T1, high intensity on T2 and strong enhancement after contrast. Our patient presented with cardiovascular collapse due to obstruction of the superior caval vein, right atrium and tricuspid valve. Transthoracic echocardiography and CT provided a prompt evaluation of the severity and extent of the obstruction and offered essential preoperative information on tumour extension and potential metastases.
Atorvastatin at Reperfusion Reduces Myocardial Infarct Size in Mice by Activating eNOS in Bone Marrow-Derived Cells
Background The current study was designed to test our hypothesis that atorvastatin could reduce infarct size in intact mice by activating eNOS, specifically the eNOS in bone marrow-derived cells. C57BL/6J mice (B6) and congenic eNOS knockout (KO) mice underwent 45 min LAD occlusion and 60 min reperfusion. Chimeric mice, created by bone marrow transplantation between B6 and eNOS KO mice, underwent 40 min LAD occlusion and 60 min reperfusion. Mice were treated either with vehicle or atorvastatin in 5% ethanol at a dose of 10 mg/kg IV 5 min before initiating reperfusion. Infarct size was evaluated by TTC and Phthalo blue staining. Results Atorvastatin treatment reduced infarct size in B6 mice by 19% (p<0.05). In eNOS KO vehicle-control mice, infarct size was comparable to that of B6 vehicle-control mice (p = NS). Atorvastatin treatment had no effect on infarct size in eNOS KO mice (p = NS). In chimeras, atorvastatin significantly reduced infarct size in B6/B6 (donor/recipient) mice and B6/KO mice (p<0.05), but not in KO/KO mice or KO/B6 mice (p = NS). Conclusions The results demonstrate that acute administration of atorvastatin significantly reduces myocardial ischemia/reperfusion injury in an eNOS-dependent manner, probably through the post-transcriptional activation of eNOS in bone marrow-derived cells.
Introduction
Lipid-lowering therapy with 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase inhibitors (i.e., statins) has largely been viewed as a long-term strategy to reduce cardiovascular risk. Recent studies suggested that early use of statins after acute coronary syndromes may reduce the risk of subsequent ischemic cardiovascular events, and that the salutary effects of this early initiation of treatment were independent of baseline cholesterol levels [1][2][3]. This suggests that, besides the lipid-lowering effects of long-term use, statins might also act rapidly to reverse abnormalities of the circulatory system that may predispose to recurrent ischemic events. Potential examples of such abnormalities include endothelial dysfunction [4,5], local inflammatory responses [6,7], and/or an exaggerated thrombogenic tendency [8]. Several clinical trials have demonstrated that early statin treatment can reduce myocardial injury in patients undergoing PCI for myocardial infarction [9][10][11], although others have reported opposite results [12]. However, the precise mechanisms of the infarct-sparing effect of statins remain to be defined. Animal studies have shown that statins, such as atorvastatin and simvastatin, attenuate myocardial I/R injury in a manner that is independent of their lipid-lowering effect [13,14]. Furthermore, statins were recently found to exert cardioprotective effects when administered at the onset of reperfusion by activating a signal transduction pathway involving endothelial eNOS [15]. Recently, eNOS has been identified in human and mouse platelets [16,17]. Statins, such as atorvastatin, increase eNOS levels in platelets in a dose-dependent manner and decrease platelet activation in vivo [16]. This inhibition of platelet activation through the upregulation of platelet eNOS may contribute to atorvastatin-mediated protection against cerebral I/R injury [16]. The potential role of platelet eNOS in limiting myocardial I/R injury has yet to be explored.
In the current study, we examined the acute cardioprotection afforded by administering atorvastatin shortly before reperfusion in an intact mouse model of myocardial ischemia/reperfusion injury. We first hypothesized that atorvastatin acts as a potent inhibitor of post-ischemic inflammatory responses and thus protects the heart against reperfusion injury by activating eNOS. Given that the cardioprotective effects of atorvastatin proved to be robust in our model, we further examined the respective roles of endothelial eNOS and bone marrow-derived eNOS in atorvastatin-mediated cardioprotection.
Materials and Methods
This study conformed to the Guide for the Care and Use of Laboratory Animals published by the National Institutes of Health (Eighth Edition, revised 2011) and was conducted under protocols approved by the University of Virginia's Institutional Animal Care and Use Committee (Protocol Number: 3985).
Materials
Atorvastatin was a gift of Pfizer Inc. and was prepared in PBS and 5% (vol/vol) ethanol (pH 7.6). Triphenyltetrazolium chloride (TTC) was purchased from Sigma Chemical Co. (St. Louis, MO) and 3,3'-diaminobenzidine tetrahydrochloride from DAKO, Inc. (Carpinteria, CA). A rat anti-mouse neutrophil antibody was purchased from Serotec, Inc. (Raleigh, NC), and a rabbit polyclonal antibody against a peptide corresponding to the 25 COOH-terminal amino acids of P-selectin was a gift from Dr. S. A. Green (Univ. of Virginia, Charlottesville, VA).
Animals
A total of 122 male, 9-14-week-old mice purchased from Jackson Laboratory (Bar Harbor, ME) were used in this study, including wild-type C57BL/6 (B6) mice and eNOS knockout (KO) mice. Among these, 71 mice were chimeras created between these two strains by bone marrow transplantation into post-irradiated recipients. The mice were assigned to 16 groups as detailed in Table 1.
Bone marrow transplantation
Chimeras were produced using standard techniques as described previously [18,19]. Briefly, donor mice (8-10 wks; 24-26 g) were euthanized with an overdose of sodium pentobarbital, and death was confirmed by cervical dislocation. The bone marrow from the tibia and femur was harvested under sterile conditions, yielding ~50 million nucleated bone marrow cells per mouse. Recipient mice (6-7 wks; 22-25 g) were irradiated with two doses of 600 rads each, 4 hours apart. Immediately following irradiation, 2-4 × 10^6 bone marrow cells were injected intravenously via the external jugular vein under general anesthesia plus local injection of bupivacaine. During each irradiation, two control mice were included that did not receive bone marrow transplantation. Irradiated/transplanted mice were housed in micro-isolators for at least 8 weeks prior to experimentation.
Hemodynamic studies in chimeras
Hemodynamic parameters were assessed in 3 mice from each group of chimeras. Mice were anesthetized with isoflurane (1% by volume in oxygen). The right common carotid artery was exposed and cannulated with a 1.4F Millar micro-tip catheter (Millar Instruments, Inc., Houston, TX). After acquiring peripheral arterial blood pressures, the catheter tip was advanced into the left ventricular chamber. LV pressures (LVESP and LVEDP) and developed pressures (dP/dt+ and dP/dt-) were recorded (Table 2).
Myocardial ischemia/reperfusion
Mice were subjected to 40 or 45 min of coronary occlusion followed by 60 min of reperfusion and then euthanized to count peripheral white blood cells and to evaluate myocardial infarct size and/or leukocyte infiltration (Table 1). A standard protocol was employed, as detailed previously [19,20]. Briefly, mice were anesthetized with sodium pentobarbital (80-100 mg/kg, IP) and orally intubated. Artificial respiration was maintained with an FiO2 of 0.80, 100 strokes/min, and a 0.3-0.5 ml stroke volume. The heart was exposed through a left thoracotomy, and coronary artery occlusion was achieved by passing a suture beneath the LAD and tightening it over a piece of PE-60 tubing for 40 or 45 min. Reperfusion was induced by removing the PE-60 tubing.
Myocardial infarct size measurement
The mice were euthanized at 60 min after reperfusion, and the hearts were cannulated through the ascending aorta for sequential perfusion with 3-4 ml of 1.0% TTC and 10% Phthalo blue. The LAD was re-occluded with the same suture used for coronary occlusion prior to Phthalo blue perfusion to delineate the risk region. The LV was cut into 5-7 transverse slices that were weighed and digitally photographed to determine infarct size as a percent of the risk region, as described previously [19,20].
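As an illustration of the infarct-size arithmetic this method implies, the sketch below sums per-slice risk-region and infarct masses (slice weights multiplied by planimetered area fractions, a common approach) and expresses infarct as a percent of the risk region; all numbers are invented, not study data.

```python
# Minimal sketch: infarct size as a percent of the risk region, computed
# from per-slice weights and the risk/infarct fractions measured on the
# digital photographs of each TTC/Phthalo-blue-stained slice.
slices = [  # (slice weight mg, risk fraction of slice, infarct fraction of slice)
    (30.0, 0.40, 0.25),
    (32.0, 0.45, 0.30),
    (28.0, 0.35, 0.20),
]
risk_mass    = sum(w * r for w, r, _ in slices)  # mg of myocardium at risk
infarct_mass = sum(w * i for w, _, i in slices)  # mg of infarcted tissue
print(f"risk region: {risk_mass:.1f} mg; infarct: {infarct_mass:.1f} mg "
      f"({100 * infarct_mass / risk_mass:.0f}% of risk region)")
```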
Peripheral blood cell counts (CBC)
Peripheral blood cells were counted in B6 and eNOS KO mice with or without atorvastatin treatment (4 mice in each group). CBC was measured before LAD occlusion and again at 60 min post-reperfusion following the 45 min LAD occlusion. Blood (30-40 μl) was obtained by puncturing the left external jugular vein at each time point. Cell counts were performed with a HemaVet Hematology System (CDC Technologies, Oxford, CT).
Immunohistochemistry of neutrophils and platelets
Hearts were harvested, cut into five to seven short-axis slices and immediately fixed in 4% paraformaldehyde in PBS (pH 7.4) for paraffin embedding. Paraffin-embedded sections (5 μm) were rehydrated and incubated with 1% hydrogen peroxide. After being rinsed in PBS, the sections were incubated with 10% blocking serum. Immunostaining was performed with a rabbit polyclonal antibody (1:4,000) against a peptide corresponding to the 25 COOH-terminal amino acids of P-selectin (a gift from Dr. S. A. Green at Univ. of Virginia, Charlottesville). A rat anti-mouse neutrophil antibody (1:1,000) (Serotec, Raleigh, NC) was used to identify tissue neutrophils. The appropriate biotinylated secondary antibodies (Vector Laboratories) were then applied for 1 h at room temperature. After incubation with avidin-biotin complex (Vector Laboratories), immunoreactivity was visualized by incubating the sections with 3,3'-diaminobenzidine tetrahydrochloride (DAKO) to produce a dark brown precipitate.
Statistical analysis
All data are presented as the mean ± SEM. Cell counts, hemodynamic parameters, infarct sizes and risk region sizes were compared using one-way ANOVA followed by Student's t test with Bonferroni correction.
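The following sketch illustrates the stated analysis pipeline with SciPy: one-way ANOVA across groups, then pairwise t tests judged against a Bonferroni-corrected alpha. The group values are synthetic infarct sizes (% of risk region) invented for the example, not the study's data.

```python
# Hedged sketch of the stated statistics: one-way ANOVA, then pairwise
# Student's t tests with a Bonferroni-corrected significance threshold.
from scipy import stats

groups = {
    "B6 vehicle":      [60, 64, 61, 63, 62],
    "B6 atorvastatin": [50, 52, 49, 53, 51],
    "KO vehicle":      [64, 66, 65, 63, 67],
}
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

names = list(groups)
n_comparisons = len(names) * (len(names) - 1) // 2
alpha = 0.05 / n_comparisons  # Bonferroni correction for multiple tests
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_ind(groups[names[i]], groups[names[j]])
        verdict = "significant" if p < alpha else "NS"
        print(f"{names[i]} vs {names[j]}: p = {p:.4f} "
              f"({verdict} at alpha = {alpha:.4f})")
```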
Exclusion and mortality
Of the 122 mice that underwent myocardial ischemia/reperfusion injury, 2 mice died during early reperfusion. One mouse was excluded due to an inordinately small risk region (<25% of LV mass) (Table 1).
Effects of atorvastatin on myocardial infarction in B6 and eNOS KO mice
In WT C57BL/6 mice, 45 min of LAD occlusion and 60 min of reperfusion produced an infarct size of 62±2 (% of the risk region, RR). Atorvastatin administered 5 min before reperfusion resulted in a 19% decrease in myocardial infarct size as assessed by TTC staining (Fig. 1 & Data S1). A comparable infarct size was found in vehicle-treated eNOS KO mice (65±2, p=NS vs. B6 vehicle control); however, atorvastatin provided no protection in eNOS KO mice (68±2 vs. 65±2, p=NS; Fig. 1).
In order to define the target(s) through which atorvastatin produces its cardioprotective effect, the effects of atorvastatin, as well as of the eNOS gene, on blood cells were evaluated by HemaVet. Between B6 and eNOS KO mice, hematology parameters were comparable for hemoglobin levels (12.9±0.2 vs. 12.5±0.2 g/dl), WBC counts (5.5±1.1 vs. 5.8±0.6 K/μl), and platelet counts (784±57 vs. 893±51 K/μl). However, the white blood cell differentials were significantly different between the two strains (neutrophils: 14% in B6 vs. 6% in eNOS KO mice; lymphocytes: 83% vs. 91%; p<0.05). In B6 mice, LAD occlusion and reperfusion caused a 40-50% reduction in total circulating white blood cells and a 50-60% reduction in circulating lymphocytes. In contrast, the neutrophil count nearly doubled after ischemia/reperfusion in B6 mice, but this effect was effectively abolished by atorvastatin when administered just prior to the onset of reperfusion (Fig 2 & Data S1).
Hemodynamic parameters in chimeras
Mild hypertension has been reported in eNOS KO mice [21]. Hemodynamic parameters in chimeric mice were therefore assessed before studying these mice in the I/R injury protocol. Mice were anesthetized with 1% isoflurane, and arterial blood pressure and left ventricular pressure were measured with a Millar MicroTip catheter in 4 different groups of chimeras (donor/recipient by definition). There were no significant differences in heart rate, left ventricular end-diastolic pressure or left ventricular developed pressures (Table 2). However, there was a 20% increase in peripheral arterial pressures (systolic, diastolic and mean) and in left ventricular end-systolic pressure (LVESP) in KO/KO mice as compared with B6/B6 mice. In KO/B6 mice, in which only circulating blood cells lacked eNOS activity, no changes were found in peripheral blood pressures or LVESP as compared with B6/B6 chimeras. In B6/KO mice, in which only circulating blood cells had eNOS activity, systolic and mean arterial pressures as well as LVESP remained significantly higher than those of B6/B6 or KO/B6 mice, but diastolic and mean arterial pressures were significantly lower than those of KO/KO chimeras (Table 2).
Infarct sparing effect of atorvastatin in chimeras
As our ongoing studies revealed that the mouse model of ischemia/reperfusion injury can be made more sensitive to the effects of drug intervention on reperfusion injury by reducing the duration of ischemia from 45 min to 40 min, we refined our protocol accordingly by adopting 40 min of LAD occlusion and 60 min of reperfusion. Risk regions (RR, defined as a percentage of left ventricular mass) were comparable among the 8 groups (35% to 43%, p=NS). Infarct size (% of RR) was also comparable among the vehicle-treated chimeras, ranging from 41% to 48%. Compared with the corresponding vehicle-treated group, atorvastatin significantly reduced infarct size by 42% in B6/B6 mice and by 48% in B6/KO mice (p<0.05). However, no cardioprotective effect was found in KO/KO or KO/B6 chimeras (Figs 3, 4 & Data S1). Immunohistochemistry was performed in vehicle-treated B6/B6 mice and in atorvastatin-treated B6/B6, KO/B6 and B6/KO chimeras (n=3 in each group) to evaluate the infiltration of neutrophils and platelets into the myocardium. Atorvastatin was found to reduce both platelets and neutrophils in the previously ischemic region in B6/B6 and B6/KO chimeras, but not in KO/B6 chimeras (Fig 5).
Discussion
Many studies using in vivo animal models have consistently demonstrated that statins significantly reduce myocardial ischemia/reperfusion injury by activating eNOS [22][23][24]. However, the cell type(s) that statins act on remained unclear. By performing experiments in wild type (B6) and eNOS knockout (KO) mice, and chimeras of the two strains, we demonstrate here that bone marrow-derived cells are the primary mediators of myocardial reperfusion injury. These results are entirely consistent with our previous reports [19,20]. Furthermore, the experiments performed in bone-marrow chimeras clearly demonstrate that the cardioprotective effect of atorvastatin is primarily due to its activation of eNOS in bone marrow-derived cells.
In wild type B6 mice, atorvastatin was found to significantly reduce myocardial infarct size, and this salutary effect completely disappeared in eNOS KO mice, indicating that activation of eNOS mediates the effect of atorvastatin in reducing post-ischemic myocardial injury. In KO/B6 chimeras, which lack eNOS only in bone marrow-derived cells, the protective effect of atorvastatin was also abolished. Conversely, in B6/KO chimeras, where only bone marrow-derived cells retain eNOS, the protective effect was preserved, indicating that the infarct-sparing effect of atorvastatin is due to its effects on bone marrow-derived cells, not on the vasculature. Furthermore, immunostaining showed that atorvastatin markedly reduced the infiltration of platelets and neutrophils into the post-ischemic myocardium, indicating that atorvastatin protects the heart against reperfusion injury by inhibiting inflammatory responses through the activation of eNOS in bone marrow-derived cells.
It is well established that the inflammatory response elicited by myocardial I/R injury includes leukocyte adhesion to endothelial cells, followed by the transmigration of leukocytes into the interstitial space of the reperfused myocardium. Myocardial ischemia and reperfusion are also known to promote the emigration of neutrophils into the myocardium upon restoration of blood flow, initiating a cascade of neutrophil-mediated injury. Reperfusion causes a dramatic increase in neutrophil adherence to the reperfused endothelium, which leads to capillary plugging and edema, resulting in a reduction in coronary blood flow [19,25]. The adhesion of neutrophils to endothelial cells is mediated by a well-defined sequence of interactions between cell adhesion molecules on both the endothelium and neutrophils. Recently, platelets were found to contribute to I/R injury by interacting with endothelial cells and enhancing neutrophil-induced I/R injury [26][27][28]. Platelets are among the first cells recruited within minutes after reperfusion, and they colocalize with leukocytes in areas of infarction [20]. In a recent study, platelet P-selectin (independent of endothelial P-selectin) was found to mediate neutrophil-induced myocardial injury [27], indicating a significant role for platelets in this process [29].
The potential of statins to reduce acute myocardial I/R injury was not clearly elucidated until recently. An increasing body of evidence has shown that statins have pleiotropic effects beyond their ability to lower lipid levels [13,[30][31][32]. Although the signal transduction pathways have not been clearly defined, statins exert cardiovascular protective effects by improving endothelial function [30,[33][34][35], inhibiting inflammatory responses [6,7,36] and antagonizing thrombogenic tendencies [22,37]. Atorvastatin and simvastatin improve endothelial function both by upregulating eNOS expression and by enhancing endothelial NO production [13,31,34]. The increase in ambient nitric oxide, in turn, inhibits the cell surface display of adhesion molecules, neutrophil accumulation and platelet aggregation [37]. Signal transduction pathways that might enhance nitric oxide production by eNOS in endothelial cells have been reported, including the activation of PI3K and Akt [13,23] and the inhibition of Rho GTPase [16]. Statins are now recognized to be potent anti-inflammatory drugs, and a number of studies have reported very powerful anti-inflammatory actions of statins that are largely dependent on eNOS [13,16,31,34]. The inducible isoform of NOS (iNOS) is also reported to protect against myocardial I/R injury [38] and to mediate the cardioprotective effects of atorvastatin downstream of eNOS [39]. However, iNOS in bone marrow-derived leukocytes is reported to be deleterious during myocardial I/R injury [40]. One explanation for this apparent discrepancy is that NO produced by iNOS in different cell types mediates different biological functions. Further research is needed to elucidate the effects of atorvastatin on iNOS in bone marrow-derived cells.
Recently, eNOS has been identified in human and mouse platelets [16,17], and nitric oxide released from activated platelets inhibits platelet recruitment [41]. Atorvastatin significantly increased eNOS levels in platelets in a dose-dependent manner and also decreased platelet activation in vivo, which may contribute to atorvastatin-mediated protection against cerebral ischemia/reperfusion injury [16]. The role of platelet eNOS in limiting myocardial ischemia/reperfusion injury has yet to be explored. However, it will be particularly interesting to investigate given the current evidence suggesting that the putative antithrombotic and cardioprotective effects of statins are not exclusively due to modulation of the endothelial eNOS system. The current study for the first time clearly demonstrates that the infarct-sparing effect of atorvastatin is primarily due to its action on bone marrow-derived cells, probably platelets. By using wild type B6 and eNOS KO mice, atorvastatin was found to exert its cardioprotective effect via activation of eNOS. Atorvastatin was also found to reduce circulating neutrophils, which are widely considered to be the end-effectors of myocardial reperfusion injury. In order to differentiate between the specific roles of eNOS in endothelial vs. circulatory cells, eNOS tissue-specific knockout mice were created by bone marrow transplantation between B6 and eNOS KO mice. These chimeric mice demonstrated that endothelial eNOS plays a major role in regulating arterial blood pressure, whereas eNOS in circulatory cells plays only a minor role in this regard (Table 2). Although there were no significant differences in infarct size among vehicle-treated chimeras, atorvastatin significantly reduced infarct size only in those chimeras in which functional eNOS was retained in blood-borne cells (Figs 3 & 4). Further, inflammatory responses, as reflected by the accumulation of platelets and neutrophils in the myocardium, were alleviated only in those chimeras that retained functional eNOS in their blood-borne cells (Fig 5). Thus, our results do not support a role for cardiomyocyte eNOS in mediating the cardioprotective effects of atorvastatin. However, Bell et al. have reported that atorvastatin protects isolated perfused hearts by activating the PI3K/Akt pathway and phosphorylating eNOS [13]. As always, it should be noted that different animal models (ex vivo vs. in vivo) often give rise to conflicting results. Using an in vivo model, we here demonstrate that atorvastatin fails to protect chimeric mice that have eNOS in cardiomyocytes but are deficient in bone marrow-derived eNOS. Further, it should be noted that bone marrow-derived immune cells resident in heart tissue have been reported to mediate pharmacological cardioprotection in isolated perfused hearts [42].
Statins are now recognized to be powerful anti-inflammatory agents that can exert cardiovascular protective effects by improving endothelial function, inhibiting inflammatory responses and antagonizing thrombogenic tendencies. Activation of eNOS is thought to be the mechanism primarily responsible for the anti-inflammatory properties of this class of drugs. Chronic treatment with statins exerts the anti-inflammatory effects mentioned above by activating eNOS in myocytes, endothelial cells and bone marrow-derived cells, probably via transcriptional activation [35,43]. The efficacy of acute statin therapy shortly before reperfusion in reducing the size of myocardial infarction has been reported recently. The current studies further confirm that acute administration of atorvastatin just prior to the onset of reperfusion significantly reduces myocardial reperfusion injury in an eNOS-dependent manner, probably through the post-transcriptional activation of eNOS. Interestingly, the cardioprotective effect of atorvastatin after chronic treatment wanes with time, in association with an increase in PTEN levels. This waning protection can be recaptured by an acute high dose given immediately before ischemia and reperfusion [44]. The cardioprotective mechanisms of statins are therefore very likely different between acute and chronic treatment. With acute use, the infarct-sparing effect of statins is primarily due to their action on bone marrow-derived cells through post-transcriptional activation of eNOS [45].
In summary, by testing the effect of a statin, atorvastatin, in wild type, eNOS knockout, and chimeric mice specifically lacking eNOS in bone marrow-derived cells, atorvastatin was found to play a critical role in down-regulating proinflammatory responses and to mediate cardioprotection against reperfusion injury through the activation of eNOS. The infarct-sparing effect of atorvastatin is primarily due to its action on bone marrow-derived cells, probably platelets. These results may have potential clinical relevance. The significant reduction in infarct size achieved by the adjunctive use of statins in conjunction with direct percutaneous coronary intervention (PCI) or thrombolytics has the potential to define a new standard in cardiac care.
|
2017-07-08T21:18:40.660Z
|
2014-12-03T00:00:00.000
|
{
"year": 2014,
"sha1": "95a85506238789fa2cc751aa57e51c6ee7158be7",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0114375&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "95a85506238789fa2cc751aa57e51c6ee7158be7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
15713470
|
pes2o/s2orc
|
v3-fos-license
|
Estimating Vertex Measures in Social Networks by Sampling Completions of RDS Trees
This paper presents a new method for obtaining network properties from incomplete data sets. Problems associated with missing data represent well-known stumbling blocks in Social Network Analysis. The method of "estimating connectivity from spanning tree completions" (ECSTC) is specifically designed to address situations where only spanning tree(s) of a network are known, such as those obtained through respondent driven sampling (RDS). Using repeated random completions derived from degree information, this method forgoes the usual step of trying to obtain final edge or vertex rosters, and instead aims to estimate network-centric properties of vertices probabilistically from the spanning trees themselves. In this paper, we discuss the problem of missing data, describe the protocols of our completion method, and finally present the results of an experiment where ECSTC was used to estimate graph-dependent vertex properties from spanning trees sampled from a graph whose characteristics were known ahead of time. The results show that ECSTC methods hold more promise for obtaining network-centric properties of individuals from a limited set of data than researchers may have previously assumed. Such an approach represents a break with past strategies for working with missing data, which have mainly sought means to complete the graph; ECSTC instead estimates network properties themselves without deciding on the final edge set.
the manifold factors that limit the reliability of incomplete network data: factors such as network boundary specifications, inherently incomplete data collection methods, imposed limits on vertex degree in data collection, and various forms of response error (including, especially, non-response). Butts [44] has recently discussed issues of data collection reliability, following a series of articles by Bernard and Killworth and colleagues [12]-[16] (see also [17]). Ethical issues around name generators in sensitive contexts and the rising costs of complete network surveys only make matters worse [18] [19]. The only example we know of that specifically addresses spanning tree data of the RDS type is by Handcock and Gile [20] [21], who consider the network over the set of actors to be the realization of a stochastic process and present a framework with which to model the process parameters while compensating for network sampling design and missing data patterns.
Here we propose a second method for dealing with the missing data inherent in RDS spanning trees. Rather than attempting to replace missing data, or quantify the effects of missing data, we begin by considering the network to be a fixed structure about which we wish to make inferences based on partial observation. Specifically, we evaluate the constraints implied by very limited information about the marginals of the adjacency matrix and a small subset of its entries, and assess the extent to which these constraints can be used to re-construct the relative values of network-centric vertex measures. In the following paper, we describe a set of experiments undertaken to ascertain the extent to which network level statistics can be generated from the limited sorts of data normally produced by RDS samples. The method of "estimating connectivity from spanning tree completions" (ECSTC, pronounced ek-stuh-see) proposed here seeks to recover network-centric measures for individuals within RDS samples, given only very limited information about links within the ambient network in which the survey is conducted. The method does not seek to construct concrete networks that most plausibly impute missing network links from the limited input data. Rather, if ECSTC can estimate network-centric vertex measures in spite of the missing links peculiar to data generated through RDS, then combining ECSTC with RDS might potentially provide a way around the high cost of conventional social network survey methods.
ECSTC
The method of "estimating connectivity from spanning tree completions" (ECSTC) begins with the edge set determined in the course of referrals made during the RDS process, together with individual network degree information determined in each subject survey. The residual difference between these two quantities represents the number of undiscovered edges at each vertex. The ECSTC method randomly adds these missing edges to the RDS tree until each vertex has gained the requisite degree 1 . Stated equivalently, ECSTC takes as its input very limited information: a small set of entries within a network's adjacency matrix, together with the matrix's marginals. It then samples from the space of all adjacency matrices that are consistent with the partial information provided. In assigning missing edges to form complete networks, the intention is not to assert a final edge set. Rather, ECSTC seeks only to estimate network-centric vertex measures-foregoing the attempt to deduce the network's structure in any final manner. It does this by producing large numbers of random graph completions consistent with what is known about vertex degrees. Each randomly completed network is then analyzed to determine network variable(s) at each vertex; here we consider the betweenness centrality, Burt's measure of aggregate constraint, and effective size of each vertex. The completion process is then repeated on the same RDS tree, and the vertex properties once again measured for each of the completions. The values obtained from multiple independent completions are used to obtain a mean value for each variable (for each vertex) and the standard deviation is calculated to estimate variability across different completions. The ECSTC method is described in greater detail in Section 4.
Our strategy for evaluating the ECSTC method makes use of computational experiments on known, albeit idealized, topologies drawn from a class of theoretically plausible Barabasi-Albert (BA) networks 2. For purposes of this trial, we use multiple instances of randomly generated BA graphs of 100 and 500 vertices. Unlike most tests of techniques aimed at addressing the problem of missing network data, we do not begin by removing a random subset of vertices or edges (or both). Rather, we begin by simulating an RDS sample of the known graph, by which a list of vertices and a fraction of their connecting edges are discovered. We take an idealized view of the RDS method, assuming that coupon referral tracks real network ties of equivalent edge strength and that subjects distribute coupons randomly among their network neighbors, recursively, until the referral chains all reach vertices with no undiscovered neighbors 3.
To begin the RDS simulation, one "seed" vertex is chosen randomly from among the vertices, to serve as the starting point of the simulated RDS. We assume that at each progressive step in the RDS simulation, accurate information is obtained from the surveyed subject (vertex) regarding its network size and actual neighbors. Each surveyed vertex is then "given" three coupons 4 .
2 While Barabasi-Albert (BA) graphs represent an idealized model, they represent viable topology for many of the social networks for which RDS methods are normally applied. A recently completed metastudy of 15 STD/HIV related network studies by Rothenberg and Muth found that fat tailed, right-skewed degree distributions with log-linear decay coefficients around 2 might be considered the "basic underlying pattern" for risk networks as such [22] (pp. 110-111). While actual risk networks such as those analyzed by Rothenberg and Muth may or may not be formed by "preferential attachment" (in the sense of Barabasi-Albert), the overall distribution of edges across a network of these sizes, as produced by BA algorithms, would seem an apt model on which to test RDS completion techniques for real world risk networks of similar scale. 3 Some have found limits in the ability of the RDS method to meet these assumptions, based, they suggest, on such factors as the tight locational clustering of the population, the relatively low level of the incentives offered [23] (pp. i12-3); (See also [33] and [24] for similar conclusions) or attempts to game the remuneration system ( [25]; though see [26] [27], and the other contributors to the same issue; see [28] for further discussion). 4 Speaking specifically of RDS, Platt, Wall, and Rhodes point out: Adjusting the RDS sample to obtain population estimates depends on the ability to recruit a random population within a subject's social networks and a positive probability of recruiting everyone in that network. The possibility that the network is highly dependent on the incentive raises the question whether the latter condition obtains. This is particularly relevant when the definition of the population of study is fluid or artificially constructed by the research as with IDUs and sex workers. It should also be noted that the collection of information describing network characteristics which allows RDS analysis to produce population estimates requires the respondent to recall detailed information on the composition of their network, including its size and each member's relationship with the recruiter. This process carries a large potential for error [30] (pp. i50-1). For this reason, the authors discounted the correction and estimation features of RDS, limiting much of what is normally reported by others as the main advantage of the methodology. Importantly, Heckathorn notes that independent analyses of the accuracy of reported information on network size has shown RDS gathered data to be "strongly associated" [29] (p. 163), citing [31] and [32]; see also [34], noting that many of these issues are what Johnston calls "implementation challenges" [33].
We chose three coupons because this is the current standard practice in most RDS studies, though the proposed method is impervious to this parameter setting. This node "distributes" the three coupons to up to three of its as-yet undiscovered neighbors, which it chooses uniformly at random. This process continues to exhaustion, which is to say until we reach a state where no further steps to unsampled nodes are possible. In practice, we find that a relatively high proportion, though not necessarily all, of the vertices are encountered in this way. In addition, terminal nodes in the referral tree tend to be low degree nodes, though occasionally terminal nodes may have higher degree if all their neighbors have already been sampled at previous stages of the RDS simulation. The ECSTC method is then used to generate multiple independent completions of the RDS tree, as described previously. The network-centric vertex measures of betweenness centrality, Burt's constraint, and effective size are computed for each vertex within each completion, and the mean of these values serves as the ECSTC-derived estimate of the per-vertex measures. ECSTC-derived estimates are then compared with the true values of the network-centric measures, the latter being readily computed using the ambient graphs on which the RDS simulation itself was conducted. Plots of the estimated versus actual measures of each vertex (for each variable) are made, and serve as the basis of conclusions concerning the extent to which the relative magnitudes of ECSTC-derived estimates reflect the relative magnitudes of the true values of the measures.
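The referral process just described is straightforward to prototype. Below is a minimal sketch (not the authors' code) of the idealized RDS simulation, assuming the ambient network is held as a networkx graph; the function and parameter names are our own illustrative choices.

```python
import random
import networkx as nx

def simulate_rds_tree(G, num_coupons=3, seed=None):
    """Simulate an idealized RDS referral tree on a known graph G.

    Starting from a random seed vertex, each sampled vertex passes up to
    `num_coupons` coupons to as-yet undiscovered neighbors, chosen uniformly
    at random, until no further referrals to unsampled nodes are possible.
    """
    rng = random.Random(seed)
    start = rng.choice(list(G.nodes))
    tree = nx.Graph()
    tree.add_node(start)
    discovered = {start}
    frontier = [start]
    while frontier:
        next_frontier = []
        for v in frontier:
            fresh = [u for u in G.neighbors(v) if u not in discovered]
            rng.shuffle(fresh)
            for u in fresh[:num_coupons]:   # hand out at most three coupons
                discovered.add(u)
                tree.add_edge(v, u)
                next_frontier.append(u)
        frontier = next_frontier
    return tree
```

The breadth-first expansion with a per-vertex referral cap mirrors the Δ-bounded tree-sampling process formalized in the Mathematical Model section below.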
The preceding process is repeated for different RDS trees, in order to determine the sensitivity of our conclusions to the random choices involved in any particular RDS tree. The entire process is then repeated for different graphs in order to determine the sensitivity of the conclusions to the choice of particular BA network.
Network-Centric Vertex Measures
For purposes of this experiment, three common network measures were chosen to test the efficacy of the ECSTC method: effective size of a vertex, betweenness centrality, and Burt's constraint coefficient. We chose Burt's constraint and effective size as they represent related but quite different "neighborhood" measures for social network analysis. Betweenness centrality was chosen to assess the method's performance on measures affected by global network geometry (rather than just the neighborhood of the measured vertex). We note, however, that any other measure defined for a (combinatorial) graph could be substituted in place of these three (e.g. triad census or other more complex topological functions). Since each round of the ECSTC process produces a "completed" network, all that is needed is to compute the measure of interest for each of the completions produced in successive ECSTC rounds; the mean of these computed values then serves as an estimate of the true measure.
Effective Size (ES)
The first function examined in the experiment is the effective size of a vertex. Like Burt's constraint coefficient (discussed below), this is a measure of local or neighborhood topology intended to make clear the importance of a vertex to the connectivity of its neighbors (and is thus a measure of mediation or influence). Effective size is simply the degree of a vertex minus the average of the degrees of its k = 1 neighbors with respect to one another. Being largely dependent on degree information, and averaging across k = 1 neighbors, this function was expected beforehand to be the most amenable to ECSTC methods. In the experiment, effective size ES(v) is calculated as:

$$ES(v) = S_v - \frac{1}{S_v} \sum_{u, w \in N(v),\, u \neq w} s_{u,w} \qquad (1)$$

where $S_v$ is the sum of all edge values s incident on vertex v, $N(v)$ is the set of neighbors of v, and $s_{u,w}$ is the 0/1 value of an edge between any two vertices u and w, where u ≠ v ≠ w.
Betweenness Centrality (BC)
Betweenness centrality is defined by Wasserman and Faust [35] as the sum of the likelihoods of a vertex to lie along any of all geodesic paths in a given graph, and has been expanded upon to provide both internal and comparative measures of mediation and brokerage [36]. Betweenness centrality was found by Costenbader and Valente [37] to be among the most systematically poor performers in coping with missing data in actual networks, including symmetrized versions of the same networks. In their experiment, betweenness centrality showed a high correlation between error and sampling level, such that as levels of missing data went up, errors in the betweenness centrality of a particular vertex went up proportionally. This is perhaps not surprising given the dependence of the measure on whole-graph characteristics [38]. In the current experiment, the betweenness centrality $C_B$ of a given vertex v is defined as:

$$C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}} \qquad (2)$$

where $\sigma_{st}$ is the number of geodesic paths from s to t, and $\sigma_{st}(v)$ is the number of geodesic paths from s to t through vertex v.
Constraint (CON)
Burt's constraint is a measure of the extent to which a vertex is linked to alters who are in turn linked to one another [39]. It is defined as the sum of all dyadic constraints of a vertex, where the dyadic constraint for any edge from ego to alter is defined as the square of the sum of the proportional strength of that edge (from ego to alter) and the products of the proportional strengths of the pairs of edges that connect ego to alter via some third vertex, and where the proportional strength of a tie is the value of that arc divided by the sum of the values of all arcs incident with the same vertex. As explained by Burt, this measure is intended to weigh both the importance of a particular edge given the connectivity of the vertex, and the number of structural holes incident with that edge. In our case, where edge strengths were assumed to be equal, the proportional strength of an edge is simply the inverse of the degree of the vertex. In the experiment, the constraint CON(i) of a particular vertex i is defined as:

$$CON(i) = \sum_{j \neq i} \Big( p_{ij} + \sum_{q \neq i, j} p_{iq}\, p_{qj} \Big)^2 \qquad (3)$$

where j ≠ q ≠ i, $p_{ij}$ is the proportional strength of the tie between i and j, and $p_{iq}$, $p_{qj}$ are the proportional strengths of the ties between q and i, j respectively. Burt's constraint was chosen as a test of the ECSTC method to determine the extent to which complex neighborhood structures could be accurately recovered, given the sparseness of neighborhood level inputs in the observed data. Because the absence of ties (as well as their presence) plays a significant role in the calculation of this measure, it was supposed that constraint would remain among the measures most sensitive to missing edges, and thus an appropriate test of the method's ability to cope with more detailed micro-level network topologies than are discovered by measures of effective size. In relative terms, this measure stands opposite betweenness centrality in its dependence on entirely local determinants, but remains quite different from effective size in that it depends as much on the accurate placement of missing edges as on those present.
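All three measures have off-the-shelf implementations in the networkx library, which makes it easy to reproduce the per-vertex computations described above on any completed graph. A short sketch follows; note that library normalization conventions may differ in minor ways from Equations (1)-(3), so results should be checked against the definitions given here.

```python
import networkx as nx

# Illustrative 100-vertex Barabasi-Albert graph standing in for one completion C
G = nx.barabasi_albert_graph(100, 2, seed=1)

es = nx.effective_size(G)                            # cf. Equation (1)
bc = nx.betweenness_centrality(G, normalized=False)  # cf. Equation (2)
con = nx.constraint(G)                               # cf. Equation (3)

for v in sorted(G.nodes)[:5]:
    print(f"vertex {v}: ES={es[v]:.2f}  BC={bc[v]:.1f}  CON={con[v]:.3f}")
```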
Mathematical Model
Denote by $\mathbb{G}(\theta_1, \theta_2, \dots, \theta_k)$ a generative model for constructive sampling of finite graphs, parameterized by $\theta_1, \theta_2, \dots, \theta_k$. Although our approach is more widely applicable, in this paper we focus solely on the Barabasi-Albert (BA) model with parameters: n, the number of vertices; m, the number of edges that each new vertex requires during preferential attachment; and $a_0$, the non-negative offset added to the degree of every vertex during the computation of attachment probabilities. We consider $\mathbb{G}(n, m, a_0)$ to be the induced distribution over the space of n-vertex unlabeled undirected graphs 5.
Let $G = (V_G, E_G)$ be the underlying social network, randomly chosen from $\mathbb{G}(n, m, a_0)$. Denote by $d_G : V_G \to \mathbb{N}$ the function which specifies the degree of each vertex in G. Let $\mu_G : V_G \to \mathbb{R}$ be the vertex measure of interest, e.g. fix $\mu_G$ to be Effective Size (ES), Betweenness Centrality (BC), or Constraint (CON), as measured relative to G.
The next two subsections present the ECSTC procedure precisely, using which the function $\mu_G$ may be estimated from a sampled tree T together with just $d_G$; we also present evaluation strategies for assessing the quality of the generated estimates.
Estimation Process
To begin, we note that uniformly sampling spanning trees of a general graph G is, in general, not an easy computational task [40]; most approaches to the problem require sampling from random walks covering G [41]. To circumvent this, we consider the following process that samples a maximal bounded-degree subtree $T = (V_T, E_T)$ from G:

1) Choose a seed vertex s uniformly at random from $V_G$.

2) Now starting at s, recursively perform breadth-first search by expanding each frontier vertex to include edges leading to at most Δ of its yet-undiscovered neighbors.
The above process implicitly defines a distribution T(G, Δ) on a set of (Δ + 1)-degree-bounded subtrees of G. We note that the bounded-degree constraint in the constructive definition of T(G, Δ) ensures a balance between the "deep trees" that would be generated by a pure depth-first search and the "fat trees" that would be generated by an (unbounded-degree) pure breadth-first search. Certainly T(G, Δ) is not, in general, a uniform distribution over the spanning trees of G, since it may assign a non-zero probability to trees that do not span all of G's vertices, and it may assign zero probability to some actual spanning trees of G. However, T(G, Δ) has the advantage that it is effectively computable, and more importantly, when G is a social network, one can sample from T(G, Δ) using well-established distributed protocols like respondent-driven sampling (RDS), which effectively mimic the aforementioned sampling procedure. Accordingly, we refer to T(G, Δ) as the Space of Δ-bounded RDS trees in G.
Let $T = (V_T, E_T)$ be a tree sampled from the distribution T(G, Δ) and define $d_T : V_T \to \mathbb{N}$ to be the function assigning to each vertex its degree in T. We shall define a distribution $C(T, d_G)$ over what are, loosely speaking, the set of imputations of T in view of G's known degree sequence $d_G$. More specifically, $C(T, d_G)$ will be a distribution over a family of undirected unlabeled graphs; each graph C in the family enjoys these three properties:
1) The number of vertices in C is $|V_T|$.

2) Degrees of vertices in C agree with $d_G$.

3) The graph C contains T as a subgraph.
$C(T, d_G)$ is defined implicitly by the following constructive procedure which samples from the distribution: C1. Initialize $C = (V_C, E_C)$ by taking $V_C = V_T$ and $E_C = E_T$. Initialize the degree map $\delta_C$ by setting $\delta_C(v) = d_T(v)$ for all $v \in V_C$. In the next step (C2), the vertex set $V_C$ will remain unchanged, the edge set $E_C$ will be repeatedly augmented, and the map $\delta_C$ will be correspondingly updated.
C2. Repeat Steps (a)-(c) until $\delta_C(v) = d_G(v)$ for every $v \in V_C$:

(a) Define a probability distribution P over the vertices v in $V_T$, by taking

$$P(v) = \frac{d_G(v) - \delta_C(v)}{\sum_{u \in V_T} \big( d_G(u) - \delta_C(u) \big)} \qquad (4)$$

(b) Choose vertices $v_1, v_2$ from $V_T$ via P.

(c) If $v_1 \neq v_2$ and $(v_1, v_2)$ is not in $E_C$, then: add the edge $(v_1, v_2)$ to $E_C$; increment the values of $\delta_C(v_1)$ and $\delta_C(v_2)$ 6.
The output of the above process implicitly defines a distribution $C(T, d_G)$ on the set of all graphs having degree sequence $d_G$ and containing T as a subgraph. We refer to this distribution as the Space of completions of tree T relative to the degree sequence $d_G$.
Steps C2 (a)-(c) above are a sort of "preferential completion", since the algorithm chooses vertices $v_1$ and $v_2$ in a way that is linearly biased based on the number of edges they are missing. Note that constructing C from T does not require knowledge of the edge structure of G, but rather only the degrees of G's vertices.
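A compact sketch of steps C1-C2, under the assumptions stated above, is given below. The stall guard at the end is our addition, not part of the paper's procedure: degree-sequence parity or exhausted vertex pairs can make an exact completion impossible, and the guard simply stops after repeated failures rather than looping forever. Names are illustrative.

```python
import random

def complete_tree(tree_edges, tree_degrees, target_degrees, seed=None):
    """Sample one completion of an RDS tree (a sketch of steps C1-C2).

    tree_edges:     set of frozenset({u, v}) edges discovered by the tree T
    tree_degrees:   dict v -> d_T(v), degree of v within T
    target_degrees: dict v -> d_G(v), reported degree of v in G
    """
    rng = random.Random(seed)
    edges = set(tree_edges)                      # C1: start from T itself
    deficit = {v: target_degrees[v] - tree_degrees[v] for v in tree_degrees}
    vertices = list(deficit)
    stalls = 0
    while any(d > 0 for d in deficit.values()) and stalls < 10_000:
        # (a)+(b): draw two vertices with probability linearly proportional
        # to the number of edges each is still missing (Equation (4))
        weights = [max(deficit[v], 0) for v in vertices]
        v1, v2 = rng.choices(vertices, weights=weights, k=2)
        e = frozenset((v1, v2))
        # (c): add the edge only if it is new and not a self-loop
        if v1 != v2 and e not in edges:
            edges.add(e)
            deficit[v1] -= 1
            deficit[v2] -= 1
            stalls = 0
        else:
            stalls += 1   # guard (our addition): parity can block exact completion
    return edges
```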
Repeating the aforementioned processes, we obtain $T^{(p)} = \{T_1, T_2, \dots, T_p\}$, a size-p collection of Δ-bounded RDS trees $T_i = (V_{T_i}, E_{T_i})$ in G, drawn independently with replacement from T(G, Δ). For each tree $T_i$, we obtain $\{C_{i,1}, \dots, C_{i,k}\}$, a set of k completions of $T_i$ (relative to $d_G$), drawn independently with replacement from $C(T_i, d_G)$. We denote the set of vertices discovered in the course of this as $V^{(p)} = \bigcup_{i=1}^{p} V_{T_i}$. Relative to a particular $T^{(p)}$, let $S(v)$ be the (indices of) trees in $T^{(p)}$ that contain the vertex v.

Network-centric vertex measure estimates. Given a specific completion C (in which a vertex v appears), the vertex measure $\mu_G(v)$ can be estimated by computing it over the structure of C (in place of the structure of G); this provides an estimate $\mu_C(v)$. Given that we have kp completions, the vertex measure $\mu_G$ can be estimated by computing its mean value (over the k completions of each of the |S(v)| trees which contain v), denoting this estimate as:

$$\bar{\mu}_{T^{(p)}}(v) = \frac{1}{k\,|S(v)|} \sum_{i \in S(v)} \sum_{j=1}^{k} \mu_{C_{i,j}}(v) \qquad (5)$$
Evaluation Strategies
Let $T^{(p)}$ be the p trees sampled from T(G, Δ), and let $\{C_{i,1}, \dots, C_{i,k}\}$ be the k completions of $T_i$ sampled from $C(T_i, d_G)$. We evaluate the extent to which $\mu_G$ is well-approximated by $\bar{\mu}_{T^{(p)}}$ using two distinct measures of estimate quality:
1) The correlation r is taken to be the Pearson coefficient of the point set in which each point maps the true vertex measure $\mu_G(u)$ to the ECSTC-based estimate $\bar{\mu}_{T^{(p)}}(u)$.

2) The misclassification ε is the percentage (between 0 and 100) of pairs of vertices (u, v) for which $\mu_G(u) < \mu_G(v)$ but $\bar{\mu}_{T^{(p)}}(u) \geq \bar{\mu}_{T^{(p)}}(v)$. Because vertex measures frequently play a part in assessing the relative rank of individuals in a social network (with respect to the particular measure), the misclassification rate captures the probability that incorrect conclusions about relative rank are reached when the estimate $\bar{\mu}_{T^{(p)}}$ is used in place of the true measure $\mu_G$.
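Both quality measures are simple to compute once the true and estimated per-vertex values are in hand. The sketch below uses numpy for the Pearson coefficient and counts each unordered vertex pair once; pairs with tied true values are skipped, since they carry no rank information (a choice of ours, as the text does not specify how ties are handled).

```python
import itertools
import numpy as np

def evaluate(true_vals, est_vals):
    """Return (r, epsilon) for dicts mapping vertex -> true / estimated measure."""
    verts = sorted(true_vals)
    t = np.array([true_vals[v] for v in verts])
    e = np.array([est_vals[v] for v in verts])
    r = np.corrcoef(t, e)[0, 1]          # Pearson correlation coefficient

    bad = total = 0
    for i, j in itertools.combinations(range(len(verts)), 2):
        if t[i] == t[j]:
            continue                      # ties carry no rank information
        lo, hi = (i, j) if t[i] < t[j] else (j, i)
        total += 1
        if e[lo] >= e[hi]:                # relative order inverted or collapsed
            bad += 1
    epsilon = 100.0 * bad / max(total, 1)
    return r, epsilon
```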
Experiments
In this section, we seek to experimentally determine the effects of increasing the number of RDS trees p and the number of completions per tree k on the quality of the generated estimates (in terms of r and ε defined above). The general paradigm for such experiments starts by choosing a network measure (or measures) and a family of networks on which the ECSTC method of estimating the measure(s) is to be evaluated. Here we consider Barabasi-Albert networks of size 100, so n = 100; later in the paper we consider networks of size 500 to test the scalability of the technique. The network measures we investigate are ES, BC, and CON. Fix p (the number of trees) and k (the completions per tree) which ECSTC will use in the computation of its estimates.
The following constitutes a single experimental trial:

• Draw a random graph G from $\mathbb{G}(n, m, a_0)$.

• Sample p trees $T_1, \dots, T_p$ from T(G, Δ), and draw k completions of each tree from $C(T_i, d_G)$.

• Use the kp completions to compute the measure estimate $\bar{\mu}_{T^{(p)}}(v)$ for each vertex v.
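Chaining the sketches above gives one end-to-end trial for betweenness centrality (the other measures are analogous); the graph parameters and seeds here are illustrative only.

```python
import statistics
import networkx as nx

G = nx.barabasi_albert_graph(100, 2, seed=7)   # draw G; parameters illustrative
d_G = dict(G.degree())                         # degrees reported during the survey

p, k = 1, 10
estimates = {v: [] for v in G.nodes}
for i in range(p):
    T = simulate_rds_tree(G, num_coupons=3, seed=i)
    t_deg = dict(T.degree())
    t_edges = {frozenset(e) for e in T.edges}
    for j in range(k):
        edges = complete_tree(t_edges, t_deg, d_G, seed=1000 * i + j)
        C = nx.Graph(tuple(e) for e in edges)
        bc = nx.betweenness_centrality(C, normalized=False)
        for v in T.nodes:
            estimates[v].append(bc[v])

est = {v: statistics.mean(vals) for v, vals in estimates.items() if vals}
true = nx.betweenness_centrality(G, normalized=False)
r, eps = evaluate({v: true[v] for v in est}, est)
print(f"r = {r:.3f}, epsilon = {eps:.1f}%")
```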
To illustrate, fix p = 1 as the number of trees and k = 10 as the number of completions. Figure 1 shows a 100-vertex Barabasi-Albert (BA) graph G sampled from $\mathbb{G}$. Figure 2 shows three plots, one for each of the network measures considered. Each vertex v is plotted as a bar that relates the actual measure (x-coordinate) to the estimated measure (y-coordinate).
The bar corresponding to vertex v has x-coordinate $\mu_G(v)$; its central y-coordinate is at $\bar{\mu}_{T^{(p)}}(v)$, and the length of the vertical error bar is the standard deviation of the set of estimates generated by each of the 10 completions. The value of r is given for each plot in the upper right-hand corner, and a best-fit line is drawn through the centers of the error bars. Figure 3 shows analogous results for 10 completions of a single BA network with 500 vertices. Together, Figure 2 and Figure 3 show that for all three network measures, the ECSTC method is able to produce a high correlation with the actual values using only the completions of a single spanning tree sample.
To counter the possibility that these results might be due to chance (either in the choice of graph, or the choice of tree, or the choice of completions), we evaluated the robustness of the results by conducting t = 25 trials, and computing the mean ($\bar{r}$) and standard deviation (std r) of the 25 values of correlation obtained, and analogously, the mean ($\bar{\varepsilon}$) and standard deviation (std ε) of the 25 misclassification values. Such a sensitivity analysis was considered for different settings of k (between 1-50 completions) and p (between 1-50 trees). The results concerning $\bar{r}$ are presented in Table 1, while results related to $\bar{\varepsilon}$ are the subject of Table 2. The tables indicate the close fit of the estimated scores to the actual scores over 25 distinct trials. The patterns in these tables are described in the next section; the conclusions drawn there are also valid for the corresponding tables (not shown) derived from experiments on networks of size 500.
Correlation as a function of number of completions
For a fixed number of trees, the mean correlation across all vertices improves as the number of completions increases. The high values support the idea that the ECSTC method is able to successfully recover significant data across a range of network measures, with increased numbers of completions improving the fit of the estimated values to the actual ones. For several network measures, at high numbers of completions, correlation approaches 1. This holds true across a range of variables, with strong correlations between actual and estimated values apparent for betweenness centrality, effective size, and Burt's constraint. These observations are mitigated in those instances where high numbers of trees were included. There, the correlation values (for 50 trees, for example) were already so high that the use of multiple completions added only very marginal gains. The standard deviation of correlation values across 25 independent trials shows a similar trend. Where the number of trees is held steady (and low), increasing numbers of completions produces a lower standard deviation across trials, meaning that high numbers of completions tend to mitigate sensitivity to initial starting conditions and the vagaries of the starting point of the sampling tree.
Correlation, as a function of multiple trees
Where the number of completions is held steady (and low), producing multiple trees has an effect similar to producing multiple completions, improving the fit between estimated and actual values. Here too, where high numbers of completions are included, the fit is already so tight that there is only a marginal improvement provided by raising the number of trees. The standard deviation of correlation values across 25 independent trials shows a similar trend. Where the number of trees is held steady (and low), increasing numbers of completions produces a lower standard deviation across trials, meaning that high numbers of completions tend to mitigate sensitivity to initial starting conditions.
Misclassification, as a function of number of completions
As with correlation, increasing the number of completions shows an improvement in the fit between estimated and actual values, with high numbers of completions resulting in a lower percentage of misclassified vertex pairs. This holds true for effective size and Burt's constraint, though not for betweenness centrality. Here, a high number of completions did not result in a steady decrease in the number of misclassified pairs. Across 25 trials, the standard deviation of misclassification decreased as the number of completions increased. This held true across all three network measures. We note here, though, that where high numbers of trees were available, the improvement provided by high numbers of completions was negligible, as the standard deviation across trials was already approaching 0.
Misclassification, as a function of multiple trees
Here the observation that pertained to correlation is reversed. The inclusion of multiple trees did not significantly improve (i.e. lower) the percentage of misclassifications, and in the case of betweenness centrality, the percentage of misclassifications actually increased with the inclusion of more sampling trees of the same ambient graph.
These observations, overall, suggest that multiple completions yield much the same results as multiple spanning tree samples of the same network, and at times produce better results. They also have the effect of minimizing sensitivity to initial starting conditions, as examined across 25 distinct trials. Beyond this, for these (idealized) conditions, the ECSTC method proved capable of recovering significant amounts of network data, in close correlation with the values that obtain in the original network.
Discussion and Future Work
As above, the purpose of this experiment was to test the potential and begin to assess the validity of the ECSTC method for obtaining network properties from fairly sparse data sets, especially the sorts of spanning tree data sets normally produced by Respondent-Driven Sampling methodologies. The high conformity of the estimated values to the known values surprised the authors. These results are encouraging, showing that the method is capable under the circumstances described here of estimating accurately the values of a known but only partly sampled graph, with relatively small levels of variation in that estimate or dependence on initial conditions. A major concern for the authors was the sensitivity of the method to any single random walk. Given the relationship between this method and RDS research protocols-where ordinarily only a single random walk sample is taken-we worried that stochastic factors inherent in the walk itself (randomness that plays a large role in RDS's ability to reach sampling equilibrium in a population) would bias the results of the completions. Again this appears, at first attempt, not to be the case. The high concurrence of results over multiple sampling walks of the same networks, and the generally low standard deviation of the variation of those results across 25 distinct trials, means that we can have some confidence that the ECSTC method is not overly sensitive to peculiarities of any particular sampling walk.
Not surprisingly, the method was not equally successful across all measures, nor equally successful among those it was able to estimate closely. It worked best (closest fit and smallest individual error) for effective size. The authors were very surprised at the ability of the method to recover Burt's constraint measure, with a very high Pearson's r score and low mean standard deviation. We expected the technique to fare worse on this measure. Despite past results showing betweenness centrality to be among the least resilient measures in the face of missing data, these scores were actually quite good as well, indicating that the mean values of these distributions (of estimates) were, in general, quite close to the actual values. These results were consistent over the course of 25 trials.
There remains much work to be done, as discussed below. But if the results shown here for the Barabasi-Albert distribution are consistent across other topologies and sampling scenarios, then the ECSTC method may prove a valuable extension of the Respondent-Driven Sampling method, allowing researchers to recover at least some broad topological data from the sampling trees produced by RDS. This would address two problems that social network researchers commonly face: the cost of large surveys where all participants must be asked about all others, and the problem of anonymity and informed consent. RDS trees are samples that do not attempt to ask respondents about others in the sample, other than the sorts of degree and ego-network questions necessary for tracking their own sampling. Likewise, the coupon referral method normally used in RDS allows for anonymous tracking of links, not necessitating the use of names or rosters.
Several important limits to our results must be discussed, however. Because the spanning tree samples stop when they reach a vertex with no additional undiscovered edges, nodes of degree one are (obviously) known accurately for their entire edge set, and low degree nodes in general will have a lower proportion of their edges appear as "missing" in the sample. The result is that we have much higher levels of accuracy from the initial spanning tree for low degree vertices. In a BA graph, these make up the majority of the network, such that we begin the completion protocol with much of the periphery of the network fairly well known. This means that the ECSTC method does most of its work, in the current instance of a BA graph, among the more highly connected vertices. This may be why betweenness centrality estimation remained accurate despite the fact that, in general, less than 50% of the edges are discovered in the sampling walks.
An issue for our results is that we assumed that we were able to record accurate degree information at each step of the walk, even though we did not discover the full set of edges to which that degree corresponded. A legitimate question is: to what extent is such a measure normally accurate in network interviews [44] [45]? This question goes beyond the current discussion but will be taken up directly in a subsequent paper that relates the ECSTC method to the RDS methodology as it is used among actual social networks and where corrections for degree misestimation are dealt with in more detail. Likewise, this experiment dealt only with symmetrized edges, and an assumption of uniform edge type and edge strength. This leaves aside a host of important features of RDS samples, and social networks in general. It also assumes many things that we know not to be true about RDS trees, including the fact that people often do not choose randomly among their personal network [46], and at times choose people outside their network for reasons of convenience or mutual economic benefit (as referrals and interviews are paid). These considerations would, obviously, compromise the significance of the method described here.
Table 1. Correlation (mean and standard deviation) over 25 trials.
|
2024-02-27T10:47:20.600Z
|
0001-01-01T00:00:00.000
|
{
"year": 2015,
"sha1": "3d59beda1678d921410080597e717bea47f0c9e6",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=53145",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e1a55d7dce7b7c9c8382e3b84ad0d1c2e067e8e",
"s2fieldsofstudy": [
"Computer Science",
"Sociology"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
}
|
235240865
|
pes2o/s2orc
|
v3-fos-license
|
Microstructured Hollow Fiber Membranes: Potential Fiber Shapes for Extracorporeal Membrane Oxygenators
Extracorporeal membrane oxygenators are essential medical devices for the treatment of patients with respiratory failure. A promising approach to improve oxygenator performance is the use of microstructured hollow fiber membranes that increase the available gas exchange surface area. However, by altering the traditional circular fiber shape, the risk of low flow, stagnating zones that obstruct mass transfer and encourage thrombus formation, may increase. Finding an optimal fiber shape is therefore a significant task. In this study, experimentally validated computational fluid dynamics simulations were used to investigate transverse flow within fiber packings of circular and microstructured fiber geometries. A numerical model was applied to calculate the local Sherwood number on the membrane surface, allowing for qualitative comparison of gas exchange capacities in low-velocity areas caused by the microstructured geometries. These adverse flow structures lead to a tradeoff between increased surface area and mass transfer. Based on our simulations, we suggest an optimal fiber shape for further investigations that increases potential mass transfer by up to 48% in comparison to the traditional, circular hollow fiber shape.
Introduction
The use of respiratory assistance devices for patients with severe forms of respiratory failure, such as extracorporeal membrane oxygenators, allows for low tidal volume protective ventilation, therefore reducing the stress associated with mechanical ventilation [1]. Improving the efficiency of hollow fiber membrane oxygenators is a crucial topic, as the survival rate for patients is low (between 60 and 70% [2]), to which the large amount of blood that is circulated out of the body and into the membrane module partially contributes [3]. Therefore, a potential way to optimize oxygenator performance is to increase the membrane area available for CO2 and O2 gas exchange, without increasing the priming volume of the device. One way to achieve this improved area-to-volume goal is the use of microstructured hollow fiber membranes that alter the traditional circular shape of the membrane surface.
Hollow fiber membranes are commonly produced by utilizing a phase inversion process, where a liquid polymer solution is pumped through a ring gap with a non-solvent solution ("Borefluid") in the center (Figure 1a) [4]. Adjustment of the spinneret allows for a microstructured lumen or shell side of a fiber (Figure 1b). A number of studies altered the lumen geometry of hollow fibers either by directly adjusting the spinneret [5] or the spinning parameters [6]. However, for applications in membrane oxygenators, microstructuring of the lumen is less important, as the main transport resistance occurs on the blood and, therefore, shell side of the fiber [1].
Experimental work that altered the shell side of the fiber in the longitudinal direction, using a pulsating bore fluid concept, showed potential improvements in the mass transfer capabilities in comparison to a straight fiber geometry [7,8]. In a different approach, microstructuring was achieved by rotating a 3D printed spinneret, resulting in helically twisted fiber geometries [9]. Both the pulsating and rotating concepts induce microstructuring along the fiber, while keeping the traditional circular fiber cross section of the spinneret. Another method to enhance the fiber surface would be to adapt the cross section of the spinneret, which increases the options for non-cylindrical fiber shapes. Experimentally, this has been done by Çulfaz et al., who investigated the influence of spinning parameters on the shape of a structured ultrafiltration fiber [10].
Augmenting the fiber shape not only increases surface area, but also changes blood flow characteristics around the hollow fibers. As with any membrane separation process, secondary flow structures should be encouraged, and stagnating zones, where the convective mass transfer is inhibited, should be avoided to reduce the risk of concentration polarization [11]. This is especially true for blood-contacting applications, where areas of low flow velocity are a potential source of thrombus formation. A thrombus is an agglomeration of red blood cells and platelets that, if big enough and detached from the vessel walls, can cause critical complications such as cerebral infarction or pulmonary embolism [12]. Detailed knowledge about the flow field around microstructured fibers is therefore valuable for the selection of an optimal fiber shape; however, little work has been published in this regard. Yang et al. used computational fluid dynamics (CFD) to evaluate different fiber shapes for direct contact membrane distillation. They predicted a gear-shaped cross section to achieve the highest average mass flux; however, they limited their research to a straight single-fiber module [13].
Therefore, the question arises: Is there an optimal fiber shape that maximizes membrane surface area and increases mass transfer, while simultaneously not increasing the risk of potential flow stagnation zones? As the production of arbitrarily shaped hollow fiber membranes is complex, and experimental visualization of the flow patterns inside a hollow fiber membrane packing is difficult [14], computational fluid dynamics simulations are a potentially powerful tool to gain insight into this question. In this work, we follow the approach of Santos et al. [15] to calculate the local Sherwood number on a membrane surface as a qualitative measure of mass transfer. In total, we examine seven different geometries, theoretically increasing the available gas exchange surface by up to 79% compared to the traditional, circular shape. For this study, we chose an experimental design that represents transverse flow through an oxygenator hollow fiber packing. Initially, we compared experimental velocity data from micro-particle image velocimetry (µPIV) measurements to computational fluid dynamics results in order to validate our simulations. By examining the computed flow field and Sherwood numbers, we give a discussion on potentially adverse flow structures and calculate theoretical oxygenator performance.
Non-Circular Fiber Shapes
In total, seven different cases were evaluated: a circular fiber geometry in a non-staggered arrangement ("Circle, non-staggered"), a circular fiber geometry in a staggered arrangement ("Circle, staggered"), and five non-circular geometries in staggered arrangements (Table 1). Non-circular geometries were created using a sinusoidal function (Equation (1)) that incorporates the average diameter ($d_{avg}$), which was kept constant at 400 µm, the amplitude (x), the number of periods (n), and the angle (ϕ).
$$d(x, n, \varphi) = d_{avg} + x \cdot \sin(n \cdot \varphi) \qquad (1)$$

The specific area S (Equation (2)) was calculated as the total membrane surface area A in relation to the packing volume $V_P$ (Figure 2a, green line):

$$S = \frac{A}{V_P} \qquad (2)$$
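Under the reconstructed form of Equation (1) above (which should be checked against the original), the surface-area gain of a microstructured fiber over a circular one can be estimated numerically from the cross-section perimeter, since the membrane area per unit fiber length is the perimeter and the specific area of Equation (2) scales with it. A small sketch with illustrative amplitude and period values:

```python
import numpy as np

def fiber_cross_section(d_avg=400e-6, x=40e-6, n=8, samples=100_000):
    """Perimeter and enclosed area of the cross section d(phi) = d_avg + x*sin(n*phi)."""
    phi = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    r = 0.5 * (d_avg + x * np.sin(n * phi))   # local radius from local diameter
    dr = 0.5 * x * n * np.cos(n * phi)        # dr/dphi
    dphi = 2.0 * np.pi / samples
    perimeter = np.sum(np.sqrt(r**2 + dr**2)) * dphi   # polar arc length
    area = 0.5 * np.sum(r**2) * dphi                   # enclosed area
    return perimeter, area

P_circle, _ = fiber_cross_section(x=0.0)   # circular reference fiber
P_sine, _ = fiber_cross_section()          # x and n chosen for illustration only
print(f"surface area gain per unit fiber length: {P_sine / P_circle - 1:.1%}")
```

For x = 0 the function reduces to the circular reference fiber, so the printed ratio directly reflects the kind of area gain (up to 79% for the geometries studied here) quoted in the abstract.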
Experimental Setup
In order to approximate transverse flow conditions in a membrane oxygenator, a rectangular channel (3.6 mm × 20 mm × 1 mm) with a non-staggered fiber arrangement in the center was fabricated (Figure 2a). The diameter (400 µm) and the center-to-center distance (600 µm) between the fibers correspond to typical dimensions found in hollow fiber membrane oxygenators [16] (Figure 2b). A 6 × 6 arrangement placed in the center was chosen to eliminate possible influence from the channel walls and ensure fully developed flow profiles. Fabrication of the acrylic channel and fiber arrangement was done using CNC milling. Using digital microscopy (VHX-6000, Keyence, Osaka, Japan), the quality of manufacturing with regard to dimensions was verified. Finally, the channel was sealed by gluing a thin acrylic sheet on top, which provided optical access to the flow chamber for µPIV measurements.
Non-Circular Fiber Shapes
In total, seven different cases were evaluated: A circular fiber geometry in a nonstaggered arrangement ("Circle, non-staggered"), a circular fiber geometry in a staggered arrangement ("Circle, staggered") and five non-circular geometries in staggered arrangements (Table 1). Non-circular geometries were created using a sinusoidal function (Equation (1)), that incorporated the average diameter (davg), which was kept constant at 400 µm, amplitude (x), number of periods (n) and angle (ϕ).
, , The specific area S (Equation (2)) was calculated as the total membrane surface area A in relation to the packing volume VP (Figure 2a, green line). (2)
Experimental Setup
In order to approximate transverse flow conditions in a membrane oxygenator, a rectangular channel (3.6 mm × 20 mm × 1 mm) with a non-staggered fiber arrangement in the center was fabricated (Figure 2a). Diameter (400 µm) and center to center distance (600 µm) between the fibers correspond to typical dimensions found in hollow fiber membrane oxygenators [16] (Figure 2b). A 6 × 6 arrangement placed in the center was chosen to eliminate possible influence from the channel walls and ensure fully developed flow profiles. Fabrication of the acrylic channel and fiber arrangement was done using CNC milling. Using digital microscopy (VHX-6000, Keyence, Osaka, Japan), the quality of manufacturing in regard to dimensions was verified. Finally, the channel was sealed by gluing a thin acrylic sheet on the top that provided optical access to the flow chamber for µPIV measurements. face as a qualitative measure of mass transfer. In total, we examine seven different geometries, theoretically increasing the available gas exchange surface by up to 79% compared to the traditional, circular shape. For this study, we chose an experimental design that represents transverse flow through an oxygenator hollow fiber packing. Initially, we compared experimental velocity data from micro-particle image velocimetry (µPIV) measurements to computational fluid dynamics results in order to validate our simulations. By examining the computed flow field and Sherwood numbers, we give a discussion on potentially adverse flow structures and calculate theoretical oxygenator performance.
Non-Circular Fiber Shapes
In total, seven different cases were evaluated: A circular fiber geometry in a nonstaggered arrangement ("Circle, non-staggered"), a circular fiber geometry in a staggered arrangement ("Circle, staggered") and five non-circular geometries in staggered arrangements (Table 1). Non-circular geometries were created using a sinusoidal function (Equation (1)), that incorporated the average diameter (davg), which was kept constant at 400 µm, amplitude (x), number of periods (n) and angle (ϕ).
, , The specific area S (Equation (2)) was calculated as the total membrane surface area A in relation to the packing volume VP (Figure 2a, green line). (2)
Experimental Setup
In order to approximate transverse flow conditions in a membrane oxygenator, a rectangular channel (3.6 mm × 20 mm × 1 mm) with a non-staggered fiber arrangement in the center was fabricated (Figure 2a). Diameter (400 µm) and center to center distance (600 µm) between the fibers correspond to typical dimensions found in hollow fiber membrane oxygenators [16] (Figure 2b). A 6 × 6 arrangement placed in the center was chosen to eliminate possible influence from the channel walls and ensure fully developed flow profiles. Fabrication of the acrylic channel and fiber arrangement was done using CNC milling. Using digital microscopy (VHX-6000, Keyence, Osaka, Japan), the quality of manufacturing in regard to dimensions was verified. Finally, the channel was sealed by gluing a thin acrylic sheet on the top that provided optical access to the flow chamber for µPIV measurements. face as a qualitative measure of mass transfer. In total, we examine seven different geometries, theoretically increasing the available gas exchange surface by up to 79% compared to the traditional, circular shape. For this study, we chose an experimental design that represents transverse flow through an oxygenator hollow fiber packing. Initially, we compared experimental velocity data from micro-particle image velocimetry (µPIV) measurements to computational fluid dynamics results in order to validate our simulations. By examining the computed flow field and Sherwood numbers, we give a discussion on potentially adverse flow structures and calculate theoretical oxygenator performance.
Non-Circular Fiber Shapes
In total, seven different cases were evaluated: A circular fiber geometry in a nonstaggered arrangement ("Circle, non-staggered"), a circular fiber geometry in a staggered arrangement ("Circle, staggered") and five non-circular geometries in staggered arrangements (Table 1). Non-circular geometries were created using a sinusoidal function (Equation (1)), that incorporated the average diameter (davg), which was kept constant at 400 µm, amplitude (x), number of periods (n) and angle (ϕ).
, , The specific area S (Equation (2)) was calculated as the total membrane surface area A in relation to the packing volume VP (Figure 2a, green line). (2)
Experimental Setup
In order to approximate transverse flow conditions in a membrane oxygenator, a rectangular channel (3.6 mm × 20 mm × 1 mm) with a non-staggered fiber arrangement in the center was fabricated (Figure 2a). Diameter (400 µm) and center to center distance (600 µm) between the fibers correspond to typical dimensions found in hollow fiber membrane oxygenators [16] (Figure 2b). A 6 × 6 arrangement placed in the center was chosen to eliminate possible influence from the channel walls and ensure fully developed flow profiles. Fabrication of the acrylic channel and fiber arrangement was done using CNC milling. Using digital microscopy (VHX-6000, Keyence, Osaka, Japan), the quality of manufacturing in regard to dimensions was verified. Finally, the channel was sealed by gluing a thin acrylic sheet on the top that provided optical access to the flow chamber for µPIV measurements. face as a qualitative measure of mass transfer. In total, we examine seven different geometries, theoretically increasing the available gas exchange surface by up to 79% compared to the traditional, circular shape. For this study, we chose an experimental design that represents transverse flow through an oxygenator hollow fiber packing. Initially, we compared experimental velocity data from micro-particle image velocimetry (µPIV) measurements to computational fluid dynamics results in order to validate our simulations. By examining the computed flow field and Sherwood numbers, we give a discussion on potentially adverse flow structures and calculate theoretical oxygenator performance.
Non-Circular Fiber Shapes
In total, seven different cases were evaluated: A circular fiber geometry in a nonstaggered arrangement ("Circle, non-staggered"), a circular fiber geometry in a staggered arrangement ("Circle, staggered") and five non-circular geometries in staggered arrangements (Table 1). Non-circular geometries were created using a sinusoidal function (Equation (1)), that incorporated the average diameter (davg), which was kept constant at 400 µm, amplitude (x), number of periods (n) and angle (ϕ).
, , The specific area S (Equation (2)) was calculated as the total membrane surface area A in relation to the packing volume VP (Figure 2a, green line). (2)
Experimental Setup
In order to approximate transverse flow conditions in a membrane oxygenator, a rectangular channel (3.6 mm × 20 mm × 1 mm) with a non-staggered fiber arrangement in the center was fabricated (Figure 2a). Diameter (400 µm) and center to center distance (600 µm) between the fibers correspond to typical dimensions found in hollow fiber membrane oxygenators [16] (Figure 2b). A 6 × 6 arrangement placed in the center was chosen to eliminate possible influence from the channel walls and ensure fully developed flow profiles. Fabrication of the acrylic channel and fiber arrangement was done using CNC milling. Using digital microscopy (VHX-6000, Keyence, Osaka, Japan), the quality of manufacturing in regard to dimensions was verified. Finally, the channel was sealed by gluing a thin acrylic sheet on the top that provided optical access to the flow chamber for µPIV measurements. Non-circular geometries were created using a sinusoidal function (Equation (1)), that incorporated the average diameter (d avg ), which was kept constant at 400 µm, amplitude (x), number of periods (n) and angle (φ).
The specific area S (Equation (2)) was calculated as the total membrane surface area A in relation to the packing volume V_P (Figure 2a, green line):

S = A / V_P (2)
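To make the geometry definition concrete, the following minimal Python sketch (our illustration, not part of the original study) constructs the cross-section contour from the radius form of Equation (1) and estimates the gain in gas exchange surface per unit fiber length relative to a circular fiber of the same average diameter:

import numpy as np

def fiber_contour(d_avg=400e-6, x=50e-6, n=6, points=20000):
    # Cross-section contour of a sinusoidally microstructured fiber, Equation (1)
    phi = np.linspace(0.0, 2.0 * np.pi, points, endpoint=False)
    r = d_avg / 2.0 + x * np.sin(n * phi)
    return r * np.cos(phi), r * np.sin(phi)

def perimeter(xc, yc):
    # Arc length of the closed polygonal contour
    dx = np.diff(np.append(xc, xc[0]))
    dy = np.diff(np.append(yc, yc[0]))
    return np.hypot(dx, dy).sum()

xc, yc = fiber_contour(x=50e-6, n=6)            # "Sinus 6, 50 um" variant
gain = perimeter(xc, yc) / (np.pi * 400e-6) - 1.0
print(f"surface gain per unit fiber length: {gain:.0%}")

The same perimeter, multiplied by the number of fibers per packing volume, yields the specific area S of Equation (2).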
Experimental Setup
In order to approximate transverse flow conditions in a membrane oxygenator, a rectangular channel (3.6 mm × 20 mm × 1 mm) with a non-staggered fiber arrangement in the center was fabricated (Figure 2a). Diameter (400 µm) and center-to-center distance (600 µm) between the fibers correspond to typical dimensions found in hollow fiber membrane oxygenators [16] (Figure 2b). A 6 × 6 arrangement placed in the center was chosen to eliminate possible influence from the channel walls and ensure fully developed flow profiles. Fabrication of the acrylic channel and fiber arrangement was done using CNC milling. Using digital microscopy (VHX-6000, Keyence, Osaka, Japan), the quality of manufacturing in regard to dimensions was verified. Finally, the channel was sealed by gluing a thin acrylic sheet on the top that provided optical access to the flow chamber for µPIV measurements. For this study, deionized water was used as the working fluid (µ = 1 mPa·s). A syringe pump (Harvard Apparatus Model 11, Instech Laboratories Inc., Plymouth Meeting, PA, USA) controlled the flow rate during the experiment. Selection of the flow rates was based on previous work [16] that allowed estimation of fluid velocity between fibers in a prototype hollow fiber membrane module (0.42 mL/min, 0.72 mL/min and 1.29 mL/min). Using the average inlet velocity U, fiber diameter d and kinematic viscosity ν, the Reynolds number (Equation (3)) for the performed experiments corresponded to 0.8, 1.3 and 2.4:

Re = U · d / ν (3)
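As a quick cross-check of Equation (3), a short sketch converts the syringe-pump flow rates to Reynolds numbers; it assumes that the 3.6 mm × 1 mm faces of the channel form the flow cross-section and water at room temperature:

nu = 1.0e-6                      # kinematic viscosity of water, m^2/s
d = 400e-6                       # fiber diameter, m
area = 3.6e-3 * 1.0e-3           # assumed channel flow cross-section, m^2

for q_ml_min in (0.42, 0.72, 1.29):
    q = q_ml_min * 1e-6 / 60.0   # flow rate, m^3/s
    u = q / area                 # average inlet velocity U, m/s
    print(f"{q_ml_min} mL/min -> Re = {u * d / nu:.2f}")

Under these assumptions the three flow rates reproduce the reported Reynolds numbers of 0.8, 1.3 and 2.4.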
Velocity Measurement
A micro-particle image velocimetry (µPIV) system was used to visualize the flow velocity field between two fibers in the arrangement. A simplified schematic of the measurement principle is given in Figure 3. In µPIV, a fluid flow is seeded with fluorescent tracer particles that follow the flow field. Two quick, successive laser pulses are used to excite fluorescent signals that are observed by a camera. Knowing the timing between pulses, image processing software calculates the velocity field based on the movement of the particles. A detailed explanation of this measurement principle is given elsewhere, for example, in [17]. The system used here consisted of a Nd:YAG laser (Bernoulli 200-15, Litron Lasers Ltd., Rugby, Warwickshire, UK) with emission at 532 nm in combination with an inverted microscope (Olympus IX73, Tokyo, Japan) and a high-speed camera (Zyla 5.5 sCMOS USB 3.0, Andor, Oxford Instruments plc, Tubney Woods, Abingdon, UK). The camera control input was connected to a synchronizer (LaserPulse Synchronizer 610036, TSI Inc., Shoreview, MN, USA), which adjusted the camera shots to the laser pulses. The output of the camera was connected to the control PC unit where the imaging software processed the results (4G Insight 11.1.0.5, TSI Inc., Shoreview, MN, USA). The flow channel was fixed on the stage of the microscope. Approximately 5 vol% polystyrene seeding particles with a diameter of 1.8 µm were added to the working fluid. The excitation peak of the fluorescent dye was 542 nm and the emission peak was 612 nm (Fluoro-Max, Thermo Fisher Scientific, Fremont, CA, USA). Postprocessing and image generation of the results were performed using Tecplot 360 (Tecplot Inc., Bellevue, WA, USA). The depth of correlation (DoC), i.e., the distance above and beneath the focal plane where particles were illuminated [17], was calculated as 30 µm.
Velocity measurements were performed at the center plane (height 500 µm) of the channel, between two fibers (Figure 2c). For the validation of the CFD simulations, the velocity magnitude was extracted along the centerline between two fibers (Figure 4a, white dotted line). Following this approach, repeated measurements were performed, and the measurement error was calculated.
Computational Fluid Dynamics
Computational domains were derived from the experimental setup (Figure 2a), which changed only in the shape of the fibers according to Table 1. All non-circular fiber shapes were arranged in a staggered pattern where the distances between the fiber centers were kept constant (Figure 4a). Spatial discretization, or meshing, was done using the mesh generation utility snappyHexMesh [18]. A mesh dependence study evaluating the influence of cell size on the mean Sherwood number was performed, resulting in about 500,000 cells for all geometries (see Appendix B). Special care was taken with the membrane patches to ensure uniform boundary layers along the surface (Figure 4b), as the calculation of the Sherwood number is reliant on the gradient in this area.
Based on the inlet Reynolds number (Equation (3), hydraulic diameter as characteristic length), laminar flow was expected throughout the computational domain; therefore, no turbulence model was selected. The computational domain consisted of patches for inlet, outlet, membrane and wall structures (Figure 4c). Inlet velocity boundary conditions were derived from experimental flow rates by calculating the average velocity. The open source code OpenFOAM ® 5.0 (The OpenFOAM Foundation Ltd., London, England) [17] was used for the computational fluid dynamics simulations. All simulations were run on server hardware.
Flow Simulations
A steady state, incompressible solver (simpleFoam) using the semi-implicit method for pressure linked equations (SIMPLE) algorithm with second order discretization schemes was applied to solve the governing equations for momentum and mass conservation (Equations (4) and (5)) that characterize the flow field of an incompressible, Newtonian fluid:

(u · ∇)u = −(1/ρ)∇p + ν∇²u (4)

∇ · u = 0 (5)

These simulations were carried out until the convergence criteria for pressure and velocity were met (residuals < 1 × 10⁻⁵).
Two types of flow simulations were conducted: first, simulations in which the velocity field was compared to the experimental µPIV data ("validation simulations"); second, simulations to generate the velocity field for the Sherwood number calculations ("Sherwood simulations"). For the validation simulations, a no-slip boundary condition for velocity and a zero-gradient boundary condition for pressure were applied on all wall structures. This was done to match the flow conditions within the experimental microfluidic channel. In contrast, the Sherwood number simulations applied cyclic boundary conditions at the top and bottom wall, i.e., these were treated as neighboring patches [20]. The reasoning behind this approach was to simulate mass transfer in a continuous fiber packing, eliminating non-physical wall effects from the calculation of the Sherwood number. Boundary conditions for both simulation types are summarized in Table 2.
Sherwood Number Simulations
After convergence for pressure and velocity was achieved, the resulting velocity field was mapped to the computational domain and a second, transient solver comprising Equations (6)-(8) (a modified version of scalarTransportFoam) was used to calculate the local Sherwood number on the membrane patches. This was done by solving the transport equation for an arbitrary component T (Equation (6)), where D_T denotes the diffusion coefficient of T:

∂T/∂t + ∇ · (uT) − ∇ · (D_T ∇T) = 0 (6)

In this work, D_T is set as 6.96 × 10⁻¹⁰ m²/s, which corresponds to the diffusion of dissolved CO₂ in blood [16].
The local mass transfer coefficient k_c of each cell was then calculated from the surface normal gradient of T (Equation (7)):

k_c = D_T (∂T/∂n)|_membrane / (T_bulk − T_membrane) (7)
Finally, the local Sherwood number of each membrane face was calculated (Equation (8)), where d is the average fiber diameter of the structure:

Sh = k_c · d / D_T (8)
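As an illustration of the post-processing behind Equations (7) and (8) (a sketch, not the solver code), the local Sherwood number can be estimated from a first-order approximation of the wall-normal gradient; T_wall = 0 and T_bulk = 1 follow the boundary conditions of Table 3, while the near-wall sampling distance dn is an assumption:

import numpy as np

D_T = 6.96e-10                   # diffusion coefficient of T, m^2/s
d = 400e-6                       # average fiber diameter, m
T_bulk, T_wall = 1.0, 0.0        # Table 3 boundary values

def local_sherwood(T_near_wall, dn=1e-6):
    # First-order wall-normal gradient of T at the membrane face
    grad_n = (T_near_wall - T_wall) / dn
    k_c = D_T * grad_n / (T_bulk - T_wall)   # Equation (7)
    return k_c * d / D_T                     # Equation (8)

print(local_sherwood(np.array([0.002, 0.01, 0.02])))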
A maximum Courant number limit of 1 was chosen to adjust the time steps in these simulations [21]. The simulations were terminated after no significant change in the Sherwood number was observed (~3000 time steps). A fixed inlet concentration of 1 and, as an approximation, complete removal on the membrane walls were assumed for species T (Table 3).
Evaluation of Results
Experimental (µPIV) and numerical (CFD) velocity magnitudes were compared by extracting flow profiles along the center plane of the channel. Positioning on the x-axis was done by matching maximum velocities of the parabolic flow profiles (Figure 5b). Subsequently, the percentage mean error was calculated as a measure of fit between experimental and numerical data.
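The percentage mean error used as the measure of fit can be sketched as follows (the profile values are placeholders):

import numpy as np

u_piv = np.array([0.0021, 0.0049, 0.0060, 0.0048, 0.0022])  # measured magnitudes, m/s
u_cfd = np.array([0.0020, 0.0050, 0.0061, 0.0047, 0.0023])  # simulated magnitudes, m/s
rel_err = np.abs(u_cfd - u_piv) / u_piv
mean_err, max_err = 100.0 * rel_err.mean(), 100.0 * rel_err.max()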
To assess the CFD results, the area-weighted averaged Sherwood number was calculated from the total membrane area of the computational domain A, the local Sherwood number of a cell Sh_i and the face area of that cell a_i (Equation (9)):

Sh_mean = (1/A) Σ_i Sh_i · a_i (9)
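In code, Equation (9) reduces to a weighted average over the membrane faces; sh and a stand for per-face Sherwood numbers and face areas as they might be exported from the post-processing (illustrative values):

import numpy as np

sh = np.array([0.8, 1.2, 0.3])        # local Sherwood numbers Sh_i
a = np.array([2e-9, 2e-9, 1e-9])      # face areas a_i, m^2
sh_mean = np.sum(sh * a) / np.sum(a)  # Equation (9)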
As a means to compare the flow conditions of the different geometries, the velocity distribution was computed from the CFD data (Equation (10)). This was done by relating the volume of cells whose velocities fall into a certain category (Σ_i V_i) to the total volume of the fiber packing (V_P). Only cells inside the fiber packing were considered for this calculation:

f(category) = Σ_i V_i / V_P (10)
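Equation (10) can be sketched as a volume-weighted histogram; u_mag and cell_vol stand for per-cell velocity magnitudes and cell volumes inside the packing (placeholder data):

import numpy as np

u_mag = np.random.rand(1000) * 0.012       # velocity magnitudes, m/s (placeholder)
cell_vol = np.full(1000, 1e-12)            # cell volumes, m^3 (placeholder)

edges = np.arange(0.0, 0.013, 0.001)       # 0.001 m/s categories, as in Figure 7
frac, _ = np.histogram(u_mag, bins=edges, weights=cell_vol)
frac /= cell_vol.sum()                     # volume fraction per velocity category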
In order to evaluate the influence of the fibers' varying specific area, we calculated the theoretical flux of component T (J_T) for different oxygenator module sizes ranging from 100-300 mL, which approximately corresponds to priming volumes found in adult membrane oxygenators [22]. The calculation was done as shown in Equation (11), where A is the membrane surface derived from the specific area and ΔT is the driving force of component T, i.e., the difference between surface and bulk value. As an approximation, we set the concentration of T on the membrane walls to zero, assuming total removal of the component. The mean mass transfer coefficient k_C was determined based on the CFD results:

J_T = k_C · A · ΔT (11)
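Equation (11) can then be sketched as follows; the specific area S and the mean mass transfer coefficient k_c below are placeholders standing in for the values of Table 1 and the CFD results:

S = 7.0e3            # specific area, m^2 membrane surface per m^3 packing (assumed)
k_c = 2.0e-6         # mean mass transfer coefficient from CFD, m/s (assumed)
dT = 1.0             # driving force: bulk value 1, wall value 0

for v_ml in (100, 200, 300):             # module priming volumes, mL
    A = S * v_ml * 1e-6                  # membrane surface, m^2
    print(v_ml, "mL ->", k_c * A * dT)   # theoretical flux J_T, Equation (11)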
µPIV Measurements
The visualized, experimental flow field between two fibers in the center of the packing at Re = 2.4 is given in Figure 5a. Velocity magnitude is presented as a contour plot with streamlines depicting flow direction. In Figure 5a, low velocity areas close to the fiber walls are clearly visible. Extraction of the velocity magnitude along the white dotted line yields the experimental flow profile depicted in Figure 5b. Error bars denote measurement uncertainty derived from three repeated measurements. Numerical results are presented as green line plots. The mean and maximum deviations between experimental and numerical data for the individual flow rates are as follows: Re 0.8: mean 2.6%, max. 8.3%; Re 1.3: mean 1.9%, max. 12.7%; Re 2.4: mean 6.1%, max. 11.2%.
Computational Fluid Dynamics Results
The mean Sherwood number as calculated by Equation (9) in relation to the Reynolds number is given in Figure 6. All geometries show a clear linear increase in the mean Sherwood number with increasing Re (R² > 0.98); however, the slope of this function varies. At lower Re, the differences between the geometries are less pronounced than at high Re. Ranking the geometries, we observe the best results, i.e., the highest Sherwood numbers, in the Sinus 6 50 µm, circular staggered and Sinus 6 25 µm options. The lowest values are observed in the circular non-staggered and Sinus 9 50 µm arrangements. The velocity distribution inside the different fiber packings for a Reynolds number of 0.8 is given in Figure 7, according to Equation (10). The highest volume fraction at velocities below 0.001 m/s is found in the circular, non-staggered arrangement at almost 30%, whereas staggering these fibers results in the lowest amount in this category at about 12%. On the other side of the spectrum, we find that only three of the seven geometries include velocities that exceed 0.01 m/s (circular non-staggered, Sinus 6, 50 µm and Sinus 9, 50 µm). Overall, the circular staggered arrangement yields the most uniform velocity distribution. Excluding the lowest velocity category, we find the modal value of all geometries between 0.005 and 0.006 m/s for this Reynolds number. As a visual comparison of the flow fields, CFD velocity contour plots of all geometries are given in Appendix A (Figure A1) for Re 0.8. Employing Equation (11), we calculate a theoretical module performance for different oxygenator volumes at Re 0.8 (Figure 8). With increasing module size, the differences in performance increase. We observe the lowest performance in the staggered circular and Sinus 3 options. The best performance, standing out from all other geometries, is the Sinus 6, 50 µm variant. Comparing the best and worst performing geometries, a difference of about 50% in component flux is observed.
Discussion
The aim of this study was the detailed investigation of the flow field around microstructured hollow fiber membranes and calculation of their theoretical mass transfer capabilities. Initially, we conducted µPIV experiments on one of the structures to validate the velocity field obtained by our CFD simulation. Comparison of the velocity magnitude given in Figure 5b shows good agreement between experimental and numerical data, with a maximum deviation between CFD and µPIV of 12.7%. To account for uncertainty caused by the depth of correlation, CFD data were extracted not only at the center plane, but also at positions corresponding to the DoC (focal plane ± 30 µm, as indicated in Figure 2c). Notably, however, due to the height of the channel (1 mm), this variation caused only minor changes in the results and was therefore deemed negligible for this investigation. Both the experimental (Figure 5a) and numerical (Figure A1a) velocity contour plots show high velocities between the fibers in the flow direction, and low velocity regions perpendicular to the flow. This influences the velocity gradient along the membrane surface, which in turn influences the Sherwood number.
Looking at Figure 6, we find that the slope (k) of the Sherwood number in relation to the Reynolds number varies between the geometries. It is lowest in the Circle non-staggered (k = 2.4), and highest in the Sinus 6, 50 µm (k = 4.6) variant. Higher Reynolds numbers, equal to higher blood flow rates through the oxygenator packing, are therefore beneficial to increase mass transfer and potentially amplify the effectiveness of microstructured fibers. Additionally, we found that the Sherwood number does not increase with an increasing number of periods (Sinus 6 > Sinus 3 > Sinus 9). In regard to amplitude, there is a clear difference between the Sinus 6 and Sinus 9 geometries. For Sinus 6, both the 25 and 50 µm variants result in similar Sherwood numbers. In contrast, for Sinus 9, a difference of about 20% is observed between the 25 and 50 µm options. These findings indicate interactions between the number of periods and the amplitude, suggesting an ideal combination for a maximum Sherwood number.
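The slope k is obtained from a linear least-squares fit of the mean Sherwood number against Re; the Sh values below are placeholders chosen to yield a slope near the reported k = 4.6:

import numpy as np

re = np.array([0.8, 1.3, 2.4])           # simulated Reynolds numbers
sh = np.array([3.7, 6.0, 11.0])          # placeholder mean Sherwood numbers
k, intercept = np.polyfit(re, sh, 1)     # k ~ 4.6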
The velocity distribution inside the fiber packings is of great interest for the present investigation for two main reasons. First, concentration polarization, the buildup of a concentration gradient in the membrane boundary layer, reduces membrane efficiency and should therefore be avoided. One way to prevent this phenomenon is the disruption of the boundary layer by induction of secondary flows, while low-velocity, stagnating zones should be avoided [11]. Whereas concentration polarization can be assumed as a general challenge in membrane separation processes, hemostasis and subsequent thrombus formation are unique to applications in blood-contacting devices. The formation mechanism of thrombi is complex, however, a major contributing factor is areas of low blood flow [23]. Therefore, we use the velocity distribution given in Figure 7 as a measure of thrombosis risk, i.e., the higher volume in the lowest velocity category (≤0.001 m/s), the higher the risk for hemostasis. Judging by this criterion, the least risk for thrombosis would be found in the Circle, staggered and the highest risk in the Circle, non-staggered geometry. Notably, the amplitude plays an important role in this regard as both the Sinus 6 and Sinus 9 geometries contain more low-velocity volume when their respective amplitude size is 50 µm as compared to 25 µm. Furthermore, we observe a correlation between the number of periods and low-velocity areas as structures with three, six and nine 50 µm amplitudes show corresponding increases in low-velocity volume (fraction ≤ 0.001 m/s: Sinus 3: 15%, Sinus 6, 50 µm: 19%, Sinus 9, 50 µm: 23%).
Looking at the CFD contour plots of the velocity flow fields (Figure A1), low-velocity zones are found around the fibers and inside the amplitudes. Using the local Sherwood number calculated on the membrane surface, we can visualize this observation by plotting along the circumference of a single fiber (Figure 9). As an example, geometries with low (Circle, staggered) and high (Sinus 9, 50 µm) fractions of low-velocity zones are compared. Clearly, the high velocities between the nine amplitudes create periodic, pointwise high Sherwood numbers. However, these alternate with areas of stagnating flow, causing the Sherwood number to drop significantly. On these parts of the membrane surface, convective mass transport would be close to zero. The influence of these low Sherwood number regions becomes apparent when comparing the expected and the actually calculated increases in component flux (Table 4). Using the Circle, staggered geometry as a baseline, the microstructured fiber shapes increase the available surface area at a constant volume by up to 79%. If no changes in the mass transfer coefficient were assumed, these increases would reflect the expected performance increase. Comparing these values to the calculated component fluxes (Figure 8), where the mass transfer coefficient is derived from the CFD data, the differences are obvious. Primarily, across all structures, the actual increase is lower than the expected one, which is attributable to the low Sherwood number regions around the fibers. Notably, we find the lowest differences in the geometries with six periods, and the highest in the geometries with nine periods, i.e., there is no corresponding increase in gas exchange performance with an increasing number of periods. In general, we found that an increase in specific area does not lead to an equivalent increase in component flux. Table 4. Comparison of the expected and actual component flux increase (Equation (11)). Percentile values refer to a comparison with the "Circular, staggered" geometry.
Out of the possibilities investigated in this work, we propose that the Sinus 6, 50 µm geometry is the most suitable potential shape for a microstructured hollow fiber. With a calculated increase in component flux of 48%, it surpasses the other possibilities by a wide margin. Moreover, the velocity distribution of this variant shows moderate fractions of low-velocity regions, which reduces the additional risk of thrombosis. Therefore, it is the most promising candidate for future spinning of a microstructured hollow fiber membrane.
Limitations of This Study
The findings of this study are of potential interest for future membrane oxygenator optimizations; however, limitations apply. First, the geometry in this work approximates real-world membrane packings by accounting for transverse flow while neglecting parallel flow along the fibers. In this regard, we follow previous investigations in this field [24,25]. Furthermore, this arrangement was chosen as it allows the use of µPIV measurements to visualize the experimental flow field and subsequently validate our CFD results. Due to the nature of the measurement principle, flow parallel to the fiber axis is difficult to measure.
Second, the Sherwood number-based model in this work is a simplified approach to compare mass transfer in hollow fiber membranes that assumes total removal of the species on the membrane walls. It does not account for permeances, solubility or partial pressure of the components. Including these factors in the modeling of membrane mass transfer is an important research topic addressed by numerous publications [26,27], however, this is not the aim of this work. The present approach allows for a qualitative, but not quantitative, comparison of different fiber structures.
As whole blood cannot be used for µPIV measurements due to its optical properties, we used water as the working fluid for the present investigation. Although blood is essentially a non-Newtonian fluid, its shear-thinning properties are only present at low shear rates (<200 s⁻¹) [28]. These shear rates are usually exceeded in membrane packings [29], allowing blood to be treated as a Newtonian fluid. We checked this assumption in our simulation, comparing Newtonian and Casson viscosity models [30], and found no difference in results.
Lastly, the results of this work are solely based on the shell side geometry of hollow fiber membranes, neglecting the potential influence of the lumen shape. It is obvious that a combination of a circular lumen with any of the alternative shapes presented here would lead to very inconsistent wall thicknesses, which in turn would lead to varying mass transfer along the fiber circumference. Consequently, we note that the application of microstructured fibers probably requires the same geometric shape for the shell and lumen side of hollow fiber membranes. Assuming a phase inversion process for the production of fibers, this implies equal adjustment of both the bore and dope fluid part of the spinneret.
Conclusions
Improving mass transfer in oxygenators by introducing microstructured hollow fibers with a larger surface area is a plausible way to increase performance. In an effort to find a fiber shape that maximizes mass transfer but at the same time reduces the risk of flow-stagnating zones, we conducted validated computational fluid dynamics simulations to calculate the local Sherwood number on the membrane surfaces and evaluate flow conditions around the fibers. We found that amplifying the area-to-volume ratio bears the risk of creating low-flow areas around the fibers which, apart from potential concentration polarization, increases risk for thrombus formation. Based on the simulation results, we conclude that increasing the specific area by adjusting membrane shell surfaces does not automatically lead to increased oxygenator performance. From the structures investigated in this work, the Sinus 6, 50 µm option showed the most promising result, increasing the calculated component flux by up to 48% compared to the circular geometry.
Abrupt Changes in the Dynamics of Quantum Disentanglement
Entanglement evolution in high dimensional bipartite systems under dissipation is studied. Discontinuities in the time derivative of the lower bound of the entanglement of formation are found, depending on the initial conditions for entangled states. These abrupt changes along the evolution appear as precursors of entanglement sudden death.
Entanglement is a cornerstone of modern quantum physics [1,2]. The evolution of entanglement in open quantum systems is a matter of increasing interest and new phenomena have been predicted [3,4,5,6,7,8,9,10]. One of the most outstanding effects arises when entanglement vanishes long before coherence is lost. It has been pointed out that systems composed of two qubits in a noisy environment can lose their entanglement in a finite time, a phenomenon named Entanglement Sudden Death (ESD), even though full decoherence happens only asymptotically. This feature appears for certain classes of states of two qubits under the action of independent reservoirs. Examples of these classes are the so-called "X" mixed states as well as some particular types of non-maximally entangled pure states [7].
The purpose of this work is to explore the dynamical behavior of entangled states in larger bipartite systems under the action of independent reservoirs. We show that, unlike the case of two qubits, 3 ⊗ 3 systems may present not only ESD, but also intermediate abrupt changes in the disentanglement dynamics, i.e. the rate at which a given state loses its entanglement may change throughout the dissipative process even though coherence is lost at a constant rate. We show that these rate changes are associated with sudden changes in the rank of the partially transposed density matrix, which also provides an explanation for the sudden death of entanglement.
We analyze the disentanglement dynamics for different initial states and show that abrupt changes may be present or not depending on the variation of a small number of parameters. We also recover the result for two qubits when preparing the initial state in a 2 ⊗ 2 subspace of the whole system. Finally, we interpret these results in terms of changes in the set of entanglement witnesses appropriate for the characterization of the entangled state in each part of the dynamics.
In this work we use a general measure of the lower bound of the Entanglement of Formation (EOF) for a mixed state in m ⊗ n dimensions, which has been recently proposed [11]. This proposal is based on the comparison between two major criteria: (i) positivity under partial transposition (the PPT criterion) [12,13] and (ii) the realignment criterion [14,15]. For m ⊗ n-dimensional systems (m ≤ n), where m is the dimension of the first subsystem, the EOF is bounded from below by a function γ of the quantity Λ = max(‖ρ^{T_A}‖, ‖R(ρ)‖), where ‖·‖ denotes the trace norm [11]. The matrix ρ^{T_A} is the partial transpose with respect to subsystem A, that is, (ρ^{T_A})_{ik,jl} = ρ_{jk,il}, and the matrix R(ρ) is defined as R(ρ)_{ij,kl} = ρ_{ik,jl}.
The PPT criterion states that ρ^{T_A} ≥ 0 for a separable state [12]. On the other hand, the realignment criterion states that, for a separable state, the realigned version of ρ must satisfy ‖R(ρ)‖ ≤ 1. Together, these conditions imply that entanglement exists whenever Λ > 1. The maximum value that Λ(t) can assume depends on the dimensions of the bipartite system: for a maximally entangled state of two qutrits Λ = 3, while for two qubits Λ = 2. The minimum value, for a separable state, is always Λ = 1. In this work we use this quantity to study the time evolution of entanglement in the presence of dissipation.
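For illustration, Λ can be evaluated numerically as follows (a minimal sketch, not code from the paper); the array reshapes implement the index definitions of ρ^{T_A} and R(ρ) given above, and the trace norm is computed as the sum of singular values:

import numpy as np

def trace_norm(m):
    return np.linalg.svd(m, compute_uv=False).sum()

def lam(rho, d=3):
    r = rho.reshape(d, d, d, d)                          # element rho_{ik,jl} at r[i,k,j,l]
    rho_ta = r.transpose(2, 1, 0, 3).reshape(d*d, d*d)   # (rho^T_A)_{ik,jl} = rho_{jk,il}
    realign = r.transpose(0, 2, 1, 3).reshape(d*d, d*d)  # R(rho)_{ij,kl} = rho_{ik,jl}
    return max(trace_norm(rho_ta), trace_norm(realign))

# Maximally entangled two-qutrit state: Lambda = 3, as stated above
psi = np.zeros(9)
psi[[0, 4, 8]] = 1.0 / np.sqrt(3.0)
print(lam(np.outer(psi, psi)))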
Let us consider entangled quantum states of two qutrits, with at most two excitations, in the presence of dissipation at zero temperature. Such a situation can be conveniently described by the evolution equation

dρ/dt = Σ_{i=1,2} (Γ_i/2) (2 c_i ρ c_i† − c_i† c_i ρ − ρ c_i† c_i), (3)

where c_i, c_i† describe annihilation and creation operators for the bosonic modes and ρ is a 3 ⊗ 3 density matrix. Let us consider at first a class of initially mixed states of two qutrits which corresponds to a modification of a maximally entangled state (Eq. (4)), parametrized by a real parameter λ ranging over 0 < λ < 1.
In the extreme cases, for λ = 0 we have a separable state, whereas for λ = 1 we have a maximally entangled state. Eq. (3) can be solved for arbitrary decay constants, but for simplicity we reduce the problem to the simplest case Γ_1 = Γ_2 = Γ. By numerical calculation we find that ‖ρ^{T_A}‖ ≥ ‖R(ρ)‖ at all times, so that we need to concentrate only on Λ(t) = ‖ρ^{T_A}‖. Fig. 1 shows the evolution of Λ(t) for λ = 0.1. As we observe, Λ(t) undergoes sudden changes along its evolution, exhibiting discontinuous derivatives, finally evolving to a situation where entanglement abruptly dies. Compared with the case of two qubits, a richer dynamical behavior of entanglement appears. From the definition of Λ(t), this feature must be closely connected with the temporal dependence of the eigenvalues of M = ρ^{T_A} · (ρ^{T_A})†. In our case both analytical and numerical calculations of the eigenvalues of the matrix M can be carried out. From numerical calculations we find that the abrupt changes in the entanglement evolution are dominated by the behavior of a restricted number of eigenvalues, given in Eqs. (5) as functions of the density matrix elements ρ_{ij,kl}. These eigenvalues (5) are plotted in Fig. 2, where we observe that the times at which they vanish are in exact agreement with the times at which abrupt changes in the entanglement evolution appear. From Eqs. (5), these times can be calculated analytically in terms of the parameter λ. Fig. 3 shows the smooth behavior of these times as a function of the parameter λ defining particular two-qutrit mixed states. From this picture we realize that the abrupt changes in the dynamics of entanglement appear for all values of λ in the interval [0, 1]. In particular, for the maximally entangled state with λ = 1, there is a sudden change at t_1 = ln 2, while the times of the second and third sudden changes, the latter being the ESD, go to infinity, showing that the entanglement decays asymptotically. Note that this result differs substantially from its two-qubit counterpart, where the corresponding maximally entangled states disentangle smoothly [7]. Also note that these abrupt changes can be mathematically interpreted as discontinuities of the derivative of the expression Λ(t) = Σ_{i=1}^{9} E_i(t); a sudden change in the evolution of Λ occurs whenever one of the nine eigenvalues E_i becomes zero, as observed in Fig. 2. The analysis explaining these particular sudden changes in the evolution of entanglement has been done in terms of the eigenvalues of the matrix M. However, we can also understand these abrupt changes in Λ(t) by observing the behavior of the eigenvalues of the partial transpose matrix ρ^{T_A}. In our case only three eigenvalues give information about these sudden changes; they are plotted in Fig. 4. We notice that these eigenvalues change from negative to positive values at specific times which are in agreement with the sudden changes in the entanglement evolution. In other words, the disentanglement rate changes whenever the rank of the partially transposed matrix changes abruptly. We can also associate with each of these eigenvalues of ρ^{T_A} a corresponding entanglement witness operator W_i such that α_i(t) = Tr(W_i ρ(t)), with i = 1, 2, 3. At t = 0, all three operators can be used to identify entanglement in ρ. As time goes by, they consecutively lose this capacity until there is no entanglement left. This suggests a geometrical interpretation of the phenomena described here, which will be explored in further publications.
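The kink times can be located numerically by monitoring the singular values E_i of ρ^{T_A} (equivalently, the square roots of the eigenvalues of M) and detecting rank changes; below is a sketch of our own, under the assumption that rho_of_t returns the 9 × 9 density matrix at time t from an integration of Eq. (3):

import numpy as np

def singular_values_ta(rho, d=3):
    rho_ta = rho.reshape(d, d, d, d).transpose(2, 1, 0, 3).reshape(d*d, d*d)
    return np.linalg.svd(rho_ta, compute_uv=False)   # E_i(t); Lambda(t) = E.sum()

def kink_times(rho_of_t, ts, tol=1e-8):
    prev, kinks = None, []
    for t in ts:
        nonzero = int((singular_values_ta(rho_of_t(t)) > tol).sum())
        if prev is not None and nonzero != prev:
            kinks.append(t)                          # rank of rho^T_A changed
        prev = nonzero
    return kinks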
It is interesting to compare the case analyzed previously with that of an initial state restricted to a two-dimensional subspace, ρ = (1/2)(|11⟩⟨11| + |22⟩⟨22| + χ|11⟩⟨22| + χ|22⟩⟨11|). Fig. 5 shows the evolution of the entanglement for χ = 0.2 as compared with the state in Eq. (4) for λ = 0.15. We observe that in the case of the initial condition restricted to a two-dimensional subspace we have only one abrupt change in the evolution, corresponding to the ESD, which resembles the behavior observed for two qubits [7]. If we look at the eigenvalues of M we see that at the time when Λ goes to zero there is also an eigenvalue that goes to zero, indicating that an abrupt change occurs. A similar conclusion could actually be obtained when looking at the negativity instead of the Concurrence for the case of two qubits. In addition, we can explore the entanglement evolution for initially non-maximally entangled pure states, for example, |Φ⟩ = α|00⟩ + β|11⟩ + γ|22⟩. For the sake of simplicity we consider α, β and γ real positive numbers. In this case, a richer dynamics of the entanglement can be observed. Depending on the choice of the amplitudes we can have asymptotic decay, sudden death, sudden changes or a combination of them. The times corresponding to the sudden changes and the ESD time can be expressed in terms of the quantity Z = 5 − 27(αγ/β²) + 3√3 [1 − 10(αγ/β²) + 27(αγ/β²)²]^{1/2}. From these expressions we realize that the entanglement dynamics exhibits: (a) asymptotic decay for α ≥ β > γ; (b) one sudden change and asymptotic decay for β ≥ α ≥ γ, or α > γ > β; (c) two sudden changes and asymptotic decay for β > γ > α; and (d) two sudden changes and ESD for γ > β > α. Fig. 6 shows two particular dynamical evolutions, for the cases (b) and (d).
In summary, we have studied the evolution of entanglement for high dimensional dissipative quantum systems. By evaluating the entanglement contained in the system using the Chen, Albeverio and Fei measure, we have observed outstanding new effects. Quantum correlations undergo abrupt changes as precursors of ESD. These can be characterized by observing the eigenvalues of the matrix M, which defines the amount of entanglement of the quantum system. The dynamical changes are related to sudden changes in the rank of the matrix M. Similar behavior can be found for both initially mixed and pure states, and the ESD is recovered as a particular case of these sudden dynamical changes.
FL and CEL acknowledge financial support from MECESUP USA0108. GR acknowledges a CONICYT Ph.D. Program Fellowship. MFS acknowledges support from Milênio Infoquant/CNPq and thanks the Universidad de Santiago de Chile for its hospitality. JCR acknowledges support from Fondecyt 1070157 and Milenio ICM P02-049.
Rurality of communities and incidence of stroke: a confounding effect of weather conditions?
Introduction: An urban–rural gap in stroke incidence or mortality has been reported. However, whether the effect of rurality on stroke is independent of the distribution of conventional individual-level risk factors and other community-level risk factors is inconclusive. Methods: A cohort study was conducted involving 4849 men and 7529 women residing in 12 communities throughout Japan. Baseline data were obtained between April 1992 and July 1995. Follow up was conducted annually to capture first-ever-in-life stroke events. During that period, geographic, demographic and weather information was obtained for each community. Multi-level logistic regression analysis was conducted to evaluate the association between stroke incidence and each geographic/demographic factor adjusted for meteorological parameters (temperature and rainfall), in addition to individual-level risk factors (age, body mass index, smoking, total cholesterol, hypertension, and diabetes). Results: Over an average of 10.7 years of follow up, 229 men and 221 women with stroke events were identified. In women, low population (odds ratio [OR] per 1000 persons 0.97; 95% confidence interval 0.94-1.00), low population density (OR per 1/km 0.85; 0.74-0.97) and high altitude (OR per 100 m 1.18; 1.09-1.28) increased the risk of stroke independently of individual-level risk factors; however, significance was absent for all three associations when further adjusted for weather parameters. Conversely, the association between each meteorological parameter and stroke in women was significant, even after adjustment for each of the three geographic/demographic factors. Similar results were obtained for cerebral infarction.
Introduction
A geographic disparity in stroke incidence and mortality exists in many countries and world regions. In Japan, the north-east prefectures have higher stroke mortalities than other parts of the country 1 . In the USA, stroke incidence and mortality are highest in some south-eastern states, referred to as the 'stroke belt' 2 . In the UK and Finland, the northern parts of both countries have a higher incidence of stroke than in the south 3,4 . In Europe, northern countries have a higher stroke incidence than those in the south 5 .
Explanations have been proposed for this geographic disparity, for example the concentration, in some areas of higher stroke incidence, of traditional risk factors such as hypertension, hypercholesterolemia, higher age, and black African genetic heritage. It is also known that socioeconomic factors such as income level and academic background affect the risk of stroke 6 and can contribute to the geographic disparity. However, it has also been reported that individual-level risk factors can only partially explain the disparity of stroke incidence and/or mortality among communities 3,7-12 .
The community-level geographic and demographic features may be helpful to explain disparity in stroke incidence/mortality. Rural residents are known to be more vulnerable to stroke than their urban counterparts 13,14 . Nishi et al. reported that Japanese women living in communities with smaller populations have a higher stroke mortality than those in other communities, and this was independent of individual-level risk factors, such as age, blood pressure and cholesterol level 12 . Rural residents have a higher stroke incidence or mortality, even after adjustment for age and sex, in the USA 15 , Canada 16 , China 17 , Bulgaria 18 , and Portugal 19 . While higher stroke mortality may be due to a lower availability of stroke treatments, and lower access to medical facilities in rural areas 13,20 , the higher incidence of stroke cannot, however, be explained by rural treatment or access problems.
Variation in the weather is another community-level variable with the potential to contribute to a geographic disparity of stroke incidence. In several countries, stroke mortality and/or incidence reportedly increases in cold months, and thus exposure to low temperatures is a possible risk factor for stroke [21][22][23][24] . Because in Japan, as in many countries in the northern hemisphere, stroke incidence/mortality is higher in northern areas 1,3-5 , low temperatures may contribute to the north-south disparity. It has also been reported that women in communities with lower temperatures or a higher rainfall had a higher incidence of stroke than women in other communities, independent of individual-level risk factors 25 .
The geographic disparity of stroke incidence may also be due to an interaction of individual-level and community-level factors. Consequently, a cohort study based in 12 communities throughout Japan assessed whether living in a rural community predicted the first-ever-in-a-lifetime stroke incidence of residents, independently of individual-level risk factors and weather conditions.
Study population
The JMS Cohort Study commenced in 1992. Its primary objective was to clarify the relationship between potential risk factors and cardiovascular diseases in 12 rural municipalities (towns, villages and cities selected from the then 3238 municipalities) in Japan 26
Measurement of baseline variables
Body weight was recorded with the subject clothed, and 0.5 kg in summer or 1 kg in other seasons was subtracted from the recorded weight. Body mass index (BMI) was presented as kg/m². 'Hypertensive subjects' were defined as those with currently treated hypertension, systolic blood pressure ≥140 mmHg, or diastolic blood pressure ≥90 mmHg. Blood samples were obtained from all participants, and 5532 (44.3%) of these followed overnight fasting. 'Diabetic subjects' were defined as those with currently treated diabetes, plasma glucose ≥126 mg/dL after overnight fasting, or casual blood glucose ≥200 mg/dL. Age at graduation from the final school attended was used as a proxy measure of socioeconomic status.
Meteorological variables
Weather information was obtained for each community from the nearest observatory of the Japan Meteorological Agency, Ministry of Land, Infrastructure, Transport and Tourism. The distance between the center of a community and its observatory ranged from 0 to 28 km (average, 11.3 km).
Meteorological variables used were annual cumulative rainfall (mm) and mean daily temperature over 1 year (°C). All data were derived from the Agency's web site 28 . The data for each variable were obtained for every year between 1995 and 2005, and the average value over the 11 years was used for analysis. Rainfall and temperature were recorded to the nearest 0.5 mm and 0.1°C, respectively.
Geographic and demographic variables
For each community, the population data regarding population size, the 'elderly rate' (those aged ≥65 years) and population density, and the area data of latitude, longitude and altitude, were extracted from Social and Demographic Statistics: the Whole Nation Municipality-level Area Data (Sinfonica, Tokyo, Japan), which was compiled from a number of national censuses. The population and elderly population data used in the present study were the averages of those collected in the census years 1990, 1995 and 2000. Due to massive mergers of municipalities that began in 2004, population data for 2005 were not available for some communities and were therefore not used at all. Population density was calculated as persons per km². The elderly rate was presented as the percentage of elderly people in the whole population. The geographic parameters referred to in this article include the area, altitude, latitude and longitude of a community. Demographic parameters indicate population size, density, and the elderly rate.
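A minimal sketch of how these community-level indicators can be derived, assuming toy numbers rather than the study's data:

```python
def community_indicators(populations_by_census, elderly_by_census, area_km2):
    """Community-level indicators as defined above (illustrative only)."""
    # Average over the 1990, 1995 and 2000 census values
    mean_pop = sum(populations_by_census) / len(populations_by_census)
    mean_elderly = sum(elderly_by_census) / len(elderly_by_census)
    density = mean_pop / area_km2                     # persons per km^2
    elderly_rate = 100.0 * mean_elderly / mean_pop    # % of population aged >=65
    return mean_pop, density, elderly_rate

print(community_indicators([12000, 11500, 11000], [2400, 2530, 2600], 85.0))
```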
Follow up
Repeat examinations (part of the national mass-screening program) were used to follow up most subjects on an annual basis. Subjects who did not present for screening examination were contacted by mail or phone. Those examined were asked whether they had experienced a stroke since enrolling, and all who answered in the affirmative were contacted by the present investigators. Any required information was obtained from subjects by visiting public health nurses. The treating hospital records of those with a history of stroke were checked to determine if these subjects were hospitalized for any reason. If a stroke-related incident was suspected, pertinent CT and/or MRI images were obtained for diagnostic confirmation of stroke.
Diagnostic criteria
Diagnosis was determined independently, by means of a diagnosis committee composed of a radiologist, a neurologist and two cardiologists. A diagnosis of stroke was determined on the basis of the presence of a focal and non-convulsive neurological deficit, of clear onset, lasting for 24 hours or longer. Stroke subtypes were confirmed based on CT and/or MRI imaging in all cases except for two (1.1%), whose images were unavailable (in those cases the diagnosis was based on local hospital medical records only). The subtype classification was conducted according to the criteria of the National Institute of Neurological Disorders and Stroke 29 .
Statistical analysis
Statistical analyses were carried out using SPSS for Windows, v 11.5 (SPSS Inc; Chicago, IL, USA). Continuous variables were compared among communities using ANOVA. Categorical variables were compared using the χ² test.
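The comparisons described here were performed in SPSS; the following sketch reproduces the same two tests in Python with scipy, using toy data, purely as an illustration of the procedure.

```python
from scipy import stats

# Toy data: a continuous variable (e.g., age in years) in three communities
ages_by_community = [[54, 61, 58, 49], [63, 57, 66, 60], [52, 55, 59, 50]]
f_stat, p_anova = stats.f_oneway(*ages_by_community)   # one-way ANOVA

# Toy data: counts of hypertensive / non-hypertensive subjects per community
contingency = [[120, 380], [95, 305], [150, 350]]
chi2, p_chi2, dof, expected = stats.chi2_contingency(contingency)

print(f"ANOVA p = {p_anova:.3f}, chi-square p = {p_chi2:.3f}")
```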
Because the data consisted of individual-level data nested within community-level data, it formed a multilevel structure 30 . Adjustment was also made for each factor that was significantly associated with stroke incidence in Model 1, 2 or 3 in the above analysis.
A second-order, penalized, quasi-likelihood procedure was used to estimate the multilevel regression coefficients.
Variance of the intercept in the two-level null random intercept model without any explanatory variable was recognized as the between-area variance. In all statistical tests p<0.05 was considered significant.
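For concreteness, the two-level null random intercept model referred to here can be written, in our notation (the original report does not give the equation), as

\[
\operatorname{logit}(p_{ij}) = \beta_0 + u_j, \qquad u_j \sim N(0, \sigma_u^2),
\]

where \(p_{ij}\) is the probability of incident stroke for individual \(i\) in community \(j\), \(\beta_0\) is the overall intercept, and \(\sigma_u^2\), the variance of the community-level intercepts \(u_j\), is the between-area variance.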
Ethical approval
The study design and procedure were approved by the government of each community and the Ethical Committee of Epidemiologic Research at Jichi Medical University, Japan.
Results
Among the eligible study subjects, 95 declined follow up and seven could not be followed up, providing a total of 12 276 participants (4807 men and 7469 women), or 99.2% of those eligible.
Participants' mean age at baseline survey was 55.2 years for men and 55.3 years for the women. Mean follow-up duration was 10.7 years.
The individual-level and community-level variables for each community are shown (Table 1). For all individual-level variables, significant differences were observed among the communities. The stroke incidence among communities was between 1.3 and 6.2 per 1000 person-years. The between-area variance for total incidence of stroke was 0.180. Associations between geographic/demographic parameters and total stroke incidence are shown for men and women. Conversely, the association between each meteorological parameter and stroke in women was significant, even after adjustment for all individual-level risk factors and each of the three geographic/demographic factors (altitude, population density and population) (Table 3). This indicates that the weather-stroke association is stronger than the geography/demography-stroke association. The exception is the association of rainfall when adjusted for individual-level risk factors and altitude, which was not significant.
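For reference, incidence per 1000 person-years is the number of events divided by the summed follow-up time, multiplied by 1000; a minimal sketch with toy numbers (not the study data):

```python
def incidence_per_1000_py(n_events, total_person_years):
    """Crude incidence rate per 1000 person-years."""
    return 1000.0 * n_events / total_person_years

# e.g., 40 strokes observed over 10 700 person-years of follow-up
print(incidence_per_1000_py(40, 10_700))   # ~3.7 per 1000 person-years
```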
Relationships between geographic/demographic factors and cerebral infarction are shown (Table 4). In men, no factor was positively or negatively associated with the incidence of cerebral infarction. In women, low population, low population density and high altitude were associated with an increased risk of cerebral infarction in Model 1, but the associations for population and population density were not significant in Model 2. Altitude was not significant in Model 3. Table 5 shows the association of each meteorological factor with cerebral infarction when adjusted for each geographic/demographic factor. When adjusted for altitude, the association between meteorological factors and cerebral infarction was not significant, but when adjusted for population, the association was significant. When adjusted for population density, only the association of rainfall was significant.
The association of each geographic/demographic factor with cerebral hemorrhage was evaluated (not shown in tables).
Discussion
Women who lived in communities with smaller populations or a lower population density had a higher incidence of stroke, and this was independent of individual-level risk factors such as age, hypertension and diabetes. There was no significant association, however, when adjustment was made for meteorological parameters. Conversely, the association of meteorological parameters with stroke remained significant even after adjustment for individual-level factors and for geographic/demographic parameters of the community.
There was variation among communities in the incidence and mortality of stroke. In men, age-adjusted stroke mortality per 100 000 population was highest at 84.0 in Aomori prefecture, and lowest at 49.6 in Wakayama and Nara prefectures (relative gap between highest and lowest values: 1.7). In women, the highest was in Tochigi prefecture (46.4) and the lowest in Okinawa prefecture (23.1) (relative gap: 2.0). The geographic gap for stroke mortality was larger than that of other major causes of death, such as heart diseases, malignant neoplasms and pneumonia 31 . As a potential cause of the geographic disparity in stroke incidence or mortality, the demographic characteristics of a community, such as rurality, are increasingly of interest to researchers. A study from Japan revealed that women residing in municipalities with a population of less than 30 000 had a higher risk of stroke death (odds ratio 1.68), compared with women in municipalities with a population of more than 300 000, even after adjustment for traditional risk factors (eg age, BMI, cholesterol, diabetes, hypertension, smoking status and alcohol drinking) 12 . Because that study was not based on incidence data, it is not clear whether the higher mortality in rural areas was due to a higher stroke incidence or a lower survival rate of stroke patients. The present study showed similar results for stroke incidence, suggesting that the previously reported higher stroke mortality for rural women was derived, at least partially, from their higher incidence of stroke. That is, living in a rural area may increase the risk of stroke in women. Another Japanese study showed that the age-adjusted incidence of stroke in a rural community has been consistently higher than in an urban community since 1964 32 . In the present study, the association between living in a rural community and stroke was independent of traditional risk factors. However, previous studies have not accounted for the confounding effect of potential community-level risk factors, such as weather. This study showed that the link between living in a rural area and stroke in women could be explained by the link between weather and stroke; while the link between weather and stroke was robust against the influence of rurality. Thus, the geographic disparity of stroke incidence in women may be explained by the difference in weather conditions among communities, rather than by urban-rural residence differences. No plausible underlying mechanism has been identified for the association between rural living and stroke. However, low temperature is known to cause an increase in coagulation-related factors such as fibrinogen and factor VII 33 , an elevation in blood pressure [34][35][36] , an exacerbation of hemoconcentration 37,38 , and an increase in plasma lipids 39 , which can cause thromboembolic disease, including stroke.
In this and a previous study it was reported that, in terms of stroke incidence, women were more vulnerable to meteorological factors than men 25 . A possible explanation for this sex difference may be women's greater vascular reaction to cold exposure, due to estrogen-induced increased adrenergic alpha 2C-receptor activity [40][41][42][43] . However, because most of the female stroke cases in our study were postmenopausal, with a consequent low level of blood estrogen, the reasons for the sex difference are unclear.
A limitation of this study is the small number of communities (12). This, and the rural bias of those communities, makes it difficult to generalize the study results to other areas of Japan. In addition, the limited number of communities decreases the power to detect significant differences among them. Indeed, the absence of a significant association between geographic/demographic variables and stroke incidence in Model 3 may be explained by the limited statistical power. Another limitation is that the weather-stroke association seen in this study may be confounded by unmeasured community-level variables (such as local industry types). These issues can be clarified in future studies involving a larger number of communities.
As a geographic parameter, altitude was significantly associated with stroke incidence in women. The association was absent in Model 3 and, therefore, it may be confounded by weather conditions. The associations between weather parameters and stroke were not present when adjusted for altitude. So, as was expected, altitude and weather conditions are likely to have strong correlations.
The diagnostic pathway of pulmonary embolism: from the Emergency Department to the Internal Medicine Unit
The diagnostic pathway of pulmonary embolism, both in the Emergency Department and in the Medical Unit, is not a standardized one. Pulmonary embolism, often but not always complicating surgery, malignancies, different medical diseases, sometimes but not often associated with a deep vein thrombosis, is not infrequently a sudden onset life-threatening and rapidly fatal clinical condition. Most of the deaths due to pulmonary embolism occur at presentation or during the first days after admission; it is therefore of vital importance that pulmonary embolism should promptly be diagnosed and treated in order to avoid unexpected deaths; a correct risk stratification should also be made for choosing the most appropriate therapeutic options. We review the tools available for a correct clinical assessment, the existing risk scores, and the advantages and limits of available diagnostic instruments. As for clinical presentation, we highlight the great variability of pulmonary embolism signs and symptoms and underline the importance of obtaining clinical probability scores before making requests for further diagnostic tests, in particular for pulmonary computed tomography; the Wells score is the only one validated in hospitalized patients, but unfortunately it is still largely underused. We describe our experience in two different periods of time and clinical settings in the initial evaluation of a suspected pulmonary embolism; in the first one we availed ourselves of a computerized support based on the Wells score, in the second one we did not. Analysing the results we obtained in terms of diagnostic yield in these two periods, we observed that the computerized support system significantly improved our pulmonary embolism diagnostic accuracy.
the extension itself and on the possible underlying cardiopulmonary impairment. 4,5 As pulmonary embolism symptoms are totally non-specific and heterogeneous, a correct initial assessment is essential in order to rule in and rule out pulmonary embolism as well as to identify the patients who would benefit from an early aggressive treatment. 6 We suggest that a clinical pre-test probability of 85% or more could be the threshold that rules in pulmonary embolism and justifies anticoagulant therapy; this correlates to a moderate or high clinical suspicion. Conversely, the threshold that rules out pulmonary embolism, advising against anticoagulant therapy, is a pre-test probability ≤2%. [6][7][8] Two validated scores are widely used: the Wells score 9 and the revised Geneva one 10 (Tables 1 and 2). We refer mainly to the Wells score, validated in inpatients; the Geneva score is reserved for outpatients. The Wells score, which we consider the first step to address the choice of subsequent tests, consists of seven variables (Table 1) that allow patients to be classified as pulmonary embolism likely (>4 points) or unlikely (≤4 points) (Figure 1). 3,4,6,11 The next step, after evaluating the pre-test probability, is the D-dimer assay. The D-dimer, a specific fragment of the fibrin clot, reflects the steady state of the hemostatic balance and has strong intra-individual variability. 12 It is a highly sensitive test (≥95% for quantitative ELISA or automated turbidimetric assays) with a strong negative predictive value. D-dimer testing should be evaluated together with the pre-test probability calculation (Figure 1). The combination of a normal, high-sensitivity, quantitative D-dimer test result and an unlikely clinical probability has a high negative predictive value; alone it can rule out acute pulmonary embolism without further imaging. On the contrary, all patients with an elevated D-dimer or a clinical evaluation of likely probability should be referred for radiological evaluation. 6,13 Thanks to these two simple tests we should be able to diagnose acute pulmonary embolism, thus postponing CT-scan or scintigraphic evaluation. Despite the simple feasibility of the above-mentioned tests, the Wells score is little known and surely underused, while the D-dimer assay is misused. D-dimer is frequently part of the so-called coagulation test list, which is often requested without a reasonable motive, stirring up further expensive and sometimes useless diagnostic tests. D-dimer has little specificity, as several medical conditions, pathological or not, can give rise to elevated levels (Table 3); [14][15][16] it should be used with caution in in-hospital patients, since numerous diseases and invasive procedures can raise its levels in the absence of thrombosis. Furthermore, D-dimer assays should not be used in anticoagulated (heparin or warfarin) patients: clinical studies have demonstrated that anticoagulants decrease circulating D-dimer levels, thus causing false negative values. It is suggested that D-dimer testing should not be used as a screening test for pulmonary embolism. 15,16

Table 1. Wells clinical prediction score for pulmonary embolism.
Variables (points): clinical signs of deep venous thrombosis, 3; alternative diagnosis less likely than pulmonary embolism, 3; heart rate >100 beats/min, 1.5; immobilization or surgery in previous 4 weeks, 1.5; history of venous thromboembolism, 1.5; hemoptysis, 1; malignancy or treatment for it in previous 6 months, 1.
Score interpretation (points; prevalence): pulmonary embolism likely,* high probability, ≥6.5 (60%), or moderate probability, 4.5-6 (25%); pulmonary embolism unlikely, low probability, ≤4 (5%). *A score ≥4.5 (moderate + high probability) is termed pulmonary embolism likely. 5,6 This group makes up about 40% of patients and has a prevalence of pulmonary embolism of about 33%.

The radiological diagnostic instruments are scintigraphy and pulmonary CT scan. Today the first is seldom used, as CT scan is the gold standard exam: scintigraphy should be performed only in patients with renal insufficiency or contrast hypersensitivity, in younger patients in whom scintigraphy has a greater specificity, and in any case only if the chest X-ray is negative. 3,17,18 In recent years computed tomography pulmonary angiography has become the gold standard diagnostic tool for suspected pulmonary embolism. Lung CT scan is readily available in many hospitals and has been shown to have a high sensitivity and specificity. 19,20 Its easy accessibility and great sensitivity have led to a remarkable increase in its use, even though this approach is not always correct. The percentage of positive CT-scan examinations ranges from 20% in controlled multicenter trials to less than 10% in observational ones. 3,19 Its overutilization not only exposes the patient to radiation and contrast-induced kidney disease risk, 21,22 but is also weighed down by excess costs; costs that are enhanced by overtreatment of incidental pulmonary embolism, which should not be treated at all. 8,21 Despite the undoubted advantage of this tool in the diagnostic pathway of thrombosis, its use should be targeted and limited to patients with a high pre-test clinical probability or an elevated D-dimer test. 3,6
Lastly, if lower-limb compression ultrasound, which should precede imaging tests in pregnant women and in patients with a contraindication to CT, 4,23-25 is performed as a first step, CT or scintigraphy could be avoided in about 10% of patients. A diagnosis of proximal venous thrombosis in a symptomatic and hemodynamically stable patient, or in an asymptomatic patient who has contraindications to CT, is considered a sufficient criterion for pulmonary embolism diagnosis. 25
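To make the scoring rule concrete, the sketch below implements the seven-variable Wells score and the likely/unlikely cut-off described above, with the D-dimer rule-out step applied as stated in the text. It is an illustrative reading of the published score, not a clinical decision tool.

```python
def wells_score(dvt_signs, pe_most_likely, hr_over_100, recent_immobilization,
                prior_vte, hemoptysis, malignancy):
    """Seven-variable Wells score for pulmonary embolism (illustrative)."""
    score = 0.0
    score += 3.0 if dvt_signs else 0.0              # clinical signs of DVT
    score += 3.0 if pe_most_likely else 0.0         # alternative diagnosis less likely than PE
    score += 1.5 if hr_over_100 else 0.0            # heart rate >100 beats/min
    score += 1.5 if recent_immobilization else 0.0  # immobilization/surgery in previous 4 weeks
    score += 1.5 if prior_vte else 0.0              # history of venous thromboembolism
    score += 1.0 if hemoptysis else 0.0
    score += 1.0 if malignancy else 0.0             # malignancy or treatment in previous 6 months
    return score

def pe_workup(score, d_dimer_negative):
    """Apply the likely/unlikely cut-off (>4 points) and the D-dimer rule-out step."""
    if score > 4:                                   # "pulmonary embolism likely"
        return "imaging (CT pulmonary angiography)"
    # "unlikely": a negative high-sensitivity D-dimer rules out pulmonary embolism
    return "PE ruled out" if d_dimer_negative else "imaging (CT pulmonary angiography)"

s = wells_score(dvt_signs=False, pe_most_likely=True, hr_over_100=True,
                recent_immobilization=False, prior_vte=False,
                hemoptysis=False, malignancy=False)
print(s, pe_workup(s, d_dimer_negative=True))       # 4.5 -> imaging
```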
Integrated approach
To improve CT-scan diagnostic performance, and at the same time to safely rule out pulmonary embolism, diagnostic algorithms and predictive scores have been elaborated; in spite of their appropriateness and easy applicability, they are unfortunately seldom used in clinical practice. 20 We compared the number of CT scans and perfusion lung scans performed in the Emergency Department during two consecutive periods, each of 15 months: i) T1, from 1st January 2010 to 31st March 2011; and ii) T2, from 1st April 2011 to 30th June 2012. During the first period a computerized system [26][27][28][29][30] to support the decisional pathway was adopted in the Emergency Department. The computerized system was an integrated approach to the radiological request, which consisted of the mandatory completion of every Wells score field by the emergency physician (Table 4). Only in cases of high pre-test probability was the CT-scan request accepted by the radiological department. It was possible to bypass this procedure only by a written request or by a direct telephone call to the radiologist (Figure 2). 26 During the first 15 months (T1) a total of 48 pulmonary embolism diagnoses were made (data extrapolated from diagnosis-related groups), similar to what happened in T2 (49 pulmonary embolism diagnoses). However, in T1, thanks to the computerized support, a relevant decrease in the number of CT requests was observed compared with T2 (55 versus 95). The outcome was improved diagnostic management and a related better diagnostic yield.
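As a rough check on the reported improvement in diagnostic yield, the fraction of CT requests resulting in a pulmonary embolism diagnosis can be approximated from the figures above; this simplifying calculation assumes all diagnoses in each period came from CT, which ignores diagnoses made by lung scan, so the percentages are only indicative.

```python
# T1: computerized Wells-score support in place; T2: no support
diagnoses   = {"T1": 48, "T2": 49}
ct_requests = {"T1": 55, "T2": 95}

for period in ("T1", "T2"):
    yield_pct = 100 * diagnoses[period] / ct_requests[period]
    print(f"{period}: ~{yield_pct:.0f}% of CT requests associated with a PE diagnosis")
# Under this assumption: T1 ~87%, T2 ~52%
```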
Moreover, during both the first (T1) and the second (T2) periods, the number of lung scans performed to diagnose pulmonary embolism was considerably reduced in comparison.
Conclusions
The data emerging from this simple survey are very interesting, and we propose to resume the T1 method, applying it to the new diagnostic request system (named Aurora) and extending the computerized request system to the D-dimer test as well.
We hope that our positive experience with computerized support during the T1 period may be exported to suburban hospitals, where it could represent a guide for Emergency medical staff, improving the diagnostic yield and avoiding useless, expensive examinations.
The cancer testis antigen TDRD1 regulates prostate cancer proliferation by associating with the snRNP biogenesis machinery
Prostate cancer is the most commonly diagnosed noncutaneous cancer in American men. TDRD1, a germ cell-specific gene, is erroneously expressed in more than half of prostate tumors, but its role in prostate cancer development remains elusive. In this study, we identified a PRMT5-TDRD1 signaling axis that regulates the proliferation of prostate cancer cells. PRMT5 is a protein arginine methyltransferase essential for small nuclear ribonucleoprotein (snRNP) biogenesis. Methylation of Sm proteins by PRMT5 is a critical initiation step for assembling snRNPs in the cytoplasm, and the final snRNP assembly takes place in Cajal bodies in the nucleus. By mass spectrometry analysis, we found that TDRD1 interacts with multiple subunits of the snRNP biogenesis machinery. In the cytoplasm, TDRD1 interacts with methylated Sm proteins in a PRMT5-dependent manner. In the nucleus, TDRD1 interacts with Coilin, the scaffold protein of Cajal bodies. Ablation of TDRD1 in prostate cancer cells disrupted the integrity of Cajal bodies, affected the snRNP biogenesis, and reduced cell proliferation. Taken together, this study represents the first characterization of TDRD1 functions in prostate cancer development and suggests TDRD1 as a potential therapeutic target for prostate cancer treatment.
INTRODUCTION
TDRD1 (Tudor Domain Containing 1) is a germ cell-specific gene solely expressed in human testes and ovaries under physiological conditions but not in any other normal tissues. However, in up to 68% of prostate tumors, TDRD1 is erroneously overexpressed, and its expression levels strongly correlate with TMPRSS2-ERG gene fusion. Indeed, we and others confirmed that TDRD1 is a bona fide ERG target gene [1][2][3]. While TDRD1 overexpression is present in nearly all ERG-expressing primary tumors, some tumors express TDRD1 even without TMPRSS2-ERG fusion [1].
As its name indicates, TDRD1 contains 4 Tudor domains, which are conserved protein structural domains of ~60 amino acids in length. Tudor domains have been identified as epigenetic "readers" that bind to methylated lysine and arginine residues through their aromatic-binding cage structure [4]. Being a germ cell-specific protein, TDRD1 acts as a scaffold protein and interacts with several piRNA processing proteins through its Tudor domains in mouse testis. A complete Tdrd1 knockout in mice abolished the piRNA biogenesis pathway and led to male infertility [5,6].
On the other hand, it has become increasingly recognized that the protein arginine methyltransferase (PRMT) family of enzymes is involved in cancer development [7]. Two types of PRMT proteins catalyze dimethylation on arginine residues: type I PRMTs produce asymmetric dimethylarginine (aDMA), whereas type II PRMTs produce symmetric dimethylarginine (sDMA). PRMT5 is the major type II PRMT and is overexpressed in many types of cancers, including leukemia/lymphoma, glioblastoma, melanoma, as well as prostate cancer [8][9][10]. Interestingly, it was reported that the PRMT5 protein has opposite roles in prostate cancer cell growth depending on its subcellular localization. The nuclear PRMT5 protein inhibits prostate tumor growth, whereas cytoplasmic PRMT5 promotes tumor growth [11]. Consistent with this finding, in prostate premalignant and cancer tissues, PRMT5 mainly accumulates in the cytoplasm, and its expression and methyltransferase activity are essential for cancer cells to grow [11]. These studies imply the importance of cytoplasmic substrates of PRMT5 in prostate cancer cell growth. However, in contrast to well-documented evidence on nuclear substrates of PRMT5 and their direct roles in transcriptional regulation, little is known about the cytoplasmic function of PRMT5 in prostate cancer.
Studies have shown that PRMT5 methylates Sm (smith core) proteins, and this event is an essential initiation step for the assembly of small nuclear ribonucleoproteins (snRNPs) in the cytoplasm. Partially assembled snRNP is transported to the nucleus and further matured in non-membrane bound nuclear bodies named Cajal bodies. There are very few studies on the role of Sm proteins and snRNP biogenesis in prostate cancer, but two interesting reports showed that SNRPE, also known as SmE, is overexpressed in high-grade prostate cancer cells [12,13]. Knockdown of SNRPE suppressed prostate cancer cell proliferation, while overexpression of SNRPE promoted cancer cell proliferation [12]. These findings suggest that PRMT5-mediated Sm protein methylation and snRNP assembly likely play an important role in sustaining the growth of prostate cancer cells.
In this study, we found that in prostate cancer cells, TDRD1 is associated with important proteins in snRNP assembly in both the cytoplasm and the nucleus. Cytoplasmic TDRD1 interacts with methylated Sm proteins in a PRMT5-dependent manner, and nuclear TDRD1 interacts with Coilin, the scaffold protein of Cajal bodies. Ablation of TDRD1 in prostate cancer cells by CRISPR-Cas9 disrupted the cellular localization of Coilin and the production of snRNAs. TDRD1 perturbation activated the tumor suppressor p53 and significantly impaired prostate cancer cell proliferation. In addition, depletion of TDRD1 in VCaP cells increased sensitivity to antiandrogens, while overexpression in 22Rv1 cells enhanced resistance. Our study reveals a novel function of TDRD1 and suggests TDRD1 as a potential therapeutic target for prostate cancer treatment.
RESULTS
TDRD1 is important for cell proliferation in a TDRD1-positive prostate cancer cell line
The TDRD1 gene is known to be overexpressed in primary prostate tumors [1,2]. Cancer OMICS data from TCGA further showed that TDRD1 overexpression is preserved in prostate tumors regardless of nodal metastasis status, indicating that TDRD1 is likely indispensable in established prostate cancer cells (Fig. 1A). To investigate the biological function of TDRD1 in these cancer cells, we tried to deplete TDRD1 in TDRD1-positive VCaP cells using the RNA-guided CRISPR-Cas9 system. We attempted but were not able to obtain single colonies of cells with a successful knockout of TDRD1, suggesting that TDRD1 might be essential for VCaP cell survival. Eventually, we obtained two pooled TDRD1 knockout cell populations from two different sgRNAs. Both pools showed high knockout efficiency (Fig. 1B).
As expected, both TDRD1-KO1 and KO2 cells showed significantly reduced growth rates compared with control VCaP cells (Fig. 1C), indicating that TDRD1 is important for VCaP cell proliferation. We further inoculated TDRD1-KO1 cells subcutaneously into immunodeficient male NSG mice and monitored the tumor growth. As shown in Fig. 1D, E, a similar growth inhibitory effect was observed when TDRD1 knockout VCaP cells grew in vivo. Furthermore, we collected these tumors and performed immunohistochemical staining to examine the level of Ki67 in these tumors. As shown in Fig. 1F, we observed overall more Ki-67-positive cells in TDRD1-WT tumors than in TDRD1-KO tumors, confirming that ablation of TDRD1 reduces VCaP cell proliferation both in vitro and in vivo as xenografted tumors in mice.
TDRD1 is present in both the cytoplasm and nucleus of prostate cancer cells
Previously published studies of TDRD1 have established its critical role in the regulation of piRNA biogenesis in germ cells [14][15][16]. However, in mammals, piRNA is only found in testes and ovaries [14,17], suggesting that erroneously expressed TDRD1 must have a piRNA-independent role in prostate cancer cells. To explore this, we first made a series of TDRD1 deletion mutants and fused them with the green fluorescent protein (GFP) [18]. These deletion mutants are named eTDs because they contain the respective extended Tudor domains (Fig. 2A). When transiently expressed in HeLa cells, the full-length TDRD1 proteins formed speckles in both the cytoplasm and the nuclei, but mainly localized in the cytoplasm. eTD4 exhibited a similar pattern to the full-length TDRD1 protein (Fig. 2B), indicating that eTD4 is responsible for the accurate subcellular localization of the full-length TDRD1 protein. The cellular localization of endogenous TDRD1 was further confirmed by cell fractionation in VCaP cells (Fig. 2C). Because the TDRD1 antibody could successfully differentiate between the wild-type VCaP xenograft tumor and the TDRD1-KO tumor in IHC staining (Fig. S1A), we examined the localization of TDRD1 in prostate tumor samples from a commercially available tissue microarray, which contains 39 human prostate tumor biopsy samples. We performed TDRD1 IHC staining as previously described [1]. Based on the TDRD1 IHC score quantified by the Cytation 5 Image+ software, we were able to classify the tumor samples into three groups, with 8 TDRD1-High tumors, 19 TDRD1-Low tumors, and 12 TDRD1-Negative tumors (Fig. 2D, E). In most TDRD1-High and -Low tumors, we observed positive TDRD1 staining in both cytosol and nuclei (Fig. 2F). Figure S1B, C illustrate the quantification of TDRD1 staining in the cytoplasm and nucleus of each TDRD1-positive tumor. The frequency of tumors displaying positive TDRD1 staining is summarized in Fig. 2G. Taken together, these results show that TDRD1 is present in both the cytoplasm and nucleus of prostate cancer cells, with a predominant presence in the cytoplasm.
Cytoplasmic TDRD1 interacts with Sm proteins in a methylation-dependent manner
To further understand the function of TDRD1, we decided to generate the smaller eTD4 recombinant protein and to identify its cytoplasmic interacting proteins. We purified 6His-tagged eTD4 protein and used it as bait in a pull-down assay followed by mass spectrometry analysis. BSA protein was used as a control for the bait. Using VCaP cell cytoplasmic fraction as an input, we identified 152 potential eTD4-specific interacting proteins in total. A full list of these 152 proteins is provided in Supplementary Table 1. KEGG pathway analysis indicated that the Spliceosome pathway is the most significantly enriched pathway in the eTD4 interactome (Fig. 3A). Interestingly, nearly all the identified proteins enriched in this pathway are involved in snRNP biogenesis in the cytoplasm, and most of them are the Sm proteins (Fig. 3B), suggesting that TDRD1 may interact with these proteins through its eTD4 region. Among them, SNRPD1, SNRPD3, and SNRPB are known methylated proteins [19,20]. Their C-terminal sequences contain multiple arginine residues that are subject to symmetrical dimethylation by PRMT5 [21,22]. Because Tudor domains exert their functions by recognizing and binding methylated arginine and lysine residues, we further validated the mass spectrometry result by a peptide pull-down experiment. We synthesized three peptides based on the C-terminal 32 residues of SNRPD3, which was the most abundant eTD4-interacting Sm protein in our mass spectrometry analysis (Fig. 3B, C). The sequence harbors 4 "RG" sites that were documented as PRMT5 methylation sites previously [23]. The peptides are either unmodified, symmetrically dimethylated (sDMA), or asymmetrically dimethylated (aDMA). We tested the binding of these peptides to all five functional domains of TDRD1. The result showed that the full-length TDRD1 and eTD4 selectively bound to the symmetrically dimethylated peptide, but not the unmodified or asymmetrically methylated peptides (Fig. 3C). Moreover, the interaction is specific to eTD4, as none of the other deletion mutants had this binding activity. Similar results were obtained when peptides comprising the C-terminal 29 residues of SNRPD1 were tested (Fig. S2). To further validate the interaction between TDRD1 and endogenous SNRPD3, a co-immunoprecipitation (Co-IP) experiment was performed in cells treated with vehicle control or EPZ015666, a selective PRMT5 inhibitor. EPZ015666 significantly reduced the interaction between SNRPD3 and exogenously expressed TDRD1 or eTD4 in 293T cells (Fig. 3D). Similarly, the loss of interaction between endogenous SNRPD3 and TDRD1 was also observed in VCaP cells (Fig. 3E). These results further confirm that the interaction between TDRD1 and SNRPD3 is likely dependent on PRMT5 activity and mediated by symmetrically dimethylated arginine.

(Fig. 3 legend, panels B-E: protein abundance identified by mass spectrometry is shown as iBAQ, with enrichment calculated as the percentage increase of iBAQ in the eTD4 pull-down and all snRNP proteins marked in red (B); peptide pull-down assay with synthesized biotinylated peptides using total 293T cell lysate exogenously expressing GFP-fused TDRD1 deletion mutants, sm = symmetrically di-methylated, am = asymmetrically di-methylated (C); Co-IP of GFP-tagged TDRD1 or eTD4 with endogenous SNRPD3 in 293T cells, with EPZ015666 as a selective PRMT5 inhibitor (D); Co-IP of endogenous SNRPD3 and TDRD1 in VCaP cells treated with 5 μM EPZ015666 for 24 h before harvest, input 5% of lysate (E).)
TDRD1 interacts with SNRPD3 through its core Tudor 4 domain
Since the eTD4 contains a core Tudor 4 (cTD4) domain and long flanking sequences on both sides, we further examined if the cTD4 domain is required for the interaction. Figure 4A is an alignment of all four core Tudor domains with the prototypic Tudor domain of SMN1. We deleted the entire 61 amino acids of the cTD4 domain, or only 4 conserved residues "DYGN" of cTD4, in the context of full-length TDRD1. Alternatively, we made arginine to lysine mutations on 5 "RG" sites at the C-terminus of SNRPD3. These sites include 4 reported methylated "RG" sites and 1 potential methylation site. The Co-IP experiment in Fig. 4B showed that disruption of either the TDRD1 cTD4 domain or the SNRPD3 methylation sites abolished the interaction between TDRD1 and SNRPD3. We further examined the binding of symmetrically dimethylated peptides with cTD4 mutants. Figure 4C showed that none of the methylated peptides retained the interaction with TDRD1 when cTD4 was disrupted. Collectively, these results demonstrated that TDRD1 specifically interacts with cytoplasmic PRMT5-methylated SNRPD3 proteins through its cTD4 domain. TDRD1 was reported as an ERG target gene [1,24]. We next determined if ERG would affect the interaction between TDRD1 and SNRPD3. We generated ERG-KO VCaP cells using CRISPR-Cas9 gene editing. Although the protein level of TDRD1 was slightly reduced in ERG-KO cells (Fig. 4D), the Co-IP result shown in Fig. 4E demonstrates that ERG gene deletion does not affect the interaction between TDRD1 and SNRPD3 in VCaP cells.
Nuclear TDRD1 associates with Coilin
The interaction between TDRD1 and methylated Sm proteins suggests that TDRD1 is likely implicated in snRNP assembly. While the core snRNPs are assembled in the cytoplasm, the final snRNP assembly step takes place in the non-membrane structure Cajal bodies in the nucleus [25]. Because the protein Coilin is a marker of Cajal bodies and a small amount of TDRD1 was observed in the nucleus with a staining pattern similar to Coilin, we asked whether TDRD1 colocalizes with Coilin (Fig. 2B). Indeed, by immunofluorescent staining, TDRD1 and Coilin exhibited a spatial colocalization in the nucleus (Fig. 5A). To further assess the interaction between endogenous TDRD1 and Coilin, we used the Coilin antibody to immunoprecipitate Coilin from VCaP cell nuclear extract, and TDRD1 was indeed co-immunoprecipitated with Coilin, but not with IgG. As a control, the nuclear marker protein PARP1 did not co-IP with Coilin despite its abundant expression in VCaP nuclear extract (Fig. 5B). By deletion mapping, we then narrowed down the Coilin-interacting region of TDRD1 to eTD4, which is the same region that interacts with methylated SNRPD3 (Fig. 5C). This interaction between nuclear TDRD1 and Coilin strongly suggests that Coilin might also be involved in the regulation of VCaP cell proliferation. We chose to knock down Coilin by siRNA in VCaP cells and determined cell growth. As shown in Fig. 5D, E, the knockdown of Coilin significantly reduced the growth of VCaP cells, which is consistent with what we have observed from TDRD1 ablation.
TDRD1 C-terminal sequence is essential for its interaction with Coilin
Next, we performed additional deletion mapping on eTD4 to precisely delineate the Coilin-binding region on TDRD1 (Fig. 6A). Interestingly, none of the deletion mutants preserved the binding activity of wild-type eTD4 (Fig. 6B). Furthermore, deletion of the cTD4 in the context of full-length TDRD1 did not affect the interaction, suggesting that the Coilin-interacting region does not overlap with the SNRPD3-interacting region on TDRD1, and the flanking regions of cTD4 may mediate the interaction with Coilin (Fig. 6C).
Because the structure of human eTD4 has not yet been reported, to gain more information on how eTD4 interacts with Coilin, we used the newly developed AlphaFold to predict the TDRD1 three-dimensional structure based on its amino acid sequence [26]. There is very high confidence in the predicted structure of the Tudor domains of the human TDRD1 structure (Fig. S3A). Within eTD4, the per-residue confidence scores (pLDDT) of amino acid residues between Q930 and F1118 are consistently higher than 70, and most importantly, the two tandemly arranged anti-parallel beta-sheet structures receive the highest pLDDT, indicating that the predicted TDRD1 protein structure is highly accurate (Fig. S3B). While the cTD4 resembles the prototypical Tudor domain originally identified in the SMN1 protein, the flanking sequences of cTD4 from the N- and C-termini form the second anti-parallel β-sheet structure. Therefore, in the context of eTD4, deleting the flanking sequences may disrupt the structure and cause loss of interaction between TDRD1 and Coilin. We further performed Co-IP experiments to confirm this by generating additional mutants on full-length TDRD1 (Fig. 6D). As expected, the deletion of the C-terminal flanking sequence of eTD4 markedly reduced the interaction between TDRD1 and Coilin, but the deletion of the N-terminal flanking sequence of eTD4 did not affect the interaction (Fig. 6E). This represents a discrepancy in Coilin interaction between deletion mutants of full-length TDRD1 and of eTD4. Because TDRD1 has four extended Tudor domains and is structurally highly flexible, it is possible that certain sequences from other eTDs could form the additional anti-parallel β-sheet structure and compensate for the deletion of a.a. 911-990. Collectively, these results suggest that TDRD1 interacts with methylated snRNP proteins through the cTD4 domain in the cytoplasm and can also interact with Coilin through the extended TD4 domain in the nucleus.
Next, we set out to delineate the TDRD1-interacting region on the Coilin protein. It has been reported that Coilin contains a self-association domain (SA) at the N-terminus, an RG (arginine-glycine-rich) box in the center, and a Tudor domain at the C-terminus [27]. We generated Coilin deletion mutants based on these functional domains and tested their interaction with the full-length TDRD1 as well as eTD4 (Fig. 6F). Interestingly, deletion of the RG box completely abrogated the interaction, indicating that the RG box is essential for Coilin to interact with TDRD1 (Fig. 6G, H). The RG box contains 33 amino acids harboring 6 GRG tripeptides, which are considered the consensus recognition sequence for PRMT5 [28]. Using the PRMT5-specific inhibitor EPZ015666, we observed significantly reduced interaction between TDRD1 and Coilin (Fig. 6I), indicating that the interaction between TDRD1 and Coilin is also PRMT5-dependent.

(Fig. 5 legend fragment, panels D and E: RT-qPCR and Western blot analysis were performed to determine the mRNA and protein levels of Coilin; RNA samples were duplicated; ***p < 0.001 by t-test; the italicized numbers represent Western blot signals normalized to actin levels. E: Coilin knockdown by siRNA reduced VCaP cell growth; N = 3; ***p < 0.001 by t-test.)
Co-expression of TDRD1 and Coilin in human tissues
The physical association between TDRD1 and Coilin proteins strongly argues that these two proteins have a functional link, which prompted us to examine their expression patterns in different human tissues. By searching the RNA expression profiles of Coilin and TDRD1 in the Human Protein Atlas, we found that both genes share very similar tissue expression patterns. Both are highly expressed in testis, but much less in other tissues, in contrast to the broad tissue expression pattern of PRMT5 (Fig. S4A). In normal testis samples and prostate tumors, the mRNA levels of TDRD1 and Coilin are positively correlated, with Pearson's R = 0.49 and 0.42, respectively. In contrast, no correlation is observed in normal prostate tissue samples from the GTEx or TCGA databases (Fig. S4B). We also found that the TDRD1 mRNA level positively correlates with PRMT5 and SNRPD3 mRNA levels in prostate tumors, further supporting the functional cooperation of these proteins in prostate cancer (Fig. S4B).

(Fig. 6 legend: Mapping the interacting regions between TDRD1 and Coilin. A: schematic diagram of the TDRD1 eTD4 deletion mutants used in B. B: deletion mapping of the Coilin-interacting regions on TDRD1 eTD4 by Co-IP; eTD4 mutants were GFP-tagged and Coilin was Flag-tagged. C: Co-IP to determine whether the core Tudor 4 domain is necessary for Coilin interaction. D, E: schematic diagram of TDRD1 deletion mutants and their interactions with Coilin and SNRPD3 by Co-IP. F: deletion mutants made based on the functional domains of human Coilin; SA, self-association domain; RG, arginine-glycine-rich box. G: Co-IP of full-length TDRD1 with the Coilin deletion mutants listed in F; TDRD1 is GFP-tagged and Coilin mutants are Flag-tagged. H: Co-IP of TDRD1 eTD4 with Coilin deletion mutants. I: Co-IP to determine whether the TDRD1-Coilin interaction depends on the enzymatic activity of PRMT5; transfected 293T cells were treated with DMSO or the PRMT5-selective inhibitor EPZ015666 (5 μM) for 24 h before harvest.)
TDRD1 ablation deregulates CB formation and activates p53
The physical interaction between TDRD1 and Coilin suggests that TDRD1 may play a role in the organization of Cajal bodies. Thus, we determined the subcellular localization of Coilin in TDRD1-KO cells. Whereas Coilin proteins usually localize in 1-3 large nuclear bodies in wild-type VCaP cells, they formed multiple nucleoplasmic microfoci when TDRD1 was ablated (Fig. 7A). The overall fluorescence signal of cellular Coilin was increased in both TDRD1-KO lines (Fig. 7B). The alteration of Coilin subcellular localization suggested that snRNP assembly might be affected by TDRD1 deficiency. We then quantified the five major Coilin-associated U snRNAs by RNA-immunoprecipitation (RIP). All five snRNAs, including U1, U2, U4, U5, and U6, showed reduced interaction with Coilin in TDRD1 KO cells, indicating that ablation of TDRD1 affects the assembly of snRNP molecules (Fig. 7C).
Similar patterns of microfoci appearance were previously observed when cells were infected with adenovirus or treated with UV or the RNA polymerase II inhibitor DRB [29][30][31]. The Cajal body has been considered a stress-responsive domain, and the microfoci localization of Coilin is often linked to p53 activation. We then examined the levels of total p53 protein and its activated form p53-pSer15. We observed a substantial increase in the p53-pSer15 level in TDRD1-KO cells. In line with this, the protein level of the p53 target gene p21 was also elevated (Fig. S5A). Because p21 is a cyclin-dependent kinase inhibitor and functions as a regulator of the G1/S transition, we then examined if the cell cycle was altered in TDRD1-KO cells. As shown in Fig. S5B, the percentage of cells in the G1 phase was significantly higher in TDRD1-deficient cells, consistent with the elevated levels of p21. This result further validated the function of TDRD1 in the regulation of cell proliferation (Fig. S5C).
TDRD1 regulates the sensitivity to antiandrogens in prostate cancer cells
Antiandrogens are often used to treat advanced stages of prostate cancer, especially when cancer has developed castration resistance [32,33]. Analysis of TDRD1 mRNA expression in primary prostate tumors and metastatic castration-resistant prostate cancer (CRPC) tumors from the TCGA database revealed that TDRD1 is highly expressed in metastatic CRPC tumors, suggesting that TDRD1 may play a role in CRPC cells (Fig. 8A). Given that TDRD1 regulates VCaP cell proliferation, we investigated whether TDRD1 affects cell proliferation under antiandrogen treatment. We selected Enzalutamide and Darolutamide for testing, as these two second-generation antiandrogen drugs possess distinct chemical structures [34]. As shown in Fig. 8B, VCaP cells with TDRD1 deletion, but not ERG deletion, appear to be more sensitive to antiandrogen treatment compared to control knockout cells. To confirm this observation, we transiently expressed the full-length TDRD1, TDRD1 without the cTD4 domain, and eTD4 in 22Rv1 CRPC cells. Full-length TDRD1 or eTD4 expression decreased sensitivity to Enzalutamide and Darolutamide treatment, whereas TDRD1 without the cTD4 domain had the opposite effect, indicating that TDRD1 regulates antiandrogen sensitivity in 22Rv1 cells (Fig. 8C). A similar experiment was performed in androgen-sensitive LNCaP cells. Although the trend was similar, the impact of TDRD1 overexpression on regulating antiandrogen sensitivity was not as pronounced as in 22Rv1 cells (Fig. S6). Therefore, we performed a Co-IP experiment in 22Rv1 cells. We found that full-length TDRD1 and eTD4 remain associated with SNRPD3 and Coilin in the presence of antiandrogens, and loss of the cTD4 domain abolished both the interaction and the regulation of antiandrogen sensitivity (Fig. 8D).
DISCUSSION
In this study, our results revealed an essential role of TDRD1 in regulating the proliferation of TDRD1-positive prostate cancer cells. Ablation of TDRD1 by CRISPR-Cas9 resulted in significantly reduced cell proliferation in both cultured VCaP cells and xenografted tumors grown in mice. We further identified TDRD1-interacting proteins from cytoplasmic and nuclear fractions, and our results showed that TDRD1 is associated with important proteins involved in snRNP assembly in both cellular compartments. Moreover, depletion of TDRD1 in VCaP cells resulted in aberrant subcellular distribution of Coilin, which led to the disorganization of U snRNP complexes and reduced cell proliferation.
Our study identifies a novel PRMT5-TDRD1 signaling axis in the management of cell growth in prostate cancer cells. The involvement of PRMT5 and the high incidence of TDRD1 overexpression in clinical prostate tumor samples strongly indicate that the PRMT5-TDRD1 axis is highly relevant to prostate cancer cell survival, but this mechanism has never been explored before. Previous studies on the role of PRMT5 in prostate cancer have mainly focused on its nuclear functions and its implication in transcriptional regulation. For instance, PRMT5 methylates core histones to regulate the expression of androgen receptor (AR) and its target genes [9,35]. In ERG-positive cells, PRMT5 methylates AR in an ERG-dependent manner, alters AR chromatin association to genes that regulate prostatic epithelium differentiation, and consequently promotes cell proliferation [36]. In this study, we have demonstrated that the cytoplasmic PRMT5 is also important for prostate cancer cell proliferation in the presence of TDRD1. Cytoplasmic PRMT5 methylates Sm proteins to initiate snRNP complex assembly, and TDRD1 interacts with Sm proteins to facilitate this process in an arginine methylation-dependent manner. These findings are consistent with a previous report that PRMT5 mainly localizes in cell nuclei in the benign prostate epithelium but localizes to the cytoplasm in prostate cancer tissues [11].

(Fig. 7 legend fragment: *p < 0.05, ***p < 0.001. C: quantification of the major U snRNAs associated with Coilin; total RNA was recovered from immunoprecipitation of Coilin and RT-qPCR was performed to quantify the associated U snRNAs; samples were triplicated; *p < 0.05, **p < 0.01 by Student's t test.)
Our work suggests an important role of TDRD1 in the regulation of Cajal bodies, which are only observed in the nuclei of proliferative cells and metabolically active cells, such as tumor cells, embryonic cells, or neurons. Cajal bodies are membrane-less condensates. Similar to other subcellular structures that are liquid-liquid phase separated, Cajal bodies undergo dynamic changes in response to cellular stress or signals [37][38][39]. It was previously reported that the composition and substructure of Cajal bodies are defined by specific interactions between dimethylarginines and Tudor domain-containing proteins [40]. Similarly, TDRD1 forms condensates both in the cytoplasm and in the nucleus, and its eTD4 domain is responsible for appropriate cellular localization. At the molecular level, we found that eTD4 interacts with both Sm proteins and Coilin in an arginine dimethylation-dependent manner. Without TDRD1, Cajal bodies changed their morphology. All these observations are in agreement with the concept that dimethylarginine-Tudor interaction modules contribute to the dynamics of cellular condensates [40].
As an epigenetic enzyme that modifies both core histones and non-histone substrates, PRMT5 is an emerging cancer therapeutic target, and its specific inhibitors have been developed and tested in many preclinical studies [41,42]. However, PRMT5 is broadly expressed in all major tissues in mammals and regulates multiple biological pathways, including but not limited to RNA processing, metabolism, and splicing [43]. PRMT5 knockout mice are embryonic lethal, indicating that PRMT5 is an essential gene for embryonic development [44]. Our study indicates that TDRD1 is an alternative therapeutic target in the PRMT5-TDRD1 axis. Under normal physiological conditions, TDRD1 is only expressed in germ cells in men, but not in other types of cells/tissues (Fig. S3A). This tissue-specific expression of TDRD1 indicates that targeting TDRD1 may have fewer side effects. Consistently, mice with a complete knockout of TDRD1 did not develop any observed abnormality except for defective spermatogenesis in males, indicating that TDRD1 is a non-essential gene for development and survival [45].

(Fig. 8 legend fragment: Table 2. D: Co-IP experiment to examine the interaction of TDRD1 mutants with SNRPD3 and Coilin in 22Rv1 cells; cells were transfected and then treated with 15 μM of antiandrogens for 6 h before harvesting.)
The overexpression of TDRD1 in more than half of the prostate tumors, including CRPC, supports its importance in prostate cancer development [1]. Overexpression experiments in 22Rv1 cells confirmed TDRD1's association with snRNP machinery proteins and reinforced its role in cell proliferation during antiandrogen treatment. When more TDRD1-positive cell lines and PDX models are available, these tools will improve our understanding of TDRD1 function and offer a potential new target for treating prostate cancer patients.
MATERIALS AND METHODS
Cell culture, transient transfection, and siRNA knockdown
VCaP, HeLa, and 293T cells were grown in DMEM with 10% FBS and 1% penicillin/streptomycin. 22Rv1 and LNCaP cells were grown in RPMI1640 with 10% FBS and 1% penicillin/streptomycin. Mycoplasma contamination testing was performed every 6 months. Transient transfection was performed using TransIT LT1 (Mirus Bio, WI, USA) for HeLa and 293T cells and using PEI-Max for 22Rv1 and LNCaP cells (Polysciences, PA, USA). ON-TARGETplus SMARTpool siRNA was used for siRNA knockdown.
TCGA data analysis
The TCGA_PRAD and WCDT_MCRPC datasets from TCGA were downloaded using the GDCquery function from TCGAbiolinks [46]. The data processing and preparation of the expression matrix were done using GDCprepare from TCGAbiolinks. The matrix was then normalized by reading depth using TCGA_normalize and visualized in Prism [47].
Xenograft tumor growth
NSG (NOD.Cg-Prkdc scid Il2rg tm1Wjl /SzJ) mice from Jackson Laboratory (Bar Harbor, ME, USA) were used for subcutaneous xenografts. Control or TDRD1 KO VCaP cells were resuspended in 100 μl of 1× PBS and Matrigel and injected into the flank area of randomized male NSG mice 4-5 months of age. Tumor growth was measured weekly after injection by using a caliper. Tumor volume was calculated according to the following formula: (4/3) × π × (Length/2) × (Width/2)².
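A minimal sketch of this volume formula in code, with hypothetical example dimensions:

```python
import math

def tumor_volume_mm3(length_mm, width_mm):
    """Tumor volume per the formula in the text: (4/3)*pi*(L/2)*(W/2)^2."""
    return (4.0 / 3.0) * math.pi * (length_mm / 2.0) * (width_mm / 2.0) ** 2

print(tumor_volume_mm3(10.0, 8.0))   # ~335 mm^3 for a 10 x 8 mm tumor
```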
Statistics
Data in this study were analyzed using Prism 8.0 (GraphPad, San Diego, CA, USA). The sample size was set to a minimum of three independent experiments (biological repeats) and experimental findings were reliably reproducible. Statistical significance of two group comparisons was determined by non-paired Student's t test, while ANOVA was used for comparing three or more groups. Differences were considered statistically significant at p ≤ 0.05. The pair-wise gene expression correlation analysis done in GEPIA uses methods including Pearson, Spearman and Kendall (http://gepia.cancer-pku.cn).
Combining surface and soil environmental DNA with artificial cover objects to improve terrestrial reptile survey detection
Abstract Reptiles are increasingly of conservation concern due to their susceptibility to habitat loss, emerging disease, and harvest in the wildlife trade. However, reptile populations are often difficult to monitor given the frequency of crypsis in their life history. This difficulty has left uncertain the conservation status of many species and the efficacy of conservation actions unknown. Environmental DNA (eDNA) surveys consistently elevate the detection rate of species they are designed to monitor, and while their use is promising for terrestrial reptile conservation, successes in developing such surveys have been sparse. We tested the degree to which inclusion of surface and soil eDNA sampling into conventional artificial‐cover methods elevates the detection probability of a small, cryptic terrestrial lizard, Scincella lateralis. The eDNA sampling of cover object surfaces with paint rollers elevated per sample detection probabilities for this species 4–16 times compared with visual surveys alone. We readily detected S. lateralis eDNA under cover objects up to 2 weeks after the last visual detection, and at some cover objects where no S. lateralis were visually observed in prior months. With sufficient sampling intensity, eDNA testing of soil under cover objects produced comparable per sample detection probabilities as roller surface methods. Our results suggest that combining eDNA and cover object methods can considerably increase the detection power of reptile monitoring programs, allowing more accurate estimates of population size, detection of temporal and spatial changes in habitat use, and tracking success of restoration efforts. Further research into the deposition and decay rates of reptile eDNA under cover objects, as well as tailored protocols for different species and habitats, is needed to bring the technique into widespread use.
INTRODUCTION
The sharp and recent global decline in abundance and diversity of herpetofauna is the result of habitat loss, introduction of novel predators and pathogens, and intentional hunting and trapping (Böhm et al., 2013;Stuart et al., 2014). Reptiles, in particular, have relatively large numbers of species categorized by the International Union for the Conservation of Nature (2021) as data deficient (14%), and it is generally accepted that many such species are threatened with extinction, but existing monitoring data are insufficient to support such a classification (Böhm et al., 2013). Thus, statistically robust population monitoring programs are urgently needed to resolve the status of data-poor species and to evaluate temporal and spatial changes in distributions of known-threatened and endangered species (Barata et al., 2017;Sewell et al., 2012). Visual counts under artificial cover objects (e.g., wood boards)--which attract individuals for use as protection or for thermo-and osmoregulation--are a standard survey method and provide a substantial boost to survey detection rates over conventional searching techniques for terrestrial reptiles (Hoare et al., 2009). Realized detection rates, however, are often still low enough that surveys do not provide adequate statistical power to accurately assess populations or habitat associations (Crawford et al., 2020;Matthias et al., 2021). Environmental DNA (eDNA) survey methods eliminate the need to directly observe the target organism, providing a larger window of time in which evidence of the species remains present and can be detected (Ficetola et al., 2019). We posit that incorporating an eDNA step in surveys of artificial cover objects can substantially increase reptile survey detection rates and add needed statistical power to evaluate population-level conservation status, response to anthropogenic stressors, and recovery after conservation invest-ment. We designed and evaluated such a survey for little brown skink (Scincella lateralis).
Environmental DNA is DNA shed by organisms into their surroundings as they move, grow, breed, and decompose (Ruppert et al., 2019). Surveys based on collecting and detecting eDNA have revolutionized biodiversity monitoring in aquatic environments (Rees et al., 2014), providing robust and cost-effective sampling strategies, particularly for rare, cryptic, or endangered species, without having to directly observe or handle them (Yoccoz, 2014). Efforts to apply eDNA sampling for terrestrial species have recently advanced (e.g., Johnson et al., 2019;Kinoshita et al., 2019;Lyet et al., 2021;Thomsen & Sigsgaard, 2019;Williams et al., 2018) and include detection of terrestrial animal presence by sampling eDNA from vegetation and other surfaces (hereafter surface eDNA) (Valentin et al., 2020) and soil (Katz et al., 2020). Soil eDNA sampling for terrestrial reptiles is a growing field of research (Katz et al., 2020;Kucherenko et al., 2018), whereas sampling of surfaces for reptile eDNA appeared in the literature recently (Matthias et al., 2021). The "roller" method of Valentin et al. (2020), in particular, combines the strengths of both aquatic and terrestrial eDNA approaches by using dampened, commercially available paint rollers to recover eDNA across large surface areas and then bringing the eDNA into a solution where it can be easily concentrated via filtration. Surface and soil eDNA survey approaches for terrestrial reptiles, although promising, have yet to be fully vetted regarding their effectiveness in producing statistically robust spatial or temporal occupancy trends, which is the ultimate goal of improved monitoring schemes.
We integrated surface roller and soil eDNA methods with standard cover object sampling for S. lateralis, a small (8-15 cm long) lizard considered cryptic and elusive in the New Jersey Pine Barrens (DiLeo, 2016). Based on the litter-dwelling nature of S. lateralis and their known use of artificial cover objects, we hypothesized that the undersides of these objects or the soil beneath them would concentrate skink eDNA allowing enhanced detection with molecular approaches. Through our case study on S. lateralis, we sought to encapsulate several common problems encountered in monitoring terrestrial reptiles more broadly (e.g., cryptic behavior, small activity range, and small body size), the solutions to which have implications for global efforts to assess conservation status comprehensively and accurately within this vulnerable group (Cruickshank et al., 2016).
Proof-of-concept experiment and assay development
We completed a proof-of-concept experiment to establish how well the roller method could recover eDNA from commonly used cover object materials (metal and wood). This involved applying three different quantities of DNA-rich material--from a nonreptile species for which a reliable assay already existed and that did not naturally occur at the study site--to 15 plywood and 15 corrugated metal cover objects (Appendix S1). We evaluated the performance of roller sampling to detect this exogenous DNA based on the number of positive samples in each treatment group (Appendix S1). As a prerequisite for our primary study, we developed a species-specific qPCR assay for S. lateralis within the 12S mtDNA region. We then evaluated the sensitivity of this assay to detect trace amounts of DNA with standard lab-based techniques, estimated its limit of detection (LOD), and assessed specificity against closely related and co-occurring taxa with in silico methods (Appendix S2). We also obtained extracted DNA of 20 S. lateralis specimens used in Jackson and Austin (2010) to ensure the assay would amplify S. lateralis DNA across the extent of its native range (Appendix S2).
Evaluation of surface eDNA detection rates
We incorporated roller and soil eDNA sampling efforts into an existing herpetofauna cover object monitoring program in Wharton State Forest in the Pinelands National Reserve, New Jersey (United States). This landscape is an open-canopy, upland forest characterized by sandy, acidic soils, a predominantly pitch pine (Pinus rigida) overstory, and an understory of ericaceous shrubs (Collins & Anderson, 1994). We utilized an array of 82 sample sites spread along a 1163-m transect (Figure 1). Each site consisted of one metal and one 1-cm-thick pressure-treated plywood cover object (0.6 × 0.6 m) (Figure 1) placed on each side of drift fencing (commercially available black plastic geotextile used for erosion control). Metal and wood cover objects alternated so that no two adjacent sites had the same cover material on the same side of the fence. The drift fence was buried ∼10 cm into the ground and stood ∼0.5 m tall, making it unlikely S. lateralis could move between paired objects. Because wood objects provide a moister environment and metal objects are generally hotter and drier, this paired cover object design provided a broad array of microclimates throughout the year for use by reptiles. The array has been sampled since 2019; from April through October each year, researchers conduct once-daily visual checks. During each visual check, a cover object was lifted by a researcher, all individual reptiles observed were counted and identified to species, and the cover was then replaced in the same location.
From August 20, 2020 to October 22, 2020 (fall sample period) and from May 6, 2021 to June 24, 2021 (spring sample period), we performed weekly roller eDNA surface sampling of cover objects across a subset of the 82 sites in the array. On each eDNA sampling day, we targeted 10 sites where at least one of the two paired cover objects (wood or metal) had S. lateralis visually present within the prior 2 weeks. We sampled both cover objects at each of these 10 sites. As a result of this paired sampling scheme, our roller eDNA samples had wide variability with respect to the date at which the last skink was sighted, ranging from 0 days prior (eDNA roller sample taken immediately after visual skink sighting) to >100 days since visual sighting. This variability allowed us to document how long after a skink sighting we could detect skink eDNA. Our sampling design incorporated 81 eDNA surface samples from cover objects where no visual detections had occurred during that sampling period (spring or fall). Ultimately, our sampling scheme resulted in <50% of the 164 cover objects within the 82 array sites receiving roller eDNA sampling during each year (2020: n = 144 surface eDNA samples at 64 distinct cover objects; 2021: n = 140 surface eDNA samples at 76 distinct cover objects) ( Figure 1).
Our roller eDNA surface sampling protocol followed Valentin et al. (2020) (Appendix S1). After lifting a cover object and completing visual counts, we used chlorine-sterilized commercial paint rollers mounted on a pole and dampened with deionized water to swab the entire ground-facing surface of the object. One roller was used for each cover object sampled. After use, it was placed in a sterile bag and in a cooler (∼4°C) to preserve DNA during transport back to the lab. On each sampling day, we also included a field negative control (i.e., a check for in-field contamination of samples) by following the roller-handling protocol without performing the sampling. Within 2 h of initial collection, all roller samples were rinsed in the sterile bag with ∼250 ml deionized water to bring collected eDNA into an aqueous solution. The water was then passed through 10-μm polycarbonate track-etched filters with a peristaltic pump. The filters were stored in sterile 1.5 ml tubes and frozen at −20°C until further processing. Samples were thawed and DNA extracted using the DNeasy PowerSoil Pro extraction kit (Qiagen), which includes several PCR inhibitor removal and DNA purification steps. We considered this extraction process necessary given the amount of soil transferred from cover objects to rollers. Each filter extraction included a negative control to check for in-lab contamination of samples. We tested for the presence of S. lateralis with an eDNA TaqMan-based qPCR protocol we developed for this study (described in Appendix S2). There were three replicate reactions per sample, hereafter qPCR replicates. We considered a field sample positive for S. lateralis if at least one of three qPCR replicates amplified S. lateralis DNA.
Soil and surface eDNA comparison
To compare the performance of soil and surface (roller) eDNA methods in terms of detection probability, we collected soil samples under a nonrandom subset of 20 cover objects where roller samples had just been taken. We only collected soil eDNA samples under cover objects where S. lateralis was visually observed within the 2 weeks prior. This sampling design served to standardize the time since the last observation of skinks and the time since skink DNA was likely deposited, thereby allowing a more robust statistical comparison between methods (see below). We collected 25 soil samples under 20 cover objects (some objects had >1 sampling event in the fall); 15 samples were taken in fall 2020 and 10 in spring 2021 (fall 2020: three wood and seven metal; spring 2021: five wood and five metal). In fall 2020, we collected ∼10 g of surface soil from 8 to 10 haphazardly chosen locations under each object. All soil was then placed in a single sterile 50 ml Falcon tube for transport to the lab. In spring 2021, we employed the same procedure but collected 40 g of soil from under each object to evaluate the extent to which this increase in sample volume and area covered would increase S. lateralis detection rates.
Soil samples were placed in a cooler for transport and stored at −20°C until DNA extraction. Soil samples were thawed at room temperature and extracted using the DNeasy PowerMax Soil Kit following manufacturer protocols (Qiagen). The 40 g of soil from spring sampling had to be separated into four 10-g extractions, which were therefore run through qPCR independently. We considered positive returns from qPCR replicates from any of these four analyses as evidence of skink presence under a cover object. Steps to minimize field and in-lab contamination were identical to those used in roller sampling (see above and Appendix S1). Each sample was tested for the presence of S. lateralis eDNA with the qPCR protocol described above.
Occupancy modeling
To estimate and compare detection probability for the three methods (visual, visual plus roller eDNA, and visual plus soil eDNA), we fit a series of site occupancy models (MacKenzie et al., 2018) to the cover object survey data within a Bayesian framework. In all analyses, we defined occupied areas as those cover objects that were visited at some point by one or more S. lateralis individuals within a sample period. We defined visit-level detection probability as the joint probability of S. lateralis, or its eDNA, presence under an object during a given survey visit (i.e., availability, or θ) (Nichols et al., 2008) and the visual or molecular determination of presence during a survey visit given availability, that is, θ × P(detect | θ).
The visual-only model treated the repeated visual detection (1) and nondetection (0) data at cover objects during visits as the response variable, and covariates on both the occupancy and the detection submodels included sampling period (fall 2020 or spring 2021) and material (metal or wood). We only included detection information for visual survey visits that coincided with eDNA sampling to ensure statistical comparisons between the two methods were robust. To measure the increase in detection probability from also performing roller eDNA sampling on cover objects, we used the same model structure but treated either a visual or a molecular determination of presence (i.e., at least one qPCR replicate amplifying) as a successful detection.
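As a simplified illustration of the occupancy-plus-detection structure these models build on (a single occupancy probability ψ and a constant per-visit detection probability p, without the covariates, availability layer, or known-occupancy constraints used in the actual JAGS analysis), one can maximize the standard single-season occupancy likelihood; the detection histories below are invented:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(params, y):
    """Negative log-likelihood of a basic single-season occupancy model.

    y: sites x visits matrix of 1 (detected) / 0 (not detected).
    """
    psi, p = expit(params)                      # keep both probabilities in (0, 1)
    detections = y.sum(axis=1)
    visits = y.shape[1]
    # P(history) = psi * p^d * (1-p)^(J-d), plus (1-psi) if the species was never detected
    lik = psi * p**detections * (1 - p)**(visits - detections)
    lik = lik + (1 - psi) * (detections == 0)
    return -np.sum(np.log(lik))

# Hypothetical detection histories: 6 cover objects x 4 visits
y = np.array([[1, 0, 1, 0],
              [0, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [1, 0, 0, 0]])

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
print(f"occupancy ~ {psi_hat:.2f}, per-visit detection ~ {p_hat:.2f}")
```

This sketch ignores the site- and visit-level covariates and the sharing of information across survey methods that the Bayesian models described here exploit; it is meant only to show the underlying likelihood structure.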
In both models, we allowed sharing of information among the methods by setting the true occupancy state of skinks (z) at each object within a season to occupied at cover objects for which S. lateralis presence was confirmed by either method or if located visually at the cover object up to 30 days prior to eDNA sampling. We accomplished this by supplying the latent variable z as data in the model, with a value of 1 for known-occupied objects and not applicable (NA) otherwise (i.e., a blank value to be estimated). Finally, to evaluate whether our lab protocol of using three qPCR replicates per eDNA sample was adequate to confidently detect S. lateralis eDNA collected by rollers, we fitted the multilevel occupancy model of Dorazio and Erickson (2018). This model allows estimation of the probability of detecting eDNA in a sample by an individual qPCR replicate. This probability was, in turn, used to estimate the cumulative power of detecting S. lateralis eDNA present in roller samples with varying numbers of qPCR replicates (Appendix S3).
We compared the performance of soil versus roller eDNA sampling with multimethod occupancy models (Nichols et al., 2008). These models are multilevel and use shared detection information from multiple devices (in this case soil and roller eDNA samples) to inform availability, θ, and detection probability given availability, P(detect | θ). In this case, availability refers to the probability of S. lateralis eDNA presence under cover objects during a given visit. We incorporated visual survey information in the model as data for the latent variables z and a, where z is the true occupancy state of the cover object and a is the true availability state at cover objects during each visit (Kéry & Royle, 2015). We ran separate models per season due to the increase in soil sampling effort between fall and spring, as well as sample size limitations that precluded the use of model interaction terms. The dependent variable was the detection (1) or nondetection (0) of S. lateralis by each eDNA method by at least one qPCR replicate for each sampling event. We used the covariate device (soil vs. roller) on the detection submodel to estimate the method-specific probability of detecting S. lateralis given availability. We did not include cover material as a covariate due to sample size limitations.
All Bayesian models were fitted with noninformative priors (Kéry & Royle, 2015) in JAGS and jagsUI in R (Kellner, 2021; R Core Team, 2021). We ran three chains of 160,000 iterations each, including a burn-in period of 10,000; we kept every tenth draw. Model convergence was assessed by examining trace plots and Gelman-Rubin statistics (R̂ < 1.1). We compared estimates of visit-level detection probability for the various methods by examining and plotting posterior predictive distributions. The cumulative probability of detecting S. lateralis at least once given multiple samples (n) with a visit-level detection probability (p) was calculated using the formula 1 − (1 − p)^n.
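A minimal sketch of this cumulative-detection calculation, including the number of visits needed to reach a target certainty; the per-visit probabilities used below are illustrative placeholders rather than the fitted posterior estimates:

```python
import math

def cumulative_detection(p_visit: float, n_visits: int) -> float:
    """Probability of detecting the species at least once in n visits: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_visit) ** n_visits

def visits_needed(p_visit: float, target: float = 0.95) -> int:
    """Smallest number of visits giving cumulative detection >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_visit))

# Illustrative per-visit detection probabilities (placeholders, not fitted values)
for label, p in [("visual only", 0.08), ("visual + roller eDNA", 0.70)]:
    print(label, "->", visits_needed(p), "visits for 95% certainty")
```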
Proof-of-concept experiment and assay development
Our proof-of-concept experiment confirmed that eDNA was readily recovered from cover objects with the roller surface eDNA aggregation method, and it revealed higher detection rates for metal objects (12 of 15 samples positive, or 80%) than for wood objects (4 of 15 samples positive, or 27%) (Appendix S1). Our S. lateralis qPCR assay targeted a 65 bp sequence in the 12S mtDNA region and was highly sensitive. The 95% LOD was 28.5 fg of genomic S. lateralis DNA per reaction, based on the assumption of three qPCR replicates (Appendix S2). In silico specificity testing revealed that no co-occurring reptile species would cross-amplify with the assay, and the assay amplified S. lateralis DNA from specimens collected across the species' native range (Appendix S2).
Evaluation of surface eDNA detection rates
We found that 64% of the 284 total cover object samples had positive S. lateralis detections with the roller eDNA method (76% at metal and 51% at wood; Appendix S4), compared with only 11% for visual detections (15% at metal and 7% at wood). When we considered visual and eDNA as a combined survey method, the percentage of positive detections rose only slightly to 65% because only two samples had a visual detection but an eDNA nondetection. We found that, of the eDNA roller samples taken the same day S. lateralis was visually observed under a cover object, 91% returned positive detections (Figure 2). Of the remaining roller eDNA samples, we detected S. lateralis eDNA under 81% of the objects when the most recent visual detection at that object occurred 1-14 days prior. We detected eDNA under 57% of the cover objects when the most recent visual observation at that object was 15-127 days prior and under 39% of the objects when S. lateralis was never visually detected in a given season (Figure 2). All field and lab extraction negative control samples were negative for S. lateralis DNA, indicating no contamination.
Occupancy modeling revealed that visit-level S. lateralis detection probability for visual surveys paired with roller eDNA sampling was 3.6-15.8 times higher than for visual surveys alone, depending on sampling period (fall and spring) and cover material (metal and wood) (Figure 3 & Table 1). The 95% credible intervals of detection probability did not overlap for the two methods (roller eDNA and visual) across either sample period or cover object material (Figure 3). The models also indicated that detection probability was higher in spring 2021 than in fall 2020 and at metal compared with wood cover objects (slope parameter 95% credible intervals did not overlap 0) (Figure 3). Based on detection probability estimates, cumulative probability analyses revealed that 2-3 visits per cover object would have been required to detect S. lateralis with 95% confidence with the visual and roller eDNA methods concurrently, whereas 12-37 visits would have been required using visual methods alone (Figure 4). With our lab protocol of three qPCR replicates per roller sample, eDNA detection probability given S. lateralis eDNA presence in the sample was high (91-100%, mean = 97%) across both materials and sample periods. This result suggests that the number of qPCR replicates we performed did not hinder the overall detection probability (Appendix S3).
FIGURE 3
Probability of detecting Scincella lateralis (little brown skink) per visit to a cover object with only visual surveys or with incorporation of the roller surface eDNA method (points: posterior median estimates; horizontal lines: 80% and 95% credible intervals)
FIGURE 4
Cumulative probability of detecting at least one Scincella lateralis (little brown skink) individual at a metal cover object based on visual detections alone (black curves) or paired visual and eDNA roller sampling efforts (purple curves) (shading: 95% credible interval; gray dashed horizontal line: cumulative 95% certainty of detecting S. lateralis at least once)
Soil and surface eDNA comparison
Multimethod occupancy models revealed that eDNA method-specific detection probability (i.e., detection based on the eDNA method alone and not paired with visual) was 4.6 times higher with roller versus soil eDNA sampling in fall 2020, when only 10 g of soil was collected per object (60% vs. 13%, respectively), and 1.3 times higher in spring 2021, when 40 g of soil was collected per object (85% vs. 67%) (Figure 5). The 95% credible intervals for roller and soil eDNA detection probability estimates did not overlap in fall 2020, indicating a high likelihood that the roller method performed better. However, there was substantial overlap in spring 2021, suggesting that the two methods performed similarly well (soil: 95% CI 37%-89%; roller: 95% CI 59%-97%) (Figure 5).
DISCUSSION
The use of surface roller eDNA methods elevated survey detection rates up to 16 times higher than visual detections alone. Sampling soil eDNA under cover objects also boosted skink detection rates, provided that sufficient volumes of soil were collected. We showed that eDNA methods can detect skink presence when visual survey protocols failed to do so. Our results add to other recent eDNA applications for terrestrial reptiles, which collectively advance a promising avenue of research with the potential to reduce field time and support global monitoring efforts to gain accurate threat classification and recovery investments aimed at vulnerable terrestrial reptile species. Given that cover objects are regularly used in surveys of terrestrial amphibians (Marsh & Goicochea, 2003) and small terrestrial mammals (Lemm & Tobler, 2021), the benefits of adopting an eDNA-enabled cover object survey may extend to these taxa as well. However, like conventional surveys, any use of eDNA-enabled cover object surveys will require tailoring methods to match research questions, the local environment, and the natural history of the target species (Hampton, 2009; Hoare et al., 2009). The ability to confirm the presence of a target species "sight unseen" is the main benefit to eDNA survey approaches (Jerde et al., 2011) and is particularly valuable when the target species is difficult to observe or very rare (Hunter et al., 2015). In our study, only 11% of the 284 visual sampling events revealed S. lateralis sightings, compared with 65% of sampling events with the paired visual and roller eDNA approach. Similarly, low rates of visual detection are common for cover object surveys of terrestrial reptiles (e.g., Matthias et al., 2021). A low rate of skink visual detections under cover objects likely reflects the transient behavior of this species in which individuals make frequent movements among various forms of cover within their home range (DiLeo, 2016). Adding an eDNA step to artificial cover object sampling effectively extends the window of detection by revealing the eDNA trail left by target organisms even when they use a site only sporadically (Phoebus et al., 2020). Pairing visual cover object surveys with eDNA methods can thus logically only improve detection rates for skinks, and species like them, because eDNA allows recognition of skink presence under cover objects even if the object is only used briefly. The marginal benefits of adding eDNA to conventional surveys, however, are likely to vary by species depending on their rarity and behavior. Quantifying these marginal benefits for the world's terrestrial reptiles, as well as the feasibility of implementation due to costs and logistical factors, is a formidable research challenge, but it is one with important implications for improving monitoring schemes and documenting the response of terrestrial reptile populations to conservation investments and habitat management.
FIGURE 5
Relative performance of surface roller and soil samples under cover objects that were sampled using both protocols (fall 2020, n = 15; spring 2021, n = 10) to detect Scincella lateralis (little brown skink) eDNA (points: posterior medians; horizontal lines: 80% and 95% credible intervals). The posterior distributions from a multimethod occupancy model represent the predicted probability of detecting S. lateralis in one survey with each protocol given S. lateralis eDNA presence under the object
A key factor in evaluating any eDNA boost in detection rates is the length of time that eDNA persists in a state that remains available for detection with standard qPCR sample processing. We readily detected the presence of S. lateralis eDNA under cover objects at least 2 weeks, and up to 4 months (127 days), after the last confirmed visual observation. At a fraction of cover objects, we detected the presence of skinks despite no prior visual observations at that cover object in a season. This finding, in part, is certainly due to the sporadic use of cover objects by skinks and the nature of once-daily visual surveys. If a skink is not present at the moment a cover object is raised, it cannot be counted as present in visual surveys.
However, objects were checked daily increasing the likelihood that a visual survey captured skink presence, so some of the gaps between visual sightings and eDNA detection we observed likely also represented longer-term eDNA persistence under cover objects. The factors affecting the fluxes of eDNA into and out of an environment via deposition, degradation, and transport--the "ecology of eDNA"--have been studied in aquatic and aboveground terrestrial systems, but less so where deposited DNA is in contact with the soil surface, such as under cover objects (Barnes & Turner, 2016;Valentin et al., 2021). The environment under cover objects is shielded from UV light and most rainfall, which should favor eDNA persistence and subsequent detectability of target species. However, proximity to moist soil and its associated microbial community may accelerate DNA breakdown, eventually rendering any eDNA present on a cover object undetectable with qPCR-based assays. These questions cannot be answered with our study design, leaving an opportunity for more experimental evaluation of the rates of eDNA deposition and degradation at the soil surface (e.g., Kucherenko et al., 2018) and under cover objects specifically, including exploring how these rates vary by species and across a range of environmental conditions. The roller method we used is one of a suite of new techniques that incorporates standard aquatic eDNA sampling frameworks to bring water to terrestrial surfaces in an attempt to capture, suspend, and concentrate DNA (Valentin et al., 2020). The fact that the entire surface of the cover object can be effectively sampled using a roller may explain our relatively high detection rates. For example, 91% of eDNA surface roller samples were positive at cover objects where S. lateralis was visually detected when that object was lifted. Matthias et al. (2021) found a 57% (13 of 23) positivity rate for eDNA of the snake Contia tenuis at visual positive cover objects in a comparable experiment with smaller swabs and covering a much smaller portion of the object. Matthias et al. (2021) also found roughly similar but somewhat lower recovery rates (45% or 9 of 20 samples) for soil sampling compared with swabbing cover objects.
Ultimately, the ability to sample where terrestrial reptiles dwell and deposit DNA (e.g., burrows and cover objects; Katz et al., 2020; Kucherenko et al., 2018), as well as the ability to collect and aggregate as much eDNA as possible, will determine the success of eDNA-based surveys. Such factors likely account for why some trialed eDNA survey approaches have been very successful in realizing improvements in reptile survey power and efficiency (Hunter et al., 2015), whereas others have been less successful (e.g., Halstead et al., 2017; Ratsch et al., 2020; Rose et al., 2019).
The material and size of a cover object, the microhabitat under cover objects, and other environmental conditions influence conventional survey detection rates (Hampton, 2009;Hesed, 2012), as well as eDNA recovery (Barnes & Turner, 2016;Valentin et al., 2021). In our study, the fold increase in detection under wood boards was a greater improvement on visual surveys than metal. However, metal cover objects outperformed wood cover objects in terms of S. lateralis occupancy and for visual and eDNA detection probability. The former finding demonstrates a potential preference of S. lateralis for metal over wood, which could be linked to the seasonality of our sample collection. We sampled primarily during spring and fall when conditions are generally cooler and skinks are more attracted to metal objects to aid in thermoregulation. The latter finding could result from high skink abundance and, therefore, greater residency time of skinks and eDNA concentrations under cover objects (Kéry & Royle, 2015). It could also relate to the generally drier conditions we observed under metal objects (J.F.B., personal observation), which may preserve eDNA by slowing microbial activity and degradation. Finally, it is possible that the chemicals used to treat wood cover objects inhibited PCR reactions and thus produced more false-negative eDNA results than metal objects. The results from any one study, such as ours, may not be consistent across all taxa, field methods, and locations; thus, there remains a need to optimize species-specific sampling using eDNA and cover objects (Hoare et al., 2009;Katz et al., 2020).
Our results suggest that merging surface eDNA with conventional cover object methods could become a critical tool in improving global reptile monitoring programs and thus greatly contribute to conservation, restoration, and management efforts (Ficetola et al., 2019;Halstead et al., 2017). These techniques provide an opportunity to overcome statistical noise in monitoring data, which in turn, will allow more robust estimates of changes in occupancy, site use, or habitat use (Lettink et al., 2011;Sewell et al., 2012). Our study provides proof that eDNA methods can provide much-needed statistical power boosts for terrestrial species' conservation monitoring. However, before this technique can be widely adopted, there is a need to assess the rates of eDNA deposition under cover objects, including exploring how these rates vary by species, material, sampling technique, and environmental conditions.
|
2022-05-24T06:23:20.221Z
|
2022-05-23T00:00:00.000
|
{
"year": 2022,
"sha1": "9be59e63402f29c3282fe92bd286164cefff88c5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "d6292e300d64eaaf0b8f751b5c95848aa78f8572",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
118395048
|
pes2o/s2orc
|
v3-fos-license
|
Femtoscopic results in Au+Au and p+p from PHENIX at RHIC
Ultra-relativistic gold-gold and proton-proton collisions are investigated in the experiments at the Relativistic Heavy Ion Collider (RHIC). Over the last several years, a large body of results has been obtained about the matter created in these collisions. The latest PHENIX results on femtoscopy and correlations are reviewed in this paper. Bose-Einstein correlations of charged kaons in 200 GeV Au+Au collisions and of charged pions in 200 GeV p+p collisions are shown. Both are compatible with previous measurements of charged pions in gold-gold collisions with respect to transverse-mass and number-of-participants scaling.
Introduction
Ultra-relativistic collisions of Au nuclei are observed at the experiments of the Relativistic Heavy Ion Collider (RHIC) of the Brookhaven National Laboratory, New York. The aim of these experiments is to create new forms of matter that existed in Nature a few microseconds after the Big Bang, the creation of our Universe.
A consistent picture emerged after the first three years of running the RHIC experiments: the created hot matter acts like a liquid [1], not like the ideal gas that some had anticipated when defining the term QGP. The nuclear modification factor is the ratio of the yield in Au+Au collisions over the yield in p+p collisions, scaled by the number of binary nucleon-nucleon collisions in a Au+Au collision. It has been measured for several hadron species at high p t , most recently for η and φ mesons [2]. This confirms the evidence for a dense and strongly interacting matter. Direct photon measurements, which require tight control of experimental systematics over several orders of magnitude, show that high-p t photons in Au+Au collisions are not suppressed [3]. This observation makes definitive the conclusion that the suppression of high-p t hadron production in Au+Au collisions is a final-state effect.
A very important tool for understanding the geometry of the matter created at RHIC is that of Bose-Einstein correlations, also known as boson interferometry. In the present proceedings paper we do not detail the theory; we simply refer to ref. [4]. In the next sections we detail recent measurements of pion and kaon interferometry.
Kaon interferometry in Au+Au collisions
The observations of extended, non-Gaussian, source size from two-pion correlations [5] make the measurement of two-kaon correlations important for understanding the contribution from decays of long-lived resonances.
This analysis is described in detail in ref. [6]. PHENIX used ∼ 600 million minimum bias events, triggered by the coincidence of the Beam-Beam Counters (BBC) and Zero-Degree Calorimeters (ZDC), with collision vertex |z| < 30 cm. Charged kaons were tracked and identified using the drift chamber (DC), pad chambers (PC1, PC3) and PbSc Electromagnetic Calorimeters (EMCal), covering pseudorapidity |η| < 0.35 and azimuthal angle ∆φ = 3π/4. The momentum resolution in this case was δp/p ≈ 0.7% ⊕ 1.0% × p (GeV/c). Backgrounds were reduced by requiring a 2σ position match between track projections and EMCal hits, and a 3σ match for PC3. Up to a transverse momentum of ∼0.9 GeV/c, kaons and pions can be separated via timing information. Above that limit, PID cuts have to be introduced; in this case we identified particles as kaons if they were within 2σ of the theoretical kaon mass-squared and at least 2σ away from both the pion and the proton mass. With this selection, the contamination level is ∼4% from pions and ∼1% from protons at p t ∼ 1.5 GeV/c. We find that the number-of-participants (N_part^1/3) dependence of the 3D correlation radii is linear, as shown in fig. 1. The transverse mass (m t ) dependence of these radii follows the same scaling as in the case of pions, as predicted from hydrodynamical models; see for example [7]. In the case of imaging, a non-Gaussian tail is revealed at radii greater than 10 fm. This suggests that earlier findings of large tails in the pion imaged source functions are not due to resonance decays but reflect truly enlarged sources.
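Reading the ⊕ in the momentum-resolution parametrization as addition in quadrature (an interpretation, since the text does not spell it out), the resolution at a given momentum can be evaluated as in the short sketch below; the function name and the example momenta are ours.

```python
import math

def dp_over_p(p_gev: float, const_term: float = 0.007, p_term: float = 0.010) -> float:
    """Relative momentum resolution: constant and momentum-dependent terms summed in quadrature."""
    return math.sqrt(const_term**2 + (p_term * p_gev)**2)

for p in (0.5, 1.0, 1.5):  # GeV/c
    print(f"p = {p:.1f} GeV/c -> dp/p = {100 * dp_over_p(p):.2f}%")
```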
Pion interferometry in p+p collisions
The important measurement of Bose-Einstein correlations was also extended to proton-proton systems; see ref. [8] for more details of this analysis. The N_part^1/3 scaling curve of the Au+Au [9] data is also in accordance with the new p+p results.
Summary and conclusions
We measured HBT correlation functions of charged kaon pairs in Au+Au collisions and of charged pion pairs in p+p collisions. The 3D HBT radii are consistent for pions and kaons at the same number of participants and transverse mass. The 1D emission source function for kaons extracted by imaging shows a non-Gaussian tail at distances greater than 10 fm. The preliminary pion HBT correlations in p+p collisions can be analyzed via traditional 3D HBT methods, and the resulting correlation radii are also consistent with extrapolations from earlier measurements. The final Au+Au HBT data are from ref. [9].
|
2011-09-04T08:35:14.000Z
|
2011-01-11T00:00:00.000
|
{
"year": 2011,
"sha1": "e46d47d1c6e83272b50ddadfd3971e58820a453f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1101.2086",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e46d47d1c6e83272b50ddadfd3971e58820a453f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
8932668
|
pes2o/s2orc
|
v3-fos-license
|
Complex C: A Low-Metallicity High-Velocity Cloud Plunging into the Milky Way
(Abridged) We present a new high-resolution (7 km/s FWHM) echelle spectrum of 3C 351 obtained with STIS. 3C 351 lies behind the low-latitude edge of high-velocity cloud Complex C, and the new spectrum provides accurate measurements of O I, Si II, Al II, Fe II, and Si III absorption lines at the velocity of the HVC. We use collisional and photoionization models to derive ionization corrections; in both models we find that the overall metallicity Z = 0.1 - 0.3 Z_{solar} in Complex C, but nitrogen must be underabundant. The iron abundance indicates that Complex C contains very little dust. The absorbing gas probably is not gravitationally confined. The gas could be pressure-confined by an external medium, but alternatively we may be viewing the leading edge of the HVC, which is ablating and dissipating as it plunges into the Milky Way. O VI column densities observed with FUSE toward nine QSOs/AGNs behind Complex C support this conclusion: N(O VI) is highest near 3C 351, and the O VI/H I ratio increases substantially with decreasing latitude, suggesting that the lower-latitude portion of the cloud is interacting more vigorously with the Galaxy. The other sight lines through Complex C show some dispersion in metallicity, but with the current uncertainties, the measurements are consistent with a constant metallicity throughout the HVC. However, all of the Complex C sight lines require significant nitrogen underabundances. Finally, we compare the 3C 351 sight line to the sight line to the nearby QSO H1821+643 to search for evidence of outflowing Galactic fountain gas that could be mixing with Complex C. We find that the intermediate-velocity gas detected toward 3C 351 and H1821+643 has a higher metallicity and may well be a fountain/chimney outflow from the Perseus spiral arm.
C proper. The similarity of the absorption line ratios in the HVR and Complex C suggests that these structures are intimately related. In Complex C proper we find [O/H] = −0.76 (+0.23/−0.21). For other species, the measured column densities indicate that ionization corrections are important. We use collisional and photoionization models to derive ionization corrections; in both models we find that the overall metallicity Z = 0.1 − 0.3 Z ⊙ in Complex C proper, but nitrogen must be underabundant. The iron abundance indicates that Complex C contains very little dust. The size and density implied by the ionization models indicate that the absorbing gas is not gravitationally confined. The gas could be pressure-confined by an external medium, but alternatively we may be viewing the leading edge of the HVC, which is ablating and dissipating as it plunges into the Milky Way. O VI column densities observed with FUSE toward nine QSOs/AGNs behind Complex C support this conclusion: N(O VI) is highest near 3C 351, and the O VI/H I ratio increases substantially with decreasing latitude, suggesting that the lower-latitude portion of the cloud is interacting more vigorously with the Galaxy. The other sight lines through Complex C show some dispersion in metallicity, but with the current uncertainties, the measurements are consistent with a constant metallicity throughout the HVC. However, all of the Complex C sight lines require significant nitrogen underabundances. Finally, we compare the 3C 351 data to high-resolution STIS observations of the nearby QSO H1821+643 to search for evidence of outflowing Galactic fountain gas that could be mixing with Complex C. We find that the intermediate-velocity gas detected toward 3C 351 and H1821+643 has a higher metallicity and may well be a fountain/chimney outflow from the Perseus spiral arm. However, the results for the higher-velocity gas are inconclusive: the HVC detected toward H1821+643 near the velocity of Complex C could have a similar metallicity to the 3C 351 gas, or it could have a significantly higher Z, depending on the poorly constrained ionization correction.
Introduction
The Galactic high-velocity clouds (HVCs), gas clouds detected via 21 cm emission or UV/optical absorption at velocities that deviate substantially from normal Galactic rotation (Wakker & van Woerden 1997), have potentially important implications regarding the structure and evolution of the Galaxy and Local Group. For example, if the HVCs provide a sufficient quantity of infalling low-metallicity gas, they can alleviate the G-dwarf problem, the long-standing discrepancy between the observed metallicity distribution of G-dwarf stars and theoretical expectations (e.g., Larson 1972;Tosi 1988;Wakker et al. 1999;Gibson et al. 2002). The HVCs may have important cosmological implications as well. Hierarchical models of galaxy formation within the cold dark matter framework predict many more dwarf satellite galaxies within the Local Group than have been detected/identified (e.g., Klypin et al. 1999;Moore et al. 1999). A variety of solutions to this problem have been proposed, including the possibility that the HVCs are the missing dwarfs which, for some reason, have not formed readily detectable stars (e.g., Blitz et al. 1999;Klypin et al. 1999;Gibson et al. 2002).
However, the nature of most high-velocity gas is still poorly understood, mainly because the cloud distances are highly uncertain. Some of the HVCs are clearly material stripped out of the Magellanic Clouds, the "Magellanic Stream" (Mathewson, Cleary, & Murray 1974; Putman et al. 1998; Lockman et al. 2002), and some high-velocity absorption lines observed towards disk stars are most likely related to the interaction between stars/supernovae and the ISM (e.g., Cowie, Songaila, & York 1979; Trapero et al. 1996; Jenkins et al. 1998, 2000; Welty et al. 1999; Tripp et al. 2002). Apart from these clouds, direct distance constraints are scarce and difficult to obtain (Wakker 2001), and consequently a variety of HVC models remain viable. The HVCs may have a galactic origin such as the Galactic fountain (Shapiro & Field 1976; Bregman 1980), or they may be extragalactic (e.g., Oort 1970; Blitz et al. 1999; Braun & Burton 1999).
Given the difficulty of direct distance measurements, it is important to explore other constraints on the nature of these objects. For example, constraints on the distribution and size of the HVCs can be derived from studies of other galaxy groups using either QSO absorption line statistics (Charlton, Churchill, & Rigby 2000) or surveys for redshifted 21 cm emission (Zwaan & Briggs 2000; Zwaan 2001; Braun & Burton 2001; Pisano & Wilcots 2003). For the Milky Way HVCs, it has been suggested that the Hα emission from the clouds constrains their distances (e.g., Bland-Hawthorn et al. 1998). If the gas is photoionized by UV flux escaping from the Galaxy, then nearby clouds should be substantially brighter in Hα than distant HVCs. Hα emission has in fact been detected from a variety of HVCs (e.g., Weiner & Williams 1996; Tufte et al. 1998, 2002; Weiner, Vogel, & Williams 2002), and detailed photoionization models place the clouds roughly 5 − 50 kpc away based on the observed Hα intensities (Weiner et al. 2002; Bland-Hawthorn & Maloney 2002). However, several observations cloud the interpretation of Hα intensities. First, the Magellanic Stream, which has an independently constrained distance, is much brighter in Hα than predicted by the photoionization models (Weiner et al. 2002; Bland-Hawthorn & Maloney 2002). Second, the Magellanic Stream Hα emission is spatially variable (see Figure 1 in Weiner et al. 2002), also contrary to the predictions of the photoionization models. Finally, O VI absorption has been detected in a substantial fraction of the HVCs (Wakker et al. 2002). While O VI can be produced by photoionization in large, low-density intergalactic gas clouds (e.g., Tripp et al. 2001), the O VI in HVCs is almost certainly collisionally ionized (the cloud sizes required by photoionization models are excessive for HVCs, see Sembach et al. 2002). These observations suggest that collisional processes play an important role in the ionization of HVCs and the production of Hα emission. The interaction of the rapidly moving clouds with ambient magnetic fields may create instabilities that also play a role in the gas ionization and production of Hα emission (Konz et al. 2001). For these reasons, Hα distance constraints from photoionization models may need to be revisited.
Ultraviolet absorption lines in the spectra of background quasars and active galactic nuclei (AGNs) provide another sensitive probe for the study of Galactic (as well as extragalactic) HVCs. UV absorption lines provide detailed information on abundances and physical conditions in the gas, and this in turn can be compared to predictions of various models. For example, it is expected that Galactic fountain gas would have a substantially higher metallicity than infalling extragalactic gas or gas stripped from a satellite galaxy. Relative metal abundance patterns may also provide insights on the enrichment history of the HVCs; if the gas is relatively pristine, overabundances of α elements or underabundances of nitrogen might be observed, as seen in low-metallicity stars (McWilliam 1997) and H II regions (Vila-Costas & Edmunds 1993;Henry, Edmunds, & Köppen 2000).
In this paper we use the absorption line technique to explore the nature of HVC Complex C, a large HVC (roughly 20° × 90°) that is at least ∼5 kpc away (van Woerden et al. 1999). A number of Complex C abundance measurements have been published for a variety of species, ranging from [N/H] = −1.94 to [Fe/H] = −0.3 (Bowen, Blades, & Pettini 1995; Wakker et al. 1999; Murphy et al. 2000; Richter et al. 2001; Gibson et al. 2001; Collins et al. 2002). However, some of these abundances may be confused by ionization effects. Species such as S II or Fe II can arise in ionized as well as neutral gas, leading to overestimates of [S/H] or [Fe/H] if no ionization correction is applied, while other species such as N I can be more readily ionized than H I, leading to underestimation of the elemental abundance. The most robust species for constraining the metallicity of HVCs is O I. Like sulfur, oxygen is only lightly depleted by dust grains (see §6.2.1 in Moos et al. 2002, and references therein), but more importantly, the ionization potential of O I is nearly identical to that of H I, and O I is strongly locked to H I by resonant charge exchange (Field & Steigman 1971). Consequently, oxygen abundances based on O I and H I are relatively impervious to ionization effects (unless the gas is quite substantially ionized). Richter et al. (2001) used O I and H I to derive a low oxygen abundance (roughly one-tenth solar) in Complex C; they also report that nitrogen is highly underabundant (by ∼ 1 dex compared to oxygen), and the α elements are marginally overabundant compared to iron. These results suggest that Complex C is a relatively pristine extragalactic cloud plunging into the Galaxy for the first time. This notion has been challenged by Gibson et al. (2001), who have observed S II in several directions through Complex C and find evidence of spatial variability of the sulfur abundance. Sulfur abundances derived from S II alone are prone to ionization effects, as noted, but recently Collins, Shull, & Giroux (2003) have also reported variable metallicity in Complex C based on oxygen measurements.
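For orientation, the bracket notation used above is [X/H] = log10(N_X/N_H) − log10(X/H)_⊙, a logarithmic abundance relative to solar; the short sketch below shows that calculation and the conversion back to a linear fraction of the solar value. The O I column density and the solar oxygen abundance (12 + log(O/H) = 8.69) used here are illustrative assumptions rather than measurements from this paper; the H I column is the Effelsberg value adopted later in the text.

```python
import math

def bracket_abundance(log_N_X: float, log_N_H: float, solar_12_log: float) -> float:
    """[X/H] = log10(N_X/N_H) - log10(X/H)_solar, with the solar value on the 12 + log scale."""
    return (log_N_X - log_N_H) - (solar_12_log - 12.0)

def to_linear(bracket: float) -> float:
    """Convert a logarithmic [X/H] abundance to a linear fraction of solar."""
    return 10.0 ** bracket

# Placeholder O I column (log10 cm^-2) and the adopted N(H I); solar O/H is an assumption
o_h = bracket_abundance(log_N_X=14.9, log_N_H=math.log10(4.2e18), solar_12_log=8.69)
print(f"[O/H] = {o_h:+.2f}  ->  {to_linear(o_h):.2f} x solar")

# Abundances quoted in this discussion convert the same way
print(f"[N/H] = -1.94 -> {to_linear(-1.94):.3f} x solar; [Fe/H] = -0.3 -> {to_linear(-0.3):.2f} x solar")
```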
Here we present new constraints on the metallicity as well as the physical conditions, structure, and nature of high-velocity cloud Complex C from UV absorption lines. Our analysis is mainly based on high-resolution echelle spectroscopy of 3C 351 obtained with STIS, but we make use of high-resolution observations of nearby sight lines to augment the results with spatial information, e.g., on the transverse extent of the absorbing gas. We also briefly discuss intermediate-velocity gas in the 3C 351 direction. The paper is organized as follows: We begin with a summary of 21 cm emission observations of high-velocity gas in the direction of 3C 351 (§2), followed by a brief summary of the STIS observations (§3) and the absorption line measurements (§4). We then argue that ionization corrections are likely to be important for these clouds, and we employ collisional and photoionization models to constrain their abundances (§5). The abundances indicate that Complex C and the intermediate-velocity gas likely have different origins; we suggest that Complex C is an infalling, low-metallicity cloud while the IVC is outflowing gas from the Perseus spiral arm. In the discussion of our measurements, we consider the implications of the density and size of the absorbing gas derived from the ionization models, including the confinement of the gas (§6.1), and we compare 3C 351 to other nearby sight lines in the vicinity of Complex C (§6.2). We find evidence that the lower-latitude portion of Complex C is more affected by interactions with the ambient medium than the higher-latitude region of the cloud. We summarize our conclusions in §7. Throughout this paper we present spectra and velocities relative to the Local Standard of Rest (LSR).
High-Velocity Clouds Toward 3C 351
The sight line to 3C 351 (l = 90.08°, b = +36.38°) probes an intriguing region of the high-velocity sky. A portion of the Hulsbosch & Wakker (1988) 21 cm emission map of Complex C, centered on 3C 351, is shown in Figure 1. In this figure, the color scale shows the LSR gas velocity while the contours indicate brightness temperature. The lowest contour shown corresponds to N(H I) = 2 × 10 18 cm −2 . Complex C shows several well-defined cores; the 3C 351 sight line is roughly equidistant in projection from the CIB, CD, and CK cores [see Wakker (2001, and references therein) for definitions of the cores and nomenclature]. CK has a lower velocity than most of Complex C and may be more closely associated with the intermediate-velocity cloud Complex K than the high-velocity Complex C cloud (see Wakker 2001 and Tufte 2001). Figure 2 shows the intermediate-velocity 21 cm sky in the vicinity of 3C 351. The map in Figure 2 plots H I column densities derived from the 21 cm Leiden-Dwingeloo Survey (LDS; Hartmann & Burton 1997) integrated over −100 ≤ v LSR ≤ −55 km s −1 . At l ≳ 100°, the intermediate-velocity sky is dominated by the "IV Arch", but at lower latitudes it is unclear whether the emission is associated with the IV Arch, Complex K, or a lower-velocity region of Complex C (see Wakker 2001). We provide evidence below that the CK IVC and Complex C proper have different origins (§§5.4, 6). We shall refer to the 3C 351 intermediate-velocity absorption system as C/K hereafter and Complex C proper (v LSR ≈ −130 km s −1 ) as simply Complex C.
A notable feature of Complex C is the presence of 21 cm emission at substantially higher velocities (v LSR ≤ −170 km s −1 ) along the central ridge of the cloud, superimposed on the main HVC emission (−155 ≲ v LSR ≲ −80 km s −1 ). The higher velocity emission is indicated with half-circles in Figure 1 (see also Figure 6 in Wakker 2001). The nature of the higher velocity component is not entirely clear, but it has a similar morphology to the lower velocity Complex C gas, and it is centered on Complex C. For convenience, we refer to this cloud as the "high-velocity ridge" (HVR) in this paper. A variety of UV absorption lines are detected from the high-velocity ridge in the 3C 351 spectrum as well as Complex C, and unlike C vs. C/K, the UV absorption lines provide evidence that the HV ridge and Complex C are closely related (see below).
The H I column densities in Complex C and the high-velocity ridge are shown in Figures 3 and 4, which show 21 cm emission maps from the LDS integrated over −155 ≤ v LSR ≤ −100 km s −1 (Complex C proper) and −205 ≤ v LSR ≤ −170 km s −1 (high-velocity ridge). In these figures, the color scale and thin contours reflect N(H I) according to the scale at the bottom. The last thin contour is drawn at the 3σ limit of the LDS, N(H I) = 5 × 10 18 cm −2 . The thick line shows the 2 × 10 18 cm −2 contour from the more sensitive Hulsbosch & Wakker map in Figure 1 integrated over the same velocity range. Much of the high-velocity emission detected closer to the plane in Figures 1-3 is mainly associated with the Outer Arm and the warp of the outer Milky Way (Wakker 2001, and references therein).
Fig. 1.-Portion of the Hulsbosch & Wakker (1988) 21 cm emission map of Complex C, in Galactic coordinates (longitude increases from right to left along the x-axis). This map is centered on the sight line to 3C 351 and shows only a portion of the cloud (for a map of the entire HVC, see Figure 6 in Wakker 2001). Emission from high-velocity gas in the Outer Arm is also evident in this map closer to the plane; at l > 90° and b < 30°, the high-velocity emission is dominated by the Outer Arm. At lower longitudes the distinction is less clear; the line from (l, b) = (70, 25) to (88, 29) roughly delineates these structures. LSR velocities are indicated by color using the scale at the bottom of the figure, and the contours show brightness temperatures of 0.05, 0.4, and 1.0 K. The half-circles show the high-velocity ridge at v LSR ≈ −200 km s −1 ; this higher-velocity feature is clearly detected in absorption toward 3C 351 (see text and Figures 6-8). The emission cores CIA, CIB, CeI-CeIV, CK, CD, and D are also labeled along with other extragalactic sight lines of interest discussed in this paper.
Fig. 2 (caption, in part).-Emission in this velocity range is dominated by the "IV Arch" at l ≳ 98°; emission at lower latitudes may be associated with Complex K (see §4.24 and §4.27, respectively, in Wakker 2001) or the Perseus Arm (see text, §6.2.3).
Fig. 3.-Similar to Figure 1, but based on data from the Leiden-Dwingeloo Survey (Hartmann & Burton 1997). Emission at b ≲ 27° is mainly from the Outer Arm; higher-latitude emission is associated with Complex C. In this figure, colors indicate the H I column density as shown by the scale at the bottom, integrated from v LSR = −155 to −100 km s −1 . The lowest contour from the LDS data corresponds to the 3σ limit of ∼ 5 × 10 18 cm −2 . The thick line indicates the N(H I) = 2 × 10 18 cm −2 contour from the Hulsbosch & Wakker map shown in Figure 1.
A potentially serious source of systematic error in HVC abundance measurements is the large beam of the 21 cm observations usually used to estimate N(H I). The large radio beam may dilute the HVC 21 cm emission, or it may show emission from gas which is in the radio beam but is not present along the pencil beam to the UV source. Comparisons of N(H I) measurements from different radio telescopes (with different beam sizes) indicate that the 10' Effelsberg beam is often sufficiently small to provide a reliable H I column density, but there are cases where even the Effelsberg beam is too large (e.g., Mrk 205, see Wakker 2001). Therefore it is worthwhile to review the available N(H I) measurements for the 3C 351 HVCs. The HVC 21 cm emission profiles in the direction of 3C 351 observed with the NRAO 43m telescope (Murphy, Sembach, & Lockman 2002) and the Effelsberg 100m telescope (Wakker et al. 2001) are shown in Figure 5. H I column densities in the 3C 351 HVCs derived from these data are summarized in Table 1 along with N(H I) from the LDS data. The uncertainties listed in Table 1 are statistical errors only. Wakker et al. (2001) note that the systematic uncertainties in the 3C 351 H I column densities are likely much larger than the statistical errors: they estimate that systematic errors lead to an uncertainty of ∼ 1.5 × 10 18 cm −2 in N(H I). With this uncertainty, the N(H I) measurements in Table 1 for Complex C are in reasonable agreement. For Complex C, we adopt N(H I) from the smallest-beam observation (Effelsberg), but with the larger systematic uncertainty reported by Wakker et al., i.e., N(H I) = (4.2 ± 1.5) × 10 18 cm −2 . However, while both Complex C proper and the high-velocity ridge are apparent in the NRAO and LDS data, only Complex C is clearly detected in the Effelsberg observation. This suggests that the observations with larger beams are picking up emission which is not present along the pencil beam to 3C 351. Consequently, we take the NRAO Green Bank N(H I) as an upper limit for the high-velocity ridge, and we derive lower limits on the HV ridge abundances below. Similarly, the H I column for Complex C/K is highly uncertain given the systematic uncertainty above, and we can only place lower limits on the metallicity of this IVC. However, these lower limits will turn out to be useful (§§5.3, 5.4).
Fig. 5 (caption, in part).-HVC 21 cm emission profiles in the direction of 3C 351 observed with (a) the NRAO 43m telescope (Murphy, Sembach, & Lockman 2002), and (b) the Effelsberg 100m telescope.
Table 1 note.-Listed column density uncertainties include only statistical error, and uncertainties are not reported by Lockman et al. (2002) for the Green Bank 43m measurements. Wakker et al. (2001) discuss several sources of systematic error, and they estimate that these lead to an uncertainty of ∼ 1.5 × 10 18 cm −2 in N(H I).
Fig. 6.- Portion of the FUV spectrum of 3C 351 obtained with STIS in the E140M echelle mode (FWHM ≈ 7 km s⁻¹) plotted versus observed wavelength. For display purposes, the spectrum has been binned (2 pixels into 1) in this figure (all other figures, as well as the line measurements, make use of the full-resolution, unbinned spectrum). This region of the spectrum shows the absorption profiles of the O I λ1302.2 and Si II λ1304.4 lines due to the ISM of the Milky Way as well as several extragalactic absorption lines. Four main components are readily apparent in the Galactic O I and Si II profiles; these are indicated with tick marks immediately above the spectrum. We are mainly interested in the high-velocity absorption lines associated with Complex C and the "high-velocity ridge" (HVR, see §2), which are marked with longer tick marks. Absorption arising in the intermediate-velocity cloud Complex C/K and local gas is also identified with short ticks. The thin solid line indicates the fitted continuum with the predicted Lyδ lines due to the associated absorption systems at z_abs = 0.3709 and 0.3719 superimposed (see Yuan et al. 2002). The other extragalactic lines in this wavelength range are identified as Lyα at z_abs = 0.0699 and 0.0715 (see text for further details).
STIS Echelle Spectroscopy
We now turn to the UV absorption line data. The STIS observations of 3C 351 are fully described in §2 of Yuan et al. (2002). Briefly, the QSO was observed with the E140M echelle mode with the 0.″2 × 0.″06 slit; this mode provides 7 km s⁻¹ resolution (FWHM) and covers the 1150−1710 Å range with only a few gaps between orders at the longest wavelengths (see Woodgate et al. 1998 and Kimble et al. 1998 for further details). The data were reduced following standard procedures with software developed by the STIS Investigation Definition Team. Scattered light was removed using the procedure developed by the STIS Team (Bowers et al. 1998), which corrects for echelle scatter as well as other sources of scattered light.
A sample of the final spectrum is shown in Figure 6. This somewhat complicated portion of the spectrum shows the important Milky Way O I λ1302.2 and Si II λ1304.4 absorption profiles as well as several extragalactic absorption lines. Absorption profiles of several species of interest are plotted versus LSR velocity in Figure 7, along with the H I 21 cm emission observed in the direction of 3C 351 by Wakker et al. (2001) with the 100 m Effelsberg telescope. The UV absorption lines show four distinct components. The two highest-velocity components at v_LSR = −128 and −190 km s⁻¹ are close to the observed 21 cm velocities of Complex C and the high-velocity ridge. Similarly, the UV lines at v_LSR ≈ −82 km s⁻¹ are close to the velocity of IVC C/K. Analysis of the Complex C/K absorption lines is complicated by substantial blending with adjacent components. Nevertheless, the data indicate that IVC C/K has a higher metallicity than HVC Complex C (§ 5.4).
In this paper, we are primarily interested in Complex C and the high-velocity ridge. However, absorption from other Galactic and extragalactic clouds can introduce systematic errors that must be considered. For example, 3C 351 has a dramatic "associated" absorption system (i.e., z abs ≈ z QSO ) that affects the spectrum near the N V, O VI, and Lyman series lines at the redshift of the QSO (Yuan et al. 2002, and references therein). The associated Lyδ lines at z abs = 0.3709 and 0.3719 fall near the Galactic O I λ1302.2 profile, which is a crucial profile for our analysis. We can predict the strength of these contaminating Lyδ lines based on fits to the other Lyman series lines in the STIS spectrum from Yuan et al. (2002). The thin solid line in Figure 6 shows the predicted Lyδ lines superimposed on the fitted continuum. Fortunately, as can be seen from Figure 6, these associated Lyδ lines have no impact on the high-velocity O I lines of primary interest. The nearby Lyα lines at z abs = 0.0715 and particularly 0.0699 could be a more serious source of contamination than the associated Lyδ lines. These Lyα lines appear to be mostly separated from the Milky Way O I and Si II profiles, but since Lyα lines are clustered on velocity scales of a few hundred km s −1 (Penton et al. 2000), it remains possible that unseen Lyα lines clustered around the strong z abs = 0.0699 and 0.0715 absorbers affect the Galactic O I and/or Si II profiles in Figure 6. However, a detailed comparison of the Milky Way profiles in the following section indicates that there is little or no contamination present in the O I and Si II lines.
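To give a feel for the numbers involved in such contamination estimates: on the linear part of the curve of growth, the equivalent width of a weak line scales as W_λ ∝ N f λ². A minimal sketch is below; the H I column adopted for a hypothetical unseen interloper is illustrative only and is not a fitted value from Yuan et al. (2002).

    # Optically thin (linear curve-of-growth) equivalent width:
    # W_lambda = 8.85e-13 * N * f * lambda^2  (cgs; lambda in cm).
    def weak_line_ew_mA(N_cm2, f_osc, wav_ang):
        """Equivalent width in milli-Angstroms for an unsaturated line."""
        lam_cm = wav_ang * 1.0e-8
        return 8.85e-13 * N_cm2 * f_osc * lam_cm**2 * 1.0e11  # cm -> mA

    N_HI = 1.0e13  # cm^-2; hypothetical column for an unseen weak absorber
    print(f"Ly-alpha: {weak_line_ew_mA(N_HI, 0.4164, 1215.67):.0f} mA")  # ~55 mA
    print(f"Ly-delta: {weak_line_ew_mA(N_HI, 0.01394, 949.74):.1f} mA")  # ~1 mA

This is only the weak-line limit; for the strong associated systems the full line profile must be computed, as done in the text.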
The Galactic Si II λ1193.3 transition is badly blended with C III λ977.0 absorption from a Lyman limit system at z abs = 0.2210, so we do not use this transition for the HVC analysis. Also, the Milky Way Si II λ1190.4 and S III λ1190.2 transitions are highly blended, and we do not trust information from either of these lines. Finally, we note that the Galactic Si II λ1260.4 and C II λ1334.5 transitions are severely saturated and consequently do not provide useful constraints.
Absorption Line Measurements
To measure the line column densities and to assess the impact of unresolved saturation on the absorption lines of interest, we mainly rely on the apparent column density technique (Savage & Sembach 1991), but some supporting measurements are also derived from Voigt profile fitting, using the software of Fitzpatrick & Spitzer (1997) with the line spread functions from the STIS Instrument Handbook (Leitherer et al. 2001). The apparent column density is constructed using the following expression,

    N_a(v) = (m_e c)/(π e² f λ) τ_a(v) = 3.768 × 10¹⁴ ln[I_c(v)/I(v)]/(f λ).   (1)

This provides the instrumentally broadened column density per unit velocity [in atoms cm⁻² (km s⁻¹)⁻¹], where f is the transition oscillator strength, λ is the transition wavelength (the numerical coefficient is for λ in Å), I(v) is the observed line intensity, and I_c(v) is the estimated continuum intensity at velocity v. By comparing two or more resonance lines of a given species with adequately different fλ values, the profiles can be checked for unresolved saturation and a correction can, in some cases, be applied (Jenkins 1996). If the N_a(v) profiles of the resonance lines are in good agreement, then the lines are not affected by unresolved saturation, and the profiles can be integrated to obtain the total column, N_tot = ∫ N_a(v) dv.
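Since the N_a(v) method is used throughout the analysis below, a minimal numerical sketch may help fix ideas. This is not the authors' reduction code; the velocity grid, the toy absorption profile, and the Si II λ1526.71 oscillator strength (f ≈ 0.133) are illustrative assumptions.

    import numpy as np

    def apparent_column_density(v, flux, continuum, f_osc, wav_ang):
        """N_a(v) of Savage & Sembach (1991), in atoms cm^-2 (km/s)^-1.

        v: velocity grid [km/s]; flux: I(v); continuum: I_c(v);
        f_osc: oscillator strength; wav_ang: wavelength [Angstrom].
        """
        tau_a = np.log(continuum / flux)  # apparent optical depth
        return 3.768e14 * tau_a / (f_osc * wav_ang)

    def total_column(v, na, vmin, vmax):
        """Integrate N_a(v) over [vmin, vmax] to get N_tot [cm^-2]."""
        mask = (v >= vmin) & (v <= vmax)
        return np.trapz(na[mask], v[mask])

    # Toy spectrum standing in for the STIS data near Si II 1526.71:
    v = np.linspace(-300.0, 100.0, 801)
    continuum = np.ones_like(v)
    flux = 1.0 - 0.45 * np.exp(-0.5 * ((v + 128.0) / 12.0) ** 2)
    na = apparent_column_density(v, flux, continuum, f_osc=0.133, wav_ang=1526.71)
    print(f"log N ~ {np.log10(total_column(v, na, -150.0, -110.0)):.2f}")

Comparing such N_a(v) profiles for two transitions with different fλ is exactly the saturation test applied to the Si II and O I lines below.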
Notes to Table 2:

c Integrated equivalent width and apparent column density (in cm⁻²). For the high-velocity ridge, all quantities are integrated from v_LSR = −230 to −155 km s⁻¹, and the Complex C absorption lines are integrated over −150 ≤ v_LSR ≤ −110 km s⁻¹. Equivalent widths in parentheses have less than 3σ significance, and we do not consider these lines to be reliably detected.

f Logarithmic abundance obtained by applying the ionization correction from the best-fitting models presented in the text. In this table we have used the collisional ionization equilibrium corrections, but very similar results are obtained from the CLOUDY model (see §§ 5.2.1, 5.2.2, 5.3). Error bars include column density uncertainties and solar reference abundance uncertainties but do not reflect uncertainties in the ionization correction.

g 4σ upper limit assuming the linear curve of growth applies.

h The Voigt profile fitting column is from a simultaneous fit to the 1304.37 and 1526.71 Å lines. Similarly, we combined the information from these two transitions for the directly integrated column. For the final integrated Si II column density, we adopt the following weighted averages of the 1304.37 and 1526.71 Å measurements. Complex C: log N(Si II) = 13.76 ± 0.03; high-velocity ridge: log N(Si II) = 13.46 ± 0.06.

i Formal error bars may underestimate the true uncertainty due to strong blending with lower-velocity gas (see Figure 7).

j Strongly saturated absorption.

Notes to Table 3 (Intermediate-Velocity Absorption Lines toward 3C 351; format as in Table 2):

c Logarithmic abundance obtained by applying the ionization correction from the best-fitting CLOUDY model presented in the text (§ 5.4). Error bars include column density uncertainties and solar reference abundance uncertainties but do not reflect uncertainties in the ionization correction.
d Line is strongly blended with adjacent absorption features.
e Saturated absorption line.
f Simultaneous fit to the Si II λ1304.37 and λ1526.71 lines.
g Apparent column density may underestimate the true column due to unresolved saturation.
h Poorly constrained result due to strong blending.
As noted in §1, the most important species for metallicity measurements is O I. The only O I transition available in the direction of 3C 351 is the 1302.2 Å line.¹² In the high-velocity ridge, N(O I) appears to be well-constrained from the 1302.2 Å profile. In Complex C, on the other hand, the 1302.2 Å line does not go to zero flux in the core, but it is quite strong (see Figs. 6−7). Consequently, it is important to consider whether the Complex C O I column might be underestimated due to unresolved saturation. We believe that the O I line is not seriously saturated based on the following evidence. We first note that the Si II lines show no indications of unresolved saturation. Figure 8a compares the N_a(v) profiles of the 1304.4 and 1526.7 Å lines of Galactic Si II, transitions that differ in fλ by 0.23 dex. These Si II N_a(v) profiles are in good agreement,¹³ which indicates that the lines are not significantly saturated. If O I is affected by unresolved saturation, then the O I N_a(v) profile should differ from the Si II N_a(v) profiles due to the underestimation of N(O I) in the saturated pixels. Figure 8b compares the O I apparent column density to a weighted composite Si II N_a(v) profile constructed from the 1304.4 and 1526.7 Å profiles following the procedure of Jenkins & Peimbert (1997), with the O I profile scaled down by a factor of five. From Figure 8b, we see that the O I and Si II profiles have very similar shapes. This suggests that there is little unresolved saturation within the O I profile. We notice from Table 2 that Voigt profile fitting gives a somewhat higher N(O I) than direct N_a(v) integration, although the O I columns from the two methods agree within the 1σ uncertainties. This could reflect a small amount of saturation. To be conservative, we adopt the higher N(O I) value for our subsequent analysis. However, we will also consider the lower O I column in our analysis, and we will find that our conclusions are essentially the same.

The comparison of the O I and Si II N_a(v) profiles in Figure 8b also indicates that there is little contamination of the O I lines from the nearby Lyα absorbers discussed in §3 and shown in Figure 6. Significant optical depth from such contamination would cause the O I and Si II profiles to have different shapes at velocities where the contamination occurs. The O I and Si II profiles have similar velocity structure, and moreover the O I/Si II column density ratio is very similar in Complex C and the high-velocity ridge. The Al II and Si II N_a(v) profiles are also remarkably alike, as shown in Figure 8d. The Fe II N_a(v) profile is compared to the composite Si II profile in Figure 8c. Fe II is clearly detected in Complex C, but the substantially greater noise limits comparisons of C and the HV ridge for this species. The similarity of the O I, Si II, and Al II profiles in Figure 8 would seem to suggest that the physical conditions and relative abundances in Complex C and the high-velocity ridge are roughly the same. However, the behavior of the higher ionization stages (Si IV and C IV) is entirely different in Complex C and the high-velocity ridge, as shown in Figure 8e−f. Very little high-ion absorption is apparent in the velocity range of Complex C, but Si IV and C IV are clearly detected in the high-velocity ridge, with component structure similar to that of the lower ionization stages. The Si IV and C IV profiles therefore indicate that there is an important difference in the physical conditions of Complex C and the HV ridge. We discuss this issue further in the following section.

¹² In principle, weaker O I transitions can be observed with FUSE at 915 < λ < 1050 Å. Unfortunately, there is not sufficient flux from the QSO in this wavelength range for absorption line measurements because the spectrum is strongly attenuated below 1115 Å by an optically thick Lyman limit absorber at z_abs = 0.2210 (see Mallouris et al., in preparation).

¹³ Unresolved saturation would be manifested by a higher apparent column in the weaker transition (1304.4 Å) compared to the stronger transition. The highest outlying point in the N_a(v) profiles in Figure 8a is from the 1526.7 Å transition, the opposite of the expected effect.
Abundances and Ionization
We next consider the implications of the measurements presented above. We first argue that ionization corrections are important in the 3C 351 HVCs (§ 5.1). We then investigate the absolute and relative abundances in Complex C and the high-velocity ridge with the aid of ionization models (§§ 5.2−5.4).
Preliminary Remarks
In general, the logarithmic abundance of species X with respect to species Y is

    [X/Y] = log[N(X_i)/N(Y_i)] + log[f(Y_i)/f(X_i)] − log(X/Y)_⊙,   (2)

where N(X_i) and f(X_i) are the column density and ion fraction of the i-th ionization stage of species X (and likewise for Y), and (X/Y)_⊙ is the solar reference abundance.¹⁴ In many situations, the ionization correction term, log[f(Y_i)/f(X_i)], decreases as the gas becomes more highly ionized. However, when we state that the "ionization correction is decreasing", this does not indicate that the ionization correction is becoming negligible; this only indicates that f(Y_i)/f(X_i) is decreasing. In this paper, to distinguish between measurements that have been corrected for ionization and those which have not, we adopt the following notation: uncorrected abundances are indicated by the observed ion, e.g., [Si II/H I], while measurements which have had an ionization correction applied are indicated without reference to a particular ion, e.g., [Si/H]. The penultimate columns of Tables 2−4 list the uncorrected abundances implied by our column density measurements. The last columns of these tables summarize the ionization-corrected abundances, using the ionization models described in the sections below. For the high-velocity ridge and IVC C/K, we list lower limits on the abundances because we only have upper limits on N(H I) for these components (see § 2). Ionization corrections can be a large source of uncertainty; we present examples below. The N(H I) measurements are often the other main source of uncertainty. We include our best estimates of the N(H I) uncertainty in our abundance error bars. However, systematic errors in N(H I) due to radio beam effects can be much larger than statistical errors, and these uncertainties can be difficult to assess.
14 We adopt the solar abundances reported by Holweger (2002) for the most abundant elements. The oxygen abundance from Holweger is in excellent agreement with the independent solar oxygen measurement reported by Allende Prieto, Lambert, & Asplund (2001), and these solar abundances are close to the interstellar oxygen measurements in the vicinity of the Sun (Sofia & Meyer 2001). Solar abundances of sulfur and aluminum are taken from Grevesse, Noels, & Sauval (1996).
If we neglect the ionization correction for the Complex C absorption lines toward 3C 351, we obtain highly discrepant results from neutral-gas tracers compared to species that can persist in ionized gas, i.e., the different ions imply significantly different metallicities. If correct, these abundances would indicate a very unusual relative abundance pattern. It is more likely that these discrepancies indicate that ionization corrections are important for the 3C 351 sight line. These abundance discrepancies are larger than the uncertainties in the respective measurements, as shown in Figure 9. In this figure we plot relative abundance measurements or limits, with respect to Si II, for Complex C, the high-velocity ridge, and IVC C/K derived directly from the O I, N I, Fe II, Al II, and S II lines, i.e., without ionization corrections. We plot abundances relative to Si II because we do not have a robust N(H I) measurement in the high-velocity ridge or IVC C/K, and because Si II is the best-constrained species in Complex C. The abundances in Figure 9 are arranged according to ionization potential. Again, we see evidence of ionization effects in all three of these clouds: the neutral species O I and N I are significantly underabundant with respect to Si II while ionized-gas tracers (Fe II and Al II) are much closer to the solar pattern.
Collisional Ionization
Can we reconcile the various abundances by applying ionization corrections? Since we have reasons to believe that collisional processes may be important (see §1), and in order to make our development more clear, we begin with a simplified picture where collisional ionization equilibrium applies, with no additional photoionization from an energetic radiation field (we will add photoionization in the next section). Figure 10 shows relevant column density ratios, as a function of temperature, derived from the equilibrium collisional ionization calculations of Sutherland & Dopita (1993) with solar reference abundances from Holweger (2001) and Grevesse et al. (1996). 15 The observed ratios in Complex C are overplotted for comparison. We see that over the temperature range in which the O I/Si II ratio is reproduced (within 1σ), the model does not exactly match the other observed ratios; departures from solar relative abundances are required. Nitrogen, for example, must be made underabundant with respect to silicon by at least 0.5 − 0.7 dex (an underabundance moves the model curves down in Figure 10). This may not be surprising since there is other evidence that nitrogen is underabundant in Complex C (Richter et al. 2001;Collins et al. 2003), and nucleosynthesis in low-metallicity gas can result in N underabundances (e.g., Henry et al. 2000). Similarly, Figure 10 indicates that aluminum must be underabundant if the gas is collisionally ionized and in equilibrium. Like nitrogen, Al could be underabundant, although much less so, due to nucleosynthesis effects since it is an odd-Z element, as observed in low-metallicity Galactic field stars (e.g., Lauroesch et al. 1996). An Al underabundance in the gas phase could alternatively be due to depletion by dust since aluminum is highly refractory (e.g., Jenkins 1987). However, this would be inconsistent with the iron abundance: Fe II should also be depleted in this case, but Figure 10 seems to require a slight Fe overabundance, by ∼0.2 dex, which leaves little room for dust depletion.
The only puzzling implication of Figure 10 is, in fact, the Fe overabundance. Nitrogen and Al underabundances would imply, at face value, that the gas is relatively pristine. However, in this case Fe should be underabundant as well since iron is thought to be mainly synthesized on longer timescales in Type Ia supernovae. We note the Fe II profile is the noisiest measurement among the detected lines (compare Figure 8c to the other panels in Fig. 8), and the observed Fe II/Si II ratio is only 2σ above the model prediction at log T ∼ 4.2. The slight Fe overabundance may simply be a result of noise. In fact, we shall see below that when uncertainties in the reference abundances as well as the column density measurements are taken into account and ionization corrections are applied, the final abundances agree within 1σ, except nitrogen. However, it is clear that iron is not highly underabundant due to dust depletion or nucleosynthesis effects. We also note that Murphy et al. (2000) find a relatively high Fe II/H I ratio for Complex C in the direction of Mrk 876. For Figure 10, we assumed solar relative abundances. If we were to adopt overabundances of the α−elements (O I, Si II, and S II) by ∼0.3 dex, on the grounds that such patterns are observed in lowmetallicity stars (McWilliam 1997, and references therein), then we would have a substantial discrepancy between the observed and model Fe II columns.
For the calculation of abundances using equation 2, we present in Figure 11 the ionization corrections obtained from the collisional ionization equilibrium calculations of Sutherland & Dopita (1993). The O I/Si II ratio suggests that log T ≈ 4.20; taking the ion fractions from Figure 11 at this temperature, with the column densities from Table 2 and N(H I) = (4.2 ± 1.5) × 10¹⁸ cm⁻², we obtain the following abundances for Complex C proper: [Si/H]_C = −0.73 +0.20/−0.14 and [Fe/H]_C = −0.54 +0.22/−0.17 (the remaining ionization-corrected abundances are listed in Table 2). The error bars in these abundances include the uncertainties in the column densities and the solar reference abundances but do not include the uncertainties in the ionization correction.

Fig. 10.- Column density ratios predicted in collisional ionization equilibrium vs. gas temperature, based on the calculations of Sutherland & Dopita (1993) with solar relative abundances from Holweger (2001) or Grevesse et al. (1996). Ratios of various species to N(Si II) are shown with small symbols according to the key on the right side. Observed ratios are plotted at log T = 4.2 with larger symbols and 1σ error bars. Arrows indicate 4σ upper limits.

Fig. 11.- Ionization corrections for collisionally ionized gas in equilibrium from the calculations of Sutherland & Dopita (1993) vs. log T. As in Figure 10, the species corresponding to each curve is indicated by the key at the right. The H I ion fraction (f = H I/H_total) is shown with a heavy black line using the scale on the right axis. The Al II and S II curves are very similar because over this temperature range, f(Al II) ≈ f(S II) ≈ 1, so the ionization correction is nearly equal to f(H I).
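As a quick arithmetic cross-check of equation 2, the silicon abundance can be recomputed from the adopted columns. The ion fractions and solar silicon abundance below are illustrative inputs rather than values tabulated in this paper: f(H I) = 0.43 is quoted in § 6.1 for log T = 4.2, while f(Si II) ≈ 0.9 and log(Si/H)_⊙ ≈ −4.46 are rough assumed numbers.

    import math

    logN_SiII = 13.76   # adopted log N(Si II) for Complex C (Table 2)
    N_HI = 4.2e18       # adopted N(H I) [cm^-2] (Section 2)

    # Assumed ion fractions at log T = 4.2 and assumed solar Si abundance:
    f_HI, f_SiII = 0.43, 0.90
    log_Si_H_sun = -4.46

    # Equation 2: [Si/H] = log[N(Si II)/N(H I)] + log[f(H I)/f(Si II)] - log(Si/H)_sun
    uncorrected = logN_SiII - math.log10(N_HI) - log_Si_H_sun   # [Si II/H I]
    corrected = uncorrected + math.log10(f_HI / f_SiII)         # [Si/H]
    print(f"[Si II/H I] = {uncorrected:+.2f}, [Si/H] = {corrected:+.2f}")
    # -> [Si/H] ~ -0.7, consistent with the [Si/H]_C = -0.73 quoted above.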
We note that application of the ionization correction removes the abundance discrepancies discussed above. After the ionization correction at log T = 4.2 has been applied, all of the abundances agree within the 1σ uncertainties, with the notable exception of nitrogen. The implied metallicity is Z = 0.1 − 0.3 Z ⊙ . The nitrogen underabundance suggests that the gas has undergone relatively few cycles of metal enrichment from stars. The magnitude of the nitrogen discrepancy decreases if we use the lower value for N(O I) from N a (v) integration (see Table 2), and the best estimate of [O/H] decreases as well, by ∼ 0.2 dex. However, since we only have an upper limit on N(N I), we would still be faced with a significant nitrogen deficit. It would be valuable to obtain additional observations of 3C 351 to more securely constrain the nitrogen situation (see additional discussion in § 6.2.1), and to measure the Fe abundance with less noise.
The combination of low overall metallicity and a nitrogen underabundance argues that Complex C is not ejecta produced in a Galactic fountain, which would produce gas with a higher metallicity. One might suppose that the HVC could still be part of a Galactic fountain if, during the fountain cycle, the gas is substantially diluted with low-metallicity gas or if the fountain flow started at large Galactocentric radii, as suggested by Gibson (2002). However, both of these hypotheses have trouble explaining the nitrogen underabundance, because N is not underabundant in the disk ISM (e.g., Meyer, Cardelli, & Sofia 1997), even at larger Galactocentric radii (Afflerbach, Churchwell, & Werner 1997). It seems more likely that Complex C has an extragalactic origin. It could be stripped gas from a satellite galaxy, or it could have a more distant origin.
Photoionization
Although there is strong evidence that collisional processes are important in HVCs (see §1), it is worthwhile to investigate how additional photoionization could alter the observed column densities. In addition, it remains possible that the Hα emission and high-ion absorption arise from a collisionally ionized phase (e.g., an interface on the surface of the HVC) while the low ionization lines originate inside the cloud where the gas is mainly photoionized.
For this purpose, we have constructed photoionization models using CLOUDY (v94.0; Ferland et al. 1998). The character of the radiation field to which an HVC is exposed is not entirely clear. This depends on the location of the HVC and the fraction of the ionizing photons that escape from the Galactic disk (Weiner et al. 2002; Bland-Hawthorn & Maloney 2002). However, the extragalactic UV background from quasars and active galactic nuclei provides a floor; additional photons from local sources only add to this background. Consequently, we begin with the UV background from QSOs+AGNs only. We model the Complex C absorber as a plane-parallel, constant-density slab exposed to the QSO background at z = 0 from Haardt & Madau (1996) with J_ν = 1 × 10⁻²³ ergs s⁻¹ cm⁻² Hz⁻¹ sr⁻¹ at 1 Rydberg (see Weymann et al. 2001 and references therein for observational constraints on the UV background intensity). The absorber thickness is adjusted to reproduce the observed N(H I), and the metallicity and ionization parameter U (= H ionizing photon density/total H number density) are varied to match the metal column densities. It is important to note that the H I column is high enough so that self-shielding effects are important. These models should not be simply scaled for use on higher (or lower) N(H I) absorption systems.
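To make the ionization parameter concrete: for a power-law background, the ionizing photon density follows from the quoted normalization, and the best-fit U then fixes n_H. The spectral slope α ≈ 1.8 below is an assumption (a rough stand-in for the shape of the Haardt & Madau spectrum above 1 Ryd), not a value taken from the text.

    import math

    h, c = 6.626e-27, 2.998e10   # erg s, cm/s (cgs)
    J0 = 1.0e-23                 # erg/s/cm^2/Hz/sr at 1 Ryd (text value)
    alpha = 1.8                  # assumed slope: J_nu ~ (nu/nu0)^-alpha

    # n_gamma = (4 pi / c) * Int_{nu0}^inf (J_nu / h nu) dnu = 4 pi J0 / (c h alpha)
    n_gamma = 4.0 * math.pi * J0 / (c * h * alpha)
    print(f"n_gamma ~ {n_gamma:.1e} cm^-3")            # ~3.5e-7 cm^-3

    logU = -4.45                                       # best fit for Complex C
    print(f"n_H ~ {n_gamma / 10**logU:.1e} cm^-3")     # ~1e-2, cf. 9e-3 in Sec. 6.1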
Of course, these models are necessarily simplified compared to a real absorption system. However, a more detailed treatment of radiation transfer, geometry, and multiphase effects in the model presented by Kepner et al. (1999) in the end predicts very similar column densities to an analogous CLOUDY model. While this is only one test case, and more detailed modeling is highly warranted, the good agreement with CLOUDY is encouraging.

Figure 12 shows the metal column densities predicted by the CLOUDY model as a function of U, and Figure 13 shows the corresponding ionization corrections from the same model over the same ionization parameter range. The best fit is obtained with log U ≈ −4.45, as shown in Figure 12. At this value of U, the derived abundances are nearly identical to those derived above from the collisional ionization model. The only substantial difference between the photoionized and collisionally ionized models is for nitrogen. In the collisionally ionized model, all of the ionization correction factors of interest decrease as T increases and the gas becomes more highly ionized (see Figure 11). In the photoionized model, on the other hand, the N ionization correction increases while the other ionization corrections decrease as the gas becomes more highly ionized (see Figure 13); this is due to the relatively high photoionization cross section of N I. This reduces the nitrogen underabundance implied by the measurements somewhat compared to the collisional model. However, the best-fitting photoionized model still suggests an N underabundance: we find [N/H]_C ≤ −1.01 vs. [O/H]_C = −0.75 +0.17/−0.29. If we take the lower N(O I) from direct integration (see Table 2 and §4), then the best fit is obtained with a somewhat higher ionization parameter, log U ≈ −3.95 (log n_H ≈ −2.55). As in the collisionally ionized model, this requires a lower [O/H] by ∼0.2 dex, and the nitrogen deficiency is reduced. The iron problem is also present in the photoionized calculation: if we make α−elements overabundant in the model, then we find that the observed N(Fe II) is substantially greater than the predicted column density. This would be unexpected in low-metallicity gas, especially if nitrogen is underabundant.

Fig. 12.- Model of gas photoionized by the extragalactic UV background at z ∼ 0 with log N(H I) = 18.62, Z = 0.18 Z⊙, and relative abundances following the solar pattern from Holweger (2002). Model column densities are plotted with small symbols, shown in the key at the right, as a function of the ionization parameter U (lower axis) and the particle density (middle upper axis). The mean gas temperature is shown on the uppermost axis; note that the temperature scale is not linear. Observed column densities are indicated with larger symbols with 1σ error bars. Points with arrows are 4σ upper limits.

Fig. 13.- Ionization corrections from the photoionization model shown in Figure 12. As in Figure 11, the curves are identified at right, and the thick black line indicates f(H I) using the right axis.
The similarity of the photoionization and collisional ionization results is not necessarily surprising. CLOUDY includes collisional processes. The gas temperature in the photoionization model is governed by the balance of photoheating and various sources of cooling, and with the right combination of temperature and density, the collisional processes may dominate. The different behavior of N in the photoionized vs. collisionally ionized models is also not surprising. Nitrogen has a relatively large photoionization cross section, and the nitrogen + hydrogen charge exchange reaction is much weaker than that of oxygen. Consequently, N I is more readily photoionized than most species, and N I is a sensitive indicator of partially ionized gas (Sofia & Jenkins 1998). We experimented with other radiation fields in the photoionization model with various amounts of stellar flux added to the QSO background, and we found that the low-ion results did not change dramatically.
High-Velocity Ridge
The low-ion column density ratios in Complex C proper and the high-velocity ridge are strikingly alike (see Figures 8−9 and Table 2). This suggests that the physical conditions and abundances in Complex C and the HVR are quite similar. As discussed in §2, the H I column in the high-velocity ridge is not securely measured, but if we take the Green Bank measurement in Table 1 as an upper limit on N(H I), the resulting lower limits on the abundances (obtained with the ionization corrections shown in Figures 11 and 13) are consistent with the absolute metallicity derived for Complex C. However, the HV ridge contains substantially more highly ionized gas relative to the lower ionization stages. At the gas temperature and/or ionization parameter implied by the low-ion ratios in the HVR, Si IV and C IV have very small ion fractions and should be undetectable (see Table 5 in Sutherland & Dopita 1993 and Figure 12). Consequently, we conclude that the significant high-ion absorption lines observed in the high-velocity ridge originate in a separate phase from the low ionization stages. The similar component structure of the low and high ions in the HVR (see Figure 7 and Figure 8e−f) suggests that there is some association between the low-ionization and high-ionization phases; this could occur if the high ions arise in an interface between the low ion-bearing gas and a hotter ambient medium.
IVC Complex C/K
The absorption-line measurements for the IVC C/K are substantially more uncertain than the HVC measurements due to line saturation and blending with adjacent components. Nevertheless, the 3C 351 spectrum indicates that C/K has a higher metallicity than Complex C proper and the HV ridge. The equivalent widths and column densities are significantly higher in the C/K component than in the high-velocity components (compare Tables 2 and 3) despite the fact that the H I column is lower in C/K than in Complex C. Adopting the O I column from profile fitting (which provides better compensation for blending and saturation) and taking N(H I) < 4 × 10¹⁸ cm⁻² (see § 2), we find that [O/H]_C/K > −0.5. As we have shown in the previous sections, ionization corrections are likely to be important for the other species detected in C/K. We have modeled the ionization of the C/K gas using CLOUDY, and we find that the large deficit of N I with respect to Si II shown in Figure 9 is predominantly due to ionization. At the ionization parameter that provides the best fit to the nominal O I/Si II ratio (i.e., log U = −3.6), the implied nitrogen underabundance is small and marginally significant. The ionization-corrected abundances of the other detected species at this value of U are listed in the last column of Table 3. All abundance estimates in Table 3 are listed as lower limits since we only have an upper limit on N(H I) (§ 2).
Discussion
The measurements and modeling presented in the previous sections have some interesting implications. We begin our discussion with some comments on the structure and confinement of Complex C (§ 6.1). We then compare the 3C 351 sight line to other nearby sight lines through this HVC (§ 6.2). We examine the case for metallicity variations in Complex C (§ 6.2.1), and then present evidence that the lower latitude region of Complex C is interacting more vigorously with the ambient medium (§ 6.2.2). Finally, we investigate the relationship between intermediate- and high-velocity gas observed toward H1821+643 and the HVCs in the 3C 351 spectrum (§ 6.2.3).
The Structure and Confinement of Complex C
Our ionization models provide constraints on the physical conditions and dimensions of the absorber. The size of the absorber along the line-of-sight is L = N_H/n_H. For the high (low) values of N(O I) in Complex C from Table 2, the CLOUDY models that provide the best fits to the Complex C column densities (see § 5) have n_H = 9 × 10⁻³ (3 × 10⁻³) cm⁻³, L = 0.30 (2.0) kpc, and T ≈ 9300 (10,700) K. These densities and temperatures suggest that the cloud may not be gravitationally confined. Using eqn. 3 from Schaye (2001), we see that a self-gravitating cloud with these n_H and T values would have a much larger size, L = 5.3 (10) kpc. The self-gravitating cloud size may change by a small amount depending on the absorber geometry, as noted by Schaye. However, even with this small scale factor, the uncertainties in the parameters derived from the CLOUDY calculations do not appear to be sufficient to reconcile the large difference between the expected size for a self-gravitating cloud and the size implied by the ionization models. The HVC could still be self-gravitating if it is predominantly composed of dark matter. Following Schaye (2001), we have assumed that the fraction of the mass in gas is f_g ≈ Ω_b/Ω_m = 0.16 to calculate the self-gravitating cloud size above. The cloud could be made self-gravitating by reducing f_g to the order of 10⁻²−10⁻⁴. This may be unlikely, but it is not inconceivable. As an HVC plunges into a galaxy halo, ram pressure can separate the gas from its original dark matter halo (e.g., Quilis & Moore 2001). We may be viewing the small amount of residual gas in a dark matter halo that has already had most of its baryons stripped away. However, Quilis & Moore find that this requires the ambient medium to have a relatively high density, ≳10⁻⁴ cm⁻³, which they deem "unrealistic for a Galactic halo component". Their analysis, however, mainly considers HVCs at distances of 100 kpc. If Complex C is only ∼10 kpc away, the ambient halo density could be much higher, making this hypothesis more plausible. We note that the requirement of a small value for f_g to make the cloud self-gravitating does not strictly require dark matter. However, searches for other components in HVCs such as stars (e.g., Willman et al. 2002) or molecular hydrogen (e.g., Richter et al. 2002) have not produced detections despite good sensitivity.
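The size comparison can be checked numerically. The Jeans-like coefficient below (0.52 kpc at n_H = 1 cm⁻³, T = 10⁴ K, f_g = 0.16) is our reconstruction of eqn. 3 of Schaye (2001) rather than a number quoted in this paper, but it reproduces the 5.3 and 10 kpc figures given above.

    def schaye_LJ_kpc(n_H, T, f_g=0.16):
        """Self-gravitating cloud size; reconstruction of Schaye (2001) eqn. 3."""
        return 0.52 * n_H ** -0.5 * (T / 1.0e4) ** 0.5 * (f_g / 0.16) ** 0.5

    # Best-fit CLOUDY parameters for the high and low N(O I) cases:
    for n_H, T, L_model in [(9e-3, 9300.0, 0.30), (3e-3, 10700.0, 2.0)]:
        print(f"L(model) = {L_model} kpc vs L_J = {schaye_LJ_kpc(n_H, T):.1f} kpc")
    # -> L_J = 5.3 and 10 kpc, far larger than the 0.30 and 2.0 kpc model depths.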
On the other hand, we may be observing the other side of this process, i.e., our gas may have been recently stripped out of a dark matter halo and is now largely devoid of dark matter. In this case, there is no particular need to confine the gas; the stripped gas could then be an ephemeral cloud that will rapidly evaporate in the hot halo.
The purely collisionally ionized model (§ 5.2.1) does not provide a direct constraint on the number density and hence the size of the absorber, but we argue that the situation is similar. The collisional model does provide an estimate of the H I ion fraction f_HI, which sets the total hydrogen column through N_H = N(H I)/f_HI. The best fit from the collisional ionization model with the high N(O I) has f_HI = 0.43 and T = 10^4.2 K, so with N(H I) = 4.2 × 10¹⁸ cm⁻² we derive L = 130 kpc and n_H = 2.5 × 10⁻⁵ cm⁻³ from equations 9 and 10 for a gravitationally confined cloud. The angular extent of Complex C (roughly 20° × 90°) implies that the transverse size of the entire complex is ∼3.5 × 20 kpc if the cloud is 10 kpc away. Therefore L = 130 kpc requires an unlikely geometry: the cloud must be vastly larger along the line-of-sight than in the transverse direction. We reach the same conclusion with the parameters resulting from the lower O I column. This discrepancy could be reconciled, once again, by invoking a relatively small fraction of the mass in gas, f_g. The implied geometry would also be more plausible if Complex C were considerably farther than 10 kpc, but there are arguments against placing this HVC much farther away (Wakker et al. 1999a; Blitz et al. 1999). We conclude that the collisional ionization equilibrium model faces the same problems as the CLOUDY model.
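The 130 kpc figure follows from requiring L = N_H/n_H to equal the self-gravitating size at the given temperature. A sketch using the same reconstructed Schaye scaling as above (an assumption, not this paper's equations 9 and 10 verbatim) recovers the quoted numbers:

    KPC_CM = 3.086e21  # cm per kpc

    def self_grav_solution(N_HI, f_HI, T, f_g=0.16):
        """Solve N_H/n_H = L_J(n_H, T) for a self-gravitating cloud.

        Returns (n_H [cm^-3], L [kpc]); uses the reconstructed relation
        L_J = 0.52 kpc * n_H^-0.5 * (T/1e4 K)^0.5 * (f_g/0.16)^0.5.
        """
        N_H = N_HI / f_HI
        coeff_cm = 0.52 * KPC_CM * (T / 1.0e4) ** 0.5 * (f_g / 0.16) ** 0.5
        n_H = (N_H / coeff_cm) ** 2   # from N_H / n_H = coeff * n_H^-0.5
        return n_H, (N_H / n_H) / KPC_CM

    n_H, L = self_grav_solution(N_HI=4.2e18, f_HI=0.43, T=10**4.2)
    print(f"n_H ~ {n_H:.1e} cm^-3, L ~ {L:.0f} kpc")  # ~2.3e-5 cm^-3, ~130 kpc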
Of course, there are alternatives to gravitational confinement, and there is no requirement for HVCs to contain dark matter at all. Magnetic confinement may be important (Konz, Brüns, & Birk 2002). Pressure confinement by a hotter external medium is also a reasonable alternative. The widespread detection of high-velocity O VI absorption suggests that the Milky Way has a large, hot corona. Such a corona could provide the pressure confinement that we require. The densities and temperatures from the best CLOUDY models imply that the gas pressure in the Complex C component is p/k ≈ 60−170 cm⁻³ K. If the external confining medium has T_ext ∼ 10⁶ K, then its density must be n_ext ≲ 6 × 10⁻⁵ − 2 × 10⁻⁴ cm⁻³ in order to pressure confine the HVC. This density upper limit is in agreement with density constraints from other arguments (e.g., Wang 1992; Moore & Davis 1994; Weiner & Williams 1996; Murali 2000; Brüns, Kerp, & Pagels 2001). We note that these pressures are also consistent with constraints derived by Sternberg, McKee, & Wolfire (2002) on the pressure in the intragroup medium in the Local Group (based on H I structure properties in nearby dwarf galaxies).
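The pressure bracket follows directly from the model densities and temperatures. The factor of ≈2 converting n_H to the total particle density of largely ionized gas is our approximation:

    # Thermal pressure p/k = n_tot * T, with n_tot ~ 2 n_H once hydrogen is
    # mostly ionized (free electrons roughly double the particle count).
    models = [(9e-3, 9300.0), (3e-3, 10700.0)]          # best-fit (n_H, T)
    p_over_k = [2.0 * n * T for n, T in models]
    print("p/k ~ %.0f - %.0f cm^-3 K" % (min(p_over_k), max(p_over_k)))  # ~60-170

    T_ext = 1.0e6   # assumed coronal temperature [K]
    limits = [p / T_ext for p in p_over_k]
    print("n_ext < %.0e - %.0e cm^-3" % (min(limits), max(limits)))      # 6e-5 - 2e-4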
We can apply the same analysis to the high-velocity ridge component, but we only obtain limits since we only have an upper limit on N(H I). For this high-velocity component, the best-fitting CLOUDY model implies an absorber thickness L ≤ 2.6 kpc (decreasing N(H I) also decreases L). With n_H ≈ 2.5 × 10⁻³ cm⁻³ and T ≈ 11,300 K, the estimated size for a self-gravitating cloud (assuming f_g = 0.16 again) is L = 11 kpc. In this case, decreasing f_g by a factor of 2−3 reconciles the self-gravitating cloud size with the size implied by the ionization model. Gravitational confinement appears to be viable for the HVR, as long as the actual N(H I) is not too much lower than our upper limit (see § 2).
Summarizing this section, our constraints on the physical conditions and dimensions of the Complex C gas toward 3C 351 indicate that it is unlikely that this absorption arises in a gravitationally confined cloud. It is much more probable that the gas is pressure confined by an external medium or is an ephemeral entity that will soon dissipate. The HVR could be gravitationally confined, but if the HVR and Complex C proper are indeed closely related, then the physical processes affecting Complex C are likely to affect the HVR as well, and the situation with the HVR may be similar to that of Complex C proper.
Mrk 279, Mrk 817, and PG1259+593
We can gain additional insights by comparing the 3C 351 results to other sight lines through the Complex C high-velocity cloud. Richter et al. (2001) and Collins et al. (2002) have published accurate column densities, including a variety of species, for several extragalactic sight lines through Complex C. These papers have emphasized abundance results, but the observations have implications regarding the physical conditions/structure of the HVC as well. From these papers, the spectra of Mrk 279, Mrk 817, and PG1259+593 have the highest S/N and provide the best constraints, so we shall concentrate on these sight lines.
An important difference between 3C 351 and Mrk 279/Mrk 817/PG1259+593 is that the latter sight lines have substantially higher H I column densities. The increased self-shielding resulting from the higher N(H I) has advantages and disadvantages: on the one hand, the ionization corrections are smaller, but the ion ratios also provide less leverage on the density and physical conditions of the gas. Figure 14 shows an example. This figure shows a CLOUDY calculation, analogous to the model presented in §5.2.2, of metal columns vs. log U and log n_H for the sight line to PG1259+593. The only difference from the 3C 351 model is that the H I column is higher (in accord with 21 cm observations), and the overall metallicity has been reduced by a small amount to provide the best fit to the observed column densities. Because of the higher N(H I) and greater self-shielding, most of the curves in Figure 14 are relatively flat; f(X_i) ≈ 1 over a large range of U for most species shown. Consequently, the derived abundances are insensitive to U, but also most of the measurements allow a large range of n_H, L, and T. However, Ar I is a notable exception. As discussed by Sofia & Jenkins (1998), Ar I is susceptible to photoionization, and this can be useful for investigation of gas ionization. This can be seen in Figure 14. While most of the curves change by 0.1 dex or less, N(Ar I) decreases by ∼0.8 dex over the range of U shown in the figure. Unfortunately, the argon lines are relatively weak, and of the eight sight lines presented by Collins et al. (2002), Ar I is only detected toward PG1259+593.

Fig. 14.- Model of gas photoionized by the extragalactic UV background, as in Figure 12, but with parameters appropriate for the sight line to PG1259+593: log N(H I) = 19.92, Z = 0.12 Z⊙, and solar relative abundances. The large points show the observed Complex C column densities in the direction of PG1259+593 from Collins et al. (2003). At log U = −3.9, all of the observed columns are within 1σ of the model column densities except N I.
Taking advantage of the small ionization corrections, we confirm the main abundance results reported by Collins et al. (2002), with some additional important comments:

1. The Complex C oxygen abundances¹⁶ that we derive for these three directions show some dispersion. However, including the 3C 351 oxygen abundance, we find that the weighted mean metallicity for Complex C is <[O/H]> = −0.68 with a reduced χ²_ν = 0.85 (ν = 3; a sketch of this weighted-mean test is given after this list). Therefore the current measurements are consistent with a constant metallicity throughout Complex C with Z ≈ 0.2 Z⊙. Furthermore, the error bars in equations 11−13 do not fully reflect the N(H I) uncertainties. For example, the H I 21 cm profile toward Mrk 279 is very complex, and it is difficult to separate the Complex C H I emission from the lower-velocity emission. Also, the Mrk 817 N(H I) was derived from the larger-beam LDS data, which introduces a systematic error of 0.3−0.5 dex. With these additional uncertainties, we conclude that there is currently no compelling evidence of oxygen abundance variations in Complex C. We cannot rule out the possibility that the oxygen abundances are spatially variable in Complex C, but better measurements are required to support this claim.
2. Nitrogen is underabundant in Complex C, where the upper limits are at the 4σ level. Again, this indicates that intermediate-mass stars have not contributed significantly to the gas enrichment. The nitrogen detections toward Mrk 876 and PG1259+593 provide stronger evidence of abundance variations in Complex C than the oxygen measurements: these two sight lines yield a weighted mean of <[N/H]> = −1.40 with a reduced χ²_ν = 5.60. Some caution is warranted, though, because N(N I) toward Mrk 876 is entirely based on a weak and blended line (see Figure 8 in Collins et al. 2002), and the measurement may be more uncertain than the formal error bars suggest.¹⁷

3. The current error bars are too large to provide significant evidence of α−element overabundances. The model shown in Figure 14 provides a satisfactory match to the observed column densities with solar relative abundances, for example. The fact that the S II and Fe II column densities are comparable suggests that the α−elements may be overabundant, but within current uncertainties, both [α/Fe] = 0 and [α/Fe] = +0.3 provide acceptable fits to the observations.

4. However, the N(Fe II)/N(S II) ratios do indicate that there is little depletion by dust in Complex C, since dust strongly reduces the gas-phase abundance of Fe but has little effect on S in a variety of environments (e.g., Jenkins 1987; Sembach & Savage 1996).
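As promised in point 1 above, the weighted-mean/χ² consistency test is simple to reproduce. The input values below are hypothetical placeholders rather than the measured per-sight-line abundances (the individual values of equations 11−13 are not repeated here), so only the method should be taken from this sketch.

    import numpy as np

    def weighted_mean_chi2(values, sigmas):
        """Inverse-variance weighted mean and reduced chi^2 with nu = N - 1."""
        values, sigmas = np.asarray(values), np.asarray(sigmas)
        w = 1.0 / sigmas**2
        mean = np.sum(w * values) / np.sum(w)
        chi2_nu = np.sum(((values - mean) / sigmas) ** 2) / (len(values) - 1)
        return mean, chi2_nu

    # Hypothetical [O/H] values and 1-sigma errors for four sight lines:
    mean, chi2_nu = weighted_mean_chi2([-0.79, -0.55, -0.62, -0.75],
                                       [0.20, 0.25, 0.15, 0.23])
    print(f"<[O/H]> = {mean:.2f}, reduced chi2 = {chi2_nu:.2f}")

A reduced χ²_ν near unity, as found for oxygen (χ²_ν = 0.85), indicates scatter consistent with the quoted errors; the nitrogen value (χ²_ν = 5.60) does not.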
As noted above, most metals detected along these higher N(H I) sight lines allow a wide range of densities and sizes. However, Ar I is useful. Toward PG1259+593, the best fit to the metals in Figure 14, including Ar I but excluding N I, has n_H = 2.5 × 10⁻³ cm⁻³, T = 8800 K, and L = N_H/n_H = 13.9 kpc. This implied size is vastly larger than the size derived from the 3C 351 data. Furthermore, in the direction of PG1259+593, gravitational confinement appears to be quite viable: this only requires f_g = 0.32, i.e., a factor of two larger than Schaye's fiducial value of 0.16. The upper limits on Complex C N(Ar I) for the Mrk 279 and Mrk 817 sight lines provide the following constraints. For Mrk 279, n_H ≲ 5 × 10⁻³ cm⁻³ and L ≳ 2.5 kpc. For Mrk 817, n_H ≲ 1 × 10⁻³ cm⁻³ and L ≳ 14 kpc.
3C 351: at the Leading Edge of Complex C
Why do the sight lines to 3C 351 and PG1259+593 have such starkly different implications regarding the size and confinement of the absorber? We propose a simple answer: Complex C is probably interacting with the thick disk/lower halo of the Milky Way much more vigorously in the direction of 3C 351 than in the direction of PG1259+593. This would occur naturally if the 3C 351 pencil beam is near the leading edge of the HVC, and this would explain several observations. First, the 21 cm observations show a steep gradient in N(H I) in the vicinity of 3C 351 (see Figures 1−3). This likely reflects the transition from the mostly neutral inner region to the fully ionized periphery of the HVC. Such a transition is expected at the leading edge of a cloud moving through an ambient halo (e.g., Quilis & Moore 2001). Second, ram pressure could separate the baryons from the underlying dark matter (Quilis & Moore 2001) and thereby alleviate the confinement problem discussed above. In this case there is no compelling requirement to confine the gas; this edge of the cloud would be rapidly evaporating as it plunges toward the disk. Third, observations of nine sight lines through Complex C reported by Sembach et al. (2002) show that the O VI column densities at the velocities of Complex C are highest near 3C 351 (several of these nearby sight lines are marked in Figures 1−4). These O VI observations are summarized in Figure 15, which shows N(O VI) measured toward the nine extragalactic sources marked on a map of Complex C. Note that these are O VI columns in Complex C proper only; recall also that O VI cannot be measured in the 3C 351 spectrum due to a Lyman limit absorber at z_abs = 0.221 that severely attenuates the FUSE 3C 351 spectrum below ∼1117 Å. As shown in Figure 16, an even more pronounced trend is evident in the N(O VI)/N(H I) ratio in Complex C. This ratio increases steadily with decreasing longitude and latitude, and the lowest-longitude Complex C sight line (Mrk 501; see Table 8 in Sembach et al. 2002) has an N(O VI)/N(H I) ratio which is ∼43 times larger than the ratio measured toward the highest-longitude sight line (PG1259+593). This trend also suggests that the lower-longitude section of Complex C is more strongly interacting with the ambient medium; as the gas is ionized to a greater degree, N(H I) will decrease while N(O VI) increases.

¹⁷ The Mrk 876 N I column has also been estimated from the N I λ1134.17 line observed with FUSE (Murphy et al. 2000). However, the feature at the expected velocity of N I λ1134.17 in Complex C is strongly blended with low-velocity Fe II λ1133.67, and in fact the line is predominantly due to Fe II. Murphy et al. (2000) have subtracted this Fe II line based on other Fe II transitions in the FUSE bandpass, but the resulting N(N I) estimate is considerably uncertain.

Fig. 15.- N(O VI) in Complex C measured toward nine extragalactic sight lines, from Table 8 in Sembach et al. (2002). The column densities are shown with filled circles, with a line marking the sight line location within Complex C in the contour plot in the xy-plane. N(O VI) is also projected onto the xz-plane. The location of the 3C 351 sight line is marked with a +; O VI cannot be measured in the 3C 351 spectrum because of an extragalactic Lyman limit absorber at z_abs = 0.221.
The lower-longitude region is also at lower latitude, so this result is not surprising: as the HVC approaches the plane of the Milky Way, it is becoming more fully ionized and is probably ablating and dissipating, and the trailing (higher-longitude) portion of the cloud has not yet entered the higher density region of the ambient medium where the interactions are more vigorous, or at least has not suffered the effects of ablation for the same duration as the lower latitude/longitude regions.
H1821+643
An alternative to the interpretation presented above is that gas flowing up out of the disk (e.g., Galactic fountain/chimney gas) is colliding with Complex C at lower longitudes and latitudes, and the resultant shock-heating is ionizing and evaporating the cloud. Gibson et al. (2001) and Collins et al. (2002) have suggested that such interactions might be required to explain the metallicity variations observed in the HVC (but see point 1 in §6.2.1). In this regard, it is interesting to compare the 3C 351 sight line to the H1821+643 sight line, which is near 3C 351 but closer to the plane (see Figures 1−4). The H1821+643 line-of-sight passes just outside the lower-latitude 21 cm boundary of Complex C (see also Figure 1 in Wakker & van Woerden 1997) but through the region of high-velocity emission associated with the Outer Arm. A good spectrum of H1821+643 could therefore reveal gas flowing from the plane toward Complex C. Savage, Sembach, & Lu (1995) have discussed Galactic absorption features in the spectrum of H1821+643 based on an intermediate-resolution GHRS spectrum. Subsequently, a higher resolution STIS spectrum of H1821+643 was obtained with the same mode (E140M) used to observe 3C 351, so a detailed comparison of these sight lines can be made without confusion due to differing spectral resolution. Figure 17 compares selected absorption profiles observed toward H1821+643 (upper profile in each panel) to the same line from the spectrum of 3C 351. For reference, the velocities of the HVR, C, and C/K components are marked at the top of each panel. Some of the profiles appear to be remarkably similar, e.g., the Fe II lines in Figure 17d. We defer a full analysis of the H1821+643 data to a later paper, but a few comments on this figure are worthwhile.
The v = −80 km s⁻¹ component: Complex C/K. Both sight lines show components near the velocity of IVC C/K with similar equivalent widths. Toward H1821+643, Savage et al. (1995) associate this absorption with the Perseus spiral arm, which has a similar velocity at b ≈ 0 and appears to extend ∼1 kpc above the plane based on 21 cm and Hα emission (Kepner 1970; Reynolds 1986). We have derived [O/H] > −0.5 for the C/K component toward 3C 351, implying a higher metallicity than the adjacent Complex C and HVR components. The higher C/K metallicity suggests that this gas may indeed have originated in the disk and is being driven into the halo. If the Perseus arm is ∼3 kpc away and the 3C 351 C/K component is associated with Perseus, then this gas is ∼2 kpc above the plane. This is consistent with the z−heights that gas is expected to attain in some Galactic fountain models (e.g., Houck & Bregman 1990). Very recently, Otte, Dixon, & Sankrit (2003) have detected O VI emission at v_LSR = −50 ± 30 km s⁻¹ from l = 95.4°, b = 36.1°, a direction very close to the 3C 351 and H1821+643 sight lines. They also show bright Hα filaments extending up from the plane in this direction. They attribute both the O VI and Hα emission to an outflow from the Perseus arm. Regardless of the nature of the gas, it is quite likely that the absorption lines at v ≈ −80 km s⁻¹ toward 3C 351 and H1821+643 are related to the emission reported by Otte et al. (2003).

The v = −130 km s⁻¹ component. However, unlike IVC C/K, the line equivalent widths in the H1821+643 spectrum are substantially larger than the same lines toward 3C 351 at v = −130 km s⁻¹ (see Figure 17). The H1821+643 and 3C 351 21 cm H I column densities at this velocity are similar: Wakker et al. (2001) report N(H I) = 3.3 × 10¹⁸ cm⁻² toward H1821+643, compared to N(H I) = 4.2 × 10¹⁸ cm⁻² toward 3C 351 (both measurements are based on Effelsberg observations). However, there is a gap in the 21 cm emission between the two sight lines (see Figures 1 and 3), and the relationship (if any) between the 3C 351 and H1821+643 HVCs at this velocity is unclear. We list in Table 4 the equivalent widths and column densities of absorption lines at v = −130 km s⁻¹ in the H1821+643 spectrum, measured as described in § 4. Several pixels in the core of the O I line toward H1821+643 approach zero flux. This line may be significantly saturated, so we consider the O I measurements highly uncertain. The most conservative constraint on N(O I) is a lower limit from direct integration, but we also provide an estimate from profile fitting in Table 4, which can correct somewhat for saturation but with large uncertainties. We also summarize the implied abundances in Table 4 as before, without and with ionization corrections applied (using the ionization model presented below).
The larger equivalent widths toward H1821+643 at v_LSR ≈ −130 km s⁻¹ ostensibly indicate that the metallicity at this velocity is greater in the H1821+643 component than in the 3C 351 HVC, since the H I columns are similar toward both QSOs. If we neglect ionization corrections, we indeed obtain high abundances from the column densities in Table 4, along with a significant apparent N underabundance. This would be an unusual abundance pattern; N underabundances are expected in low-metallicity gas, but as Z −→ Z⊙, the contribution from intermediate-mass stars is expected to bring the N relative abundance more in line with the solar value. However, we have shown in § 5 that ionization corrections must not be neglected when N(H I) is at the level observed toward H1821+643. When ionization corrections are applied, the observed columns can be reconciled with a lower overall metallicity and smaller N underabundances. Figure 18 shows a CLOUDY model in satisfactory agreement with the observed columns with Z = 0.24 Z⊙ and only small underabundances of N and Al. The model shown in Figure 18 works if N(O I) is close to the value from direct integration, i.e., log N(O I) ≈ 14.7. Higher O I column densities would require higher metallicity, lower ionization parameters (to match, e.g., the O I/Si II ratio), and greater nitrogen underabundances, as can be seen from Figure 18. A slight depletion of Fe might also be required in higher metallicity models. Due to the large range of U allowed by the current data, we cannot usefully bracket the size of the absorber at v_LSR = −130 km s⁻¹.
At this juncture, it remains possible that the Outer Arm high-velocity gas and Complex C have similar metallicities, but the data also allow the Outer Arm to have a substantially higher metallicity than Complex C if N is underabundant. Consequently, it is difficult to determine from the abundances alone whether the −130 km s⁻¹ gas toward H1821+643 and Complex C are physically related.

Notes to Table 4 (format as in Table 2):

c Logarithmic abundance obtained by applying the ionization correction from the CLOUDY model shown in Figure 18 and discussed in § 6.2.3. Error bars include column density uncertainties and solar reference abundance uncertainties but do not reflect uncertainties in the ionization correction. In this case, the ionization correction allows a large range of abundances (see § 6.2.3).
d Saturated absorption line. e Due to considerable saturation, the formal error bars from Voigt-profile fitting may underestimate the true uncertainty.
f Simultaneous fit to the Si II λ1304.37 and λ1526.71 lines.

The v ≈ −212 km s⁻¹ component. Low-ion absorption at this velocity is not evident toward H1821+643 (the profiles in Figure 17 show the contrast between the sight lines at this v). However, Savage et al. (1995) tentatively identified C IV absorption near this velocity in a GHRS spectrum of H1821+643. As shown in Figure 19, we confirm this detection: both lines of the C IV λλ1548.2, 1550.8 doublet are well-detected at v = −212 km s⁻¹. These are the only lines that we detect in the STIS spectrum at this velocity. The C IV equivalent width and column density measurements are listed in Table 5 along with upper limits on selected dominant ions and higher stages. However, Oegerle et al. (2000) and Sembach et al. (2000) detected O VI at this velocity in FUSE spectra of H1821+643. The O VI 1031.9 Å line is blended with low-velocity H₂, but Sembach et al. (2002) have removed the H₂ line (based on fits to other H₂ lines) and find log N(O VI) = 13.72 ± 0.14.
These measurements have useful implications. The fact that N(O VI) ≫ N(C IV) toward H1821+643 would indicate that T ≫ 10 5 K if the gas were collisionally ionized and in equilibrium. However, this temperature is not compatible with the widths of the C IV lines. Figure 19 shows our Voigt-profile fit to the HVR features in the H1821+643 spectrum; a single component with b = 9 ± 2 km s −1 provides an excellent fit. This implies that T < 10 4.77 K. The C IV line width is only marginally consistent with the O VI/C IV ratio given the uncertainties in N(O VI) and b(C IV). It seems more likely that either (1) much of the O VI absorption is not associated with the C IV-bearing gas (in which case the O VI gas must be relatively hot to satisfy the lower limit on O VI/C IV), or (2) the gas is not in ionization equilibrium. The different centroids of the C IV and O VI lines (see Table 5) support the idea that at least some of the O VI originates in different gas. It is unlikely that the C IV + O VI gas is photoionized because of the long pathlengths required (e.g., Sembach et al. 2002), but it is possible that the gas is collisionally ionized but out of equilibrium because it is cooling faster than it can recombine (Edgar & Chevalier 1986). Heckman et al. (2002) have recently shown that a non-equilibrium, radiatively cooling gas model provides a good fit to the O VI column densities and b-values in HVCs (as well as other contexts). Their model also predicts N(O VI) ≫ N(C IV), as observed in the −200 km s −1 feature toward H1821+643.

Savage et al. (1995) have noted that v LSR ≈ −200 km s −1 is near the expected velocity for distant Milky Way gas in a corotating disk/halo in the direction of H1821+643; the velocity would then imply a large Galactocentric distance (see their Figure 2). However, it is quite possible that this high-velocity feature is associated with the high-velocity ridge observed toward 3C 351 and other sight lines. In this case, the fact that the feature is only detected in C IV and O VI would indicate that the HVR has an extended ionized periphery. Evidence for highly-ionized layers on the surface of HVCs has been reported for other clouds (e.g., Sembach et al. 1999). We have argued that the absorption lines toward 3C 351 indicate that the HVR and Complex C proper are related. If the v LSR = −212 km s −1 component toward H1821+643 is also part of the HVR, then this gas is likely much closer than implied by its velocity and the assumption that it is corotating (various arguments suggest that Complex C is ∼10 kpc from the Sun).
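The temperature limit quoted above follows from the thermal line-width relation b = (2kT/m)^1/2; solving for T with the fitted C IV b-value reproduces the 10 4.77 K bound. A minimal sketch (treating the measured b as purely thermal, which yields an upper limit on T):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant [J/K]
M_P = 1.67262192e-27    # proton mass [kg]

def temperature_upper_limit(b_kms, atomic_mass):
    """Maximum temperature for a purely thermal Doppler parameter b = sqrt(2kT/m)."""
    b = b_kms * 1e3  # km/s -> m/s
    return atomic_mass * M_P * b**2 / (2.0 * K_B)

T_max = temperature_upper_limit(b_kms=9.0, atomic_mass=12.0)  # carbon, b(C IV) = 9 km/s
print(f"T < {T_max:.2e} K (log T = {math.log10(T_max):.2f})")  # log T = 4.77
```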
Summary
We have investigated the physical structure, conditions, metallicity, and nature of the high-velocity cloud Complex C using high-resolution observations of absorption lines in several directions through the cloud. Our study is mainly based on STIS echelle spectroscopy of 3C 351, which shows a wide variety of absorption lines from Complex C proper (v LSR = −128 km s −1 ) as well as an intermediate-velocity cloud (C/K, v LSR = −82 km s −1 ) and a higher-velocity component that we refer to as the high-velocity ridge (v LSR = −190 km s −1 ). We also make use of other sight lines through Complex C from the literature as well as new STIS echelle observations of H1821+643, which is just outside the 21 cm boundary of Complex C but shows absorption lines at very similar velocities to those seen toward 3C 351. From our analysis, we reach the following conclusions:

1. The high-velocity ridge is closely related to Complex C proper. This is suggested by the similar morphologies of the HVR and Complex C; the HVR has a similar shape and is roughly centered on the larger, lower-velocity Complex C. This idea is supported by the absorption lines, which show remarkably similar column density ratios in the HVR and Complex C. However, very little high-ion absorption is detected at the velocity of Complex C proper, but Si III, Si IV, and C IV absorption is strong and easily detected in the HVR. Moreover, the high-ion absorption lines in the HVR have centroids and line widths that are similar to those of the low ions in the HVR. We show that the high and low ionization stages in the HVR cannot arise in the same phase, and we conclude that the high-ion absorption occurs in an interface between the low-ionization phase and a hotter ambient medium.
2. The relative abundances in all components with N(H I) < 10 19 cm −2 indicate that ionization corrections are important. We consider collisional ionization equilibrium as well as CLOUDY photoionization models to derive ionization corrections, and we find in both cases that Z = (0.2 ± 0.1)Z ⊙ in Complex C in the direction of 3C 351. We also find that nitrogen must be underabundant toward 3C 351. The low metallicity and nitrogen underabundance indicate that Complex C is not ejecta generated in a Galactic fountain. It seems more likely that Complex C is either tidally stripped material from a satellite galaxy (analogous to the Magellanic Stream), or that it is gas with a more distant extragalactic origin. The absolute metallicity of the 3C 351 HVR component is less constrained because we only have an upper limit on the HVR H I column. The lower limits on the HVR metallicity are consistent with the metallicity derived for Complex C proper.
3. We find similar oxygen abundances at Complex C velocities toward Mrk 279, Mrk 817, and PG1259+593. While there is some dispersion in the [O/H] measurements, within the current uncertainties the measurements are fully consistent with a constant metallicity throughout the HVC. These sight lines as well as the Mrk 876 sight line also provide strong evidence of N underabundances in Complex C. Comparison of the nitrogen abundances toward Mrk 876 and PG1259+593 provides the strongest evidence of spatial abundance variability in Complex C, but this requires confirmation with additional measurements. There are hints of α-element overabundances in some directions, but the evidence is not statistically significant.
4. The derived iron abundances indicate that Complex C contains little or no dust.
5. Toward 3C 351, the Complex C absorber size, density, and temperature implied by the ionization models indicate that the gas is not gravitationally confined, while toward other sight lines such as PG1259+593, the implied path length through Complex C is much larger and gravitational confinement is viable. Pressure confinement by an external medium may play a role. However, we suggest that the 3C 351 sight line passes through the leading edge of Complex C, which has been ablating and dissipating as it approaches the plane of the Milky Way. This idea is supported by O VI observations, which show the highest O VI column densities near the 3C 351 sight line. Furthermore, the O VI/H I ratio increases dramatically with decreasing longitude and latitude within Complex C, which also suggests that the lower longitude and latitude regions are interacting more vigorously with the ambient medium.
6. To look for evidence of outflowing Milky Way gas that might interact with Complex C, we compare the STIS echelle observations of the sight lines to 3C 351 and H1821+643, which are both near the lower-latitude edge of Complex C at l ≈ 90°. We find that the intermediate-velocity gas observed toward both of these QSOs likely has a higher metallicity and may indeed be a fountain/chimney outflow from the Perseus spiral arm. Unfortunately, the results are less clear for the high-velocity gas toward H1821+643. The H1821+643 HVC could have a similar metallicity to that derived for Complex C, or it could have a much higher abundance depending on the O I column and ionization correction, both of which are highly uncertain.
As usual, we are grateful to Gary Ferland and collaborators for the development and maintenance of CLOUDY. The observations reported here were obtained by the STIS Investigation Definition Team, and this research was supported by NASA through funding for the STIS Team, including NASA contract NAS5-30110. TMT appreciates additional support for this work from NASA Long Term Space Astrophysics grant NAG5-11136 as well as NASA grant GO-08695.01-A from the Space Telescope Science Institute. BPW was supported by NASA grant NAG5-9024.
Variable Strength in Thoron Interference for a Diffusion-Type Radon Monitor Depending on Ventilation of the Outer Air
Thoron interference in radon measurements using passive diffusion radon detectors/monitors is a crucial problem for precisely assessing the internal exposure to radon. The present study reports the effect of air flow conditions, as one potential factor, on changes in thoron interference. Rates of thoron infiltration (as thoron interference) into the diffusion chamber of the monitor were evaluated. Their temporal variation was obtained from measurements in the underfloor space of a Japanese wooden dwelling using a diffusion-type radon monitor, a reference radon monitor that was not affected by thoron interference, and a thoron monitor. The thoron infiltration rate for the diffusion-type monitor varied from 0% to 20%; in particular, it appeared to increase when ventilation of the underfloor space air was forced. The variable thoron infiltration rate with respect to ventilation strength implied that not only a diffusive process but also an advective process played a major role in air exchange between the diffusion chamber of the monitor and the outer air. When an exposure room is characterized by frequent variation in air ventilation, a variable thoron response is expected to occur in radon-thoron discriminative detectors in which only diffusive entry is employed as the mechanism for discriminating radon and thoron.
Introduction
The inhalation of radon ( 222 Rn, Rn; half-life: 3.8 days) is one of the most important pathways causing prolonged exposure from natural radiation sources [1]. Radon detectors/monitors with diffusion chambers, such as those with solid-state nuclear track detectors and a pulse-ionization chamber, have been used to measure radon concentration and assess the dose of radon. However, in some cases, the measured values should be analyzed carefully because of their overestimation, caused by the naturally coexisting radon isotope thoron ( 220 Rn, Tn), which has a half-life of 55.6 s (i.e., thoron interference). Tokonami et al. [2] reported on the thoron interference in radon measurements in thoron-prone areas of China, which led them to reevaluate the internal dose of radon, and which later led Doi et al. [3] and Akiba et al. [4] to reconsider possible risks due to radon exposure. To avoid thoron interference, a number of radon-thoron discriminative detectors have been developed (e.g., [5][6][7][8]) and used in recent national and regional surveys (e.g., [9][10][11][12][13][14][15]).
Thoron interference has been reported in previous studies (e.g., [16][17][18][19][20][21]) for integration-type radon detectors and continuous-measurement radon monitors. In these detectors/monitors, natural air exchange (i.e., diffusion) is the mechanism by which air is introduced into the diffusion chamber. Tokonami et al. [16] examined the former type of detectors, which were composed of radon detectors and radon-thoron discriminative detectors. Their experiment revealed that rates of thoron infiltration into the diffusion chamber of the detectors varied from 0% to 100%; that is, radon concentration was overestimated by a factor of up to two when the radon and thoron concentrations were equal. Ishikawa [17] carried out a different experiment, investigating a pulse-ionization chamber monitor, the AlphaGUARD PQ2000Pro (Genitron Instruments GmbH, Frankfurt, Germany; now Bertin Technologies, Montigny-le-Bretonneux, France); this monitor is widely used in environmental studies (see, for example, [22][23][24]). In Ishikawa's study [17], the chamber monitor was exposed to radon-bearing and thoron-bearing air in a calibration chamber constructed at the Environmental Measurements Laboratory in New York, and a thoron infiltration rate of about 10% was observed. Similar results were obtained by Kochowska et al. [18] and Sumesh et al. [19]. Based on all of these studies, when such detectors/monitors without the function of discriminating between radon and thoron are used, overestimation of radon concentration should be presumed in surveys conducted in, for instance, thoron-prone areas.
Experiments on thoron interference in radon measurements have been carried out under controlled environments, such as in calibration chambers. However, in real (natural) environments, rates of thoron infiltration into the diffusion chamber of detectors and monitors may vary with time and with the environmental conditions surrounding them (e.g., air temperature, humidity, and circulation in a room). This is connected to the properties of the filters and sponges covering the inlet of detectors/monitors to prevent ambient aerosols from infiltrating. A possible increase in the thoron infiltration rate with increasing air temperature is inferred from the finding that the permeability of radon through a membrane filter increased with membrane temperature (e.g., [25]). Sorimachi et al. [26] reported a decreasing thoron infiltration rate with increasing relative humidity due to the adsorption of air moisture onto the filter/sponge. The present study reports that the thoron interference in radon measurements varied depending on the ventilation of the air surrounding the diffusion-type radon monitor.
Materials and Methods
The present study examined thoron interference in radon concentrations indicated by a diffusion-type radon monitor. Natural air exchange (i.e., diffusion) is the mechanism by which air is introduced through a filter into the diffusion chamber of the monitor. Under exposure to mixed radon and thoron air, the radon and thoron concentrations inside and outside of the diffusion chamber of the monitor satisfy the following differential equations with respect to time t:

dC_Rn,in/dt = −λ_Rn C_Rn,in + γ(C_Rn,out − C_Rn,in),   (1)

dC_Tn,in/dt = −λ_Tn C_Tn,in + γ(C_Tn,out − C_Tn,in),   (2)

where C_Rn,in, C_Rn,out, C_Tn,in, and C_Tn,out are the radon and thoron concentrations inside and outside of the diffusion chamber of the monitor, respectively, λ_Rn and λ_Tn are the decay constants of radon and thoron, respectively, and γ is the air exchange rate [19,27]. As seen in Equations (1) and (2), thoron interference is connected to the rate of air exchange through the filter.

The rate of thoron infiltration into the diffusion chamber of the radon monitor (i.e., the thoron interference in the radon measurement) was examined by comparing radon concentrations (C_Rn-T) possibly affected by thoron interference, reference radon concentrations (C_Rn-R) not affected by thoron interference, and thoron concentrations (C_Tn,out). Similar to the studies conducted by Ishikawa [17] and Sumesh et al. [19], the difference between C_Rn-T and C_Rn-R is regarded as the value enhanced by thoron infiltration into the diffusion chamber. Hence, the thoron infiltration rate (R) can be formulated as

R = (C_Rn-T − C_Rn-R) / C_Tn,out.   (3)

The radon monitors used in the present study were AlphaGUARD PQ2000Pro pulse-ionization chambers, the same model as used in the aforementioned studies [17,19]. The AlphaGUARD monitors can run in diffusion or flow mode. One AlphaGUARD monitor in diffusion mode was used as the diffusion-type radon monitor for investigating thoron interference; that is, this monitor measured C_Rn-T with a sampling interval of 1 h. The other AlphaGUARD monitor was set to flow mode to run as a reference radon monitor with radon-thoron discrimination (e.g., [28]). The air was pumped at a flow rate of 1.0 × 10 −4 m 3 min −1 into the radon monitor through a polyolefin tube (Tygon LMT-55 SCFJ00033, Saint-Gobain K.K., Tokyo, Japan) and passed through a glass fiber filter (Whatman GF/F, 47 mm in diameter, GE Healthcare Japan Corporation, Tokyo, Japan). The length of the tube was adjusted so that the travel time of the sampled air was 10 min, equivalent to about 10 half-lives of thoron, so that thoron interference in the detection unit would be negligible and the radon monitor in flow mode would measure C_Rn-R. The radon concentration was measured every 10 min and converted into a one-hour averaged value. These AlphaGUARD radon monitors were calibrated internally using a radon calibration chamber established at the National Institute of Radiological Sciences (NIRS; now part of the National Institutes for Quantum and Radiological Science and Technology, Japan). The properties of the radon and thoron monitors are summarized in Table 1.

Table 1. Properties of the radon and thoron monitors used in the present study.

| Instrument | Detection Principle | Measurement Mode | Sampling Interval | Measured Quantity |
| AlphaGUARD PQ2000Pro | Pulse-ionization chamber | Diffusion | 1 h | Radon (C_Rn-T) 1 |
| AlphaGUARD PQ2000Pro | Pulse-ionization chamber | Flow | 10 min (averaged to 1 h) | Radon (C_Rn-R) 2 |
| RTM2200 | Alpha spectrometry with a semiconductor detector | Flow | 1 h | Thoron (C_Tn,out) |

1 The radon concentration possibly affected by thoron interference. 2 The reference radon concentration not affected by thoron interference.
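To make Equations (1)-(3) concrete, the sketch below evaluates the steady state of Equation (2): setting dC_Tn,in/dt = 0 gives C_Tn,in/C_Tn,out = γ/(γ + λ_Tn). The air exchange rate γ used here is an illustrative assumption, not a value measured in this study:

```python
import math

# Decay constants [1/s]
LAMBDA_TN = math.log(2) / 55.6            # thoron half-life: 55.6 s
LAMBDA_RN = math.log(2) / (3.8 * 86400)   # radon half-life: 3.8 days

def equilibrium_ratio(gamma, lam):
    """Steady state of dC_in/dt = -lam*C_in + gamma*(C_out - C_in): C_in/C_out."""
    return gamma / (gamma + lam)

gamma = 1.5e-3  # assumed air exchange rate of the diffusion chamber [1/s]

print(f"Rn inside/outside: {equilibrium_ratio(gamma, LAMBDA_RN):.3f}")  # ~1.00
print(f"Tn inside/outside: {equilibrium_ratio(gamma, LAMBDA_TN):.3f}")  # ~0.11

# The thoron that does enter the chamber inflates the apparent radon reading,
# which is what the infiltration rate R of Equation (3) quantifies (~10% here,
# the order of magnitude reported in this study).

# Side check on the flow-mode reference monitor: after the 10-min tube transit,
# the surviving thoron fraction is exp(-lambda*t) ~ 5.6e-4, i.e., negligible.
print(f"Tn surviving 10 min transit: {math.exp(-LAMBDA_TN * 600):.1e}")
```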
Thoron concentration (C_Tn,out) was measured by the RTM2200 (SARAD GmbH, Dresden, Germany). This thoron monitor used alpha spectrometry with a semiconductor detector (Table 1) and had been calibrated against a reference instrument at the German Federal Office for Radiation Protection (Bundesamt für Strahlenschutz), accredited by the Physikalisch-Technische Bundesanstalt. The filtered air was pumped into the detection unit at a flow rate of 3.0 × 10 −4 m 3 min −1 , and the thoron concentration was measured every hour.
Simultaneous measurement using these three radon and thoron monitors was carried out in a Japanese dwelling located in Gifu Prefecture. The selected dwelling was a two-story wooden structure with an underfloor space, a characteristic common to many residences in Japan. The underfloor space was approximately 70 cm in height above the bare ground surface (no building material covered the surface), and the air in the space was ventilated naturally through openings or by a forced-fan exhaust system. The radon and thoron monitors were installed in the underfloor space, located 22 cm above the ground surface and 200 cm away from the openings covered by the fans. The monitors and the fan were at almost the same height. When the forced-fan exhaust system was run, the wind speed in front of the fan increased from around 0.03 to 0.12 m s −1 , as measured by a handheld directional anemometer (Model 6531, KANOMAX Japan Incorporated, Osaka, Japan).
Results and Discussion
Concentrations of radon (C_Rn-T and C_Rn-R) and thoron (C_Tn,out) in the underfloor space were analyzed from 5 April to 4 July 2013. Figure 1 presents temporal variations in the radon concentrations measured with the radon monitors in the diffusion and flow modes, together with the thoron concentration. A five-hour moving average was applied to smooth the temporal variations. It is noted again that radon concentrations measured in the diffusion mode were the ones (C_Rn-T) possibly affected by thoron interference, whereas radon concentrations measured in the flow mode were reference radon concentrations (C_Rn-R) not affected by thoron interference. As seen in Figure 1, the forced-fan exhaust system was run to ventilate the underfloor space air repeatedly for around 10 days at a time, and in total during about 60% of the measurement period. The C_Rn-T values varied in the range of 20-50 Bq m −3 and were nearly constant throughout the observation period. In contrast, the C_Rn-R values varied in the range of 10-40 Bq m −3 and were lower when the forced-fan exhaust system was run, because the underfloor space air was replaced with fresh outdoor air with a lower radon concentration. Specifically, the differences between these two quantities depended on the ventilation strength of the underfloor space air: significant differences were found during the forced ventilation periods, whereas only small differences were found during the natural ventilation periods. The exact reason why thoron concentrations were higher during the forced ventilation periods is unclear, but thoron exhalation from the ground surface may increase due to induced pressure gradients which can draw soil gases (i.e., the pumping effect), and some exhaled thoron atoms may reach the thoron monitor without significant radioactive decay.
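The five-hour smoothing mentioned above is a plain centered moving average; a minimal sketch (the hourly readings below are made-up placeholders, assuming evenly spaced one-hour samples):

```python
import numpy as np

def moving_average(x, window=5):
    """Centered moving average over `window` points (here, 5 hourly samples).

    mode="same" zero-pads the ends, so the first and last window//2 points
    are biased low and should be treated with care.
    """
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# Hypothetical hourly readings [Bq m^-3] standing in for C_Rn-T
hourly = np.array([32, 35, 30, 41, 38, 44, 39, 36, 40, 37], dtype=float)
print(moving_average(hourly).round(1))
```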
The changes in the difference between the values of the two radon monitors were partially attributed to thoron interference, due to the changes in thoron concentration in the underfloor space air. Figure 2 presents a scatter plot of the differences in the radon concentrations (i.e., C_Rn-T minus C_Rn-R) in the diffusion and flow modes against thoron concentrations during the natural ventilation and forced ventilation periods. Differences with negative values, and those from between 19 and 25 April, when strong ventilation seems to have occurred, as inferred from the measured thoron concentrations, were excluded from the analysis. There was a positive correlation between them during the forced ventilation periods; thoron infiltrated into the diffusion chamber and disturbed the radon measurement in the diffusion mode. However, there did not appear to be a clear correlation during the natural ventilation periods. The lack of correlation can probably be attributed to the narrower range of thoron concentrations; thoron concentration was mostly distributed between 20 and 100 Bq m −3 during the natural ventilation periods. It may also be attributed to natural air circulation in the underfloor space, which can be influenced by the outdoor wind field (intensity and direction) causing pressurization and depressurization in indoor and underfloor spaces. The difference could not be caused only by the change in thoron concentration.

Figure 1 also presents the temporal variation in the thoron infiltration rate, calculated from the C_Rn-T, C_Rn-R, and C_Tn,out values based on Equation (3). The thoron infiltration rate fluctuated around 10%, and its variation was smaller during the forced ventilation periods. In contrast, although the rate was 10% or above some of the time, the rate of thoron infiltration into the diffusion chamber of the monitor was only a few percent or zero during the natural ventilation periods. The box and whisker plots drawn in Figure 3a support the finding that there was a clear difference with respect to ventilation strength. The median values of the thoron infiltration rate were 5.5% and 11% in natural and forced ventilation, respectively. Figure 3b does not confirm that the change in the thoron infiltration rate was caused by thoron concentration. These results indicate that the thoron infiltration rate was partially constrained by the ventilation strength of the air surrounding the radon monitor, and that the change in the rate affected the radon measurements. In the present study, the thoron infiltration rates obtained during the natural ventilation and forced ventilation periods were comparable to other reported experimental results (5-10%) for radon and thoron exposure [17][18][19].

Furthermore, the findings of the present study point to the process of air exchange in the diffusion-type monitor. Previous studies [6,29,30] expressed the air exchange rate as

γ = D_p A / (d V),   (4)

where D_p is the diffusion coefficient in a porous medium such as a filter paper, d is the thickness of the medium, A is the opening area, and V is the volume of the monitor. The diffusion coefficient depends on air temperature and on the porosity and tortuosity of the medium. In the present study, the thoron infiltration rate, which is linked to the air exchange rate in the radon monitor, was affected by the ventilation strength of the underfloor space air.
This implies that the migration through the porous medium is constrained not only by a diffusive process but also by an advective one, so that Equation (4) may be rewritten as

γ = (D_p/d + u) A / V,   (5)

where u is the air velocity induced by the pressure difference across the medium. This relation can also be assumed for integration-type radon-thoron discriminative detectors, in which only diffusive entry is employed as the mechanism for discriminating radon and thoron (e.g., [8,26,30]). Thus, the results discussed above suggest that, when an exposure room is characterized by frequent variation in air ventilation, a variable thoron response is expected to occur in those detectors. Unlike previous studies (e.g., [17][18][19][20]) conducted using calibration chambers, the present study was carried out in the natural environment of the underfloor space of a Japanese dwelling. Because of this, the thoron concentration was relatively low, 10-100 Bq m −3 , and exhibited diurnal fluctuations. These factors may have influenced some of the present results, for instance, the wide distribution of the thoron infiltration rate. In a future study, the change in thoron infiltration rate with respect to ventilation strength should be examined based on measurements using a thoron calibration chamber, in which the thoron concentration can be controlled to be constant at a higher level, around 1,000-10,000 Bq m −3 (e.g., [31,32]).
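A minimal numerical sketch of Equations (4) and (5) follows; every parameter value below (diffusion coefficient, filter thickness, opening area, chamber volume, and the induced velocity u) is an illustrative assumption, chosen only to show how a small advective velocity can dominate the diffusive term:

```python
# Illustrative parameters (assumptions, not values from this study)
D_p = 1.0e-6   # effective diffusion coefficient in the filter [m^2/s]
d   = 1.0e-3   # filter thickness [m]
A   = 5.0e-4   # opening area [m^2]
V   = 6.0e-4   # chamber volume [m^3], roughly the 0.6 L scale of an ionization chamber
u   = 2.0e-3   # advective air velocity through the filter [m/s], assumed

gamma_diffusive = D_p * A / (d * V)         # Equation (4)
gamma_total     = (D_p / d + u) * A / V     # Equation (5)

print(f"diffusion only : gamma = {gamma_diffusive:.2e} 1/s")
print(f"with advection : gamma = {gamma_total:.2e} 1/s")
# Even a u of a few mm/s exceeds D_p/d here, so ventilation that induces a
# pressure-driven flow raises gamma, and with it the thoron infiltration rate.
```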
Conclusions
In the present paper, a diffusion-type radon monitor was used to examine thoron interference in radon measurements in the underfloor space of a Japanese wooden dwelling. The results showed that the thoron infiltration rate (as thoron interference) varied, with median values of about 5.5% and 11% under natural and forced ventilation of the underfloor air, respectively. This difference might have been caused by a change in the advective contribution to air exchange between the diffusion chamber of the radon monitor and the outer air. The results suggest that, when an exposure room is characterized by frequent variations in air ventilation, a variable thoron response is expected to occur in radon-thoron discriminative detectors in which only diffusive entry is employed as the mechanism for discriminating radon and thoron.
Identification of novel biomarkers for anti-Toxoplasma gondii IgM detection and the potential application in rapid diagnostic fluorescent tests
Toxoplasmosis, while often asymptomatic and prevalent as a foodborne disease, poses a considerable mortality risk for immunocompromised individuals and during pregnancy. Point-of-care serological tests that detect specific IgG and IgM in patient sera are critical for disease management under limited resources. Despite many efforts to replace T. gondii total lysate antigens (TLAs) with recombinant antigens (rAgs) in commercial kits, IgM detection remains comparatively low in sensitivity, whereas IgG detection already provides high specificity and sensitivity. In this study, we attempted to identify novel antigens targeted by IgM in early infection, with the aim of establishing an on-site IgM detection kit. Using two-dimensional gel electrophoresis (2DE) and mouse serum immunoblotting, three novel antigens, EF1γ, PGKI, and GAP50, were indicated as targets of anti-T. gondii IgM. However, rAg EF1γ was not recognized by IgM in mouse sera in Western blotting verification experiments, and ELISA coated with PGKI did not eliminate cross-reactivity, in contrast to GAP50. Subsequently, the lateral flow reaction employing a strip coated with 0.3 mg/mL purified rAg GAP50 exhibited remarkable sensitivity compared with the conventional ELISA based on tachyzoite TLA, successfully identifying IgM in sera of mice infected with 10 3 to 10 4 tachyzoites at 5 dpi and with 10 4 tachyzoites at 7 dpi, respectively. Furthermore, using standard T. gondii-infected human sera from the WHO, the limit of detection (LOD) of the rapid fluorescence immunochromatographic test (FICT) using GAP50 was observed at 0.65 IU (international units). These findings underline the particular immunoreactivity of GAP50, suggesting its potential as a specific biomarker for increasing the sensitivity of IgM detection by FICT.
Introduction
Toxoplasma gondii is a unicellular spore-forming organism that is an obligate endoparasite in virtually all warm-blooded animals. It is estimated that one-third of people across the world are exposed to T. gondii, mainly through consuming raw and contaminated foodstuffs or accidentally swallowing the parasite after coming into contact with cat feces (Dubey, 2010; Lourido, 2019). Toxoplasmosis is typically asymptomatic but is a prevalent foodborne disease that leads to mortality through the development of severe clinical symptoms in immunocompromised individuals, such as AIDS patients, those undergoing chemotherapy, organ transplant recipients, or infants born to mothers who were recently infected with T. gondii during or just before pregnancy (Ajzenberg et al., 2009; Centers for Disease Control and Prevention, 2023). There is minimal risk if the mother acquired the infection a few months before conception, but the risk increases when infection occurs at later stages of pregnancy, potentially causing preterm delivery or fatalities. Congenital infection is common in the last trimester, with newborns often asymptomatic at birth but susceptible to later symptoms such as blindness or mental disorders (Montoya and Liesenfeld, 2004; Arranz-Solis et al., 2021). Early diagnosis in pregnant women at disease onset reduces congenital transmission to neonates and allows for timely treatment.
Toxoplasmosis is mainly diagnosed using serological tests that detect specific IgG and IgM antibodies in the patient's sera. Anti-T. gondii-specific IgG appears nearly 2 weeks after pathogen contact and persists for a long time, making it useful for assessing whether a person has been infected (Montoya and Liesenfeld, 2004; Teimouri et al., 2020). On the other hand, IgM typically appears earlier than IgG and shows a dramatic increase in level that peaks after 2-3 weeks of infection before declining to background at 2-3 months (Turunen et al., 1983; Trees et al., 1989; Teimouri et al., 2020). However, unlike in other infectious diseases, it became evident that anti-T. gondii IgM responses can persist for months or even years, as reported in numerous clinical cases (Bobic et al., 1991; Vargas-Villavicencio et al., 2022). Although other methods such as IgG avidity may be needed for the accurate determination of acute infection, the assessment of specific IgM remains mandatory (Reis et al., 2006; Rahimi-Esboei et al., 2018).
The first-line point-of-care (POC) commercial test is the Toxoplasma ICT IgG-IgM (LDBIO Diagnostic, Lyon, France), which integrates tachyzoite total lysate antigen (TLA) derived from propagation in mice or in vitro tissue cultures, providing 100% sensitivity and specificity in the USA, while sensitivity and specificity were reported to be 97% and 96%, respectively, in France (Begeman et al., 2017; Chapey et al., 2017; Khan and Noordin, 2020). Nevertheless, using TLA in immunoassays is time-consuming, expensive, and difficult to standardize during propagation and lysate preparation (Ybanez et al., 2020).
Thus, many efforts have been made to replace TLA with recombinant antigens (rAgs) for the diagnosis of toxoplasmosis in humans and animals (Holec-Gasior, 2013). These include antigens from the parasite surface (SAGs), matrix (MAGs), dense granules (GRAs), rhoptries (ROPs), and micronemes (MICs), and even combinations of several antigens and chimeric antigens (Khan and Noordin, 2020).
Two commercial POC kits, the Biopanda Toxo IgG/IgM and OnSite Toxo IgG/IgM tests, both of which use colloidal gold-conjugated recombinant proteins, provide 100% sensitivity and 96.3% and 97.5% specificity, respectively, for IgG detection. However, IgM detection using the Biopanda kit is only 88.5% specific and 62.2% sensitive, while IgM detection using the OnSite kit is only 97.6% specific and 28% sensitive (Gomez et al., 2018; Khan and Noordin, 2020). Sometimes, natural IgM antibodies produce false positives by reacting with T. gondii antigens in the absence of infection (Liesenfeld et al., 1997). Although the LDBIO test has great sensitivity, it uses a single TLA-coated line to detect both IgG and IgM antibodies on the strip, whereas the Biopanda and OnSite tests employ separate lines for each, allowing further assessment of acute or chronic infections (Begeman et al., 2017; Chapey et al., 2017; Gomez et al., 2018). Thus, since rAgs have not yet entirely replaced native tachyzoite antigens in POC testing, improvement of toxoplasmosis IgM diagnosis using rAgs is needed.
Because T. gondii infection is clinically silent in most cases, studies estimating disease onset often face the limitation of insufficient data (Nayeri et al., 2020). To address this challenge, animal models of parasite infection can imitate human-like immune responses and provide insights into the timeline of disease onset (Flegr et al., 2014; Anand et al., 2015; Poshtehban et al., 2017; Doskaya et al., 2018; Lutshumba et al., 2020; Anand et al., 2022). Mice infected with cysts of the 76 K strain produce antibodies after 2 weeks, peaking at 6 weeks post-infection (wpi); the IgG titer remains constant until 8 wpi, while the IgM titer rapidly declines after 7 wpi (Kang et al., 2006). Mice exposed to oocysts or tissue cysts exhibited a robust IgM response by day 10, but IgM then dropped in oocyst-infected mice and rose in tissue cyst-infected mice until day 15, suggesting that bradyzoites from tissue cysts continue to penetrate host cells over a long period (Doskaya et al., 2018). In comparison, both virulent (Ck2) and non-virulent (ME49) strains exhibited increased IgM reactivity during the first 15 days, but the decline in IgM levels was faster for the virulent strain than for the non-virulent strain (Alvarado-Esquivel et al., 2011; Oliveira et al., 2016). Experimental infection of dogs with the virulent T. gondii RH strain showed a similar pattern (Silva et al., 2002). Despite limited IgM monitoring in animals, often lasting only up to 10 wpi with unclear long-term duration due to sporadic trials (Vargas-Villavicencio et al., 2022), rodent models offer a controlled environment for studying infection progression.
In the present study, using two-dimensional gel electrophoresis (2DE), mouse serum immunoblotting, and mass spectrometry (MS) analysis, we identified three previously uncharacterized antigens of T. gondii (EF1γ, PGKI, and GAP50) that are distinctly targeted by anti-T. gondii IgM antibodies during early infection in a rodent model. The diagnostic capabilities of the newly generated rAgs were assessed using immunoblot analysis and, ultimately, a rapid fluorescence immunochromatographic test (FICT) to detect IgM in both experimentally infected mouse and human serum samples.
Toxoplasma gondii TLA preparation
The supernatant and scraped infected cells were centrifuged at 1,200g for 10 min at room temperature (RT) to collect the parasites, which were then repeatedly passed through a 25-gauge needle. Tachyzoites were subsequently isolated by gradually and carefully layering them on a 40% Percoll (GE17089101, Merck) solution and centrifuging at 1,200g for 20 min at RT. Following the removal of Percoll, the parasites were washed three times in PBS (ML 008-01, Welgene, Korea) to remove cell debris.
Sera from mice infected with Toxoplasma gondii

8-12-week-old female BALB/c mice were purchased from Orient Bio (Seongnam, Gyeonggi, Korea) and maintained under conventional conditions. Mice from each group (n = 5) were intraperitoneally (i.p.) inoculated with 10 1 -10 6 tachyzoites of the RH strain per mouse. After 2-10 days of infection, mouse blood was collected and sera were obtained for further experiments (Greenfield, 2017). Negative controls included sera from uninfected BALB/c mice of the same age and sex (n = 10), and an additional serum group from P. yoelii-infected mice (n = 10) was used to assess cross-reactivity. For the latter, mice were i.p. injected with 10 4 P. yoelii-infected RBCs per mouse, and sera were collected from blood on day 5 post-infection at 20-30% parasitemia.
Human sera
The WHO International Standard (4th IS) for Antibodies to Toxoplasma gondii in human plasma (NIBSC code 13/132) is a reference reagent comprising 160 IU (international units)/0.5 mL of pooled plasma obtained from six donors who were recently infected with T. gondii. This freeze-dried preparation contains a significant amount of both IgG and IgM antibodies and is used as a benchmark for diagnostic testing (Rijpkema et al., 2016; NIBSC, 2020).
Two-dimensional gel electrophoresis (2DE)

All bubbles were removed from underneath the IPG strip before adding 1 mL of strip cover fluid mineral oil (163-2129, Bio-Rad). Subsequently, isoelectric focusing was carried out using a multistep protocol on a 7-cm strip: 250 V for 15 min in linear voltage mode, followed by 1,000 V for 30 min, 4,000 V for 2 h, and then 4,000 V for 5 h in rapid voltage mode. An optional step at 50 V for 15 h was also included.
After the focusing step was completed, individual strips were removed from the IEF machine and equilibrated for 25 min with equilibration buffer (1.5 M Tris-HCl, pH 8.8, 6 M urea, 30% glycerol, 2% SDS, and 0.04% bromophenol blue) containing 65 mM DTT, followed by an additional 25 min of equilibration with the same buffer containing 135 mM iodoacetamide instead of DTT. For the second dimension, proteins were separated by SDS-PAGE in a 10% resolving gel. Electrophoresis was performed at 50-70 V for 3 h.
Immunoblot analysis
The protein samples obtained from the 2DE gels were transferred onto PVDF membranes, which were previously activated with 100% methanol to enable optimal protein transfer. The membrane was treated with a blocking solution containing 5% skim milk in PBS with 0.1% Tween 20 (PBST) for 2 h at RT, followed by three 5-min washes to minimize non-specific binding. Subsequently, the membranes were incubated at RT for 1 h with a 1:100 dilution (in 5% BSA) of pooled sera obtained either from normal mice or from mice infected with 10 6 T. gondii RH tachyzoites, collected 5 days post-infection. After washing three times with PBST, the blots were incubated with the corresponding goat anti-mouse IgM (heavy chain) secondary antibody, HRP (Thermo Fisher 62-6820), diluted 1:5000 in blocking buffer, for 1 h at RT. The membranes were washed five times before detection using Clarity Western ECL Substrate (Bio-Rad Cat# 170-5060).
Determining specific proteins using liquid chromatography/tandem mass spectrometry (LC/MS-MS)
Consistent spots containing the aligned immunoreactive proteins detected only with T. gondii-infected mouse sera were excised from the five 2DE gels of tachyzoite samples (IPG strip pH 3-10) and sent for LC/MS-MS analysis.
In silico analysis of the novel biomarkers
To characterize the antigenicity of the proteins, the linear B-cell epitopes of the predicted proteins were determined using ABCpred, BCPreds, and BepiPred-2.0, while the peptides expected to be subjected to T-cell epitope processing and to bind MHC class I and II molecules were analyzed using the IEDB analysis resource tool (http://tools.iedb.org/main/tcell/). The 3D structures were generated using the I-TASSER server and modeled and visualized using the PyMOL molecular graphics system.
Expression and purification of recombinant proteins
DNA fragments encoding the three discovered T. gondii proteins, EF1γ, PGKI, and GAP50, with lengths of 1,182 bp, 1,248 bp, and 1,293 bp, respectively, were sub-cloned into the pGEM-T Easy Vector (A1360, Promega, Madison, WI, United States) before being cloned into the pET21b (+) plasmid vector. Next, the recombinant plasmid encoding T. gondii EF1γ was transformed into Escherichia coli BL21 DE3 PlysS, while the plasmid DNAs harboring T. gondii PGKI and GAP50 were transformed into the E. coli BL21 DE3 strain, followed by induction with 1 mM isopropyl-β-D-thiogalactopyranoside (IPTG) (Sigma). PGKI was mainly expressed in soluble form in PBS (pH 7.4) lysis buffer. The EF1γ and GAP50 proteins were predominantly expressed as inclusion bodies (IBs), and the protein pellets were solubilized and refolded according to previously reported methods (Nguyen et al., 2019). In summary, the induced culture was centrifuged to isolate the cell pellet containing the target-protein inclusion bodies. The pellet was washed with Buffer A [10 mM Tris-HCl (pH 7.5)/10 mM EDTA/100 mM NaCl] and resuspended in Buffer A containing 1 mM freshly prepared PMSF. After sonication, the lysate was stirred for 1 h at 4°C with an equal volume of 8 M urea solution. Following centrifugation at 10,000 rpm in an A650TC Cryste rotor, the IB pellet was washed with Buffer B [10 mM Tris-HCl (pH 7.5)/1 mM EDTA/1 M NaCl] and distilled water. The IBs were then re-suspended in solubilization buffer [100 mM Tris-HCl (pH 7.5)/0.2 mM EDTA/6 M GuHCl], stirred on a magnetic stirrer for 2 h at room temperature, and clarified by centrifugation for 1 h at 4°C. The supernatant containing the solubilized IBs was collected by centrifugation at 17,000 rpm in a Labogene 1730R centrifuge, and refolding was initiated by rapid dilution of the denatured IBs in freshly prepared refolding buffer (100 mM Tris-HCl/0.5 M L-arginine/0.2 mM EDTA, pH 7.5) and incubation for 36-48 h at 10°C. The refolded preparation was dialyzed against 20 mM phosphate buffer (pH 7.5) containing 300 mM NaCl, 10 mM imidazole, and 100 mM freshly prepared urea for 48 h, with buffer changes every 12 h. The dialyzed sample was centrifuged at 10,000 rpm and clarified further by filtration through a 0.45 μm membrane. Finally, batch purification on Ni-NTA resin was conducted for the three His-tagged recombinant proteins according to the manufacturer's instructions, followed by concentration using a Centricon filter unit.
Western blotting analysis was performed to confirm protein expression. The purified proteins were loaded onto a 10% SDS-PAGE gel and then transferred onto activated PVDF membranes. A blocking buffer (5% non-fat milk in PBS) was used to block the membrane for 2 h at RT. After washing the membrane with PBS-T (PBS containing 0.1% Tween 20), the primary mouse anti-6× His-tag antibody (diluted 1:10,000) was incubated with the membrane in blocking buffer for 1 h at RT. The membranes were then washed three times with PBS-T and incubated with the secondary goat anti-mouse antibody conjugated with HRP (Ab97046, Abcam), diluted 1:30,000 in blocking buffer, for 45 min at RT. The protein bands were visualized using a Bio-Rad ChemiDoc XRS+.
Immunoblot analysis of rAgs in sera from mice infected with Toxoplasma gondii
The purified T. gondii proteins EF1γ, PGKI, and GAP50, together with BSA as a control and TLAs of T. gondii RH tachyzoites and of uninfected ARPE-19 cells as references, were separated on a 10% SDS-PAGE gel at 20 μg/lane. Afterward, the proteins were transferred onto activated PVDF membranes and blocked with 5% skim milk at RT for 2 h. Following three washes with PBS-T, sera from mice infected with 10 6 T. gondii tachyzoites, collected at 5 dpi (diluted 1:100 in 5% BSA), were used as the primary antibody, whereas normal mouse sera served as the control, and the membranes were incubated for 1 h at RT. The membranes were then washed again three times with PBS-T and incubated with goat anti-mouse IgM (heavy chain) secondary antibody HRP (Thermo Fisher 62-6820), diluted 1:3000 in 5% BSA, for 1 h at RT. Finally, the protein bands were visualized as described previously.
Production of europium nanoparticle (Eu NP) conjugates
To generate complexes of Eu NPs and antibody, PS-COOH Eu NP beads with a size of 0.2 μm (#FCEU002, Bangs Lab) were conjugated with either goat anti-mouse IgM antibody (ARG21517, Arigo Biolaboratories) or goat anti-human IgM antibody (K0211481, KOMABIOTECH). Initially, 20 μL of Eu NPs was added to 754 μL of 0.05 M MES buffer (pH 6.1, Sigma-Aldrich) and then agitated for 1 h at 25°C in the presence of 26 μL of 5 mM EDC (Thermo #22980) and 200 μL of 50 mM sulfo-NHS (Sigma #56485-1G) to activate the COOH groups present on the surface of the Eu NPs. The surplus EDC and sulfo-NHS were removed by subjecting the mixture to centrifugation at 27,237×g for 10 min at 4°C. Subsequently, the activated Eu NPs were combined with 60 μL of 1 mg/mL antibody in 940 μL of 0.1 M sodium phosphate at pH 8. This mixture was allowed to react for 2 h at 25°C. Following centrifugation at 27,237×g for 10 min, the precipitated bioconjugates were collected, washed once, and resuspended in 400 μL of 2 mM borax pH 9.0 containing 0.1% BSA. Finally, the bioconjugates were stored in the dark at 4°C for subsequent use (Duong et al., 2022).
Rapid FICT
To establish the immunochromatographic test strips, the NC membrane (10 μm) was coated with antigen or antibody using a BioDot dispenser (BioDot, California, US). For standardizing the lateral flow reaction, the membrane was coated with 0.05 mg/mL of rabbit anti-goat IgG (H + L) antibody (ARG21945, Arigo Biolaboratories) to serve as the control line (CL). The test line (TL) was coated with 0.3 mg/mL purified rAg T. gondii GAP50 to detect IgM in the serum of patients with toxoplasmosis. The conjugate, sample, and absorbent pads were then attached to a backing card to complete the process.
The strip was used for the FICT after drying at 30°C for 2 days. In summary, 6 μL of Eu NPs conjugated either with anti-mouse IgM or anti-human IgM was placed onto the conjugate pad. Then, a mixture of either mouse or human serum in 75 μL of distilled water (DW) was thoroughly diluted into 125 μL of diluent buffer (0.1 M Tris pH 9, 0.1% gelatin, and 0.5% Tween 20) and applied to the sample pad. After 15-20 min, a portable fluorescent strip reader (MEDISENSOR, Daegu, Korea) was used to interpret the results at excitation and emission wavelengths of 365 and 610 nm, respectively. The TL/CL ratio was used to calculate the quantitative diagnostic parameters of the FICT.
Determination of the FICT cutoff value and the limit of detection (LOD)
The cutoff value of the FICT was determined by calculating the mean of the normal sera (n = 10) plus three times the standard deviation (SD) of the TL/CL values. Subsequently, to establish the LOD (Armbruster and Pry, 2008) of the FICT with the coated rAg Tg-GAP50, standard T. gondii-infected human sera (code 13/132) were prepared by spiking into 10 μL of normal sera and were then subjected to the FICT.
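As a minimal sketch of the cutoff arithmetic (the TL/CL values below are hypothetical placeholders, not the study's data):

```python
import statistics

# Hypothetical TL/CL ratios from negative (uninfected) control sera
negatives = [0.042, 0.055, 0.048, 0.051, 0.046, 0.050, 0.044, 0.053, 0.047, 0.049]

mean = statistics.mean(negatives)
sd = statistics.stdev(negatives)   # sample SD (n - 1 denominator)
cutoff = mean + 3 * sd             # cutoff = mean of negatives + 3 SD

def is_positive(tl_cl_ratio):
    """A sample is scored positive if its TL/CL ratio exceeds the cutoff."""
    return tl_cl_ratio > cutoff

print(f"cutoff = {cutoff:.4f}")
print(is_positive(0.12), is_positive(0.05))  # True, False
```

The LOD is then the lowest spiked antibody amount (in IU) whose TL/CL ratio still reliably exceeds this cutoff.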
Statistical analysis
All graphs were generated using GraphPad Prism (version 9.0, La Jolla, CA, USA), and data are presented as the mean and standard deviation (SD) of biological replicates. One-way and two-way analysis of variance (ANOVA) were used to analyze the ELISA and FICT data.
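For readers reproducing the statistics outside GraphPad, a one-way ANOVA on group TL/CL values can be run as below; the group values are hypothetical placeholders:

```python
from scipy import stats

# Hypothetical TL/CL ratios for three serum groups (placeholder values)
normal   = [0.045, 0.050, 0.048]
p_yoelii = [0.052, 0.049, 0.055]
infected = [0.210, 0.185, 0.230]

f_stat, p_value = stats.f_oneway(normal, p_yoelii, infected)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> group means differ
```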
In vitro culture of Toxoplasma gondii tachyzoites and collection of sera from infected mice
To obtain highly pure protein samples, tachyzoites were mass-cultivated in ARPE-19 cells and subsequently purified to ensure no contamination, as shown in Supplementary Figures S1A,B.
Pure, fresh tachyzoites of the T. gondii RH strain were injected i.p. into mice, and blood was collected 2-10 days post-infection (Figure 1A). The RH strain of T. gondii was highly virulent in mice, resulting in the mortality of all mouse groups infected with tachyzoites ranging from 10 1 to 10 6 by 12 days post-infection (Figures 1C,D).
During this study, mice inoculated with 10 5 and 10 6 tachyzoites survived for only 7 dpi and lost 20% of their body weight before death. However, these two groups showed a strong IgM response in the tachyzoite TLA-based indirect ELISA at day 5 post-infection (Figure 1B).

FIGURE 2
Two-dimensional electrophoresis (2DE) and immunoblot analysis of tachyzoite total lysate antigen (TLA). 2DE of tachyzoites was conducted using IPG strips NL pH 3-10 (A) and pH 5-8 (D) for analysis of the proteomics of T. gondii. Specific protein spots, indicated by red arrows, exhibited differential expression in tachyzoites compared with uninfected cells (Supplementary Figure S2). TLA proteins of tachyzoites (A,D) were transferred to a PVDF membrane, and then normal sera (B,E), in comparison with T. gondii-infected mouse sera (C,F), diluted 1:100, were probed, followed by detection with anti-mouse IgM-HRP antibodies.
The groups of mice that received 10 4 to 10 1 tachyzoites persisted longer, until 9, 10, 11, and 12 dpi, respectively, and experienced an approximately 10-15% decrease in body weight before eventually succumbing to the infection; however, their IgM responses were comparatively lower than those of mice infected with higher tachyzoite doses (Figure 1).
2DE analysis
To analyze the proteome of T. gondii at the tachyzoite stage, a comparative assessment was conducted between tachyzoite TLA and uninfected cells using 2DE gels, separating the proteins based on their isoelectric points (pIs) and molecular weights.
Using the data obtained from the 2DE of the IPG strip with a pH range of 3-10, distinct differences were observed in the tachyzoite sample (Figure 2A) compared with uninfected cells (Supplementary Figure S2). This indicated major variations in protein expression and composition, primarily in the pH 5-8 range.
To specifically target the antigens identified by IgM antibodies, sera from mice infected with 10 5 and 10 6 tachyzoites, known to exhibit an intense IgM immune response, were selected for Western blotting analysis of tachyzoite TLA in 2DE. Compared with the sera from normal mice, the sera of T. gondii-infected mice exhibited extensive IgM reactivity toward protein spots within the 40-55 kDa range at pH 5-8 (Figures 2B,C).
To investigate these proteins more thoroughly, Western blotting analysis of tachyzoite TLA in 2DE probed with mouse serum was conducted using IPG strips with a narrow pH range of 5-8, confirming that the spots within the 40-55 kDa range at approximately pH 7 were consistently visible (Figures 2D-F).
The same spot pattern was consistently observed in the Western blotting assessment of the TLA of tachyzoites in 2DE probed with serum from mice via IPG strips at pH 3-10 in triplicate (Supplementary Figure S3) and pH 5-8 (Supplementary Figure S4).
Identification of specific proteins by LC/MS-MS
MS analysis, which identifies and characterizes molecules based on their mass-to-charge ratios, revealed the spots in question to be T. gondii EF1γ (44 kDa at pI 5.97), PGKI (44.6 kDa at pI 6.57), and GAP50 (46.6 kDa at pI 6.46), with high peptide identification rates (Table 1).
In particular, EF1γ had 74.9% sequence coverage, and 64% of the identified peptides corresponded to this protein, while PGKI and GAP50 exhibited 63.3% and 49% coverage of their sequences, respectively, and 22% of the recorded peptides matched them (Table 1).
In addition, Figure 3 and Tables 2 and 3 highlight the specific peptides associated with T-cell epitope processing and MHC binding of T. gondii EF1γ, PGKI, and GAP50, respectively, which were consistent between two out of three predictor systems.
The linear epitopes of GAP50 were mostly located on the outer surface, covering 5.57% of the sequence (see Table 2), the highest coverage among the three proteins in the 3D models. In contrast, the linear epitopes of EF1γ and PGKI exhibited lower coverage, at 4.06% and 0.48% of their sequences, respectively. Regarding MHC-binding peptides, EF1γ, PGKI, and GAP50 exhibited total coverage of 16.19%, 21.35%, and 20.61% of their respective sequences (Table 3). Notably, those of GAP50 were mainly exposed on the surface, while those of PGKI were buried rather than surface-exposed. As a result, GAP50 appears to present strong epitopes and MHC-binding peptides on its surface more efficiently than the others (Figure 3).
To verify the detectability of Toxoplasma IgM antibodies against the recombinant antigens, Western blotting assays were performed using normal mouse sera and T. gondii-infected mouse sera as primary antibodies. The recombinant antigens EF1γ, PGKI, and GAP50 were compared with BSA, T. gondii tachyzoite TLA, and uninfected cells (Figures 4A,E,I). Figures 4B,F,J show that EF1γ was not recognized by IgM in T. gondii-infected mouse sera, whereas PGKI (Figures 4C,G,K) and GAP50 (Figures 4D,H,L) were extensively detected (Supplementary Figure S7). Subsequently, PGKI and GAP50 were subjected to ELISA using mouse and human serum samples. The cross-reactivity with P. vivax and regular human sera was difficult to eliminate when PGKI was used as the coating antigen, in contrast to GAP50 (Supplementary Figure S8).
Rapid FICT
To further investigate the capacity of the strip test to detect IgM antibodies in serum samples, purified rAg GAP50 was coated at various concentrations (0.1, 0.3, 0.5, and 1 mg/mL) as a single TL on NC membranes. An immunochromatographic test strip (Figure 5A) utilizing Eu NP-conjugated anti-mouse IgM (diluted 80-fold) (Figure 5B) and/or anti-human IgM (diluted 320-fold) (Figure 5C) was used to detect mouse and human sera, respectively. With GAP50 coated at 0.3 mg/mL on the NC membrane, it was possible to reduce the non-specific fluorescence intensity when examining cross-interactions involving P. vivax, P. yoelii, or normal serum (raw data are shown in Supplementary Figure S9).
Furthermore, the FICT included a test line coated with 0.3 mg/mL GAP50 and a control line coated with 0.05 mg/mL rabbit anti-goat IgG (anti-gIgG) to quantify the conjugate required in each reaction.
The application of conjugated anti-mouse IgM (diluted 80-fold) (Figures 6A,C,E) and/or anti-human IgM (diluted 20-fold) (Figures 6B,D,F) efficiently minimized non-specific binding at the TL associated with Plasmodium or normal serum while maintaining an appropriate TL/CL ratio (raw data are shown in Supplementary Figure S10).
Determination of the FICT cutoff value and LOD
Supplementary Figure S11 reveals that the ideal volume of human serum per FICT reaction was 2 μL when various volumes (1, 2, 4, 8, 10, and 15 μL) were compared. In our system, using 1 μL of serum (diluted 1:200) in 200 μL per reaction was insufficient to distinguish IgM in the standard T. gondii patient serum 13/132 from other samples, while increasing to 15 μL of serum (diluted 1:13.3) did not eliminate the non-specific reaction when measuring the TL area. The greatest fluorescent signal at the test line (TL) was noted when using 8 μL of serum (diluted 1:25); however, this also led to an overall increase in the intensity at the control line (CL), resulting in a less favorable TL/CL ratio compared with the sample from patients infected with P. vivax. Non-specific signals appeared at the TL when 4 μL of serum (diluted 1:50) was tested with P. vivax patient sera. The TL/CL ratio of the standard T. gondii patient serum 13/132 was three and five times higher than that of P. vivax and normal serum, respectively, when 10 μL of serum was used per reaction. However, clear differences in TL density and TL/CL ratios were already evident when only 2 μL of the sample was used (Supplementary Figure S11). Using a small serum volume aids cost reduction and facilitates efficient screening of human sera for various diseases and the evaluation of diverse experimental conditions, a particularly crucial consideration when using rodent models, to align ethical and resource considerations.
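The dilution factors quoted above follow directly from the fixed reaction volume of about 200 μL (75 μL DW plus 125 μL diluent buffer); a quick check:

```python
TOTAL_UL = 200  # total reaction volume implied by the 1 uL -> 1:200 dilution

for serum_ul in (1, 2, 4, 8, 10, 15):
    print(f"{serum_ul:>2} uL serum -> dilution 1:{TOTAL_UL / serum_ul:.1f}")
# 1:200.0, 1:100.0, 1:50.0, 1:25.0, 1:20.0, 1:13.3, matching the factors above
```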
To establish the FICT threshold value, 2 μL of serum per strip reaction was applied. For mouse sera, the cutoff value was determined by calculating the mean of normal sera (n = 10) plus three times the standard deviation (SD) based on the TL/CL ratio (Figure 7A and Supplementary Figure S12). These data are consistent with the tachyzoite TLA-based ELISA, indicating that sera at 2 dpi from mice infected with 10 1 to 10 6 T. gondii tachyzoites exhibited fluorescence levels similar to the background signal. However, on days 5 and 7 pi, sera from mice infected with 10 3 to 10 6 T. gondii showed significantly higher detectability. In contrast, sera from mice infected with 10 1 and 10 2 T. gondii only produced positive results on day 10 pi (Figures 7B-G and Supplementary Figure S13).
We conclude that the FICT approach using GAP50 was more sensitive than the TLA-based ELISA for detecting IgM levels in sera from rodents infected with T. gondii at 5 dpi (10^3, 10^4) and 7 dpi (10^3).
In the context of the FICT for human sera, the highest suitable volume of sera per reaction (10 μL per strip) (Supplementary Figure S11) was applied to determine the cutoff value. It was similarly determined as 677.28 by calculating the mean of human seronegative sera (n = 10) plus three times the standard deviation (SD) of the TL/CL ratio (Figure 8A and Supplementary Figure S14). The data demonstrate that the standard human T. gondii serum (code 13/132) (n = 1) differs significantly from the seronegative sera and the human cross-reaction controls (n = 20).
Additionally, the OnSite commercial kit (REF: R0234CL) for IgM and IgG antibodies in serum was employed to compare the results in parallel. Following the kit instructions, standard T. gondii 13/132 sera were tested in triplicate, with 10 μL utilized for each test, corresponding to 3.2 IU/strip. The outcome was inspected with the naked eye, with a faintly visible IgM-positive band (Supplementary Figure S16A). To assess the limit of detection (LOD) in parallel with the FICT, the sera were similarly prepared by spiking infected mouse sera into 10 μL of normal sera and then analyzed on both the commercial strip and the GAP50-FICT simultaneously. At each point of the LOD test, triplicate strips were evaluated. Inconsistent results were observed among them: while at least one strip exhibited a faint IgM band, not all strips showed the same result. This variation poses challenges in accurately determining the LOD when testing T. gondii 13/132 with the OnSite commercial kit (Supplementary Figure S16A). In addition, the OnSite commercial kits were evaluated with seronegative sera and human cross-reaction controls (Supplementary Figures S16B-D). The specificity was 96.77% (n = 30), with one dengue sample showing cross-reactivity. However, as only one positive T. gondii sample was available, the sensitivity could not be determined.
These findings highlight the distinct immunoreactivity of GAP50, emphasizing its potential as a specific diagnostic biomarker to enhance the sensitivity of the FICT in detecting IgM in rodent samples.
Additionally, the novel GAP50 antigen emerges as an attractive candidate for integration into POCTs for the IgM detection of T. gondii infections in patient samples.
Discussion
The commercial POC tests currently available on the market offer notable benefits, including their capacity to deliver immediate results, affordability, simplicity of on-site use, and a prolonged shelf life, collectively contributing to reducing the burden and impact of diseases in regions with limited resources. The majority of investigations have reported that T. gondii recombinant proteins can substitute for the natural tachyzoite antigen in IgG/IgM serological testing. The primary focus has been the main surface antigen (SAG1). A sandwich ELISA utilizing purified SAG1 (P30) showed that all 37 acute toxoplasmosis patients exhibited significant IgM anti-P30 antibody levels (Santoro et al., 1985). Another study examined goat toxoplasmosis seroprevalence and found that the diagnostic rSAG1-ELISA had 92.66% sensitivity and 90.67% specificity (Bachan et al., 2018).
Another antigen of interest, known as GRA7 (dense granule antigen 7), exhibits high abundance on the surface and within the cytosol of host cells, as well as in the lumen and the membrane of the parasitophorous vacuole. GRA7 has been extensively studied since it elicits a strong immune response in both acute and chronic infections (Luo et al., 2019; Teimouri et al., 2019; Simon et al., 2020; Ybanez et al., 2020).
In addition, the full-length 65 kDa matrix antigen 1 (MAG1), predominantly localized in the matrix and wall of tissue cysts, is a highly accurate and reliable indicator for detecting T. gondii IgG. Data obtained from the analysis of the mouse sera group and of 105 human patient sera confirmed sensitivities of 100% and 94.3%, respectively. However, the sensitivity for the assessment of IgM was found to be only 25.7% in the overall evaluation of the human sera. This is consistent with the fact that MAG1 serves as a marker for bradyzoites while also being synthesized in tachyzoites (Gatkowska et al., 2015).
In a recent study (Ferra et al., 2020) based on the parasite surface receptor protein AMA1, which is correlated with host cell entry, ELISA assays exhibited a sensitivity of 99.4% for IgG and 80% for IgM when testing T. gondii-infected human sera. Furthermore, the utilization of AMA1 virus-like particles (VLPs) by other research groups also demonstrated over 90% sensitivity and specificity, highlighting the potential of AMA1 as a valuable component in serological tests (Kim et al., 2022). Numerous studies have aimed to identify the ideal marker that can aid in distinguishing acute from chronic T. gondii infection, but there is a lack of investigation in on-site tests. One study, utilizing 2D gel electrophoresis and IgA probing of human sera, identified the novel antigen subtilisin-like protein (SUB1), which exhibited a higher likelihood of responding to specific IgA, IgM, and IgG in patients with acute rather than chronic Toxoplasma infection, as evidenced by a line blot test conducted on 80 human blood samples (Hruzik et al., 2011). In a separate study (Liu et al., 2012), screening of toxoplasmosis human sera using 2D and LC/MS-MS revealed that 13 proteins were recognized by IgG antibodies and 1 protein (ROP2) was targeted by IgM antibodies from group 1 (IgM negative/IgG positive) and group 2 (IgM positive/IgG negative), respectively. Subsequently, by ELISA, the rROP2 fragment (186-533 aa) demonstrated specific capture of Toxo-IgM antibodies with 100% sensitivity (48/48) in group 2, while cross-reactivity was noted in 16% of individuals infected with Leishmania spp. and 10% of individuals infected with P. vivax. Recently, the well-established ELISA antigen GRA7 was assessed in an immunochromatographic test (ICT) involving 88 human sera samples. The ICT exhibited a sensitivity ranging from 93.1 to 100% and a specificity of 100% in detecting IgG and/or IgM antibodies, which was consistent with the performance observed in conventional ELISA methods (Ybanez and Nishikawa, 2020). Nevertheless, the GRA7-based ICT did not examine IgM and IgG levels independently.
The immune system usually eliminates T. gondii tachyzoites rapidly, which results in prolonged encystment, a process favored in infections with low or moderately virulent strains in more resistant hosts (Lyons et al., 2002). As a result, cysts can maintain minimal immune system activation, allowing IgM antibodies to persist for an extended period. In contrast, the IgM response to the tachyzoite stage is short and diminishes rapidly. Thus, there may be some antigens that induce IgM only briefly during the tachyzoite stage. This circumstance could be one of the reasons why the sensitivity of one or several recombinant antigens does not fully match that of the tachyzoite whole lysate antigen in diagnostics.
In this study, utilizing mouse sera collected after infection with the virulent tachyzoite RH strain, in which the IgM response is robust but short-lived, we successfully screened out three uncharacterized antigens of T. gondii, namely EF1γ, PGKI, and GAP50. Moreover, we established an on-site detection kit for IgM based on GAP50, demonstrating its effectiveness in both experimentally infected mice and human serum samples. This may raise concerns regarding the narrow detection window of several weeks, but it is important to emphasize that within this brief period, the robust IgM response provides a reliable indicator of recent T. gondii exposure. This plays a vital role in ensuring prompt treatment and illness prevention, especially for pregnant women. Although the IgM response generally declines quickly, cases of prolonged persistence have been reported.
A limitation of 2DE is that a single spot can contain multiple proteins, or multiple spots can correspond to a single protein, owing to differential digestion or post-translational modifications. Therefore, the three candidate antigens were further examined for their diagnostic potential through in silico analysis and other complementary methods.
Based on a previous study (Tao et al., 2014), elongation factor 1-gamma (EF1γ) was one of 14 candidates highly expressed during T. gondii infection, as discovered with sera from infected pigs. In another publication, EF1γ was also found to be a candidate biomarker for chronic periodontitis, identified via high-performance liquid chromatography and tandem MS fragmentation of gingival crevicular fluid samples (Baliban et al., 2012). In addition, EF1γ is one of four newly discovered immunogenic proteins of P. multocida identified through 2-DE MALDI-TOF MS analysis with immune serum (Wang et al., 2021), and EF1γ mRNA overexpression was observed in biopsy specimens of esophageal carcinoma, suggesting its use as a possible indicator of tumor aggressiveness (Mimori et al., 1996). EF1γ was the most prominent intact protein identified by 2D Western blotting in our study, although this recombinant protein could not be detected by the IgM of T. gondii-infected mouse sera, an observation that needs further investigation.
PGKI is a glycolytic enzyme targeted to the cytosol of T. gondii (Fleige et al., 2007), but there is very little information about this protein in a diagnostic context. Phosphoglycerate kinase initiates ATP generation in glycolysis. It is found in numerous parasites and has been identified as a promising target for vaccine and therapeutic development due to its distinct characteristics compared with human enzymes (Timson, 2016). In a previous study, antibodies raised against recombinant C. sinensis PGK demonstrated specificity toward native PGK from C. sinensis and effectively localized it within the muscular tissue and tegument of adult flukes, suggesting the potential utility of C. sinensis PGK as an immunoreagent for the serodiagnosis of clonorchiasis (Hong et al., 2000). Additionally, vaccination with F. hepatica phosphoglycerate kinase may prevent illness by inhibiting the energy production of the fluke and disrupting the interaction between the surface-expressed enzyme and the host; however, the efficacy of protection varies significantly, ranging from 0 to 69%, depending on the delivery method and vaccine formulation (Jaros et al., 2010). In our investigation, PGKI and GAP50 were minor proteins identified by LC-MS/MS during 2DE Western blotting but were strongly detected by IgM in their recombinant forms. Furthermore, our study revealed that the PGKI-based ELISA suffered from cross-reactivity with P. vivax and normal human serum compared with GAP50.
The T. gondii GAP50 is a glycoprotein that functions as an integral membrane protein. It serves as an anchor for many components, including myosin A (TgMyoA), its accompanying light chain (TgMLC1), actin, and TgGAP45. Together, these components form the glideosome, which plays a crucial role in facilitating the motility necessary for host cell invasion (Fauquenoy et al., 2008; Harding et al., 2016). In other apicomplexan parasites, such as Plasmodium and Cryptosporidium species, the glideosome-associated protein GAP50 is likewise essential for cell penetration and substrate gliding motility via an actin-myosin motor (Bosch et al., 2012; Dearnley et al., 2012). Interestingly, the transmembrane protein PfGAP50 on developing gametes has been identified as a receptor for the host complement regulator factor H (FH). Plasmodium uses the surface-bound FH to inactivate the complement protein C3b, allowing it to avoid elimination by the complement system in the mosquito midgut. Interfering with FH-mediated protection, whether by neutralizing FH or inhibiting PfGAP50, significantly impairs gametogenesis and limits parasite transmission. Antibodies targeting PfGAP50 can prevent FH binding to the parasite surface, thereby destroying the parasites' ability to resist human complement. Consequently, PfGAP50 antibodies lead to reduced zygote numbers and lower infection rates of Plasmodium in mosquitoes (Dearnley et al., 2012). However, unlike many major antigens used in diagnostics, which are typically surface or secretory antigens, GAP50 is localized inside the parasite plasma membrane. The database shows ortholog patterns in three other apicomplexan parasites, P. falciparum, P. yoelii, and Eimeria tenella, sharing 41-58% identity across their entire sequences, except for the amino-terminal signal peptides (Gaskins et al., 2004). The high conservation of Tg-GAP50 orthologs in other apicomplexan parasites implies possible cross-reactivity in diagnostic assays. Thus, it is imperative to assess GAP50 within the context of properly classified patient samples. In the future, GAP50 could potentially be used in combination with multiple rAgs to improve the accuracy and sensitivity of IgM detection in patient specimens.
In this study, GAP50 was analyzed for its potential application in a rapid FICT. It is an efficient antigen for the detection of IgM antibodies in sera compared with the conventional ELISA based on T. gondii TLA. In contrast to the commercial OnSite kit, which utilizes rAgs for the detection of IgG and IgM separately, our system focused solely on IgM detection on a single strip. While the preliminary results are encouraging, further investigation is required to explore the potential incorporation of IgG detection within the strip. Although gold-based materials are durable and easily visible, relying on a visual assessment of a very faint band may lead to subjective and varying interpretations, potentially causing inconsistent results. While fluorescence-based assays such as the FICT-GAP50 may necessitate specialized equipment such as fluorescence readers, they offer higher sensitivity, allowing for the detection of lower antibody concentrations.
Conclusion
We have thoroughly characterized the T. gondii GAP50 protein, identifying it as a potential novel target for the specific detection of IgM antibodies in both rodent models and standard human patient sera when compared with EF1γ and PGKI. Subsequently, we successfully developed a rapid diagnostic fluorescent test capable of distinguishing IgM levels in T. gondii-infected samples from seronegative and cross-reactive samples. Notably, the FICT approach using GAP50 was more sensitive than the TLA-based ELISA for detecting IgM levels in sera from rodents infected with T. gondii. The application of the novel GAP50 protein in the FICT holds promise for enabling a rapid 20-min Point-of-Care Test (POCT) for the detection of Toxoplasma gondii IgM in clinical specimens in the future. However, it is essential to acknowledge that the lack of a comprehensive assessment using properly classified patient samples is one of the limitations of our research.
FIGURE 1
Collection of sera from mice infected with T. gondii. (A) Scheme of the animal model: pure tachyzoites of the T. gondii RH strain were injected intraperitoneally (i.p.) into the mouse belly, and blood was collected at 2-10 days post-infection. (B) TLA-based ELISA determination of IgM in the sera of BALB/c mice immunized with different numbers of tachyzoites. The ELISA data (each group n = 3) are shown as means ± SD. The dotted line indicates the cutoff value of the ELISA, determined as the mean of normal sera plus three times the SD. (C) Body weight change and (D) survival rate of infected mouse groups (each group n = 5) monitored every day. Two-way analysis of variance (ANOVA) was used to analyze the ELISA. ns, not statistically significant; *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.
FIGURE 3
Predicted common linear epitopes and potential MHC-I and MHC-II binding peptide sequences. (A) EF1γ, (B) PGKI, and (C) GAP50 were analyzed and presented as 3D models. Red indicates common linear epitopes, and purple and cyan indicate MHC-I and MHC-II binding peptides, respectively.
FIGURE 4
SDS-PAGE and Western blotting. (A) SDS-PAGE of T. gondii TLA in comparison with uninfected cell total lysate. Western blotting using anti-6× His-Tag to confirm the expression of the purified recombinant antigens (B,B1) EF1γ, 46 kDa; (C,C1) PGKI, 48.5 kDa; and (D,D1) GAP50, 50 kDa, in comparison with BSA as the negative antigen. Western blotting to detect IgM antibodies probed by (E-H) normal mouse sera and (I-L) T. gondii 10^6 tachyzoite-infected mouse sera as the primary antibody: (E,I) tachyzoite TLA in comparison with uninfected cells; (F,J) rAg EF1γ; (G,K) PGKI; and (H,L) GAP50 in comparison with BSA. Protein was run at 20 μg/lane. Anti-6× His-Tag mouse IgG-HRP was diluted 1:10,000 in 5% non-fat milk. Goat anti-mouse IgM (heavy chain)-HRP was diluted 1:3,000 in 5% BSA. Sera from mice infected with 10^6 T. gondii tachyzoites of the RH strain, collected on day 5 post-infection, were used. M, PageRuler Prestained Protein Ladder (#26617, Thermo Scientific).

FIGURE 5

(A) The NC membrane was coated with 0.05 mg/mL of rabbit anti-goat IgG (H + L) as a control line (CL). The test line (TL) was coated with 0.3 mg/mL purified rAg T. gondii-GAP50 to detect IgM in patients with toxoplasmosis. The conjugate, sample, and absorbent pads were then attached to a backing card to complete the assembly. Overall, 6 μL of Eu NP conjugated with either anti-mouse IgM or anti-human IgM was placed onto the conjugate pad. Then, a mixture of either mouse or human serum in 75 μL of distilled water (DW) was thoroughly diluted into 125 μL of diluent buffer and applied to the sample pad. After 15-20 min, a portable fluorescent strip reader was used to interpret the results at excitation and emission wavelengths of 365 and 610 nm, respectively. The TL/CL ratio was used to calculate the quantitative diagnostic parameters of the FICT. (B,C) Optimization of the rAg concentration coated on the strip for the FICT to detect IgM in mouse and human sera. The NC membrane was coated with 0.1, 0.3, 0.5, or 1 mg/mL of purified rAg Tg-GAP50 as the TL. Mouse (B) and human (C) sera were analyzed by FICT using Eu NP-conjugated anti-mouse IgM and/or anti-human IgM. The interaction of rAg with Abs present in sera was determined by measuring the fluorescence intensity (365 nm excitation and 610 nm emission). The rAg Tg-GAP50 coating was used to test cross-reactivity to P. vivax and P. yoelii. In total, 2 μL of serum was used per reaction. TL fluorescence (n = 3) is shown as mean ± SD. Two-way analysis of variance (ANOVA) was used to analyze the FICT. ns, not statistically significant; *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.

FIGURE 6

Evaluation of the quantity of conjugate required in each FICT reaction to detect IgM. The immunochromatographic test strip included the test line (TL) coated with 0.3 mg/mL rAg Tg-GAP50 and the control line (CL) coated with 0.05 mg/mL rabbit anti-goat IgG (anti-gIgG). Eu NP-conjugated anti-mouse IgM (A,C,E) or conjugated anti-human IgM (B,D,F) was dropped onto the conjugate pad, and the strip was dipped in a mixture of sera for 15 min. A portable fluorescence detector (excitation at 365 nm and emission at 610 nm) was used to measure the fluorescence signals of the TL (A,B) and CL (C,D). TL/CL values were used to determine the quantitative diagnostic value of the FICT (E,F). Overall, 2 μL of serum was used per reaction. The fluorescence density of the TL and CL and the TL/CL ratio data (n = 3) are shown as means ± SD. Two-way analysis of variance (ANOVA) was used to analyze the FICT. ns, not statistically significant; *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.
FIGURE 7
Determination of the FICT threshold value for mouse serum using the TL/CL ratio. (A) Normal and P. yoelii sera (each group, n = 10) and sera from mice infected with 10^6 T. gondii tachyzoites at 5 dpi (n = 3) are shown with the cutoff line. (B-G) Sera of BALB/c mice infected with 10^1-10^6 T. gondii (each group, n = 3) at 2, 5, 7, and 10 dpi were subjected to the FICT. The cutoff value of the FICT was decided by calculating the mean of the normal sera (n = 10) plus three times the standard deviation (SD) of the TL/CL value when applying 2 μL of sera per strip reaction. The dotted line indicates that the cutoff value was 861.7 based on the TL/CL ratio. The TL/CL ratio data are shown as means ± SD. Two-way analysis of variance (ANOVA) was used to analyze the FICT. ns, not statistically significant; *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.

FIGURE 8

Determination of the FICT threshold value and limit of detection for human serum using the TL/CL ratio. (A) The cutoff value of the FICT was decided by calculating the mean of normal human sera (n = 10) plus three times the standard deviation (SD) of the TL/CL value when applying 10 μL of sera per strip reaction. The dotted line indicates that the cutoff value was 677.28 for distinguishing standard T. gondii-infected serum 13/132 (n = 1) from seronegative sera, P. vivax, and dengue sera (each group, n = 10). (B) T. gondii 13/132 sera were prepared by spiking into 10 μL of normal sera, which were then subjected to the FICT. Thus, the LOD for the FICT using a coating of rAg Tg-GAP50 was established at 1.
TABLE 1
Protein identification analysis using LC-MS/MS.
TABLE 2
Potential linear epitopes.
TABLE 3
Potential MHC-I and MHC-II binding peptide sequences.
Low-cost and versatile electrodes for extracellular chronic recordings in rodents
Electrophysiological data are used to investigate fundamental properties of brain function, its relation to cognition, and its dysfunction in disease. The development of reliable and open-source systems for electrophysiological data acquisition is decreasing the total cost of constructing and operating an electrophysiology laboratory, and facilitates low-cost methods to extract and analyze the data (Siegle et al., 2017). Here we detail our method of building custom-designed, low-cost electrodes. These electrodes can be customized and manufactured by any researcher to address a broad set of research questions, further decreasing the final cost of an implanted animal. Finally, we present data showing that such electrodes provide good signal quality for recording LFP.
Introduction
The extraction of extracellular voltage fluctuations, from spiking activity to neuronal oscillations, is one of the most widely used techniques in neuroscience. Electrophysiological recordings are crucial for relating dynamics across different spatial scales, from individual spiking to the activity of thousands of neurons in the electroencephalogram, and across different temporal scales, from milliseconds to hours. This type of data has been instrumental in insights and discoveries about cognition and brain function (Cohen, 2017).
A modern electrophysiology recording system must allow for multichannel recordings with adaptable geometry, for recording from multiple brain regions of varying depth and curvature, and must be easily integrated into an optogenetics or chemogenetics delivery system. Two classical ways to approach this are (1) "hyperdrives", carrier structures that enable the movement of individual electrodes or bundles of electrodes such as tetrodes (Battaglia et al., 2009; Bragin et al., 2000; Brunetti et al., 2014; Kloosterman et al., 2009; Liang et al., 2017; McNaughton, 1999; Michon et al., 2016), and (2) silicon probes (Bragin et al., 2000; Jun et al., 2017; Lopes-dos-Santos et al., 2018; Ulyanova et al., 2019). Both approaches have several advantages, including high-density recordings or layer-specific profiles. Another approach is recording electroencephalograms or electrocorticograms, which have a higher potential in translational neuroscience (Insanally et al., 2016; Lee et al., 2011). However, these existing methods also have several disadvantages, including the size of the structure in and on the animal's head, the limited number of simultaneously accessible brain regions, the significant amount of time needed to build the apparatus, or the high price of commercially available electrodes. Electrodes placed on or under the skull are additionally limited by the impossibility of recording from deep anatomical regions or capturing local activity.
We aim to describe a low-cost and versatile electrode system for extracellular chronic recordings in rodents. The electrodes described here are optimal for recording the local field potential (LFP) and multi-unit activity across multiple brain regions simultaneously. Our design is small and lightweight and does not impair movement, animal welfare, or behavioral tasks. In this paper, we describe systematically how to manufacture each component, including videos and options for customizing the electrodes to target any specific research question. The approximate cost per 32-channel electrode implant is 50 euros, and building one takes around 3 h. A chronic implant will continue to provide high-quality LFP data for months.
Animals
We have developed and implanted our electrodes in mice and rats, and we continue to record high-quality LFP data up to 3 months after implantation. The data shown in this paper are from Black57 mice, recorded using implants in different brain regions to verify the signal-to-noise quality. All animals were recorded in their home cages, with free access to food and water.
Ethical standards agreement: The authors certify that formal approval to conduct the experiments described has been obtained from the animal subjects review board of their institution and can be provided upon request. All experiments were approved by the Centrale Commissie Dierproeven (CCD) and are in accordance with all indications of the animal welfare body [approval number 2016-0079].
Wires
Several types of materials can be used to manufacture electrodes. Here we chose tungsten (99.5%) for the electrodes, due to its high resistance even at very thin diameters, and stainless steel for the ground wire, for both resistance and low cost. We use and recommend 50 μm and 35 μm tungsten wire (rods, not rolls). Wires thinner than 35 μm increase the risk of bending during the implantation surgery. For the tetrode option, silica capillary (75 μm internal diameter, 150 μm external diameter) and 12.5 μm wires should be purchased. We purchased wires in bulk from California Fine Wire Company, although any other supplier would be suitable.
Grid
The individual wires need to be aligned, and for this we use an alignment grid. The idea is to place two such grids with holes drilled at the same locations, one 3 cm above the other. The individual wires are then fed through these holes. The grids can be manufactured using any micro-manipulator bench with micrometer precision (e.g., a multifunction milling machine bench such as the MINIQ BG6300, or a stereotax). A drill that supports micro drill bits (~100 μm; carbide micro drill bit, Jiahongda) for drilling holes in plastic sheet is also necessary (e.g., the PVC sheets commonly used to cover notepads). In the case of tetrode usage, the drill tip has to be adjusted to fit the silica cannulas used for tetrode construction (~160 μm tip). We recommend a spacing of at least 250 μm between the holes of the grid. If the spacing is lower than 200 μm, the electrodes may form a compact surface, increasing the risk of damage to the brain during implantation surgery. A combination of two grids with the same specifications is used to align the electrodes (Video 1).
Connectors
The manufactured electrodes presented here are based on the Intan/Open Ephys recording systems. The headstages used by these companies can be coupled with Omnetics connectors. It is therefore necessary to check which connectors are compatible with the headstage to be used. We use the connector A79026-001 (Omnetics Connector Corporation, USA). These connectors are compatible with the 32-channel RHD2132 headstage.
Printed circuit board (PCB)
For fixed implants, we use PCBs to link the electrodes to the connectors. The PCBs were designed using the free version of the Eagle software (https://www.autodesk.com/products/eagle/free-download). During the design of the PCB, the distance between the claws of the connector and the space between the two rows of the connector have to be considered, to avoid misalignment of the Omnetics claws with the soldering points of the PCB (see Video 1). Details of the connectors and their measurements can be found on the Omnetics website. The PCBs were printed by Eurocircuits (https://www.eurocircuits.com/); any company specialized in PCBs should be suitable. To increase reproducibility and facilitate electrode manufacture, the file of the PCB 3D model with all specifications (compatible with the Eagle software) is provided as supplementary material.
Single wire arrays manufacturing
The process described below was video-recorded to facilitate visualization (Videos 1 and 2). For single wire arrays, we used the components described in sections 2.1.1 to 2.1.4 above. The fixed electrode arrays can be implanted with or without the usage of PCBs. If tungsten wires are used, we recommend PCBs, due to the technical difficulty of soldering tungsten directly to the connector. We therefore divide the manufacturing process into 3 steps:

1. Electrode alignment: The electrode alignment is completely customizable to the planned experiment. We here describe an implant optimized for multisite recordings in the medial prefrontal cortex. We designed the electrode to contain three rows of electrodes to be implanted into one hemisphere. Using the grid with 250 μm spacing, we designed the array to have a 3 × 10 geometry. The electrodes are laid into the grid in rows of 3, repeated 5 or 10 times, depending on whether a 16- or 32-channel connector is used. The tips of the electrodes can be aligned using a caliper (Video 1). It is important to consider the dorsal-ventral coordinate of the target site (in our example: 1.5 mm) and to add 2 or 3 mm to prevent the PCB from touching the skull during the implant (Video 1). For the usage of tetrodes, the silica capillary is laid into the grid.

2. Soldering: The PCB and the Omnetics connector should be placed in an aligned position, considering the claws of the connector and the solder mask of the PCB (Video 1). Due to the very small components to be soldered, we use a soldering tip of 200 μm. The ground wire can be soldered directly to the PCB, using an appropriate flux for soldering stainless steel. Note that the ground wire can be directly soldered to a screw, or left to be connected to the screw during surgery.

3. Assembling: To finalize the electrode, the two sub-assemblies described above must be put together. The aligned electrodes are glued to the PCB/connector; we recommend using photo-activated glue to accelerate the process. After that, each electrode is placed in an individual hole of the PCB. At this point, it is important to keep track of the placement of each electrode, for proper mapping of data channels to physical locations. The excess wire is cut, and the wire coating is removed with a surgical blade. Silver paint is applied to connect the tungsten wire to the PCB (Figure 1). The electrode is finalized using epoxy glue to cover and protect the arrays and the connections between the electrodes and the PCB made by the silver paint (Video 2).
After finishing the procedure with 32 electrodes, the construction weighs 1.0-1.2 g, corresponding on average to less than 4% of the body weight of a mouse or 0.5% of the body weight of a rat. The weight is lower for 16 electrodes.
Surgical procedure
The surgical procedure varies according to the type of electrode and target brain regions. We here describe a general procedure that should be adapted to the specific needs of the project.
1. Pre-implantation: In the pre-implantation phase we used the standard procedure of stereotaxic surgery. 10-16 week old mice were anesthetized with isoflurane (induction at 5% isoflurane in 0.5 L/min O2; maintenance at 1-2% isoflurane in 0.5 L/min O2) [Teva]. For surgery, mice were fixed in the stereotaxic instrument [Neurostar Stereotaxic]. After shaving, the skin was disinfected with ethanol (70%). The local anesthetic Xylocaine (2%, adrenaline 1:200,000 [AstraZeneca]) was injected subcutaneously at the incision site before exposing the skull. Peroxide (10-20% H2O2 [Sigma]) was applied to the skull with a cotton swab for cleaning and for visualization of bregma and lambda. Holes for support screws are drilled at the edges of the skull (note that the placement of the screws depends on the position of the electrodes).

2. Implantation: The window(s) in the skull through which the electrodes are lowered into the brain are drilled specifically to accommodate the type of arrays to be implanted. These windows should be as small as possible, because the more bone is removed, the less skull surface area remains for anchoring the electrodes (Figure 1A). For example, two electrode arrays should be implanted through two small windows rather than one larger window (Figure 1A). To avoid contact between the dental cement and the brain, vaseline can be applied to the windows after the implant (Figure 1B). Electrodes and screws are fixated onto the skull with dental cement (Super-Bond C&B) (Figure 1C). In the case of multiple implants (Figure 1D), a similar sequence of procedures should be followed. Approximately 40 min prior to the end of the surgery, saline and an analgesic (carprofen, injected subcutaneously) were administered to facilitate the recovery of the animal.
Data acquisition
Electrophysiology data were acquired using Open Ephys at a sampling rate of 30 kHz. Animals were recorded in their home cage for 10 min using tethered recordings; however, this is only a matter of the resources available in the lab. The electrodes detailed in this paper are suitable for any system that uses Omnetics connectors; therefore, wireless systems compatible with these connectors can also be used. Video recording was performed via a web camera (Fosmon USB 6), and the video was synchronized with the electrophysiological data using simultaneous TTL marks in both the video and the ephys data. During preprocessing, the data were downsampled to 1000 Hz, and EEGlab (Delorme et al., 2011) was used for visual inspection and artifact cleaning. One mouse was kept implanted for 3 months to evaluate any changes in the signal characteristics.
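As an illustration of the downsampling step, a minimal Python sketch is shown below. The original pipeline used Matlab/EEGlab, so the SciPy calls and the two-stage factoring of the 30x decimation are our assumptions, chosen because scipy.signal.decimate recommends keeping each stage's factor small; the random trace is a placeholder for a recorded channel.

```python
import numpy as np
from scipy.signal import decimate

fs_raw, fs_lfp = 30_000, 1_000              # acquisition and target rates (Hz)
raw = np.random.randn(10 * fs_raw)          # placeholder for one recorded channel

# 30x anti-aliased downsampling, split into two stages (5 x 6).
lfp = decimate(decimate(raw, 5, ftype="fir"), 6, ftype="fir")
assert lfp.size == 10 * fs_lfp
```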
Data analysis
The data analysis was performed using Matlab (R2015b). The power spectral density and the spectrogram representation were computed by the pwelch.m and spectrogram.m routines from the Signal Processing Toolbox (parameters: 50% overlap, Hamming window of 6 s, and 2^13 DFT points). Spike detection was performed using unsupervised spike detection (Quiroga et al., 2004): we filtered the signal between 300 Hz and 6000 Hz and used a threshold of 8σn (Quiroga et al., 2004). The spike sorting was performed using Gaussian mixture models in two steps, extracting relevant features (principal components, wavelets) and GMM fitting parameters (e.g., Gaussian distances; Souza et al., 2019). We used the graphical user interface provided by Souza et al. (2019).
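A minimal Python equivalent of these two analysis steps is sketched below. The Welch parameters mirror those quoted above, and the robust noise estimate σn = median(|x|)/0.6745 follows Quiroga et al. (2004); the filter order, the use of negative-going threshold crossings, and the random placeholder signals are our assumptions.

```python
import numpy as np
from scipy import signal

fs_lfp, fs_raw = 1_000, 30_000
lfp = np.random.randn(600 * fs_lfp)   # placeholder: 10 min of downsampled LFP
raw = np.random.randn(60 * fs_raw)    # placeholder: 1 min of wide-band data

# Welch PSD: Hamming window of 6 s, 50% overlap, 2**13 DFT points.
win = signal.get_window("hamming", 6 * fs_lfp)
freqs, psd = signal.welch(lfp, fs=fs_lfp, window=win,
                          noverlap=3 * fs_lfp, nfft=2**13)

# Spike detection: band-pass 300-6000 Hz, then threshold at 8 * sigma_n.
sos = signal.butter(4, [300, 6_000], btype="bandpass", fs=fs_raw, output="sos")
mua = signal.sosfiltfilt(sos, raw)
sigma_n = np.median(np.abs(mua)) / 0.6745      # robust noise estimate
thr = 8 * sigma_n
spike_idx = np.flatnonzero((mua[1:] < -thr) & (mua[:-1] >= -thr))
print(f"{spike_idx.size} threshold crossings detected")
```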
Recycling materials
There is one component of the present electrode arrays that can be recycled: the connector, which is also the most expensive component of the array. After perfusion and brain extraction, the animal's cap can be kept in acetone for 2-4 days. The acetone will dissolve the dental cement and epoxy used during manufacturing. Finally, a soldering iron can be used to remove the connections between the connector and the PCB. In our experience, the connector can be re-used twice without loss of signal quality.
Electrode final cost
The price of each component used in the present manuscript can vary according to market value, the quantity of pieces purchased at a time (e.g., in bulk or individually), and the recycling capacity of the components. Therefore, the exact cost per electrode can only be approximated.
We order in bulk to produce around 60 electrode arrays. The final cost for our array of 32 electrodes was 90 euros, and the cost for a 16-channel array was 75 euros. However, the most expensive component of the electrode, the connector, can be recycled. This brings the final cost to approximately 50 euros for a 32-channel array and 35 euros for a 16-channel array.
Manufacturing time
An inexperienced researcher or student might take 5-6 h to build one electrode array. However, the learning curve is steep, and after some practice it should take around 2-3 h to construct a 32-channel array.
Results
To evaluate the quality of the signal that can be extracted with this type of electrode array, we recorded spontaneous LFP and spiking activity from an animal in its home cage at two different time points: day 1 (the first day of recording after post-surgery recovery) and 90 days after the first recording session, showing that the implant sustains high signal quality over long periods of chronic recording. We recorded from 3 independent regions of the brain to show the flexibility of the arrays (see Video 3). Figure 2 presents raw signal traces (blue line) from different regions recorded in the same animal: hippocampus and mid-frontal cortex. Notice that the amplitude of the signal recorded in these channels is maintained after 90 days of recording. In the bottom panel, as expected, theta oscillatory activity can be observed in the hippocampus in the raw signal, which is reflected in the power spectral density (PSD) and the time-frequency spectrogram along the entire recording session. Similar features can be observed after 90 days of recording. Notice that after 90 days, different features can be observed in the LFP power; for example, by recording the animal in a new cage, novelty detection-related power in the beta2 band is apparent, as previously reported (Berke et al., 2008; França et al., 2014). The spike sorting performed on the representative channel exhibits three independent waveforms on the first day of recording; the same channel still presents spiking activity after 90 days of recording (Figure 3).
Discussion
The electrodes described here are part of an effort to develop low-cost components for electrophysiological data acquisition, from different types of electrodes (Insanally et al., 2016) and headstages (Trumpis et al., 2017) to the entire recording system. The method described here allows a wide range of research questions involving extracellular recordings of electrophysiological data to be addressed, with maximal flexibility at minimal cost. As demonstrated in the Results section, we were able to extract LFP from several different regions simultaneously (Figure 2, Video 3) and also multi-unit activity (Figure 3). We use these electrodes in freely moving animals (Video 3), but they can also be used in acute experiments. Lastly, the arrays maintain integrity for long periods of time, allowing for investigations into long-term behaviors, learning, ageing, development, disease progression, and so on.
The compact shape of these electrodes, as well as the shape of the final implant on the animal (Figure 1), is appropriate for recording during several types of behavioral experiments. Due to the light weight of the electrodes, even with multiple electrodes implanted in different brain regions, these electrode arrays do not interfere with animal welfare or behavioral performance (Video 3; França et al., 2014). For larger animals, like rats, the characteristics mentioned above make it possible to upscale the number of implanted electrode arrays (Figure 1D). Besides that, these electrodes have been used for long recording sessions (up to 12 h) without disturbing the sleep-wake cycle of the animal (dos Santos Lima et al., 2019; França et al., 2015).
As with any type of investigation, the proper method has to be chosen according to the research question. Although the electrode arrays are versatile across a range of applications, they do not cover all types of electrophysiological data. For example, research questions involving large-scale cortical activity across widely spaced cortical areas are more suitable for EEG or ECoG electrodes (Lee et al., 2011; Insanally et al., 2016; Woods et al., 2018). Furthermore, a limitation of our electrodes compared to classical "hyperdrives" is that the arrays are not optimized for isolating spiking activity from a large number of individual units (although we can detect well-isolated units). This is partly due to the size of the electrode tip, and partly due to the fact that the electrodes are implanted without movable drives; thus, gliosis may prevent well-isolated spikes. This can be overcome by including micro-drives and tetrodes to move the electrodes, but this also increases the complexity of the implantation, increases the time needed to manufacture the electrodes, decreases the total possible spatial sampling of the electrodes, and increases the total weight and size of the construct.
In conclusion, we have outlined a procedure to make very low-cost multielectrode arrays that are optimal for research questions based on LFP recordings and multi-unit activity, have the advantage of easily recording from multiple regions of the brain, and are light enough to be used during behavioral tasks. The electrodes are easy to build and easy to integrate with other open-source hardware and tools such as Open Ephys.
Author contribution statement
Arthur S.C. França: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.
Josephus A. van Hulten: Conceived and designed the experiments; Performed the experiments.
Michael X. Cohen: Contributed reagents, materials, analysis tools or data; Wrote the paper.

Figure 3 caption: Different colors represent independent waveforms recorded in a single channel; the darker color in each panel represents the average waveform. Note that after 90 days of implantation it is still possible to sort different independent waveforms.
Convergence Guarantees for Non-Convex Optimisation with Cauchy-Based Penalties
In this paper, we propose a convex proximal splitting methodology with a non-convex penalty function based on the heavy-tailed Cauchy distribution. We first derive a closed-form expression for the proximal operator of the Cauchy prior, which makes it applicable in generic proximal splitting algorithms. We further derive the condition for minimisation problems with the Cauchy-based penalty function that guarantees convergence to the global minimum, even though the penalty is non-convex. Setting the system parameters to satisfy the proposed condition keeps the overall cost function convex, and it can then be minimised via the forward-backward (FB) algorithm. The proposed method based on Cauchy regularisation is evaluated by solving generic signal processing examples: 1D signal denoising in the frequency domain, and two image reconstruction tasks, de-blurring and denoising. We experimentally verify the proposed convexity conditions for various cases, and show the effectiveness of the proposed Cauchy-based non-convex penalty function over state-of-the-art penalty functions such as the L1 and total variation (TV) norms.
I. INTRODUCTION
The problem of estimating unknown physical properties directly from observations (e.g. measurements, data) arises in almost all signal/image processing applications. Problems of this kind are referred to as inverse problems, since having the observations and the forward model between the observations and the sources is generally not enough to obtain solutions to these problems directly, due to their ill-posed nature.
Indeed, unlike the forward model, which is well-posed (cf. Hadamard [1]), inverse problems are generally ill-posed [2]. Therefore, exploiting prior knowledge about the object of interest plays a crucial role in reaching a stable/unique solution. This leads to regularisation-based methods, which have hitherto received great attention in the literature [3]-[10].
In most of these examples, the common choice of regularisation function is based on the L1 norm, due to its convexity and its ability to induce sparsity effectively. Another important example of a convex regularisation function is the total variation (TV) norm. It constitutes the state of the art in denoising applications, due to its efficiency in smoothing. Despite their common usage, the L1 norm penalty tends to underestimate high-amplitude/intensity values, whilst TV tends to over-smooth the data and may lead to loss of details. Non-convex penalty functions can generally lead to better and more accurate estimates [11]-[13] when compared to L1, TV, or some other convex penalty functions. Notwithstanding this, due to the non-convexity of the penalty functions, the overall cost function becomes non-convex, which implies a multitude of sub-optimal local minima.
Convexity-preserving non-convex penalty functions are thus essential, an idea that was successfully applied by Blake and Zisserman [14], and Nikolova [15], and further developed in [6], [11], [16]-[20]. Specifically, a convex denoising scheme with tight frame regularisation is proposed in [16], whilst [17] proposes the use of parameterised non-convex regularisers to effectively induce sparsity of the gradient magnitudes. In [6], the Moreau envelope is used for TV denoising in order to preserve the convexity of the TV denoising cost function. The non-convex generalised minimax concave (GMC) penalty function is proposed in [11] for convex optimisation problems.
Another important reason behind the appeal of the aforementioned penalty functions in applications is the existence of closed-form expressions for their proximal operators. Specifically, the proximal operator of a regularisation function has been introduced in conjunction with inverse problems, to help solve various signal processing tasks. Proximal operators are powerful and flexible tools with attractive properties, which enable solutions to non-differentiable optimisation problems and make them suitable for iterative minimisation algorithms [21], such as forward-backward (FB), or the alternating direction method of multipliers (ADMM). Remarkably, many widespread regularisation functions have corresponding proximal operators available in closed form, or at least numerical methods to calculate them exist. For example, the soft thresholding function is the proximal operator of the L1 norm, whereas the generalised soft thresholding (GST) [8] is the proximal operator of the Lp norm penalty. The TV-norm proximal operator is efficiently computed using Chambolle's method [22], whilst the GMC penalty only requires soft thresholding, or firm thresholding in the case of a diagonal forward operator A, as shown in [11].
The quest for finding the most appropriate penalty function, eventually in relation to an explicit prior distribution characterising the data statistics, is far from over. In this work, we consider the Cauchy distribution, a special member of the α-stable distribution family, which is known for its ability to model heavy-tailed data in various signal processing applications. As a prior in image processing applications, it behaves as a sparsity-enforcing one, similar to the L1 and Lp norms [9]. It has already been used in denoising applications by modelling sub-band coefficients in transform domains [23]-[27]. Moreover, the Cauchy distribution has also been used as a noise model in image processing applications, by employing it in the data fidelity term in combination with quadratic [28] and TV norm [29] based penalty terms.
The general approach hitherto involves the use of a variational Bayesian methodology to solve Cauchy-regularised inverse problems, due to the lack of a closed-form proximal operator. This has prevented the Cauchy prior from being used in proximal splitting algorithms such as FB and ADMM. Moreover, having a proximal operator would also make the Cauchy-based regularisation function applicable in advanced Bayesian signal/image processing methods, such as uncertainty quantification (UQ) via proximal Markov chain Monte Carlo (p-MCMC) algorithms [30], [31].
In this paper, we propose a convex proximal splitting methodology for solving inverse problems of the form

y = Ax + n,    (1)

where y ∈ R^M denotes the observation (which can be either an image or some other kind of signal), x ∈ R^N is the unknown signal, also referred to as the target data (either an enhanced version of the data or the raw data), A ∈ R^(M×N) is the forward model operator, and n ∈ R^M represents the additive noise. Specifically, we propose a number of original contributions, which include:

1) the use of a non-convex penalty function based on the Cauchy distribution, in order to capture the heavy-tailed and/or sparse characteristics of the target x;

2) deriving a closed-form expression for the Cauchy proximal operator, inspired by [32], which makes Cauchy regularisation applicable in proximal splitting algorithms;

3) deriving the condition that guarantees convergence of the Cauchy proximal operator to the global minimum. Even though the proposed Cauchy-based penalty function is non-convex, satisfying the proposed condition keeps the overall problem strictly convex, either (i) through the use of proximal splitting algorithms, or (ii) through convexity of the cost function itself when the forward operator A satisfies the assumptions of orthogonality or of being an over-complete tight frame;

4) investigating the performance of the proposed Cauchy-based penalty function in comparison to the L1 and TV norm penalty functions in two examples: 1D signal denoising and 2D image restoration, including de-blurring and denoising. Furthermore, we study the effect of following/violating the proposed convexity conditions in the same examples.

The rest of the paper is organised as follows: Section II presents the proposed Cauchy proximal operator. The convergence analysis of the proposed method is given in Section III, along with the corresponding Cauchy proximal splitting method. In Section IV, the experimental validation of the proposed conditions and an analysis of 1D and 2D inverse problems are presented. We conclude our study and describe future work directions in Section V.
II. THE CAUCHY PROXIMAL OPERATOR
Recalling the generic signal model in (1), a stable solution to this ill-posed inverse problem is obtained through an optimisation of the form

x̂ = arg min_x F(x), with F(x) = Ψ(x) + ψ(x),    (2)

where F : R^N → R is the cost function to be minimised, Ψ : R^N → R is the data fidelity term, and ψ : R^N → R is the regularisation function (the penalty term). Under the assumption of independent and identically distributed (iid) Gaussian noise, the data fidelity term can be expressed as

Ψ(x) = ‖y − Ax‖₂² / (2σ²),    (3)

where σ refers to the standard deviation of the noise. Based on a prior probability density function (pdf) p(x), the problem of estimating x from the noisy observation y using the signal model in (1) turns into the following minimisation problem in a variational framework:

x̂ = arg min_x [ ‖y − Ax‖₂² / (2σ²) − log p(x) ],    (4)

where we define the penalty function ψ(x) as the negative logarithm of the prior, − log p(x). The selection of ψ(x) (or, equivalently, of p(x)) plays a crucial role in estimating x, in order to overcome the ill-posedness of the problem and to obtain a stable/unique solution. In the literature, depending on the application, the penalty term ψ(x) takes various forms, such as the L1, L2, TV or Lp norms, to name but a few possible choices.
In this study, we propose the use of a penalty function based on the Cauchy distribution. This is a special member of the α-stable family of distributions, which is known to be heavy-tailed and to promote sparsity in various applications. Contrary to the general α-stable family, it has a closed-form probability density function, defined by [32]

p(x) ∝ γ / (γ² + x²),    (5)

where γ is the dispersion (scale) parameter, which controls the spread of the distribution. By replacing p(x) in (4) with the Cauchy prior given in (5), we obtain the following optimisation problem:

x̂ = arg min_x [ ‖y − Ax‖₂² / (2σ²) − Σᵢ log( γ / (γ² + xᵢ²) ) ].    (6)

Using proximal splitting methods has numerous advantages when compared to classical methods. In particular, they (i) work under general conditions, e.g. for functions which are non-smooth and extended real-valued, (ii) generally have simple forms, so they are easy to derive and implement, and (iii) can be used in large-scale problems. In addition, most proximal splitting algorithms are generalisations of classical approaches such as the projected gradient algorithm [33].
In order to solve the minimisation problem in (6) through efficient proximal algorithms such as forward-backward (FB) or the alternating direction method of multipliers (ADMM), the proximal operator of the Cauchy regularisation function must be defined. Proximal operators have been extensively used in solving inverse problems, and they can generally be computed efficiently for a given regularisation function, e.g. via the soft thresholding function for the L1 norm, or Chambolle's method for the TV norm [22]. Besides, prox_h^μ has properties similar to those of gradient mapping operators, which point in the direction of the minimum of h. Thus, for any function h(·) and μ > 0, the proximal operator prox_h^μ : R → R is defined as [21], [33]

prox_h^μ(x) = arg min_u [ h(u) + (u − x)² / (2μ) ].    (7)

For the Cauchy-based penalty function, we recall that the function h(·) is given by

h(x) = − log( γ / (γ² + x²) ),    (8)

which implies that the Cauchy proximal operator is

prox_Cauchy^μ(x) = arg min_u [ (x − u)² / (2μ) − log( γ / (γ² + u²) ) ].    (9)

The solution to this minimisation problem can be obtained by taking the first derivative of (9) with respect to u and setting it to zero. Hence, we have

u³ − x u² + (γ² + 2μ) u − x γ² = 0.    (10)

Wan et al. [32] proposed a Bayesian maximum a posteriori (MAP) solution to the problem of denoising a Cauchy signal in Gaussian noise, and referred to this solution as "Cauchy shrinkage". Similarly, the minimisation problem in (9) can be solved with the same approach as in [32], using however a different parameterisation. Hence, following [32], the solution to the cubic equation in (10) can be obtained through Cardano's method, which is given in Algorithm 1.
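Algorithm 1 (the Cardano solution) is not reproduced here; as a minimal numerical sketch of the same computation, the Python snippet below finds the real root of the cubic in (10) with numpy.roots instead of an explicit Cardano formula, and uses the proximal cost in (9) to choose among real roots when more than one exists (which can only happen if the convexity condition derived in Section III is violated). The function name and tolerance are illustrative.

```python
import numpy as np

def prox_cauchy(x, gamma, mu):
    """Element-wise Cauchy proximal operator: real root of
    u^3 - x*u^2 + (gamma^2 + 2*mu)*u - x*gamma^2 = 0, cf. (10)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        roots = np.roots([1.0, -xi, gamma**2 + 2.0 * mu, -xi * gamma**2])
        real = roots[np.abs(roots.imag) < 1e-8].real
        if real.size == 0:  # numerical safeguard: keep the most nearly real root
            real = np.array([roots[np.argmin(np.abs(roots.imag))].real])
        # Evaluate the proximal cost (9) and keep the minimising root.
        cost = (xi - real) ** 2 / (2.0 * mu) - np.log(gamma / (gamma**2 + real**2))
        out[i] = real[np.argmin(cost)]
    return out

# With mu <= 4*gamma^2 the operator acts as a smooth shrinkage function.
print(prox_cauchy([-3.0, -0.5, 0.0, 0.5, 3.0], gamma=1.0, mu=2.0))
```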
III. CONVERGENCE ANALYSIS
In order to analyse the convergence properties of the proposed method, we start from the minimisation given in (6). Since we have a quadratic data fidelity term and a non-convex penalty function, the overall cost function in (6) will in general be non-convex. To benefit from convex optimisation principles in solving (6), we seek to ensure that the cost function in (6) is convex by controlling the general system parameters, e.g. σ and γ. For this purpose, we start with the following lemma.

Lemma 1. The function h in (8) with γ > 0 is twice continuously differentiable on R, and it is convex only on the interval −γ ≤ x ≤ γ.

Proof. In order to prove that the function h(x) is twice continuously differentiable, we need to show that its first and second derivatives exist and are continuous. These are

h′(x) = 2x / (γ² + x²),
h″(x) = 2(γ² − x²) / (γ² + x²)²,

both of which are continuous for every x ∈ R whenever γ > 0. Thus, the function h is twice continuously differentiable. Moreover, h″(x) ≥ 0 if and only if −γ ≤ x ≤ γ, so h is convex only on this interval.
Remark 1. Since γ generally takes relatively small values when compared to x, it is not practical to enforce this condition for convexity. Therefore, we assume that the function h is non-convex almost everywhere on the support of x.

Figure 2 offers a graphical confirmation of the proof of Lemma 1. Specifically, the red and magenta dots in Figure 2 show limit values for the first and second derivatives, respectively. The horizontal dashed line marks a derivative value of zero: the second derivative takes negative values outside the interval −γ ≤ x ≤ γ, which demonstrates the non-convexity of the function h.
We now state the following theorem that establishes the condition to preserve the convexity of the cost function in (6).
Theorem 1.
Let h be the twice continuously differentiable and non-convex penalty function in (8) with γ > 0, and let the forward operator A be either orthogonal, satisfying AᵀA = I, or an overcomplete tight frame, satisfying AᵀA ≈ rI with r > 0, where I is the identity matrix. Then, the cost function F : R^N → R in (6) is strictly convex if

γ ≥ σ / (2√r).

Proof. According to Lemma 1, the function F is twice continuously differentiable, and we can express the Hessian of F as

∇²F(x) = AᵀA / σ² + ∇²h(x),

where ∇²h(x) is the diagonal matrix with entries h″(xᵢ) = 2(γ² − xᵢ²)/(γ² + xᵢ²)². This Hessian must be positive definite for the cost function F to be convex:

AᵀA / σ² + ∇²h(x) ≻ 0.

Recalling that AᵀA ≈ rI, this holds element-wise provided that, for all x,

r/σ² + 2(γ² − x²)/(γ² + x²)² > 0  ⟺  r(γ² + x²)² + 2σ²(γ² − x²) > 0.

Expanding the left-hand side gives r x⁴ + 2(rγ² − σ²)x² + rγ⁴ + 2σ²γ². To complete the square, we add and subtract σ⁴/r and 4σ²γ², which yields

r ( x² − (σ² − rγ²)/r )² + 4σ²γ² − σ⁴/r > 0.

It can be easily seen that the squared term is always non-negative, as is the noise standard deviation σ. Thus, for the inequality above to hold for all x, the simplified condition

σ² ≤ 4rγ²

should be satisfied. This leads to the condition required to ensure (strict) convexity of the function F,

γ ≥ σ / (2√r),

and hence the existence of a unique solution to the given cost function.
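The condition of Theorem 1 is easy to check numerically. The sketch below (illustrative values only) evaluates the worst-case curvature r/σ² + h″(x) on a grid, using the fact that h″(x) = 2(γ² − x²)/(γ² + x²)² attains its infimum −1/(4γ²) at x² = 3γ².

```python
import numpy as np

def min_curvature(sigma, gamma, r):
    # Smallest value of r/sigma^2 + h''(x) on a grid covering x^2 = 3*gamma^2.
    x = np.linspace(-10 * gamma, 10 * gamma, 100_001)
    hpp = 2 * (gamma**2 - x**2) / (gamma**2 + x**2) ** 2
    return r / sigma**2 + hpp.min()

sigma, r = 0.5, 1.0
gamma_crit = sigma / (2 * np.sqrt(r))             # threshold from Theorem 1
print(min_curvature(sigma, 1.1 * gamma_crit, r))  # > 0: F strictly convex
print(min_curvature(sigma, 0.5 * gamma_crit, r))  # < 0: convexity lost
```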
Theorem 1 provides the critical value for the scale parameter of the non-convex Cauchy-based penalty that ensures the whole cost function remains convex. As noted, this condition depends on the value of the noise standard deviation σ and on the parameter r, which follows from the assumption that A^T A has a diagonal form. In the following we make another remark.
Remark 2. Preserving the convexity of the overall problem, in spite of the non-convexity of the Cauchy-based penalty function, requires a forward operator A that is orthonormal (A^T A = I) or constitutes an overcomplete tight frame with A^T A ≈ rI. For applications such as denoising, where A = I, and for situations where A is the Fourier or an orthogonal wavelet transform, convergence is guaranteed according to Theorem 1. However, in cases where the forward model does not satisfy the relation A^T A ≈ rI, or estimating r is challenging, the condition given in Theorem 1 is not suitable to ensure convergence.
For more general situations, which include the assumptions in Theorem 1 and beyond, we propose another solution which guarantees convergence provided that the solution is obtained via a proximal splitting algorithm, even when A^T A ≠ rI. We start with another lemma, which states a condition ensuring that the cost function of the Cauchy proximal operator is convex and converges to a global minimum, even though it corresponds to a non-convex penalty function.
Lemma 2. The function

J(u) = (x − u)²/(2µ) − log( γ/(γ² + u²) ),

with γ > 0, µ > 0, is strictly convex if the following condition is obeyed:

γ ≥ √µ/2.

Proof. We first express the second derivative of J as

J″(u) = 1/µ + 2(γ² − u²)/(γ² + u²)².

Then, akin to the proof of Theorem 1, we continue with the convexity condition J″(u) > 0, which is equivalent to

(γ² + u²)² + 2µ(γ² − u²) > 0.

To complete the square on the left-hand side, we add and subtract µ² and 4γ²µ, which gives

(u² − (µ − γ²))² + 4γ²µ − µ² > 0.  (34)

Since the term (u² − (µ − γ²))² is always non-negative, as is the step size µ, for the inequality in (34) to hold, the condition

4γ²µ − µ² ≥ 0

should be satisfied. Hence, the cost function in the Cauchy proximal operator, J, becomes strictly convex if

γ ≥ √µ/2.

In Figure 3, we demonstrate the effect of the relationship between µ and γ on J(u) and its second derivative J″(u). Both sub-figures in Figure 3 clearly show that violating the convexity condition given in Lemma 2 makes the cost function non-convex.

Remark 3. Instead of providing a condition to ensure that the Cauchy-based penalty function remains convex, Lemma 2 provides a condition which preserves the convexity of the Cauchy proximal operator. Please note that a solution to the proximal operator prox_Cauchy^µ can always be computed, since it has an explicit expression, which is given in Algorithm 1. However, the convexity condition given in Lemma 2 leads to the theorem in the following section, which provides the condition required to guarantee convergence for the cost function in (6) when relaxing the assumption of orthogonality or over-completeness of the forward operator A in Theorem 1.
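The same threshold can be observed directly on the stationarity cubic (10): below the critical value γ = √µ/2 the prox objective J can have several stationary points, above it the real root is unique. A small sketch, with µ chosen arbitrarily:

```python
import numpy as np

mu = 1.0
for gamma in [0.1, np.sqrt(mu)/2, 1.0]:
    most_roots = 0
    for x in np.linspace(-20, 20, 2001):
        roots = np.roots([1.0, -x, gamma**2 + 2*mu, -x*gamma**2])
        most_roots = max(most_roots, int(np.sum(np.abs(roots.imag) < 1e-8)))
    print(f"gamma = {gamma:.3f}: up to {most_roots} real stationary point(s)")
# Three real roots (a non-convex J with local minima) occur only for
# gamma below sqrt(mu)/2 = 0.5.
```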
A. Cauchy proximal splitting
There are several proximal splitting algorithms that can be used to solve the optimisation problem in (4), including forward-backward splitting, Douglas-Rachford (DR) splitting, or the alternating direction method of multipliers (ADMM) [21], to name but a few. In this paper, we focus on the forward-backward algorithm to obtain efficient solutions to the inverse problem in (1). Indeed, an optimisation problem of the form

x̂ = arg min_x { f₁(x) + f₂(x) }  (37)

can be solved via the FB algorithm. Provided f₂ : R^N → R is L-Lipschitz differentiable with Lipschitz constant L and f₁ : R^N → R is lower semi-continuous, then (37) is solved iteratively as [21]

x^(n+1) = prox_{f₁}^µ( x^(n) − µ∇f₂(x^(n)) ),  (39)

where the step size µ is set within the interval (0, 2/L). In this paper, the function f₂ is the data fidelity term and takes the form of ‖y − Ax‖²₂/(2σ²) from (6), whilst the function f₁ is the Cauchy-based penalty function h. Following these preliminaries, we can now state the following:

Theorem 2. Let h be the Cauchy-based penalty function in (8) with γ > 0, and let µ ∈ (0, 2/L) be the step size of the FB iteration in (39) applied to the cost function in (6). If the condition

γ ≥ √µ/2
holds, then each sub-problem of the FB algorithm is strictly convex, and the FB iteration in (39) converges to the global minimum.
Proof. At each iteration n, in order to obtain the iterative estimate x^(n+1), by comparing (7) and (39), we solve

x^(n+1) = arg min_u G(u),

where the function G : R → R acts component-wise as

G(u) = (z^(n) − u)²/(2µ) − log( γ/(γ² + u²) ),  with z^(n) = x^(n) − µ∇f₂(x^(n)).

Guaranteeing a convex minimisation problem at each FB iteration makes the whole process convex. As a result, the iterative procedure in (39) converges to the global minimum of G at each step.
Thus, for the cost function G to be convex, the condition

∇²G(u) ≻ 0  (43)

should be satisfied. Calculating the Hessian of G, we have

∇²G(u) = 1/µ + 2(γ² − u²)/(γ² + u²)².

It is straightforward to show that the required condition to satisfy (43) can be obtained in the same way as in (29). Hence, the rest of the proof follows that of Lemma 2.
Consequently, despite having a non-convex penalty function, the FB sub-problem corresponding to the cost function G is strictly convex and converges to the global minimum, provided the condition

γ ≥ √µ/2  (44)

holds.

Remark 4. Note that satisfying the convexity condition for the Cauchy proximal operator via Lemma 2 guarantees the convexity of the general solution via the iterative algorithm (39). For this, either the step size µ can be set based on a γ value estimated directly from the observations, or, alternatively, γ can be set based on µ in cases where the Lipschitz constant L has been computed and/or estimating γ is ill-posed.
Remark 5. Since the data fidelity function f₂ is convex and L-Lipschitz differentiable, using the ADMM or DR algorithms instead of FB to solve the minimisation problem in (37) with the non-convex Cauchy-based penalty function, whilst satisfying condition (44), does not change the convergence guarantee, and their solutions also converge to the global minimum. Thus, the FB-based approach considered in this paper can be replaced with other splitting algorithms.
Remark 6. The non-convex Cauchy penalty function proposed in this paper guarantees convergence to a minimum by satisfying either (i) A^T A ≈ rI (including r = 1) along with the condition in Theorem 1, or (ii) just the condition from Theorem 2 via a proximal splitting method such as the FB algorithm.
The FB-based convex proximal splitting algorithm for the Cauchy-based penalty function is given in Algorithm 2.
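A minimal sketch of what Algorithm 2 might look like in code, reusing the cauchy_prox routine above. The operator handles, the power-iteration Lipschitz estimate and the default choice of γ just above the Theorem 2 critical value are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def lipschitz_estimate(A, AT, shape, sigma, n_iter=50):
    """Power iteration estimating L = ||A^T A||_2 / sigma^2, the Lipschitz
    constant of the gradient of the fidelity term ||y - Ax||^2 / (2 sigma^2)."""
    v = np.random.default_rng(0).standard_normal(shape)
    for _ in range(n_iter):
        v = AT(A(v))
        v /= np.linalg.norm(v)
    return np.linalg.norm(AT(A(v)))/sigma**2

def fb_cauchy(y, A, AT, sigma, gamma=None, max_iter=500, eps=1e-3):
    """Forward-backward splitting with the Cauchy proximal operator
    (a sketch of Algorithm 2). A and AT are callables applying the
    forward operator and its adjoint; for denoising both are lambda v: v."""
    x = AT(y)
    L = lipschitz_estimate(A, AT, x.shape, sigma)
    mu = 1.5/L                                   # step size in (0, 2/L); mu = 3/(2L)
    if gamma is None:
        gamma = 1.1*np.sqrt(mu)/2                # just above the Theorem 2 critical value
    for _ in range(max_iter):
        grad = AT(A(x) - y)/sigma**2             # forward (gradient) step
        x_new = cauchy_prox(x - mu*grad, gamma, mu)  # backward (proximal) step
        if np.linalg.norm(x_new - x) <= eps*np.linalg.norm(x_new):
            return x_new
        x = x_new
    return x
```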
IV. EXPERIMENTAL ANALYSIS
We focus the experimental part of this paper on two separate applications. First, we evaluate the proposed approach on 1D signal denoising in the frequency domain. Secondly, we investigate it when applied to two classical image processing tasks, i.e. denoising and de-blurring.
A. Signal Denoising in Frequency Domain
The first example demonstrates the use of the non-convex Cauchy-based penalty function in a 1D signal denoising application. In particular, we consider the classical sinusoidal signal "Heavy Sine", containing 128 samples and included in Matlab distributions. This signal was corrupted by additive white Gaussian noise (AWGN) at several levels, with signal-to-noise ratio (SNR) values between 2 and 12 decibels (dB).
We synthesised the signal y ∈ R^M via an over-sampled discrete inverse Fourier transform operator F⁻¹ as y = F⁻¹x + n, where x ∈ C^N and the number of points in the frequency domain was N = 512 > M = 128. The operator F is a normalised tight frame with F^H F = I. We compared the performance of the Cauchy-based penalty function with the L1 and TV norm penalty functions. The root-mean-square error (RMSE) was used as the evaluation metric in this case.
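The over-sampled operator can be built explicitly as a short, fat partial inverse DFT matrix. A sketch (the normalisation by √N is one natural choice that makes the rows orthonormal, consistent with the stated tight-frame property):

```python
import numpy as np

M, N = 128, 512
n = np.arange(M)[:, None]
k = np.arange(N)[None, :]
Finv = np.exp(2j*np.pi*n*k/N)/np.sqrt(N)    # M x N over-sampled inverse DFT

# Tight-frame check: the rows are orthonormal, Finv Finv^H = I_M.
err = np.abs(Finv @ Finv.conj().T - np.eye(M)).max()
print(f"max |Finv Finv^H - I| = {err:.2e}")

# Synthesise a noisy observation from a few active frequencies (illustrative;
# the paper's y is real, we keep a complex toy here for brevity).
rng = np.random.default_rng(1)
x = np.zeros(N, dtype=complex)
x[[3, 17, 40]] = [8.0, 5.0, 3.0]
sigma = 0.1
noise = sigma*(rng.standard_normal(M) + 1j*rng.standard_normal(M))/np.sqrt(2)
y = Finv @ x + noise
```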
The first experiment is depicted in Figure 4, which shows the effect of the scale parameter γ on denoising results, both when violating and when satisfying the conditions proposed for convexity. Specifically, the vertical red and black dotted lines show the scale parameter values γ = σ/(2√r) from Theorem 1 and γ = √µ/2 from Theorem 2, respectively. A range of values for γ between 10⁻² and 10² was set, and denoised signals were obtained for each γ value by using Algorithm 2. The error tolerance ε was set to 10⁻³ whilst the maximum number of iterations MaxIter was set to 500. We follow [21] for the selection of the step size µ and then use Theorems 1 and 2 to decide the minimum value for γ that preserves convexity. By definition [21], the data fidelity term ‖y − F⁻¹x‖²₂/(2σ²) is convex and differentiable with an L-Lipschitz continuous gradient, where L is the Lipschitz constant. Thus, we can select the step size µ within the range (0, 2/L). There is no strict rule for choosing µ, but the literature suggests that choosing µ close to 2/L is more efficient. Hence, for this example, we set µ = 3/(2L). On examining Figure 4, it is clear that the lowest RMSE value is achieved for a γ value higher than the critical values shown with the red and black dotted lines. It can also be seen that γ values 2-3 times higher than both critical values give relatively good results when compared to those with γ values which are 20 times higher. In order to further compare the performance of the proposed Cauchy denoiser, we calculated RMSE values for initial SNR values between 2 and 12 dB. For each noise level, simulations were repeated 100 times, and the corresponding average RMSE values for each penalty function and SNR value are presented in Figure 5. It can be seen that the lowest RMSE values are obtained when employing the Cauchy-based penalty function for all SNR values. The TV denoising performance gets closer to that of the proposed penalty function as the noise level increases. For visual assessment, Fig. 6 shows denoising results corresponding to the L1, TV and Cauchy-based penalty functions for an SNR of 7 dB. For all the penalty functions tested the denoising effect can clearly be seen, but the proposed penalty function leads to the lowest RMSE.
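To reproduce the flavour of this sweep without the complex frequency-domain variables, the toy below runs plain denoising (A = I) on a heavy-sine-like signal over a few γ values around the Theorem 2 critical point; the signal, noise level and grid are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128)
clean = 4*np.sin(4*np.pi*t) - np.sign(t - 0.3) - np.sign(0.72 - t)  # heavy-sine-like
sigma = 0.8
y = clean + sigma*rng.standard_normal(t.size)

L = 1.0/sigma**2            # Lipschitz constant of the fidelity gradient (A = I)
mu = 1.5/L                  # mu = 3/(2L)
crit = np.sqrt(mu)/2        # Theorem 2 critical value
for gamma in [0.1*crit, crit, 3*crit, 20*crit]:
    x = y.copy()
    for _ in range(500):
        grad = (x - y)/sigma**2
        x = cauchy_prox(x - mu*grad, gamma, mu)
    rmse = np.sqrt(np.mean((x - clean)**2))
    print(f"gamma = {gamma:6.3f} ({gamma/crit:4.1f} x critical): RMSE = {rmse:.3f}")
```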
B. 2D Image Reconstruction
In the second set of experiments, we investigated the influence of the proposed Cauchy-based regularisation on the classical 2D image reconstruction tasks of denoising and de-blurring. Specifically, we start by discussing the effects of the scale parameter γ on the reconstruction results depending on whether the conditions in Theorems 1 and 2 are violated or satisfied.
For the image de-blurring example, the forward operator A was selected as a 5×5 Gaussian point spread function (PSF) with a standard deviation of 1. The noise is AWGN with a blurred-signal-to-noise ratio (BSNR = 10 log₁₀{var(Ax)/σ²}) of 40 dB. For the denoising example, the forward operator A is the identity matrix I, and the additive noise corresponds to an SNR of 20 dB. We used the standard cameraman image for benchmarking in both examples. The analysis was performed in terms of the peak signal-to-noise ratio (PSNR) and RMSE. A range of values for γ between 10⁻⁴ and 10⁴ was set, and the reconstructed images were obtained for each γ value by using Algorithm 2. The error tolerance ε was set to 10⁻³ whilst the maximum number of iterations MaxIter was set to 250. The step size µ was set to 3/(2L) for this example. Figure 7 shows the effect of the γ values on the reconstruction results. The left y-axes in both sub-figures show RMSE values whilst the right y-axes represent the PSNR values for different values of γ on the x-axes. As can clearly be seen from both sub-figures, the reconstruction results are poor when the conditions in both Theorems 1 and 2 (left of the vertical dotted lines) are violated. However, starting from either condition and for higher values of γ, we obtained better reconstruction results, with an important reconstruction gain of around 16 dB for denoising and 2 dB for de-blurring in terms of PSNR. This confirms experimentally the correctness of the convexity conditions derived in Theorems 1 and 2. Unlike in the 1D case, for image reconstruction we observe a similar performance for higher values of γ. We conclude that there is no strict rule for choosing the optimum value of γ, but we noticed that the best performance is generally achieved within a specific interval, and hence we recommend using γ ∈ [√µ, 20√µ].
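For the de-blurring setting, the Gaussian PSF and its action can be realised through FFT-based convolution; a sketch assuming circular boundary conditions (the paper does not state its boundary handling), which plugs directly into the fb_cauchy routine above:

```python
import numpy as np

def gaussian_psf(size=5, std=1.0):
    """5x5 Gaussian point spread function, as in the de-blurring experiment."""
    ax = np.arange(size) - size//2
    g = np.exp(-0.5*(ax/std)**2)
    kernel = np.outer(g, g)
    return kernel/kernel.sum()

def make_blur_ops(psf, shape):
    """Forward/adjoint blur operators via the FFT (circular boundaries).
    With a symmetric PSF the operator is self-adjoint, so AT equals A."""
    pad = np.zeros(shape)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh//2), -(kw//2)), axis=(0, 1))  # centre the kernel
    H = np.fft.fft2(pad)
    A = lambda v: np.real(np.fft.ifft2(np.fft.fft2(v)*H))
    AT = lambda v: np.real(np.fft.ifft2(np.fft.fft2(v)*np.conj(H)))
    return A, AT

# Usage sketch: blur, add noise at BSNR = 40 dB, then deconvolve.
img = np.random.rand(256, 256)              # stand-in for the cameraman image
A, AT = make_blur_ops(gaussian_psf(), img.shape)
Ax = A(img)
sigma = np.sqrt(np.var(Ax)/10**(40/10))     # BSNR = 10 log10(var(Ax)/sigma^2)
y = Ax + sigma*np.random.randn(*img.shape)
# x_hat = fb_cauchy(y, A, AT, sigma)        # reconstruct with the Algorithm 2 sketch
```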
Please also note that we do not compare the two conditions proposed in Theorems 1 and 2. They are not antagonistic, but rather conditions that together provide solutions in various situations. Their usage depends on the problem at hand (cf. Remark 6), and both guarantee convergence in specific circumstances. It can be seen that the Cauchy-based penalty function has a poor denoising performance when γ violates the proposed conditions, as in Figure 7 (e). The TV, L1 and Cauchy-based results are otherwise visibly similar, but the Cauchy penalty yields the highest PSNR value.
V. CONCLUSIONS
In this paper, we investigated a non-convex penalty function based on the Cauchy distribution. We proposed an FB proximal splitting methodology that employs the Cauchy proximal operator. Furthermore, we derived a closed-form expression for the Cauchy proximal operator. In order to guarantee the convexity of the overall cost function in spite of the non-convexity of the penalty term, we derived a condition relating the Cauchy scale parameter γ and the step size parameter µ of the FB algorithm. Moreover, in the special cases where the forward operator is orthogonal (A^T A = I) or an overcomplete tight frame (A^T A ≈ rI) with r > 0, we derived another condition for convexity that is independent of the proximal splitting algorithm employed.
In order to demonstrate the effectiveness of the proposed penalty function, we tested its performance in generic denoising and de-convolution examples in comparison to the L1 and TV norm penalty functions. The Cauchy-based penalty achieved better reconstruction results than both. We further showed the effect of violating the proposed convexity condition in both examples. We concluded that the best parameter set always lies on the correct side of the derived critical value (i.e. γ ≥ √µ/2). Our current work is focussed on applications of the proposed penalty function to SAR imaging inverse problems and will be reported in a future communication. In addition, the existence of a closed-form expression for the Cauchy proximal operator makes it suitable for advanced Bayesian inference, such as uncertainty quantification, e.g. via p-MCMC methods, which is another of our current endeavours.
|
2020-03-11T01:00:37.177Z
|
2020-03-10T00:00:00.000
|
{
"year": 2020,
"sha1": "922fdb1eb4b67ad77a143db2b4cff06627b4cf95",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2003.04798",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ec58f5f567afa68ce11074d12b1dc96fb46e9ea8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
}
|
29928415
|
pes2o/s2orc
|
v3-fos-license
|
Supersolitons: Solitonic excitations in atomic soliton chains
We show that, by appropriately tuning physically relevant interactions in two-component nonlinear Schrödinger equations, it is possible to achieve a regime with particle-like solitonic collisions. This allows us to construct an analogue of Newton's cradle and also to create localized collective excitations in solitary-wave chains which are quasi-integrable solitons, i.e. supersolitons. We give a physical explanation of the phenomenon, support it with a perturbative analysis, and confirm our predictions by direct simulations.
Introduction and model.-One of the most successful concepts of nonlinear science with applications to a great variety of physical contexts is that of solitons, i.e., self-localized nonlinear waves, sustained by the balance between dispersion and nonlinearity. Many types of solitons have been studied, from the classical examples found in integrable models, such as the Korteweg–de Vries, sine-Gordon, Toda-lattice (TL), nonlinear Schrödinger, and other celebrated equations, and extending into the realm of realistic non-integrable nonlinear-wave models.
Solitons are usually expected to be robust against collisions, which is a trademark feature of integrable equations. A lot of activity has been directed at the study of soliton collisions and interactions in non-integrable systems. Recent advances include the analysis of chaotic scattering [1], the formation of soliton bound states and soliton clusters [2], and the studies of soliton collisions in vector systems [1,3,4], to name just a few.
While it is customary to speak of solitons as elastically colliding quasi-particles, most solitons, specifically in integrable systems, pass through each other, thus clearly featuring their wave nature. On the other hand, elastic collisions between classical particles lead to momentum exchange between them, and rebound, due to the nonpenetrability of classical particles.
In this paper we discuss a particular soliton collision scenario of physical relevance, where truly elastic particle-like soliton collisions can be achieved. We will show how this can be used to construct a vector-soliton version of Newton's cradle, and to build supersolitons, i.e. collective soliton-like excitations in arrays of solitary waves, leading to a remarkable conjunction of emergent phenomena: the former representing the formation of robust soliton trains, and the latter implying the emergence of an effectively quasi-discrete soliton at a higher level of organization.
The basic model which allows one to implement the above-mentioned effects is based on the two-component (vectorial) nonlinear Schrödinger equation (NLSE), which arises in sundry contexts [5]. Its normalized form is

i ∂u_j/∂t = −(1/2) ∂²u_j/∂x² + V(x) u_j + (g_jj |u_j|² + g_jk |u_k|²) u_j,  j, k = 1, 2 (j ≠ k).  (1)

An important physical realization of this model is a multicomponent Bose-Einstein condensate (BEC), where u_j are wave functions of two atomic states under the action of a strong transverse trap with frequency ν⊥ [6]. The variables x and t are measured, respectively, in units of a₀ = √(ℏ/mν⊥) and 1/ν⊥, and g_ij ≡ 2a_ij/a₀, with a_ij the respective s-wave scattering lengths. The normalization integral for u_j gives the number of atoms in the respective species, ∫₋∞^{+∞} |u_j|² dx = N_j. Solitons in BECs have been created experimentally [7] and their interactions have been studied theoretically in many papers (see e.g. [8,9,10]).
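The paper does not specify its numerical scheme; a standard choice for integrating Eqs. (1) is a Strang-split pseudospectral (split-step Fourier) method with periodic boundaries, matching the ring geometry. A minimal sketch, with illustrative parameters:

```python
import numpy as np

def split_step_vector_nlse(u1, u2, x, dt, n_steps,
                           g11=-1.0, g22=-1.0, g12=1.0, g21=1.0, Omega=0.0):
    """Split-step Fourier integrator for the coupled NLSE
        i u_j_t = -u_j_xx/2 + V(x) u_j + (g_jj|u_j|^2 + g_jk|u_k|^2) u_j,
    with V(x) = Omega^2 x^2 / 2 and periodic boundaries (ring geometry)."""
    kx = 2*np.pi*np.fft.fftfreq(x.size, d=x[1] - x[0])
    kinetic = np.exp(-0.5j*kx**2*dt)        # full kinetic step in Fourier space
    V = 0.5*Omega**2*x**2

    def half_nonlinear(u1, u2):
        p1 = np.exp(-0.5j*dt*(V + g11*np.abs(u1)**2 + g12*np.abs(u2)**2))
        p2 = np.exp(-0.5j*dt*(V + g22*np.abs(u2)**2 + g21*np.abs(u1)**2))
        return u1*p1, u2*p2

    for _ in range(n_steps):
        u1, u2 = half_nonlinear(u1, u2)
        u1 = np.fft.ifft(np.fft.fft(u1)*kinetic)
        u2 = np.fft.ifft(np.fft.fft(u2)*kinetic)
        u1, u2 = half_nonlinear(u1, u2)
    return u1, u2

# Single collision event, cf. Fig. 1(a): a moving u1 soliton hits a static u2 one.
x = np.linspace(-40, 40, 1024, endpoint=False)
eta, v = 1.0, 0.5
u1 = eta/np.cosh(eta*(x + 10))*np.exp(1j*v*x)
u2 = eta/np.cosh(eta*(x - 10))
u1, u2 = split_step_vector_nlse(u1, u2, x, dt=5e-3, n_steps=8000)
```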
Particle-like elastic collisions and the solitonic Newton's cradle.-We will consider Eqs. (1) with intra-component attraction (g_11, g_22 < 0) and inter-component repulsion (g_12, g_21 > 0). To fix ideas, we will choose g_11 = g_22 = −g_12 = −g_21 (= −1 without loss of generality), in which case the coupled NLSEs are not integrable [11]. In this situation the solitons, which may be formed in both components independently, interact incoherently with a repulsive force. The basic physical feature underlying our analysis is that the dynamics of these solitons may be similar to that of elastic beads. We will explore the cases of harmonic longitudinal confinement, V(x) = Ω²x²/2, and of ring-shaped configurations [12].
Let us consider soliton trains built as follows:

u₁(x, 0) = Σ_n η sech[η(x − ξ_n)] e^{i v_n x},  u₂(x, 0) = Σ_n θ sech[θ(x − ζ_n)] e^{i w_n x},  (2)

where η and θ are the soliton amplitudes in the two components, with alternation of the soliton species in the train, i.e.,

... ξ_{n−1} < ζ_{n−1} < ξ_n < ζ_n < ξ_{n+1} < ζ_{n+1} < ...,  (3)

v_n and w_n being the initial velocities of the solitons. In Fig. 1, where the trap is absent, panel (a) displays a single collision event (N = 1). It is noteworthy that, because of the repulsive inter-component interaction, the incident soliton (in field u₁) transfers all of its momentum to the initially static soliton (in u₂), in full compliance with the behavior of elastic particles and contrary to the typical behavior of nontopological solitons in integrable systems. The dynamics of a train of eight alternating solitons in a ring configuration demonstrates the periodic transfer of the momentum through the train, see Fig. 1(b). By adding the parabolic trapping potential we urge the solitons to oscillate around the equilibrium position.
In the quantum interpretation of our model this setup would provide a way to construct a quantum Newton's cradle with atomic solitons, as illustrated in Fig. 2. Unlike other settings explored in BEC [13], the cradle configuration does not require a lattice potential to create effective particles, which are here created by purely nonlinear interactions.
The Toda-lattice limit: supersolitons.-We now consider two alternating chains of solitons set along a ring. Within the effective-particle approach, we assume that all solitons, which have identical amplitudes in each component, η (for u₁) and θ (for u₂), behave like rigid particles and thus do not suffer conspicuous deformation, i.e., each soliton may be approximated by

u_{1,n} = η sech[η(x − ξ_n(t))] e^{iφ_n(x,t)},  u_{2,n} = θ sech[θ(x − ζ_n(t))] e^{iχ_n(x,t)},  (4)

with the initial positions arranged as per Eq. (3). A straightforward analysis based on the perturbation theory for solitons [14] yields a system of equations of motion for the soliton coordinates ξ_n and ζ_n [Eqs. (5)], in which each soliton is driven by the trapping potential and by exponentially decaying repulsive forces exerted by its two nearest neighbors belonging to the other species. These equations are derived under the assumption that adjacent solitons are well separated, i.e., (η, θ)·(ξ_n − ζ_{n−1}), (η, θ)·(ζ_n − ξ_n) ≫ 1, although a strong inequality is not really necessary here. Similar ideas have been used to derive equations for the interaction of other elementary nonlinear structures, which gives rise to different equations at a higher level of organization [3,10,15]. Equations (5) with Ω = 0 reduce to the so-called diatomic TL, which is not integrable, although some solutions are known. With η = θ and Ω = 0 in Eqs. (5), and defining q_{2n}(t) = 2ηξ_n(t), q_{2n+1}(t) = 2ηζ_n(t) and α = 32η⁴, we arrive at the integrable TL model [16],

q̈_n = α [ e^{−(q_n − q_{n−1})} − e^{−(q_{n+1} − q_n)} ].  (6)

This model describes a dynamical lattice with an exponential potential of the nearest-neighbor interaction. However, potentials of the interaction between adjacent atoms in real condensed-matter systems are never exponential, being closer to those of nonlinear anharmonic oscillators. This is why the only experimental realization of the integrable TL was achieved in electric transmission lines [18], which may be readily designed in exact correspondence to Eqs. (6). Our analysis suggests a possibility to create Toda solitons, of both mono- and diatomic types, as excitations in interwoven arrays of multicomponent NLSE solitary waves. We name these excitations supersolitons since they occur on top of an array of "elementary" solitons, and are expected to be as robust as solitons in integrable models. The same name was previously applied to solitons in supersymmetric models [17], and, in a completely different context, to localized topological collective excitations in chains of fluxons trapped in periodically inhomogeneous Josephson junctions and in layered superconducting structures [15]. The appearance of TL supersolitons represents a remarkable phenomenon at a higher level of organization, using, as building blocks, solitary waves of the multicomponent NLSE, i.e. a strongly nonintegrable model.

In the monoatomic lattice, Eq. (6) has an obvious equilibrium solution with uniform spacing q_{n+1} − q_n = ηL/N (i.e., adjacent solitons separated by L/(2N)), where N is the number of solitons in each subchain, and L the total length of the system. For small perturbations with frequency ω and wavenumber k around this configuration, the dispersion relation is ω² = 128η⁴ e^{−ηL/N} sin²(k/2). With respect to the quantization imposed by the boundary conditions for the ring-shaped soliton chain, k = πm/N, m = 0, ±1, ±2, ..., this yields

ω_m² = 128η⁴ e^{−ηL/N} sin²(πm/(2N)).  (7)

If a wave in the lattice is excited by kicking one soliton and lending it velocity v, the wave will hit solitons with period T = L/(2Nv), which corresponds to an effective excitation frequency ω_exc ≡ 2π/T = 4πNv/L. Thus, resonant excitations may be expected under the condition P ω_exc = Q|ω_m|, or, in other words, at values of the kick velocity belonging to the following resonant spectrum,

v_{m,P,Q} = QL|ω_m|/(4πNP),  (8)

where the integers Q and P stand for the order of the resonance and subresonance (P = Q = 1 corresponds to the fundamental resonance). Another interpretation of this resonance condition (cf. Ref. [19]) is that the kick velocity coincides with the phase velocity of the linear waves.

Numerical studies of supersolitons.-To verify our predictions based on Eq. (6), we have performed numerical simulations of Eq. (1). First, in Fig. 3 we have generated a single supersoliton by kicking one of the outermost solitons in one of the components. Since this excitation does not correspond exactly to a supersoliton, we also obtain a small amount of radiation, which is seen as small remnant oscillations of the individual solitary waves. Apart from this effect, due to the excitation procedure, the propagation of the supersoliton is perfect, as seen both in the amplitude [Fig. 3(a)] and pseudocolor [Fig. 3(b)] plots. Another effect, seen in Fig. 3(a) and not considered in our model, is the small compression of the individual solitary waves when they are hit by the supersoliton (the model assumes individual solitary waves of equal amplitude). However, this small effect does not affect our conclusions, and it can be minimized by considering lower-energy collisions (i.e., smaller incident speeds). Fig. 4 shows the collisional behavior for head-on collisions of equal-speed supersolitons [Fig. 4(a)] and the overtaking of a slow supersoliton by a faster one [Fig. 4(b)]. In both cases the supersolitonic excitations behave as true solitons, which is justified by the integrability of our simple model given by Eqs. (6). We want to emphasize again that these behaviors, typical of integrable systems, arise on top of a strongly nonintegrable model.

Can scalar models support supersolitons?-Soliton collisions in the framework of scalar NLSEs have been studied in various contexts, and equations similar to Eqs. (5) have been derived using different approaches [3,10], leading to the so-called complex TL. Despite the formal similarities, the ensuing dynamics is not robust, and solitonic solutions turn out to be unstable because of the phase dependence of the interactions. An example is displayed in Fig. 5(a), which shows that the phase shifts induced by the initial kick velocity lead to an unstable dynamics of the single-component chain, whereas its alternating two-component counterpart does not display any instability, as shown in Fig. 5(b); in particular, the configuration shown in Fig. 5(b) periodically recovers its shape.
Thus, the vectorial system with incoherent interactions is free of the instability of soliton chains in single-component models with coherent (phase-dependent) interactions [20].
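At the level of the reduced model, the supersoliton of Fig. 3 can be mimicked by integrating the monoatomic Toda lattice (6) on a ring and kicking a single site. A sketch with illustrative parameters (leapfrog scheme; the choices of η, N, L and kick velocity are our own):

```python
import numpy as np

def toda_ring(q, p, alpha, period, dt, n_steps):
    """Leapfrog integration of the monoatomic Toda lattice (6) on a ring:
        q_n'' = alpha*(exp(-(q_n - q_{n-1})) - exp(-(q_{n+1} - q_n))),
    with the ring closed by q_{2N} = q_0 + period."""
    def force(q):
        dq = np.roll(q, -1) - q             # bond lengths q_{n+1} - q_n
        dq[-1] += period                    # bond wrapping around the ring
        return alpha*(np.exp(-np.roll(dq, 1)) - np.exp(-dq))
    traj = [q.copy()]
    for _ in range(n_steps):
        p = p + 0.5*dt*force(q)
        q = q + dt*p
        p = p + 0.5*dt*force(q)
        traj.append(q.copy())
    return np.array(traj)

# 2N = 16 sites (q_n = 2*eta*positions), equilibrium spacing eta*L/N.
eta, N, L = 1.0, 8, 48.0
alpha = 32*eta**4
q0 = (eta*L/N)*np.arange(2*N)
p0 = np.zeros(2*N)
p0[0] = 2.0                                 # kick one site to launch a supersoliton
traj = toda_ring(q0, p0, alpha, period=2*eta*L, dt=1e-3, n_steps=50000)

# Linear spectrum (7) and fundamental (P = Q = 1) resonant kick velocities (8):
m = np.arange(1, N + 1)
omega_m = np.sqrt(128*eta**4*np.exp(-eta*L/N))*np.abs(np.sin(np.pi*m/(2*N)))
v_res = L*omega_m/(4*np.pi*N)
```

Tracking traj minus the equilibrium positions shows a localized compression pulse circulating around the ring, the lattice-level counterpart of the supersoliton.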
Experimental realization.-The creation of TL supersolitons in BECs would depend on the use of Feshbach resonance techniques to obtain an atomic mixture with attractive intra-species and repulsive inter-species interactions. Atomic mixtures with controllable interspecies interactions have already been reported in Ref. [21]. Our initial state of alternating solitons may be created by the modulational instability and segregation from an initially stable two-component mixture [22].
Conclusions.-We have explored a physical model based on the vectorial NLSE, in which hard-particle-like (bouncing) elastic collisions between solitons belonging to different species are possible. These interactions allow building an analogue of Newton's cradle using solitary waves, and the creation of supersolitons in a chain of alternating solitons. The existence of these robust localized collective excitations on top of arrays of nonintegrable solitons represents a remarkable emergent phenomenon.
|
2018-04-03T01:31:12.738Z
|
2008-04-11T00:00:00.000
|
{
"year": 2008,
"sha1": "47b8889164357006551c3954b5bc880f5a9df053",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0804.1927",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6ee1e5e396a77f704be80ba3945d785a18400296",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
}
|